Network Working Group                                          I. Cooper
Request for Comments: 3143                                 Equinix, Inc.
Category: Informational                                        J. Dilley
                                               Akamai Technologies, Inc.
                                                               June 2001
                   Known HTTP Proxy/Caching Problems

Status of this Memo


This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.


Copyright Notice


Copyright (C) The Internet Society (2001). All Rights Reserved.




Abstract

This document catalogs a number of known problems with World Wide Web (WWW) caching proxies and cache servers. The goal of the document is to provide a discussion of the problems and proposed workarounds, and ultimately to improve conditions by illustrating problems. The construction of this document is a joint effort of the Web caching community.

Table of Contents


   1.    Introduction . . . . . . . . . . . . . . . . . . . . . . . .  2
   1.1   Problem Template . . . . . . . . . . . . . . . . . . . . . .  2
   2.    Known Problems . . . . . . . . . . . . . . . . . . . . . . .  4
   2.1   Known Specification Problems . . . . . . . . . . . . . . . .  5
   2.1.1 Vary header is underspecified and/or misleading  . . . . . .  5
   2.1.2 Client Chaining Loses Valuable Length Meta-Data  . . . . . .  9
   2.2   Known Architectural Problems . . . . . . . . . . . . . . . . 10
   2.2.1 Interception proxies break client cache directives . . . . . 10
   2.2.2 Interception proxies prevent introduction of new HTTP
            methods  . . . . . . . . . . . . . . . . . . . . . . . .  11
   2.2.3 Interception proxies break IP address-based authentication . 12
   2.2.4 Caching proxy peer selection in heterogeneous networks . . . 13
   2.2.5 ICP Performance  . . . . . . . . . . . . . . . . . . . . . . 15
   2.2.6 Caching proxy meshes can break HTTP serialization of content 16
   2.3   Known Implementation Problems  . . . . . . . . . . . . . . . 17
   2.3.1 User agent/proxy failover  . . . . . . . . . . . . . . . . . 17
   2.3.2 Some servers send bad Content-Length headers for files that
            contain CR . . . . . . . . . . . . . . . . . . . . . . .  18
   3.    Security Considerations  . . . . . . . . . . . . . . . . . . 18
         References . . . . . . . . . . . . . . . . . . . . . . . . . 19
         Authors' Addresses . . . . . . . . . . . . . . . . . . . . . 20
   A.    Archived Known Problems  . . . . . . . . . . . . . . . . . . 21
   A.1   Architectural  . . . . . . . . . . . . . . . . . . . . . . . 21
   A.1.1 Cannot specify multiple URIs for replicated resources  . . . 21
   A.1.2 Replica distance is unknown  . . . . . . . . . . . . . . . . 22
   A.1.3 Proxy resource location  . . . . . . . . . . . . . . . . . . 23
   A.2   Implementation . . . . . . . . . . . . . . . . . . . . . . . 23
   A.2.1 Use of Cache-Control headers . . . . . . . . . . . . . . . . 23
   A.2.2 Lack of HTTP/1.1 compliance for caching proxies  . . . . . . 24
   A.2.3 ETag support . . . . . . . . . . . . . . . . . . . . . . . . 25
   A.2.4 Servers and content should be optimized for caching  . . . . 26
   A.3   Administration . . . . . . . . . . . . . . . . . . . . . . . 27
   A.3.1 Lack of fine-grained, standardized hierarchy controls  . . . 27
   A.3.2 Proxy/Server exhaustive log format standard for analysis . . 27
   A.3.3 Trace log timestamps . . . . . . . . . . . . . . . . . . . . 28
   A.3.4 Exchange format for log summaries  . . . . . . . . . . . . . 29
         Full Copyright Statement . . . . . . . . . . . . . . . . . . 32
1. Introduction

This memo discusses problems with proxies - which act as application-level intermediaries for Web requests - and more specifically with caching proxies, which retain copies of previously requested resources in the hope of improving overall quality of service by serving the content locally. Commonly used terminology in this memo can be found in the "Internet Web Replication and Caching Taxonomy"[2].

No individual or organization has complete knowledge of the known problems in Web caching, and the editors are grateful to the contributors to this document.


1.1 Problem Template

A common problem template is used within the following sections. We gratefully acknowledge RFC2525 [1] which helped define an initial format for this known problems list. The template format is summarized in the following table and described in more detail below.

   Name:           short, descriptive name of the problem (3-5 words)
   Classification: classifies the problem: performance, security, etc.
   Description:    describes the problem succinctly
   Significance:   magnitude of problem, environments where it exists
   Implications:   the impact of the problem on systems and networks
   See Also:       a reference to a related known problem
   Indications:    states how to detect the presence of this problem
   Solution(s):    describe the solution(s) to this problem, if any
   Workaround:     practical workaround for the problem
   References:     information about the problem or solution
   Contact:        contact name and email address for this section


Name A short, descriptive name (3-5 words) associated with the problem.


Classification Problems are grouped into categories of similar problems for ease of reading of this memo. Choose the category that best describes the problem. The suggested categories include three general categories and several more specific categories.


* Architecture: the fundamental design is incomplete, or incorrect


* Specification: the spec is ambiguous, incomplete, or incorrect.


* Implementation: the implementation of the spec is incorrect.


* Performance: perceived page response at the client is excessive; network bandwidth consumption is excessive; demand on origin or proxy servers exceeds reasonable bounds.


* Administration: care and feeding of caches is, or causes, a problem.


* Security: privacy, integrity, or authentication concerns.


Description A definition of the problem, succinct but including necessary background information.


Significance (High, Medium, Low) May include a brief summary of the environments for which the problem is significant.


Implications Why the problem is viewed as a problem. What inappropriate behavior results from it? This section should substantiate the magnitude of any problem indicated with High significance.


See Also Optional. List of other known problems that are related to this one.


Indications How to detect the presence of the problem. This may include references to one or more substantiating documents that demonstrate the problem. This should include the network configuration that led to the problem such that it can be reproduced. Problems that are not reproducible will not appear in this memo.


Solution(s) Solutions that permanently fix the problem, if such are known. For example, what version of the software does not exhibit the problem? Indicate if the solution is accepted by the community, one of several solutions pending agreement, or open possibly with experimental solutions.


Workaround Practical workaround if no solution is available or usable. The workaround should have sufficient detail for someone experiencing the problem to get around it.


References References to related information in technical publications or on the web. Where can someone interested in learning more go to find out more about this problem, its solution, or workarounds?


Contact Contact name and email address of the person who supplied the information for this section. The editors are listed as contacts for anonymous submissions.


2. Known Problems

The remaining sections of this document present the currently documented known problems. The problems are ordered by classification and significance. Issues with protocol specification or architecture are first, followed by implementation issues. Issues of high significance are first, followed by lower significance.


Some of the problems initially identified in the previous versions of this document have been moved to Appendix A since they discuss issues where resolution primarily involves education rather than protocol work.


A full list of the problems is available in the table of contents.


2.1 Known Specification Problems
2.1.1 Vary header is underspecified and/or misleading

Name The "Vary" header is underspecified and/or misleading


Classification Specification


Description The Vary header in HTTP/1.1 was designed to allow a caching proxy to safely cache responses even if the server's choice of variants is not entirely understood. As RFC 2616 says:

         The Vary header field can be used to express the parameters the
         server uses to select a representation that is subject to
         server-driven negotiation.

One might expect that this mechanism is useful in general for extensions that change the response message based on some aspects of the request. However, that is not true.


During the design of the HTTP delta encoding specification[9] it was realized that an HTTP/1.1 proxy that does not understand delta encoding might cache a delta-encoded response and then later deliver it to a non-delta-capable client, unless the extension included some mechanism to prevent this. Initially, it was thought that Vary would suffice, but the following scenario proves this wrong.

NOTE: It is likely that other scenarios exhibiting the same basic problem with "Vary" could be devised, without reference to delta encoding. This is simply a concrete scenario used to explain the problem.


A complete description of the IM and A-IM headers may be found in the "Delta encoding in HTTP" specification. For the purpose of this problem description, the relevant details are:


1. The concept of an "instance manipulation" is introduced. In some ways, this is similar to a content-coding, but there are differences. One example of an instance manipulation name is "vcdiff".


2. A client signals its willingness to accept one or more instance-manipulations using the A-IM header.


3. A server indicates which instance-manipulations are used to encode the body of a response using the IM header.


4. Existing implementations will ignore the A-IM and IM headers, following the usual HTTP rules for handling unknown headers.


5. Responses encoded with an instance-manipulation are sent using the (proposed) 226 status code, "IM Used".


6. In response to a conditional request that carries an IM header, if the request-URI has been modified then a server may transmit a compact encoding of the modifications using a delta-encoding instead of a status-200 response. The encoded response cannot be understood by an implementation that does not support delta encodings.


This summary omits many details.


Suppose client A sends this request via proxy P:


         GET /foo.html HTTP/1.1
         Host: example.com
         If-None-Match: "abc"
         A-IM: vcdiff

and the origin server returns, via P, this response:


         HTTP/1.1 226 IM Used
         Etag: "def"
         Date: Wed, 19 Apr 2000 18:46:13 GMT
         IM: vcdiff
         Cache-Control: max-age=60
         Vary: A-IM, If-None-Match

the body of which is a delta-encoded response (it encodes the difference between the Etag "abc" instance of foo.html, and the "def" instance). Assume that P stores this response in its cache, and that P does not understand the vcdiff encoding.


Later, client B, also ignorant of delta-encoding, sends this request via P:


         GET /foo.html HTTP/1.1
         Host: example.com

What can P do now? According to the specification for the Vary header in RFC2616,

         The Vary field value indicates the set of request-header fields
         that fully determines, while the response is fresh, whether a
         cache is permitted to use the response to reply to a subsequent
         request without revalidation.


Implicitly, however, the cache would be allowed to use the stored response in response to client B WITH "revalidation". This is the potential bug.


An obvious implementation of the proxy would send this request to test whether its cache entry is fresh (i.e., to revalidate the entry):


         GET /foo.html HTTP/1.1
         Host: example.com
         If-None-Match: "def"

That is, the proxy simply forwards the new request, after doing the usual transformation on the URL and tacking on the "obvious" If-None-Match header.


If the origin server's Etag for the current instance is still "def", it would naturally respond:


         HTTP/1.1 304 Not Modified
         Etag: "def"
         Date: Wed, 19 Apr 2000 18:46:14 GMT

thus telling the proxy P that it can use its stored response. But this cache response actually involves a delta-encoding that would not be sensible to client B, signaled by a header field that would be ignored by B, and so the client displays garbage.


The problem here is that the original request (from client A) generated a response that is not sensible to client B, not merely one that is not "the appropriate representation" (as the result of server-driven negotiation).


One might argue that the proxy P shouldn't be storing status-226 responses in the first place. True in theory, perhaps, but unfortunately RFC2616, section 13.4, says:


         A response received with any [status code other than 200, 203,
         206, 300, 301 or 410] MUST NOT be returned in a reply to a
         subsequent request unless there are cache-control directives or
         another header(s) that explicitly allow it. For example, these
         include the following: an Expires header (section 14.21); a
         "max-age", "s-maxage", "must-revalidate", "proxy-revalidate",
         "public" or "private" cache-control directive (section 14.9).

In other words, the specification allows caching of responses with yet-to-be-defined status codes if the response carries a plausible Cache-Control directive. So unless we ban servers implementing this kind of extension from using these Cache-Control directives at all, the Vary header just won't work.
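The scenario above can be condensed into a short sketch (Python here; all names are illustrative and not taken from any real proxy implementation):

```python
# Hypothetical sketch of the Vary/226 transparency failure; not a real proxy.

# What proxy P stored from client A's exchange:
stored_request = {"A-IM": "vcdiff", "If-None-Match": '"abc"'}
stored_response = {
    "status": 226,
    "headers": {"Etag": '"def"',
                "IM": "vcdiff",
                "Cache-Control": "max-age=60",
                "Vary": "A-IM, If-None-Match"},
    "body": b"<vcdiff delta from instance abc to instance def>",
}

def origin_revalidate(etag: str) -> int:
    """The origin's current instance is still "def", so a conditional
    request carrying that Etag yields 304 Not Modified."""
    return 304 if etag == '"def"' else 200

def proxy_reply(new_request: dict) -> bytes:
    """The "obvious" proxy: Vary only forbids reuse WITHOUT revalidation,
    so on a header mismatch it revalidates using its stored Etag."""
    vary = [h.strip() for h in stored_response["headers"]["Vary"].split(",")]
    match = all(stored_request.get(h) == new_request.get(h) for h in vary)
    if match or origin_revalidate(stored_response["headers"]["Etag"]) == 304:
        return stored_response["body"]          # still delta-encoded!
    return b"<full status-200 body>"

# Client B, ignorant of delta encoding, sends a plain GET:
garbage = proxy_reply({})
```

Client B thus receives the vcdiff-encoded body, signaled by a header it ignores, which is exactly the garbage-display failure described above.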


Significance Medium


Implications Certain plausible extensions to the HTTP/1.1 protocol might not interoperate correctly with older HTTP/1.1 caches, if the extensions depend on an interpretation of Vary that is not the same as is used by the cache implementer.

This would have the effect either of causing hard-to-debug cache transparency failures, or of discouraging the deployment of such extensions, or of encouraging the implementers of such extensions to disable caching entirely.


Indications The problem is visible when hand-simulating plausible message exchanges, especially when using the proposed delta encoding extension. It probably has not been visible in practice yet.




Solution(s)

1. Section 13.4 of the HTTP/1.1 specification should probably be changed to prohibit caching of responses with status codes that the cache doesn't understand, whether or not they include Expires headers and the like. (It might require some care to define what "understands" means, leaving room for future extensions with new status codes.) The behavior in this case needs to be defined as equivalent to "Cache-Control: no-store" rather than "no-cache", since the latter allows revalidation.

Possibly the specification of Vary should require that it be treated as "Cache-Control: no-store" whenever the status code is unknown - that should solve the problem in the scenario given here.

2. Designers of HTTP/1.1 extensions should consider using mechanisms other than Vary to prevent false caching.

It is not clear whether the Vary mechanism is widely implemented in caches; if not, this favors solution #1.


Workaround A cache could treat the presence of a Vary header in a response as an implicit "Cache-control: no-store", except for "known" status codes, even though this is not required by RFC 2616. This would avoid any transparency failures. "Known status codes" for basic HTTP/1.1 caches probably include: 200, 203, 206, 300, 301, 410 (although this list should be re-evaluated in light of the problem discussed here).

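A minimal sketch of this workaround (the KNOWN_STATUS set and function name are illustrative, not from any particular cache implementation):

```python
# Workaround sketch: treat a Vary response with an unrecognized status
# code as "Cache-Control: no-store", even though RFC 2616 does not
# require this.
KNOWN_STATUS = {200, 203, 206, 300, 301, 410}

def may_store(status: int, headers: dict) -> bool:
    if "Vary" in headers and status not in KNOWN_STATUS:
        return False    # implicit "Cache-Control: no-store"
    return "no-store" not in headers.get("Cache-Control", "")

# A 226 response carrying Vary is refused despite its max-age directive,
# while an ordinary 200 response with Vary remains cacheable.
```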
References See [9] for the specification of the delta encoding extension, as well as for an example of the use of a Cache-Control extension instead of "Vary."


Contact Jeff Mogul <>


2.1.2 Client Chaining Loses Valuable Length Meta-Data

Name Client Chaining Loses Valuable Length Meta-Data


Classification Performance


Description HTTP/1.1[3] implementations are prohibited from sending Content-Length headers with any message whose body has been Transfer-Encoded. Because 1.0 clients cannot accept chunked Transfer-Encodings, a 1.1 implementation that must forward the body to a 1.0 client must do so without the benefit of the length information that was discarded earlier in the chain.

Significance Low


Implications Lacking either a chunked transfer encoding or Content-Length indication creates negative performance implications for how the proxy must forward the message body.


In the case of response bodies, the proxy may either forward the response and close the connection to indicate its end, or must utilize store-and-forward semantics to buffer the entire response in order to calculate a Content-Length. The former option defeats the performance benefits of persistent connections in HTTP/1.1 (and their Keep-Alive cousin in HTTP/1.0), as well as creating some responses of ambiguous length. The latter store-and-forward option may not even be feasible given the size of the resource, and it will always introduce increased latency.

Request bodies must undertake the store and forward process as 1.0 request bodies must be delimited by Content-Length headers. As with response bodies this may place unacceptable resource constraints on the proxy and the request may not be able to be satisfied.
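The store-and-forward step can be sketched as follows: the proxy must consume the whole chunked body before it can emit the Content-Length that a 1.0 recipient requires (this toy decoder ignores chunk extensions and trailers):

```python
def dechunk(chunked: bytes) -> bytes:
    """Decode a chunked transfer-coded body into an identity body; the
    entire body must be buffered before its length is known."""
    body, rest = b"", chunked
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line.split(b";")[0], 16)   # hex size; drop extensions
        if size == 0:
            return body                            # trailers ignored
        body, rest = body + rest[:size], rest[size + 2:]  # skip chunk CRLF

identity = dechunk(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
content_length = len(identity)   # only now can the header be written
```

The buffering inherent in `dechunk` is the latency and resource cost the implications above describe.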


Indications The lack of HTTP/1.0 style persistent connections between 1.0 clients and 1.1 proxies, only when accessing 1.1 servers, is a strong indication of this problem.

Solution(s) A clarification to the HTTP specification that allowed the origin's known Content-Length for the identity-encoded document to be carried end to end would alleviate this issue.


Workaround None.


Contact Patrick McManus <>


2.2 Known Architectural Problems
2.2.1 Interception proxies break client cache directives

Name Interception proxies break client cache directives


Classification Architecture


Description HTTP[3] is designed so that the user agent is aware of whether it is connected to an origin server or to a proxy. A user agent that believes it is transacting with an origin server, but is really connected to an interception proxy, may fail to send critical cache-control information it would otherwise have included in its request.

Significance High


Implications Clients may receive data that is not synchronized with the origin even when they request an end to end refresh, because of the lack of inclusion of either a "Cache-control: no-cache" or "must-revalidate" header. These headers have no impact on origin server behavior so may not be included by the browser if it believes it is connected to that resource. Other related data implications are possible as well. For instance, data security may be compromised by the lack of inclusion of "private" or "no-store" clauses of the Cache-control header under similar conditions.


Indications Easily detected by placing fresh (un-expired) content on a caching proxy while changing the authoritative copy, then requesting an end-to-end reload of the data through a proxy in both interception and explicit modes.


Solution(s) Eliminate the need for interception proxies and IP spoofing, which will return correct context awareness to the client.


Workaround Include relevant Cache-Control directives in every request at the cost of increased bandwidth and CPU requirements.


Contact Patrick McManus <>


2.2.2 Interception proxies prevent introduction of new HTTP methods

Name Interception proxies prevent introduction of new HTTP methods


Classification Architecture


Description A proxy that receives a request with a method unknown to it is required to generate an HTTP 501 Error as a response. HTTP methods are designed to be extensible so there may be applications deployed with initial support just for the user agent and origin server. An interception proxy that hijacks requests which include new methods destined for servers that have implemented those methods creates a de-facto firewall where none may be intended.

Significance Medium within interception proxy environments.


Implications Renders new compliant applications useless unless modifications are made to proxy software. Because new methods are not required to be globally standardized it is impossible to keep up to date in the general case.


Solution(s) Eliminate the need for interception proxies. A client receiving a 501 in a traditional HTTP environment may either choose to repeat the request to the origin server directly, or perhaps be configured to use a different proxy.


Workaround Level 5 switches (sometimes called Level 7 or application layer switches) can be used to keep HTTP traffic with unknown methods out of the proxy. However, these devices have heavy buffering responsibilities, still require TCP sequence number spoofing, and do not interact well with persistent connections.


The HTTP/1.1 specification allows a proxy to switch over to tunnel mode when it receives a request with a method or HTTP version it does not understand how to handle.

Contact Patrick McManus <> Henrik Nordstrom <> (HTTP/1.1 clarification)

2.2.3 Interception proxies break IP address-based authentication

Name Interception proxies break IP address-based authentication


Classification Architecture


Description Some web servers are not open for public access, but restrict themselves to accept only requests from certain IP address ranges for security reasons. Interception proxies alter the source (client) IP addresses to that of the proxy itself, without the knowledge of the client/user. This breaks such authentication mechanisms and prohibits otherwise allowed clients access to the servers.


Significance Medium


Implications Creates end user confusion and frustration.


Indications Users may start to see refused connections to servers after interception proxies are deployed.


Solution(s) Use user-based authentication instead of (IP) address-based authentication.


Workaround Use IP filters at the intercepting device (L4 switch) to bypass all requests to the servers concerned.


Contact Keith K. Chau <>


2.2.4 Caching proxy peer selection in heterogeneous networks

Name Caching proxy peer selection in heterogeneous networks


Classification Architecture


Description ICP[4] based caching proxy peer selection in networks with large variance in latency and bandwidth between peers can lead to non-optimal peer selection. For example take Proxy C with two siblings, Sib1 and Sib2, and the following network topology (summarized).

* Cache C's link to Sib1, 2 Mbit/sec with 300 msec latency

* Cache C's link to Sib2, 64 Kbit/sec with 10 msec latency.


ICP[4] does not work well in this context. If a user submits a request to Proxy C for page P that results in a miss, C will send an ICP request to Sib1 and Sib2. Assume both siblings have the requested object P. The ICP_HIT reply will always come from Sib2 before Sib1. However, it is clear that the retrieval of large objects will be faster from Sib1, rather than Sib2.


The problem is more complex because Sib1 and Sib2 can't have a 100% hit ratio. With a hit rate of 10%, it is more efficient to use Sib1 with resources larger than 48K. The best choice depends on at least the hit rate and link characteristics; maybe other parameters as well.

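A simple retrieval-time model (latency plus serialization time, ignoring hit rates and queueing) illustrates the crossover:

```python
def retrieval_time(size_bytes: int, latency_s: float, bandwidth_bps: float) -> float:
    """Time to fetch an object over one link: latency + transfer time."""
    return latency_s + size_bytes * 8 / bandwidth_bps

def sib1(size: int) -> float:   # 2 Mbit/sec link, 300 msec latency
    return retrieval_time(size, 0.300, 2_000_000)

def sib2(size: int) -> float:   # 64 Kbit/sec link, 10 msec latency
    return retrieval_time(size, 0.010, 64_000)

# Sib2 always answers the ICP probe first and wins for small objects,
# but large objects arrive sooner from Sib1 despite its higher latency.
small_winner = "Sib2" if sib2(1_000) < sib1(1_000) else "Sib1"
large_winner = "Sib1" if sib1(100_000) < sib2(100_000) else "Sib2"
```

Folding in hit rates and miss penalties, as the paragraph above does, moves the break-even size but not the basic conclusion: first-to-respond is a poor proxy for fastest-to-deliver.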
Significance Medium


Implications By using the first peer to respond, peer selection algorithms are not optimizing retrieval latency to end users. Furthermore they are causing more work for the high-latency peer since it must respond to such requests but will never be chosen to serve content if the lower latency peer has a copy.


Indications Inherent in design of ICP v1, ICP v2, and any cache mesh protocol that selects peers based upon first response.

This problem is not exhibited by cache digest or other protocols which (attempt to) maintain knowledge of peer contents and only hit peers that are believed to have a copy of the requested page.


Solution(s) This problem is architectural with the peer selection protocols.


Workaround Cache mesh design when using such a protocol should be done in such a way that there is not a high latency variance among peers. In the example presented in the above description the high latency high bandwidth peer could be used as a parent, but should not be used as a sibling.


Contact Ivan Lovric <> John Dilley <>


2.2.5 ICP Performance

Name ICP performance


Classification Architecture(ICP), Performance


Description ICP[4] exhibits O(n^2) scaling properties, where n is the number of participating peer proxies. This can lead ICP traffic to dominate HTTP traffic within a network.

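The scaling claim can be made concrete with a little arithmetic: in a full mesh of n proxies, each local miss triggers n-1 queries, so query traffic grows quadratically with mesh size (function names here are illustrative):

```python
def icp_queries_per_miss(n_proxies: int) -> int:
    """On a local miss, a proxy queries every one of its n-1 peers."""
    return n_proxies - 1

def mesh_queries(n_proxies: int, misses_per_proxy: int) -> int:
    """Total ICP queries across a full mesh: O(n^2) in mesh size."""
    return n_proxies * misses_per_proxy * icp_queries_per_miss(n_proxies)

# Doubling the mesh roughly quadruples ICP traffic:
# a 10-proxy mesh with 100 misses each generates 9000 queries,
# a 20-proxy mesh with the same miss load generates 38000.
```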
Significance Medium


Implications If a proxy has many ICP peers the bandwidth demand of ICP can be excessive. System managers must carefully regulate ICP peering. ICP also leads proxies to become homogeneous in what they serve; if your proxy does not have a document it is unlikely your peers will have it either. Therefore, ICP traffic requests are largely unable to locate a local copy of an object (see [6]).

Indications Inherent in design of ICP v1, ICP v2.

Solution(s) This problem is architectural - protocol redesign or replacement is required to solve it if ICP is to continue to be used.

Workaround Implementation workarounds exist, for example to turn off use of ICP, to carefully regulate peering, or to use another mechanism if available, such as cache digests. A cache digest protocol shares a summary of cache contents using a Bloom Filter technique. This allows a cache to estimate whether a peer has a document. Filters are updated regularly but are not always up-to-date so cannot help when a spike in popularity occurs. They also increase traffic but not as much as ICP.
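As a rough illustration of the cache-digest idea, a Bloom-filter digest might look like the following sketch (hypothetical; real digest formats such as Squid's Cache Digests differ in detail):

```python
import hashlib

class CacheDigest:
    """Toy Bloom-filter digest of cache contents: membership tests can
    yield false positives but never false negatives."""
    def __init__(self, bits: int = 1024, hashes: int = 4):
        self.bits, self.hashes, self.field = bits, hashes, 0

    def _positions(self, url: str):
        # Derive `hashes` bit positions from an MD5 of the URL.
        digest = hashlib.md5(url.encode()).digest()
        return [int.from_bytes(digest[i * 4:i * 4 + 4], "big") % self.bits
                for i in range(self.hashes)]

    def add(self, url: str) -> None:
        for p in self._positions(url):
            self.field |= 1 << p

    def may_contain(self, url: str) -> bool:
        return all(self.field & (1 << p) for p in self._positions(url))

digest = CacheDigest()
digest.add("http://example.com/foo.html")
```

A peer receiving such a digest can cheaply estimate whether a request is worth forwarding; because digests are exchanged periodically, they lag behind the true cache contents, as noted above.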


Proxy clustering protocols, which organize proxies into a mesh, provide another alternative solution. There is ongoing research on this topic.


Contact John Dilley <>


2.2.6 Caching proxy meshes can break HTTP serialization of content

Name Caching proxy meshes can break HTTP serialization of content


Classification Architecture (HTTP protocol)


Description A caching proxy mesh where a request may travel different paths, depending on the state of the mesh and associated caches, can break HTTP content serialization, possibly causing the end user to receive older content than seen on an earlier request, where the request traversed another path in the mesh.


Significance Medium


Implications Can cause end user confusion. May in some situations (sibling cache hit, object has changed state from cacheable to uncacheable) be close to impossible to get the caches properly updated with the new content.


Indications Older content is unexpectedly returned from a caching proxy mesh after some time.


Solutions(s) Work with caching proxy vendors and researchers to find a suitable protocol for maintaining proxy relations and object state in a mesh.


Workaround When designing a hierarchy/mesh, make sure that for each end-user/URL combination there is only one single path in the mesh during normal operation.

Contact Henrik Nordstrom <>


2.3 Known Implementation Problems
2.3.1 User agent/proxy failover

Name User agent/proxy failover


Classification Implementation


Description Failover between proxies at the user agent (using a proxy.pac[8] file) is erratic and no standard behavior is defined. Additionally, behavior is hard-coded into the browser, so that proxy administrators cannot use failover at the user agent effectively.

Significance Medium


Implications Architects are forced to implement failover at the proxy itself, when it may be more appropriate and economical to do it within the user agent.


Indications If a browser detects that its primary proxy is down, it will wait n minutes before trying the next one it is configured to use. It will then wait y minutes before asking the user if they'd like to try the original proxy again. This is very confusing for end users.


Solution(s) Work with browser vendors to establish standard extensions to JavaScript proxy.pac libraries that will allow configuration of these timeouts.


Workaround User education; redundancy at the proxy level.


Contact Mark Nottingham <>


2.3.2 Some servers send bad Content-Length headers for files that contain CR


Name Some servers send bad Content-Length headers for files that contain CR


Classification Implementation


Description Certain web servers send a Content-Length value that is larger than the number of bytes in the HTTP message body. This happens when the server strips CR characters from text files whose lines are terminated with CRLF as the file is written to the client. The server probably uses the stat() system call to obtain the file size for the Content-Length header. Servers that exhibit this behavior include the GN Web server (version 2.14 at least).
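The mismatch is easy to reproduce: stat() counts the CR characters, but the body written to the client omits them. A minimal sketch of the failure mode (an illustration, not the GN server's actual code):

```python
import os
import tempfile

# A text file stored on disk with CRLF line terminators.
raw = b"line one\r\nline two\r\n"
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(raw)

# The faulty server takes Content-Length from stat() ...
content_length = os.stat(path).st_size   # 20 bytes, CRs included

# ... but strips CR characters while writing the body to the client.
body = raw.replace(b"\r", b"")           # 18 bytes actually sent

assert content_length - len(body) == raw.count(b"\r")
os.remove(path)
```

A client that trusts the header will wait for bytes that never arrive, or treat the response as partial.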


Significance Low. Surveys indicate only a small number of sites run faulty servers.


Implications In this case, an HTTP client (e.g., user agent or proxy) may believe it received a partial response. HTTP/1.1 [3] advises that caches MAY store partial responses.


Indications Count the number of bytes in the message body and compare to the Content-Length value. If they differ, the server exhibits this problem.


Solution(s) Upgrade or replace the buggy server.


Workaround Some browsers and proxies use one TCP connection per object and ignore the Content-Length; the end of the document is identified by the close of the TCP socket.


Contact Duane Wessels <>


3. Security Considerations

This memo does not raise security considerations in itself. See the individual submissions for details of security concerns and issues.




References

[1] Paxson, V., Allman, M., Dawson, S., Fenner, W., Griner, J., Heavens, I., Lahey, K., Semke, J. and B. Volz, "Known TCP Implementation Problems", RFC 2525, March 1999.


[2] Cooper, I., Melve, I. and G. Tomlinson, "Internet Web Replication and Caching Taxonomy", RFC 3040, January 2001.


[3] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.


[4] Wessels, D. and K. Claffy, "Internet Cache Protocol (ICP), Version 2", RFC 2186, September 1997.


[5] Davison, B., "Web Traffic Logs: An Imperfect Resource for Evaluation", in Proceedings of the Ninth Annual Conference of the Internet Society (INET'99), July 1999.


[6] Melve, I., "Relation Analysis, Cache Meshes", in Proceedings of the 3rd International WWW Caching Workshop, June 1998, <>.


[7] Krishnamurthy, B. and M. Arlitt, "PRO-COW: Protocol Compliance on the Web", AT&T Labs Technical Report #990803-05-TM, August 1999, <>.


[8] Netscape, Inc., "Navigator Proxy Auto-Config File Format", March 1996.


[9] Mogul, J., Krishnamurthy, B., Douglis, F., Feldmann, A., Goland, Y., van Hoff, A. and D. Hellerstein, "HTTP Delta in HTTP", Work in Progress.


Authors' Addresses


Ian Cooper Equinix, Inc. 2450 Bayshore Parkway Mountain View, CA 94043 USA


Phone: +1 650 316 6065 EMail:


John Dilley Akamai Technologies, Inc. 1400 Fashion Island Blvd Suite 703 San Mateo, CA 94404 USA


Phone: +1 650 627 5244 EMail:


Appendix A. Archived Known Problems


The following sub-sections are an archive of problems identified in the initial production of this memo. These are typically problems requiring further work/research, or user education. They are included here for reference purposes only.


A.1 Architectural


A.1.1 Cannot specify multiple URIs for replicated resources


Name Cannot specify multiple URIs for replicated resources


Classification Architecture


Description There is no way to specify that multiple URIs may be used for a single resource, one for each replica of the resource. Similarly, there is no way to say that some set of proxies (each identified by a URI) may be used to resolve a URI.


Significance Medium


Implications Forces users to understand the replication model and mechanism. Makes it difficult to create a replication framework without protocol support for replication and naming.


Indications Inherent in HTTP/1.0, HTTP/1.1.


Solution(s) Architectural - protocol design is necessary.


Workaround Replication mechanisms force users to locate a replica or mirror site for replicated content.


Contact Daniel LaLiberte <>


A.1.2 Replica distance is unknown


Name Replica distance is unknown


Classification Architecture


Description There is no recommended way to find out which of several servers or proxies is closer either to the requesting client or to another machine, either geographically or in the network topology.


Significance Medium


Implications Clients must guess which replica is closer to them when requesting a copy of a document that may be served from multiple locations. Users must know the set of servers that can serve a particular object. This in general is hard to determine and maintain. Users must understand network topology in order to choose the closest copy. Note that the closest copy is not always the one that will result in quickest service. A nearby but heavily loaded server may be slower than a more distant but lightly loaded server.


Indications Inherent in HTTP/1.0, HTTP/1.1.


Solution(s) Architectural - protocol work is necessary. This is a specific instance of a general problem in widely distributed systems. A general solution is unlikely, however a specific solution in the web context is possible.


Workaround Servers can (many do) provide location hints in a replica selection web page. Users choose one based upon their location. Users can learn which replica server gives them best performance. Note that the closest replica geographically is not necessarily the closest in terms of network topology. Expecting users to understand network topology is unreasonable.
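The "learn which replica performs best" advice amounts to selecting on measured response time rather than distance. A sketch under invented measurements (host names are hypothetical; a real client would probe the replicas itself):

```python
import statistics

def fastest_replica(observed):
    """Pick the replica with the lowest median response time (seconds).

    Note how the geographically 'near' server loses here: it is heavily
    loaded, so the more distant but lightly loaded replica wins.
    """
    return min(observed, key=lambda host: statistics.median(observed[host]))

observed = {
    "near-but-loaded.example": [0.90, 1.10, 1.00],
    "far-but-idle.example":    [0.20, 0.30, 0.25],
}
```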


Contact Daniel LaLiberte <>


A.1.3 Proxy resource location


Name Proxy resource location


Classification Architecture


Description There is no way for a client or server (including another proxy) to inform a proxy of an alternate address (perhaps including the proxy to use to reach that address) to use to fetch a resource. If the client does not trust where the redirected resource came from, it may need to validate it or validate where it came from.


Significance Medium


Implications Proxies have no systematic way to locate resources within other proxies or origin servers. This makes it more difficult to share information among proxies. Information sharing would improve global efficiency.


Indications Inherent in HTTP/1.0, HTTP/1.1.


Solution(s) Architectural - protocol design is necessary.


Workaround Certain proxies share location hints in the form of summary digests of their contents (e.g., Squid). Certain proxy protocols enable a proxy to query another for its contents (e.g., ICP). (See, however, the "ICP Performance" issue (Section 2.2.5).)


Contact Daniel LaLiberte <>


A.2 Implementation


A.2.1 Use of Cache-Control headers


Name Use of Cache-Control headers


Classification Implementation


Description Many (if not most) implementations incorrectly interpret Cache-Control response headers.


Significance High


Implications Cache-Control headers will be spurned by end users if there are conflicting or non-standard implementations.


Indications -


Solution(s) Work with vendors and others to assure proper application


Workaround None.


Contact Mark Nottingham <>


A.2.2 Lack of HTTP/1.1 compliance for caching proxies


Name Lack of HTTP/1.1 compliance for caching proxies


Classification Implementation


Description Although performance benchmarking of caches is starting to be explored, protocol compliance is just as important.


Significance High


Implications Caching proxy vendors implement their interpretation of the specification; because the specification is very large, sometimes vague and ambiguous, this can lead to inconsistent behavior between caching proxies.


Caching proxies need to comply with the specification (or the specification needs to change).


Indications There is no currently known compliance test being used.


There is work underway to quantify how closely servers comply with the current specification. A joint technical report between AT&T and HP Labs [7] describes the compliance testing. This report examines how well each of a set of top traffic-producing sites supports certain HTTP/1.1 features.


The Measurement Factory (formerly IRCache) is working to develop protocol compliance testing software. Running such a conformance test suite against caching proxy products would measure compliance and ultimately help assure that they comply with the specification.


Solution(s) Testing should commence and be reported in an open industry forum. Proxy implementations should conform to the specification.


Workaround There is no workaround for non-compliance.


Contact Mark Nottingham <> Duane Wessels <>


A.2.3 ETag support


Name ETag support


Classification Implementation


Description Available caching proxies appear not to support ETag (strong) validation.


Significance Medium


Implications Last-Modified/If-Modified-Since validation is inappropriate for many requirements, both because of its weakness and its use of dates. Lack of a usable, strong coherency protocol leads developers and end users not to trust caches.


Indications -


Solution(s) Work with vendors to implement ETags; work for better validation protocols.


Workaround Use Last-Modified/If-Modified-Since validation.


Contact Mark Nottingham <>


A.2.4 Servers and content should be optimized for caching


Name Servers and content should be optimized for caching


Classification Implementation (Performance)


Description Many web servers and much web content could be implemented to be more conducive to caching, reducing bandwidth demand and page load delay.


Significance Medium


Implications By making poor use of caches, origin servers encourage longer load times, greater load on caching proxies, and increased network demand.


Indications The problem is most apparent for pages that have low or zero expires time, yet do not change.


Solution(s) -


Workaround Servers could start using unique object identifiers for write-only content: if an object changes it gets a new name, otherwise it is considered to be immutable and therefore have an infinite expire age. Certain hosting providers do this already.
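The unique-object-identifier workaround can be sketched as content-derived naming: the name changes whenever the bytes change, so any given name is effectively immutable and can be served with a far-future expiry. The naming scheme below is hypothetical:

```python
import hashlib

def immutable_name(path, content):
    """Embed a short content hash in the object name.

    Because a new version gets a new name, a cache can treat every name
    as write-once and never needs to revalidate it.
    """
    tag = hashlib.sha1(content).hexdigest()[:12]
    stem, dot, ext = path.rpartition(".")
    return f"{stem}.{tag}.{ext}" if dot else f"{path}.{tag}"
```

Pages then reference, e.g., logo.&lt;hash&gt;.png; publishing a new logo changes the reference, and the old object simply ages out of caches.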


Contact Peter Danzig


A.3 Administration


A.3.1 Lack of fine-grained, standardized hierarchy controls


Name Lack of fine-grained, standardized hierarchy controls


Classification Administration


Description There is no standard for instructing a proxy as to how it should select the parent from which to fetch a given object. Implementations therefore vary greatly, and it can be difficult to make them interoperate correctly in a complex environment.


Significance Medium


Implications Complications in deployment of caches in a complex network (especially corporate networks)


Indications Inability of some proxies to be configured to direct traffic based on domain name, reverse lookup IP address, raw IP address, in normal operation and in failover mode. Inability in some proxies to set a preferred parent / backup parent configuration.


Solution(s) -


Workaround Work with vendors to establish an acceptable configuration within the limits of their product; standardize on one product.


Contact Mark Nottingham <>


A.3.2 Proxy/Server exhaustive log format standard for analysis


Name Proxy/Server exhaustive log format standard for analysis


Classification Administration


Description Most proxy or origin server logs used for characterization or evaluation do not provide sufficient detail to determine cacheability of responses.


Significance Low (for operationality; high significance for research efforts)


Implications Characterizations and simulations are based on non-representative workloads.


See Also W3C Web Characterization Activity, since they are also concerned with collecting high quality logs and building characterizations from them.


Indications -


Solution(s) To properly clean logs and to accurately determine cacheability of responses, a complete log is required, including all request headers (such as "User-agent", for removal of spiders) as well as all response headers (such as "Expires", "max-age", "Set-cookie", "no-cache", etc.)


Workaround -


References See "Web Traffic Logs: An Imperfect Resource for Evaluation"[5] for some discussion of this.


Contact Brian D. Davison <> Terence Kelly <>


A.3.3 Trace log timestamps


Name Trace log timestamps


Classification Administration


Description Some proxies/servers log requests without sufficient timing detail. Millisecond resolution is often too coarse to preserve request ordering, and servers should record either request reception time in addition to completion time, or elapsed time plus either one.


Significance Low (for operationality; medium significance for research efforts)


Implications Characterization and simulation fidelity is improved with accurate timing and ordering information. Since logs are generally written in order of request completion, these logs cannot be re-played without knowing request generation times and reordering accordingly.


See Also -


Indications Timestamps can be identical for multiple entries (when only millisecond resolution is used). Request orderings can be jumbled when clients open additional connections for embedded objects while still receiving the container object.


Solution(s) Since request completion time is common (e.g., Squid), recommend continuing to use it (with microsecond resolution if possible) plus recording elapsed time since request reception.
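With both fields in the log, reception times (and hence the original request ordering) can be recovered at analysis time. A small illustration with invented log values:

```python
def reception_time(completion_s, elapsed_ms):
    # Reception = completion - elapsed; both values appear in the log,
    # so a replay tool can reorder entries without extra instrumentation.
    return completion_s - elapsed_ms / 1000.0

# Two entries written in completion order: the slow request finished
# later even though it was *received* first.
entries = [
    ("GET /small", 100.000200, 50),   # completion (s), elapsed (ms)
    ("GET /big",   100.000500, 900),
]
by_reception = sorted(entries, key=lambda e: reception_time(e[1], e[2]))
```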


Workaround -


References See "Web Traffic Logs: An Imperfect Resource for Evaluation"[5] for some discussion of this.


Contact Brian D. Davison <>


A.3.4 Exchange format for log summaries


Name Exchange format for log summaries


Classification Administration/Analysis?


Description Although we have (more or less) a standard log file format for proxies (plain vanilla Common Logfile and Squid), there isn't a commonly accepted format for summaries of those log files. Summaries could be generated by the cache itself, or by post-processing existing log file formats such as Squid's.


Significance High, since each log file summarizing/analysis tool is essentially reinventing the wheel (unnecessary repetition of code), and the cost of processing a large number of large log files through a variety of analysis tools is (again for no good reason) excessive.


Implications In order to perform a meaningful analysis (e.g., to measure performance in relation to loading/configuration over time) of the access logs from multiple busy caches, it is often necessary to run first one tool and then another, each against the entire log file (or a significantly large subset of it). With log files running into hundreds of MB even after compression (for a cache dealing with millions of transactions per day) this is a non-trivial task.


See Also IP packet/header sniffing - it may be that individual transactions are at a level of granularity which simply isn't sensible to attempt on extremely busy caches. There may also be legal implications in some countries, e.g., if this analysis identifies individuals.


Indications Disks/memory full(!). Stats (using multiple programs) take too long to run. Stats crunching must be distributed out to multiple machines because of its high computational cost.


Solution(s) Have the proxy produce a standardized summary of its activity either automatically or via an external (e.g., third party) tool, in a commonly agreed format. The format could be something like XML or the Extended Common Logfile, but the format and contents are subjects for discussion. Ideally this approach would permit individual cache server products to supply subsets of the possible summary info, since it may not be feasible for all servers to provide all of the information which people would like to see.
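As one illustration of such a summary, a post-processor can reduce Squid-native access log lines (timestamp, elapsed, client, code/status, bytes, method, URL, ...) to a handful of aggregate fields. The summary keys below are invented, not a proposed standard:

```python
from collections import Counter

def summarize(log_lines):
    """Aggregate Squid-style access log lines into a compact summary."""
    codes = Counter()
    total_bytes = 0
    for line in log_lines:
        fields = line.split()
        codes[fields[3].split("/")[0]] += 1   # e.g., TCP_HIT, TCP_MISS
        total_bytes += int(fields[4])         # response size in bytes
    return {
        "requests": sum(codes.values()),
        "bytes": total_bytes,
        "result_codes": dict(codes),
    }
```

Exchanging only such summaries (in whatever format is eventually agreed) avoids shipping and re-parsing hundreds of megabytes of raw logs through each analysis tool.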


Workaround Devise a private summary format for your own personal use - but this complicates or even precludes the exchange of summary info with other interested parties.


References See the web pages for the commonly used cache stats analysis programs, e.g., Calamaris, squidtimes, squidclients, etc.


Contact Martin Hamilton <>


Full Copyright Statement


Copyright (C) The Internet Society (2001). All Rights Reserved.


This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.


The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.






Funding for the RFC Editor function is currently provided by the Internet Society.
