The memcached binary protocol:
http://code.google.com/p/memcached/wiki/MemcacheBinaryProtocol
1. Fold methods
The fold method is chosen with the 'foldmethod' option: :set fdm=*****.
There are 6 methods to choose from:
manual  define folds by hand
indent  deeper indentation means a deeper fold level
expr    define folds with an expression
syntax  define folds from syntax highlighting
diff    fold text that has not changed
marker  fold at markers placed in the text
Note that the fold methods cannot be combined: you cannot use expr and marker at the same time, for example. I mostly alternate between indent and marker.
To switch to marker folding, run :set fdm=marker (fdm is short for foldmethod).
To make folding take effect every time vim starts, add the setting to your .vimrc, e.g. set fdm=syntax, just like any other initialization setting.
2. Fold commands
Once a fold method is chosen, we can fold whatever code we need. Since I use indent and marker somewhat more often, I will take them as examples.
With the indent method, vim automatically folds the bodies of brace blocks, and we can use these ready-made folds directly.
At a foldable position (between the braces):
zc  close the fold
zC  close all nested folds in the current range
zo  open the fold
zO  open all nested folds in the current range
[z  move to the start of the currently open fold
]z  move to the end of the currently open fold
zj  move down to the start of the next fold; closed folds are counted too
zk  move up to the end of the previous fold; closed folds are counted too
With the marker method, folds are delimited by markers placed in the code; the defaults are {{{ and }}}, and it is best not to change them :) (see the example after this list).
Folds can be created and deleted with the following commands:
zf  create a fold; for example, in marker mode:
zf56G creates a fold from the current line to line 56;
10zf, 10zf+, or zf10↓ creates a fold covering the current line and the 10 lines below it;
10zf- or zf10↑ creates a fold covering the current line and the 10 lines above it;
zf% on a bracket creates a fold up to the matching bracket ((), {}, [], <>, etc.).
zd  delete the fold under the cursor; only effective when 'foldmethod' is "manual" or "marker".
zD  delete the folds under the cursor recursively, i.e. nested folds too; only effective when 'foldmethod' is "manual" or "marker".
zE  eliminate all folds in the window; only effective when 'foldmethod' is "manual" or "marker".
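To make marker folding concrete, here is a small PHP file (hypothetical, purely for illustration) that uses the default {{{ and }}} markers inside comments; after :set fdm=marker, vim shows one closed fold per marker pair, which zo/zc then open and close:
<?php
// Configuration {{{
$host = "localhost";
$port = 11211;
// }}}

// Helper functions {{{
function connect($host, $port) {
    return fsockopen($host, $port);
}
// }}}
?>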
As for vim code folding, I am only a beginner myself; take the above as a reference only.
Today is Sina's 2008 annual party, my second annual party at Sina; almost two years have flown by. I want to reminisce but do not know where to start, and I want to take stock but cannot say what I have actually achieved.
I once tested the POST support of the file_get_contents function; now I want to know whether file_get_contents can keep an HTTP connection alive (a persistent connection). Test code:
<?php
$url = "http://phpor.net/";
$opts = array(
    'http' => array(
        'protocol_version' => "1.1",
    )
);
$context = stream_context_create($opts);
echo strlen(file_get_contents($url, false, $context)) . "\n";
echo strlen(file_get_contents($url, false, $context)) . "\n";
?>
This script blows up; on Linux it dies with a segmentation fault.
The following script produces output normally:
<?php
$url = "http://phpor.net/";
$opts = array(
    'http' => array(
        'protocol_version' => "1.1",
        'connection' => 'close'
    )
);
$context = stream_context_create($opts);
echo file_get_contents($url, false, $context) . "\n";
?>
But now we find that the output contains more than the page content: the extra bytes are the chunked transfer encoding. In other words, file_get_contents does not understand chunked encoding at all, so there is no way for it to maintain a persistent connection. (A sketch of decoding such a body by hand follows.)
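To make "chunked" concrete, here is a minimal sketch (my own illustration, not part of the original test) that decodes a chunked-encoded body; it assumes a well-formed body and ignores trailers:
<?php
// Decode an HTTP chunked transfer-encoded body (minimal sketch).
function decode_chunked($body) {
    $decoded = '';
    $offset = 0;
    while (($pos = strpos($body, "\r\n", $offset)) !== false) {
        // Each chunk starts with its size as a hex number on its own line.
        $size = hexdec(trim(substr($body, $offset, $pos - $offset)));
        if ($size === 0) break;            // a zero-size chunk ends the body
        $decoded .= substr($body, $pos + 2, $size);
        $offset = $pos + 2 + $size + 2;    // skip chunk data plus its CRLF
    }
    return $decoded;
}
echo decode_chunked("4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n") . "\n"; // prints "Wikipedia"
?>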
Of course, curl can do it. Fixing the original snippet's undefined $request variable and switching to HTTP/1.1, the same handle reuses one TCP connection across requests:
<?php
$url = "http://phpor.net/";
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($ch, CURLOPT_URL, $url);
// Reusing the same handle reuses the underlying TCP connection.
echo strlen(curl_exec($ch)) . "\n";
echo strlen(curl_exec($ch)) . "\n";
curl_close($ch);
?>
Look at the following two snippets; what is the difference between them?
1.==============a1.html==========================
<script>
window.location.href = "http://phpor.net";
alert('aa');
</script>
2.=============a2.html===========================
<script>
window.location.href = "http://phpor.net";
</script>
<script>
alert('aa');
</script>
Visiting a1.html, the alert never shows up.
Visiting a2.html, the alert does show up.
This tells us something about how the browser handles individual script blocks.
1. Print the current Unix timestamp
date +%s
2. Set the system time
date -s "2008-08-08 08:08:08"
3. Convert a time string to a timestamp
date +%s -d "2008-08-08 08:08:08"
Note: be careful not to type -s where you mean -d, or you will change the system time by accident.
4. Print the time in a given format
date +%F                      # prints: 2009-09-22
date +%F --date='1 days ago'  # prints: 2009-09-21
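For comparison, a sketch of the same conversions in PHP (assumes date.timezone is configured):
<?php
echo time() . "\n";                                  // current Unix timestamp
echo strtotime("2008-08-08 08:08:08") . "\n";        // time string to timestamp
echo date("Y-m-d") . "\n";                           // e.g. 2009-09-22
echo date("Y-m-d", strtotime("1 day ago")) . "\n";   // e.g. 2009-09-21
?>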
For short-lived (non-persistent) connections, you can estimate processing time from how long each connection lasts. First capture packets with tcpdump; tcptrace then reports when each connection opens and closes, and awk does the bookkeeping. The basic command:
tcpdump -i eth1 -w - | tcptrace -xrealtime stdin | awk '{ link = $2 "-" $3; if ($4 == "new") sum[link] = $1; else if ($4 == "connection" && sum[link] > 0) { print link "\t" ($1 - sum[link]) "\t" $6 " " $7 " " $8; delete sum[link]; } }'
It seemed that tcptrace should be able to compute connection durations by itself and I just had not found the option yet.
Found it:
tcptrace -l a.tcpdump | grep "elapsed time"
1. Connection establishment
After a SYN segment is sent, if no response arrives within 75 seconds, connection establishment is aborted. This is also the timeout of a blocking connect() system call.
2. Keepalive
The SO_KEEPALIVE option keeps probing the connection to detect whether the peer host has crashed, so that (for example) a server does not block forever reading from a dead TCP connection. With the option set, if no data crosses the socket in either direction for 2 hours, TCP automatically sends the peer a keepalive probe, a segment the peer must respond to. Three outcomes are possible:
- The peer is up and everything is normal: it answers with the expected ACK, and after another 2 hours TCP sends the next probe.
- The peer has crashed and rebooted: it answers with RST; the socket's pending error is set to ECONNRESET and the socket is closed.
- The peer does not respond at all: Berkeley-derived TCPs send another 8 probes, 75 seconds apart, trying to get a response, and give up 11 minutes 15 seconds after the first probe; the pending error is set to ETIMEDOUT and the socket is closed. If an ICMP error such as "host unreachable" comes back, the peer host has not crashed but is unreachable, and the pending error is set to EHOSTUNREACH instead.
Note the three numbers involved:
1. idle time before the first keepalive probe: 2 hours
2. interval between keepalive probes: 75 seconds
3. number of probes before declaring the connection dead: 10
That is, detecting a dead peer takes up to 2 hours + 75 seconds * 10 = 7950 seconds in total.
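As a sketch of enabling keepalive from PHP (assumes the sockets extension; the three timer values above are kernel settings, e.g. Linux's tcp_keepalive_* sysctls, not parameters of this call):
<?php
// Enable TCP keepalive on a client socket (sketch).
$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($sock, "127.0.0.1", 80);               // hypothetical peer
socket_set_option($sock, SOL_SOCKET, SO_KEEPALIVE, 1);
// From now on, an idle connection is probed per the kernel's keepalive timers.
var_dump(socket_get_option($sock, SOL_SOCKET, SO_KEEPALIVE)); // int(1)
socket_close($sock);
?>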
3. Retransmission
The retransmission timer is set when data is sent. Its timeout is computed dynamically: it depends on the RTT and on how many times the segment has already been retransmitted (the retransmission count does not appear in the formulas below, but BSD implementations do use it).
The RTT estimator, where R is the smoothed RTT, M is the measured RTT, and the recommended value of a is 0.9:
R = a * R + (1 - a) * M
The retransmission timeout RTO, with recommended value b = 2:
RTO = R * b
For example, with R = 100 ms and a new measurement M = 200 ms, R becomes 0.9 * 100 + 0.1 * 200 = 110 ms, so RTO = 220 ms; a numeric sketch follows.
Exception: fast retransmit. Receiving 3 duplicate ACKs triggers an immediate retransmission.
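A tiny numeric sketch of the estimator (illustration only; real stacks also track RTT variance, per Jacobson's algorithm):
<?php
// Smoothed RTT estimator: R = a*R + (1-a)*M, RTO = b*R.
function update_rto(&$R, $M, $a = 0.9, $b = 2.0) {
    $R = $a * $R + (1 - $a) * $M;
    return $b * $R;
}
$R = 100.0; // current estimate in ms
foreach (array(200, 150, 400) as $M) {
    $rto = update_rto($R, $M);
    printf("measured %3d ms -> R = %.1f ms, RTO = %.1f ms\n", $M, $R, $rto);
}
?>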
4. Delayed and piggybacked ACKs
When the TCP stack receives data, it does not ACK immediately; it waits up to 200 ms (the recommended value, which must not exceed 500 ms), and if user data needs to be sent within that window, the ACK is piggybacked onto it.
5. The FIN_WAIT_2 timer
When one end is ready to close the connection, it calls close() and sends a FIN, moving to the FIN_WAIT_1 state; when the ACK of that FIN arrives, it moves to FIN_WAIT_2.
To avoid waiting forever for a peer that never sends its FIN, the end waits 10 minutes and then a further 75 seconds (the timer really is set twice, hence the two figures); on timeout the connection is closed.
6. The 2MSL timer
MSL is the maximum segment lifetime; RFC 1122 recommends 2 minutes, but traditional BSD implementations use 30 seconds. When a connection enters TIME_WAIT, i.e. when it is closed actively, the timer starts at twice the MSL. Only after it expires can the port pair the connection used be reused. This avoids some unexpected corner cases; section 18.6.1 of TCP/IP Illustrated, Volume 1 gives an extreme example.
7. Quiet time
The 2MSL wait prevents a delayed segment from an earlier incarnation of a connection from being interpreted as part of a new connection that uses the same socket pair. But this only works while the host holding the connection in the 2MSL wait stays up.
What if a host with ports in the 2MSL wait crashes, reboots within MSL seconds, and immediately establishes new connections using the same socket pairs that were in the 2MSL wait before the crash? Delayed segments from the pre-crash connections would then be misinterpreted as belonging to the new post-reboot connections, no matter how the new connections' initial sequence numbers are chosen.
To prevent this, RFC 793 states that TCP must not create any connections for MSL seconds after rebooting. This is called the quiet time.
8. The SO_LINGER time
This option changes the behavior of close(). If the l_onoff field of the linger structure is nonzero and the timeout is zero, close() does not block and returns immediately, regardless of whether queued data is still unsent or unacknowledged. This kind of close sends an RST segment and discards the unsent data; a read at the remote end fails with ECONNRESET.
If SO_LINGER is set with a nonzero timeout, close() blocks the process until the remaining data has been sent or the timeout expires. This is called a "graceful" close. If the socket is non-blocking and SO_LINGER is set with a nonzero timeout, close() returns with EWOULDBLOCK.
If SO_DONTLINGER is set on a stream socket (equivalent to SO_LINGER with l_onoff set to zero), close() returns immediately; however, any queued data is still sent before the socket actually closes, if possible.
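A sketch of the abortive close from PHP (assumes the sockets extension; l_onoff = 1 with l_linger = 0 is the RST-on-close case described above):
<?php
// Abortive close: close() sends RST and discards queued data (sketch).
$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($sock, "127.0.0.1", 80);               // hypothetical peer
socket_set_option($sock, SOL_SOCKET, SO_LINGER,
                  array('l_onoff' => 1, 'l_linger' => 0));
socket_close($sock); // RST instead of the normal FIN handshake
?>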
Here is a commonly seen function for obtaining the client IP:
<?php
function getip()
{
    // Trust proxy-supplied headers first, then fall back to the socket address.
    if (getenv("HTTP_CLIENT_IP") && getenv("HTTP_CLIENT_IP") != "unknown")
        $ip = getenv("HTTP_CLIENT_IP");
    elseif (getenv("HTTP_X_FORWARDED_FOR") && getenv("HTTP_X_FORWARDED_FOR") != "unknown")
        $ip = getenv("HTTP_X_FORWARDED_FOR");
    elseif (getenv("REMOTE_ADDR") && getenv("REMOTE_ADDR") != "unknown")
        $ip = getenv("REMOTE_ADDR");
    else
        $ip = "none";
    return $ip;
}
?>
Here is a test script that spoofs those headers, showing that a client can make getip() return any value it likes:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
// Note: a second CURLOPT_HTTPHEADER call replaces the first,
// so both forged headers must be passed in a single array.
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'X_FORWARDED_FOR: 158.10.10.10',
    'CLIENT_IP: 159.10.10.10',
));
curl_setopt($ch, CURLOPT_URL, "http://10.10.10.21/test.php");
echo curl_exec($ch);
curl_close($ch);
exit;
?>
HTTP/1.0: RFC 1945
HTTP/1.1: RFC 2068
The Warning header
HTTP status codes indicate the success or failure of a request. For a successful response, the status code cannot provide additional advisory information, in part because the placement of the status code in the Status-Line, instead of in a header field, prevents the use of multiple status codes.
HTTP/1.1 introduces a Warning header, which may carry any number of subsidiary status indications. The intent is to allow a sender to advise the recipient that something may be unsatisfactory about an ostensibly successful response.
HTTP/1.1 defines an initial set of Warning codes, mostly related to the actions of caches along the response path. For example, a Warning can mark a response as having been returned by a cache during disconnected operation, when it is not possible to validate the cache entry with the origin server.
The Warning codes are divided into two classes, based on the first digit of the 3-digit code. One class of warnings must be deleted after a successful revalidation of a cache entry; the other class must be retained with a revalidated cache entry. Because this distinction is made based on the first digit of the code, rather than through an exhaustive listing of the codes, it is extensible to Warning codes defined in the future.
Other new status codes
There are 24 new status codes in HTTP/1.1; we have discussed 100 (Continue), 206 (Partial Content), and 300 (Multiple Choices) elsewhere in this paper. A few of the more notable additions include
* 409 (Conflict), returned when a request would conflict with the current state of the resource. For example, a PUT request might violate a versioning policy.
* 410 (Gone), used when a resource has been removed permanently from a server, and to aid in the deletion of any links to the resource.
Most of the other new status codes are minor extensions.
HTTP/1.0 does not define any 1xx status codes, and they are not a valid response to an HTTP/1.0 request. The following status codes are new in HTTP/1.1 and undefined in HTTP/1.0:
100 Continue
101 Switching Protocols
203 Non-Authoritative Information
205 Reset Content
206 Partial Content
303 See Other
305 Use Proxy
402 Payment Required
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
410 Gone
411 Length Required
412 Precondition Failed
413 Request Entity Too Large
414 Request-URI Too Long
415 Unsupported Media Type
504 Gateway Timeout
505 HTTP Version Not Supported
Changes to Simplify Multi-homed Web Servers and Conserve IP Addresses
The requirements that clients and servers support the Host request-header, report an error if the Host request-header (section 14.23) is missing from an HTTP/1.1 request, and accept absolute URIs (section 5.1.2) are among the most important changes defined by this specification.
Older HTTP/1.0 clients assumed a one-to-one relationship of IP addresses and servers; there was no other established mechanism for distinguishing the intended server of a request than the IP address to which that request was directed. The changes outlined above will allow the Internet, once older HTTP clients are no longer common, to support multiple Web sites from a single IP address, greatly simplifying large operational Web servers, where allocation of many IP addresses to a single host has created serious problems. The Internet will also be able to recover the IP addresses that have been allocated for the sole purpose of allowing special-purpose domain names to be used in root-level HTTP URLs. Given the rate of growth of the Web, and the number of servers already deployed, it is extremely important that all implementations of HTTP (including updates to existing HTTP/1.0 applications) correctly implement these requirements:
o Both clients and servers MUST support the Host request-header.
o Host request-headers are required in HTTP/1.1 requests.
o Servers MUST report a 400 (Bad Request) error if an HTTP/1.1 request does not include a Host request-header.
o Servers MUST accept absolute URIs.
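A sketch of what this looks like on the wire: two requests to the same IP address are routed to different sites purely by the Host header (hypothetical hostnames; the IP reuses the test server from the earlier getip example):
<?php
// Name-based virtual hosting: the Host header selects the site (sketch).
foreach (array("site-a.example", "site-b.example") as $host) {
    $fp = fsockopen("10.10.10.21", 80, $errno, $errstr, 5);
    fwrite($fp, "GET / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n");
    echo substr(stream_get_contents($fp), 0, 100) . "\n"; // first bytes of each response
    fclose($fp);
}
?>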
HTTP/1.0 experimental implementations of persistent connections are faulty, and the new facilities in HTTP/1.1 are designed to rectify these problems. The problem was that some existing 1.0 clients may be sending Keep-Alive to a proxy server that doesn't understand Connection, which would then erroneously forward it to the next inbound server, which would establish the Keep-Alive connection and result in a hung HTTP/1.0 proxy waiting for the close on the response. The result is that HTTP/1.0 clients must be prevented from using Keep-Alive when talking to proxies.
The original HTTP/1.0 form of persistent connections:
When it connects to an origin server, an HTTP client MAY send the Keep-Alive connection-token in addition to the Persist connection-token:
Connection: Keep-Alive
An HTTP/1.0 server would then respond with the Keep-Alive connection token and the client may proceed with an HTTP/1.0 (or Keep-Alive) persistent connection.
An HTTP/1.1 server may also establish persistent connections with HTTP/1.0 clients upon receipt of a Keep-Alive connection token. However, a persistent connection with an HTTP/1.0 client cannot make use of the chunked transfer-coding, and therefore MUST use a Content-Length for marking the ending boundary of each message.
A client MUST NOT send the Keep-Alive connection token to a proxy server as HTTP/1.0 proxy servers do not obey the rules of HTTP/1.1 for parsing the Connection header field.
In HTTP/1.0, most implementations used a new connection for each request/response exchange. In HTTP/1.1, a connection may be used for one or more request/response exchanges, although connections may be closed for a variety of reasons.
Web users speak many languages and use many character sets. Some Web resources are available in several variants to satisfy this multiplicity. HTTP/1.0 included the notion of content negotiation, a mechanism by which a client can inform the server which language(s) and/or character set(s) are acceptable to the user.
Content negotiation has proved to be a contentious and confusing area. Some aspects that appeared simple at first turned out to be quite difficult to resolve. For example, although current IETF practice is to insist on explicit character set labeling in all relevant contexts, the existing HTTP practice has been to use a default character set in most contexts, but not all implementations chose the same default. The use of unlabeled defaults greatly complicates the problem of internationalizing the Web.
HTTP/1.0 provided a few features to support content negotiation, but RFC1945 never uses that term and devotes less than a page to the relevant protocol features. The HTTP/1.1 specification specifies these features with far greater care, and introduces a number of new concepts.
The goal of the content negotiation mechanism is to choose the best available representation of a resource. HTTP/1.1 provides two orthogonal forms of content negotiation, differing in where the choice is made:
1. In server-driven negotiation, the more mature form, the client sends hints about the user's preferences to the server, using headers such as Accept-Language, Accept-Charset, etc. The server then chooses the representation that best matches the preferences expressed in these headers.
2. In agent-driven negotiation, when the client requests a varying resource, the server replies with a 300 (Multiple Choices) response that contains a list of the available representations and a description of each representation's properties (such as its language and character set). The client (agent) then chooses one representation, either automatically or with user intervention, and resubmits the request, specifying the chosen variant.
Although the HTTP/1.1 specification reserves the Alternates header name for use in agent-driven negotiation, the HTTP working group never completed a specification of this header, and server-driven negotiation remains the only usable form.
Some users may speak multiple languages, but with varying degrees of fluency. Similarly, a Web resource might be available in its original language, and in several translations of varying faithfulness. HTTP introduces the use of quality values to express the importance or degree of acceptability of various negotiable parameters. A quality value (or qvalue) is a fixed-point number between 0.0 and 1.0. For example, a native speaker of English with some fluency in French, and who can impose on a Danish-speaking office-mate, might configure a browser to generate requests including
Accept-Language: en, fr;q=0.5, da;q=0.1
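A sketch of how a server might rank such preferences (my own PHP illustration; real implementations also handle wildcards and language-range matching):
<?php
// Parse an Accept-Language header into language => qvalue, best first (sketch).
function parse_accept_language($header) {
    $prefs = array();
    foreach (explode(',', $header) as $item) {
        $parts = array_map('trim', explode(';', $item));
        $lang = array_shift($parts);
        $q = 1.0; // default quality when no q= parameter is present
        foreach ($parts as $p) {
            if (strncmp($p, 'q=', 2) === 0) {
                $q = (float) substr($p, 2);
            }
        }
        $prefs[$lang] = $q;
    }
    arsort($prefs); // highest qvalue first
    return $prefs;
}
print_r(parse_accept_language("en, fr;q=0.5, da;q=0.1"));
// Array ( [en] => 1 [fr] => 0.5 [da] => 0.1 )
?>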
Because the content-negotiation mechanism allows qvalues and wildcards, and expresses variation across many dimensions (language, character-set, content-type, and content-encoding), the automated choice of the "best available" variant can be complex and might generate unexpected outcomes. These choices can interact with caching in subtle ways.
Content negotiation promises to be a fertile area for additional protocol evolution. For example, the HTTP working group recognized the utility of automatic negotiation regarding client implementation features, such as screen size, resolution, and color depth. The IETF has created the Content Negotiation working group to carry forward with work in the area.
A server MUST NOT send transfer-codings to an HTTP/1.0 client.