Introduction
1. TCP has both a read buffer and a write buffer. How can we make sure data does not die inside a buffer? TCP is a connection-oriented, reliable protocol, but it is not reliable to that degree; some guarantees still have to come from the application layer.
Main text
Some notes on shutdown and close:
symlinks -d
symlinks: scan/change symbolic links - v1.2 - by Mark Lord
Usage: symlinks [-crsv] dirlist
Flags: -c == change absolute/messy links to relative
-d == delete dangling links
-r == recurse into subdirs
-s == shorten lengthy links (only displayed if -c not specified)
-v == verbose (show all symlinks)
People say rsyslogd loses logs, but never say why; out of curiosity I looked into it.
In the last few days a question came to mind: TCP has buffers on both the sending and the receiving end. We can make data pile up on the server side with kill -19 serverpid (SIGSTOP), or drop packets on the server with iptables so that data piles up on the client side. Either way, the client application knows nothing about it. So if the server now dies unexpectedly, isn't the data sitting in the buffers simply lost? (Presumably yes; I have not verified it.)
Put another way: when transferring a large amount of unformatted data over TCP, suppose that after receiving part of it the receiver no longer wants more (say the disk is nearly full) and only wants to finish processing what is already buffered. How does it tell the sender to stop sending?
The author of rsyslogd wrote an article, http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html, explaining some of the reasons rsyslogd is unreliable; one of them is exactly the scenario described above.
1. How Redis uses the lru field
robj *lookupKey(redisDb *db, robj *key) {
    dictEntry *de = dictFind(db->dict,key->ptr);
    if (de) {
        robj *val = dictGetVal(de);

        /* Update the access time for the ageing algorithm.
         * Don't do it if we have a saving child, as this will trigger
         * a copy on write madness. */
        if (server.rdb_child_pid == -1 && server.aof_child_pid == -1)
            val->lru = server.lruclock;
        return val;
    } else {
        return NULL;
    }
}
In particular:

    val->lru = server.lruclock;

How does this one assignment implement the LRU mechanism?
2. The following backtrace shows the path Redis takes to handle each command:
(gdb) bt
#0  delCommand (c=0xb6cd7000) at db.c:235
#1  0x0805e018 in call (c=c@entry=0xb6cd7000, flags=flags@entry=7) at redis.c:1599
#2  0x080604b0 in processCommand (c=c@entry=0xb6cd7000) at redis.c:1774
#3  0x08068b47 in processInputBuffer (c=c@entry=0xb6cd7000) at networking.c:1013
#4  0x08068c59 in readQueryFromClient (el=0xb6c0a0d0, fd=5, privdata=0xb6cd7000, mask=1) at networking.c:1076
#5  0x08059d19 in aeProcessEvents (eventLoop=eventLoop@entry=0xb6c0a0d0, flags=flags@entry=3) at ae.c:382
#6  0x08059ffc in aeMain (eventLoop=0xb6c0a0d0) at ae.c:425
#7  0x08058dac in main (argc=1, argv=0xbfc82d54) at redis.c:2721
As for which server-side function each command of the Redis protocol maps to: you will not find an if..else or switch for it; looking the command up in redisCommandTable is enough (though quit is not there; it seems to be the only command handled specially).
3. A few more details learned today:
1) Both keys and values are stored as Redis objects, and identical values can be reused (shared objects).
2) Expiration works by putting keys that have a TTL into a separate expires table; this differs from memcached.
4. Redis supports authentication. Of course, this costs an extra request, and the authentication request cannot be pipelined together with other requests.
5. On Redis's LRU:
Redis's LRU is not exact: it does not run LRU over all keys, but randomly samples maxmemory-samples keys and evicts the least recently used among them. Consider whether that is good enough for your use case.
(The probability involved is worth working out.)
Excerpt from http://oldblog.antirez.com/post/redis-as-LRU-cache.html:
maxmemory-samples number_of_samples
The last config option is used to tune the algorithms precision. In order to save memory Redis just adds a 22 bits field to every object for LRU. When we need to remove a key we sample N keys, and remove the one that was idle for longer time. For default three keys are sampled, that is a reasonable approximation of LRU in the long run, but you can get more precision at the cost of some more CPU time changing the number of keys to sample.
What the Redis 2.2 configuration file says about LRU:

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
# operations, when there are not suitable keys for eviction.
#
# At the date of writing this commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3
When learning FastCGI, starting and managing multiple FCGI processes is an unavoidable problem. There is a program called spawn-fcgi that can start several php-cgi processes which all accept() on the same socket. spawn-fcgi and php-cgi come from two different projects, so how do they cooperate so smoothly?
References: http://redmine.lighttpd.net/projects/spawn-fcgi and mainly http://redmine.lighttpd.net/projects/spawn-fcgi/repository/entry/trunk/src/spawn-fcgi.c
From the code: spawn-fcgi listens on a socket, then forks the requested number of children. Each child first dup()s the socket onto the fd that FastCGI expects (apparently an fd fixed by convention), and then hands everything over to the external program (e.g. php-cgi) via exec.
Now look at the FastCGI side in PHP. In the file sapi/cgi/fastcgi.c, the function int fcgi_init(void) shows how PHP decides whether it is running under FastCGI. Roughly: if standard input is a socket (rather than an ordinary file descriptor), it is FastCGI. Code fragment:
#ifdef _WIN32
    if ((GetStdHandle(STD_OUTPUT_HANDLE) == INVALID_HANDLE_VALUE) &&
        (GetStdHandle(STD_ERROR_HANDLE)  == INVALID_HANDLE_VALUE) &&
        (GetStdHandle(STD_INPUT_HANDLE)  != INVALID_HANDLE_VALUE)) {
        char *str;
        DWORD pipe_mode = PIPE_READMODE_BYTE | PIPE_WAIT;
        HANDLE pipe = GetStdHandle(STD_INPUT_HANDLE);

        SetNamedPipeHandleState(pipe, &pipe_mode, NULL, NULL);

        str = getenv("_FCGI_SHUTDOWN_EVENT_");
        if (str != NULL) {
            HANDLE shutdown_event = (HANDLE) atoi(str);
            if (!CreateThread(NULL, 0, fcgi_shutdown_thread,
                              shutdown_event, 0, NULL)) {
                return -1;
            }
        }
        str = getenv("_FCGI_MUTEX_");
        if (str != NULL) {
            fcgi_accept_mutex = (HANDLE) atoi(str);
        }
        return is_fastcgi = 1;
    } else {
        return is_fastcgi = 0;
    }
#else
    errno = 0;
    if (getpeername(0, (struct sockaddr *)&sa, &len) != 0 && errno == ENOTCONN) {
        fcgi_setup_signals();
        return is_fastcgi = 1;
    } else {
        return is_fastcgi = 0;
    }
#endif
Note the use of getpeername(...): only when errno == ENOTCONN is the environment considered FastCGI. So if that socket happened to be in a connected state, it would not be recognized as a FastCGI environment? Apparently so.
Tool download: http://www.fastcgi.com/dist/fcgi.tar.gz
Tool documentation: http://www.fastcgi.com/devkit/doc/fcgi-devel-kit.htm
Commands:
# Start an fcgi server; aUnixSocket is a unix-domain socket
root@phpor-Latitude-E5410:~/Program/fcgi-2.4.1-SNAP-0311112127/examples# ../cgi-fcgi/cgi-fcgi -start -connect aUnixSocket authorizer

# In another terminal; these two exported variables are hard-coded in the authorizer program
root@phpor-Latitude-E5410:~/Program/fcgi-2.4.1-SNAP-0311112127/examples# export REMOTE_USER=$USER REMOTE_PASSWD=xxxx

# Issue an fcgi request
root@phpor-Latitude-E5410:~/Program/fcgi-2.4.1-SNAP-0311112127/examples# ../cgi-fcgi/cgi-fcgi -bind -connect aUnixSocket
Status: 200 OK
Variable-AUTH_TYPE: Basic
Variable-REMOTE_PASSWD:
Variable-PROCESS_ID: 0
Command: connect to a real, running fcgi server
root@phpor-Latitude-E5410:~/Program/fcgi-2.4.1-SNAP-0311112127/examples# env -i SCRIPT_FILENAME=/home/phpor/www/a.php REQUEST_METHOD=GET ../cgi-fcgi/cgi-fcgi -bind -connect :9000
X-Powered-By: PHP/5.4.9-4ubuntu2.1
Content-type: text/html

01:22:34 root@phpor-Latitude-E5410:~/Program/fcgi-2.4.1-SNAP-0311112127/examples#
Only two environment variables are passed here, but these two are the minimum required:

SCRIPT_FILENAME=/home/phpor/www/a.php
REQUEST_METHOD=GET
Protocol specification: http://www.fastcgi.com/devkit/doc/fcgi-spec.html
Tool: Wireshark, which ships a FastCGI (fcgi) protocol dissector, convenient for studying the protocol.
1. If you use FastCGI on Windows, there is no good FastCGI process manager; lighttpd offers Spawn-Fcgi.exe: http://en.wlmp-project.net/
2. When nginx runs on Windows while the FastCGI processes run on Linux, beware that the document root may come out wrong. Given:
location ~ \.php {
    root            /home/phpor/www;
    fastcgi_pass    192.168.1.104:9000;
    fastcgi_index   index.php;
    fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include         fastcgi_params;
}
If nginx lives on the D: drive, the $document_root seen by the FastCGI side becomes d:/home/phpor/www, which is then treated as a relative path, so the script to execute cannot be found. It can be fixed like this:
location ~ \.php {
    root            /home/phpor/www;
    fastcgi_pass    192.168.1.104:9000;
    fastcgi_index   index.php;
    fastcgi_param   SCRIPT_FILENAME /home/phpor/www$fastcgi_script_name;
    include         fastcgi_params;
}
3. nginx does not support plain CGI natively, so configuring CGI will not work either.
4. nginx on Windows is only good for learning.
If you install MSN now, you will be asked to update when you log in; after the update it becomes Skype, which has absorbed MSN: http://skype.tom.com/
To use a Hotmail mailbox, Microsoft Outlook additionally needs a separate OutlookConnector.exe installed; download: http://download.microsoft.com/download/0/0/D/00DF5174-E7FB-4F70-B843-83CBDE509491/OutlookConnector.exe
What's more, that URL is blocked by the GFW; or rather, many Microsoft downloads are blocked. Sigh.