Notes on the PHP Session Mechanism

Implementing a session handler in PHP (at the C extension level) requires the following functions:
#define PS_OPEN_FUNC(x)     int ps_open_##x(PS_OPEN_ARGS)
#define PS_CLOSE_FUNC(x)     int ps_close_##x(PS_CLOSE_ARGS)
#define PS_READ_FUNC(x)     int ps_read_##x(PS_READ_ARGS)
#define PS_WRITE_FUNC(x)     int ps_write_##x(PS_WRITE_ARGS)
#define PS_DESTROY_FUNC(x)     int ps_delete_##x(PS_DESTROY_ARGS)
#define PS_GC_FUNC(x)         int ps_gc_##x(PS_GC_ARGS)
#define PS_CREATE_SID_FUNC(x)    char *ps_create_sid_##x(PS_CREATE_SID_ARGS)

See: ext/session/php_session.h

For when these functions are triggered and called, see: ext/session/session.c
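Six of these hooks (all but the SID creator) also have a userland counterpart: session_set_save_handler() accepts the open/close/read/write/destroy/gc callbacks directly, which is handy for experimenting without writing a C extension. A minimal file-based sketch; the /tmp paths and function names here are illustrative, not from the extension:

    <?php
    function my_open($save_path, $name) { return true; }
    function my_close() { return true; }
    function my_read($id) {
        $file = '/tmp/sess_' . $id;                   // illustrative location
        return file_exists($file) ? (string) file_get_contents($file) : '';
    }
    function my_write($id, $data) {
        return file_put_contents('/tmp/sess_' . $id, $data) !== false;
    }
    function my_destroy($id) {
        $file = '/tmp/sess_' . $id;
        if (file_exists($file)) unlink($file);
        return true;
    }
    function my_gc($maxlifetime) {
        foreach (glob('/tmp/sess_*') as $file) {
            if (filemtime($file) + $maxlifetime < time()) unlink($file);
        }
        return true;
    }
    session_set_save_handler('my_open', 'my_close', 'my_read',
                             'my_write', 'my_destroy', 'my_gc');
    session_start();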

For a working example of a session handler, see the memcache extension (other extensions have them too); for instance, its close function:

 
    PS_CLOSE_FUNC(memcache)
    {
        mmc_pool_t *pool = PS_GET_MOD_DATA();
        if (pool) {
            mmc_pool_free(pool TSRMLS_CC);
            PS_SET_MOD_DATA(NULL);
        }
        return SUCCESS;
    }

Normally the session is written back only once per request, at the end of the request. If you need to force a write-back mid-request, you can call session_write_close(). But be careful: after the write-back the session is in the closed state, and assignments to the $_SESSION array will no longer be written back unless you call session_start() again.
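A minimal sketch of this behavior (core session functions only):

    <?php
    session_start();
    $_SESSION['step'] = 1;
    session_write_close();   // forces the write-back now; the session is closed

    $_SESSION['step'] = 2;   // NOT persisted: the session is already closed

    session_start();         // reopen the session (this re-reads the stored
                             // data, discarding the unsaved assignment above)
    $_SESSION['step'] = 2;   // this will be written back at the end of the request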

First Look at WebSocket

WebSocket is one of HTML5's features. I assumed it was something magical, but after putting together a test case I found that at the protocol level it is extremely simple.
First, a connection is established with an HTTP-like handshake.
Then a single connection can send and receive messages any number of times; in the draft protocol each message essentially starts with a 0x00 byte and ends with a 0xFF byte.

Differences from an ordinary HTTP request:
1. With plain HTTP, at least at the application level, one TCP connection completes only one exchange.
2. Because a WebSocket connection carries many exchanges, most of the repeated HTTP header overhead is saved.

Test code: http://code.google.com/p/phpwebsocket/

References:
http://dev.w3.org/html5/websockets/
http://www.zendstudio.net/archives/websocket-protocol/

The original post showed a sample protocol stream here (requests in red, responses in blue); the colored capture is not reproduced.
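As a rough sketch of the draft framing described above (assuming $socket is a stream on which the WebSocket handshake has already completed; the function names are illustrative):

    <?php
    // Draft-era framing: 0x00 <UTF-8 payload> 0xFF.
    function ws_send($socket, $message) {
        fwrite($socket, "\x00" . $message . "\xFF");
    }

    function ws_receive($socket) {
        fread($socket, 1);                        // consume the leading 0x00
        $data = '';
        while (($byte = fread($socket, 1)) !== false
                && $byte !== '' && $byte !== "\xFF") {
            $data .= $byte;
        }
        return $data;                             // payload without the markers
    }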

Talking About VPN Again

Today when I visited my blog, an error page came back (screenshot not reproduced here).

I thought my blog was broken, but a look with HttpWatch showed the response headers were:
HTTP/1.1 504 Proxy Timeout
Via: 1.1 VPN4

Then I remembered that I had dialed the VPN; presumably the VPN did not feel like proxying anymore. Although I am connected to the VPN, I only use it to reach intranet resources; visiting my own blog does not need to go through it, so tweaking the local routing table should sort this out.

1. nslookup phpor.net  →  66.147.244.189
2. ipconfig
       PPP adapter 新浪VPN (PPTP):  default gateway: 0.0.0.0
       Wireless LAN adapter Wireless Network Connection 2:  default gateway: 192.168.1.1
3. Since I am on the wireless connection, adding a host route for phpor.net via 192.168.1.1 does the trick:
       route add 66.147.244.189 mask 255.255.255.255 192.168.1.1
4. Visit phpor.net again: OK
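One caveat: on Windows, routes added this way are lost after a reboot; adding the -p flag makes the route persistent:

    route -p add 66.147.244.189 mask 255.255.255.255 192.168.1.1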

5 Ways to Speed Up Your Site

22 Jun 2006

Throughout the blogosphere I’m always seeing these blogs that, while they look great, are horribly slow and overburdened. Over the past few months I have become somewhat of a website optimization specialist, bringing my own site from an over 250kB homepage to its current 34kB. I will help you achieve some of the same success with a few powerful tips. Most of these are common sense, but I can’t stress their importance enough. I will concentrate on the website and not the server in this article, as there are too many things to discuss when it comes to server optimization.

1) Reduce Overall Latency by Reducing HTTP Requests

Every HTTP request, or loading each item on your website, has an average round-trip latency of 0.2 seconds. So if your site is loading 20 items, regardless of whether they are stylesheets, images or scripts, that equates to 4 seconds in latency alone (on your average broadband connection). If you have a portion of your site with many images within it, such as your footer, you can reduce the number of HTTP requests with image maps. I discussed that in more depth at the end of this article. If you are using K2, you can get rid of one extra HTTP request by using one stylesheet, style.css, and no schemes (integrate what was in your scheme into the main stylesheet).

Don’t Rely on Other Sites!

If you have several components on your site loading from other websites, they are slowing you down. A bunch of HTTP requests from the same server is bad enough, but HTTP requests from different servers incur increased latency and can be critical to your site’s loading time if one of those servers is down. For example, when the Yahoo! ads server was acting weird one day my site seemingly hesitated to load as it waited on the Yahoo! server before loading the rest of my content. Hence, I don’t run Yahoo! ads anymore. I don’t trust anyone else’s server and neither should you. The only thing on this site served from another is the FeedBurner counter.

2) Properly Save Your Images

One huge mistake people make is saving their images in Photoshop the regular way. Photoshop has a "save for web" feature for a reason; use it. But that’s not enough. You must experiment with different settings and file formats. I’ve found that my header/footers fare well as either PNGs or GIFs. One major contributor to image size is the palette, or number of colors used in the image. Gradients are pure evil when it comes to image size. Just changing the way my header text was formatted and replacing the gradient with a color overlay (or just reducing the opacity of the gradient) saved a few kilobytes. However, if you must keep your gradient you can experiment with the websnap feature, which removes similar colors from the palette. But if you get carried away, it can make your image look horrible. Spend some time in Photoshop saving images for the web with different settings. Once you have honed this skill, you can shave off many kilobytes throughout your site. Also, if you use the FeedBurner counter chicklet you can save roughly 2.1kB by opting to use the non-animated, static version.

3) Compression

Along with reducing HTTP requests comes decreasing the size of each request. We covered this case when it comes to images, but what about other aspects of the site? You can save a good deal of space by compressing the CSS, JS and PHP used on your site. Ordinarily compressing PHP wouldn’t do anything since it’s a server-side scripting language, but when it’s used to structure your site or blog, as it commonly is, compressing the PHP in the form of removing all whitespace can help out. If you run WordPress, you can save 20kB or more by enabling WP Admin » Options » Reading » WordPress should compress articles (gzip) if browsers ask for them. Keep in mind, however, that if you receive mass traffic one day you might want to disable that setting if your webhost gets easily ruffled by high CPU usage.
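Outside WordPress, the same gzip effect can be had from plain PHP via the zlib output handler; a minimal sketch, assuming the zlib extension is loaded:

    <?php
    // Compress all page output with gzip when the browser sends
    // Accept-Encoding: gzip; ob_gzhandler silently falls back to
    // uncompressed output for clients that do not support it.
    ob_start('ob_gzhandler');
    ?>
    <html><body>...page content...</body></html>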

The problem with compressing any of your files is that it makes editing them a pain. That’s why I try to keep two versions of the same file, a compressed version and an uncompressed version. As for PHP compression, I generally go through the files by hand and remove any whitespace. When it comes to CSS, I usually do the same thing but have found CSS Tweak to be helpful when dealing with larger files. But do keep in mind that if you compress your main style.css for WordPress with default CSS Tweak settings, it will remove the comments at the top that set up the theme. Be sure to add that piece back after you’ve compressed it or WordPress won’t recognize your theme. When it comes to compressing JavaScript, this site has you covered. However, use the "crunch" feature, as I’ve received weird results using "compress."

Alternatively, you can check out my method of CSS compression utilizing PHP.

4) Avoid JavaScript Where Possible

In addition to adding HTTP requests and size to the site, the execution of the JavaScript (depending on what it does) can slow your site. Things like Live Search, Live Comments and Live Archives are tied to large JS files that like to keep your readers’ browsers busy. The less, the better.

5) Strip Extraneous PHP/MySQL Calls

This step is probably only worth pursuing once you have completely exhausted the other tips. The K2 theme my site is vaguely based upon originally comes with support for many plugins and features, many of which I don’t use. By going through each file and removing the PHP calls for plugins I’m not using or features I don’t need, I can take some of the load off of the server. When it comes time to hit the frontpage of Digg or Slashdot, your server will more than thank you. Part of this can be accomplished by hardcoding items where feasible. Things in code that don’t change in your installation, such as the name of your blog or your feed or stylesheet location, can be hardcoded. In K2 these items rely on a WordPress PHP tag such as bloginfo. It’s hard to explain what sorts of things you can strip from your website’s PHP framework, but be on the lookout for things you don’t use on your site. For example, in the K2 comments file there is a PHP if/else that checks whether live comments are enabled and uses them if so. Since I don’t use live comments, I can completely remove the if part and write it so that regular comments are always used.
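For example, a stylesheet link in a WordPress theme header typically goes through the bloginfo template tag and can be replaced with the literal URL for your installation (the URL below is a placeholder):

    <!-- Before: a WordPress template tag resolved on every page view -->
    <link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>" />

    <!-- After: hardcoded for this installation (placeholder URL) -->
    <link rel="stylesheet" href="http://example.com/wp-content/themes/k2/style.css" />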

Also, using too many WordPress plugins can be a bad thing, especially if those plugins depend on many MySQL queries, which generally take much, much longer to execute than PHP and can slow a whole page down.

Miscellaneous Thoughts

Even if you don’t call on a piece of CSS that has an image, it is still loaded – so you might want to rethink using that one CSS selector that hardly gets called. When it comes to using a pre-made theme for your CMS, it’s a good idea to go through the CSS and look for things that aren’t used. For example, with K2 there was a bit of CSS defined for styling sub-pages. I don’t have any sub-pages so I removed that piece of CSS.

If your site is maintained using a CMS of some sort, you likely have several plugins, if not dozens, running behind the scenes. Going along with the theme of things, you will want to deactivate any plugins that aren’t mission critical. They use server resources and add to the PHP processing load.

 

from: http://paulstamatiou.com/5-ways-to-speed-up-your-site

A Few Small PHP Quiz Questions

 
    <?php
    $c = 5;
    echo $c."\n";
    echo m(); echo "\n";
    echo $c."\n";
    function m() {
        static $c = 9;
        return $c++;
    }
    ?>

Output:
5
9
5

Key points:

1. Keep the concepts of static and global clearly separated: the $c inside m() is a function-local static variable, so it never touches the outer $c.
2. A return statement also respects the difference between $c++ and ++$c: the post-increment returns the old value (9) and only then increments.
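For contrast, switching the function to pre-increment changes the second line of output from 9 to 10 (a small variation sketch):

    <?php
    function m() {
        static $c = 9;
        return ++$c;   // pre-increment: $c becomes 10 first, then 10 is returned
    }
    echo m();          // prints 10
    ?>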
 
    <?php
    $i = 2;
    $j = &$i;
    unset($j);
    echo $i;
    ?>

Output:
2

Key point:
unset() merely breaks the reference: it removes the name $j, while the value that $i points to is untouched.
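Conversely, unsetting the original name leaves the alias intact, because unset() only removes one symbol, not the underlying value (a small sketch):

    <?php
    $i = 2;
    $j = &$i;
    unset($i);   // removes only the name $i; the value survives
    echo $j;     // still prints 2
    ?>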

HTTP: must-revalidate

The must-revalidate directive may appear in HTTP headers; I never paid much attention to it before. Roughly, it means this:
If the server specifies an expiration or freshness lifetime for a resource, and also declares a modification time or an ETag-style validator, a question arises: within the freshness lifetime, when the resource is used, should the agent check back with the server (using the modification time or ETag) that the resource is still current? If nothing is stated explicitly, the agent falls back on its own default behavior. If the server declares must-revalidate, then once the entry becomes stale, every further use of it requires revalidation with the origin server.

相关参考: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.4

must-revalidate
      Because a cache MAY be configured to ignore a server’s specified expiration time, and because a client request MAY include a max-stale directive (which has a similar effect), the protocol also includes a mechanism for the origin server to require revalidation of a cache entry on any subsequent use. When the must-revalidate directive is present in a response received by a cache, that cache MUST NOT use the entry after it becomes stale to respond to a subsequent request without first revalidating it with the origin server. (I.e., the cache MUST do an end-to-end revalidation every time, if, based solely on the origin server’s Expires or max-age value, the cached response is stale.)

      The must-revalidate directive is necessary to support reliable operation for certain protocol features. In all circumstances an HTTP/1.1 cache MUST obey the must-revalidate directive; in particular, if the cache cannot reach the origin server for any reason, it MUST generate a 504 (Gateway Timeout) response.

      Servers SHOULD send the must-revalidate directive if and only if failure to revalidate a request on the entity could result in incorrect operation, such as a silently unexecuted financial transaction. Recipients MUST NOT take any automated action that violates this directive, and MUST NOT automatically provide an unvalidated copy of the entity if revalidation fails.

      Although this is not recommended, user agents operating under severe connectivity constraints MAY violate this directive but, if so, MUST explicitly warn the user that an unvalidated response has been provided. The warning MUST be provided on each unvalidated access, and SHOULD require explicit user confirmation.

proxy-revalidate
      The proxy-revalidate directive has the same meaning as the must-revalidate directive, except that it does not apply to non-shared user agent caches. It can be used on a response to an authenticated request to permit the user’s cache to store and later return the response without needing to revalidate it (since it has already been authenticated once by that user), while still requiring proxies that service many users to revalidate each time (in order to make sure that each user has been authenticated). Note that such authenticated responses also need the public cache control directive in order to allow them to be cached at all.
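On the origin side, emitting the directive plus a validator and answering revalidation requests can be sketched in PHP like this (the ETag value is illustrative):

    <?php
    $etag = '"v1-demo"';    // illustrative validator for this resource version

    header('Cache-Control: max-age=60, must-revalidate');
    header('ETag: ' . $etag);

    // A revalidation request sends the validator back in If-None-Match.
    if (isset($_SERVER['HTTP_IF_NONE_MATCH'])
            && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
        header('HTTP/1.1 304 Not Modified');    // still current: no body needed
        exit;
    }

    echo 'full response body';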

Theory Behind MySQL Table Splitting

When data volumes get large, MySQL setups usually split databases and tables (sharding) to fix slow reads and writes. Why does splitting improve read/write performance?
Here is an attempt at an analysis:

  • Splitting across databases makes it possible to throw more hardware at the service.
  • The rationale for splitting tables (a routing sketch follows this list):

1. When a single table is too large, its indexes grow with it. On queries, if an index is not in memory, a large index takes longer to load, and scanning it takes longer as well. On writes (insert, delete, update), a large index is also slower to maintain.
2. Any write to a table invalidates that table's entire query cache: a large table has more cached entries to lose and a higher chance of losing them; small tables are the opposite.
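As an illustration of the table-splitting side, a common scheme routes each row to one of N fixed tables by hashing the shard key; a sketch (the table count and names are assumptions for illustration):

    <?php
    // Route a user id to one of 16 tables: user_0 ... user_15.
    function user_table($user_id) {
        return 'user_' . ($user_id % 16);
    }

    $table = user_table(12345);                  // e.g. "user_9"
    $sql   = sprintf('SELECT * FROM %s WHERE user_id = %d', $table, 12345);

Queries that carry the shard key can always be routed to a single small table; queries that do not, such as joins across users, are exactly what this layout gives up.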

Drawbacks of splitting databases and tables:

1. Cross-table join queries become impossible; though that is arguably a blessing, since joins are generally discouraged at large data volumes anyway.

Related material:
http://zhengdl126.javaeye.com/blog/419850