Docker Storage Driver Comparison

Excerpted from: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver-option

The Docker daemon has support for several different image layer storage drivers: aufs, devicemapper, btrfs, zfs, overlay and overlay2.

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. The aufs patches are also known to cause some serious kernel crashes. However, aufs allows containers to share executable and shared library memory, so it is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location – typically /var/lib/docker/devicemapper – a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for how to customize this setup. The ~jpetazzo/Resizing Docker containers with the Device Mapper plugin article explains how to tune your existing setup without the use of options.

The btrfs driver is very fast for docker build, but like devicemapper it does not share executable memory between devices. Use dockerd -s btrfs -g /mnt/btrfs_partition.

The zfs driver is probably not as fast as btrfs but has a longer track record on stability. Thanks to Single Copy ARC, shared blocks between clones will be cached only once. Use dockerd -s zfs. To select a different zfs filesystem, set the zfs.fsname option as described in Storage driver options.

The overlay driver is a very fast union filesystem. It is merged in the main Linux kernel as of 3.18.0. overlay also supports page cache sharing, which means multiple containers accessing the same file can share a single page cache entry (or entries); this makes overlay as efficient with memory as the aufs driver. Call dockerd -s overlay to use it.

Note: As promising as overlay is, the feature is still quite young and should not be used in production. Most notably, using overlay can cause excessive inode consumption (especially as the number of images grows), as well as being incompatible with the use of RPMs.

The overlay2 uses the same fast union filesystem but takes advantage of additional features added in Linux kernel 4.0 to avoid excessive inode consumption. Call dockerd -s overlay2 to use it.

Note: Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.

 

In its implementation, overlay only supports a two-layer filesystem, whereas overlay2 supports multiple layers (Docker images are all multi-layered).
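For a concrete picture of how the driver is selected, here is a minimal sketch based on the dockerd -s flag quoted above (the daemon.json path and the overlay2 choice are just examples):

    # pick the storage driver when starting the daemon
    dockerd -s overlay2
    # or persist the choice in /etc/docker/daemon.json, e.g. { "storage-driver": "overlay2" },
    # then verify which driver is actually in use:
    docker info | grep -i 'storage driver'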

 

Further reading:

https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#overlayfs-and-docker-performance

Portal Authentication Study Notes

 

https://wenku.baidu.com/view/3c669b57bcd126fff6050b5b.html

 

https://wenku.baidu.com/view/27c2edf2f90f76c661371a1d.html

 

http://jingyan.baidu.com/article/ca41422ffb521e1eae99eda1.html

 

https://wenku.baidu.com/view/9f742b50f01dc281e53af0aa.html

 

https://wenku.baidu.com/view/9e723734f111f18583d05acb.html?re=view

 

The following is authentication gateway software that runs on a server (not on a network device):

https://wenku.baidu.com/view/22f2b2d2c1c708a1284a449f.html

PHP Coroutine Example

Before PHP had coroutines we also played with concurrent multi-threading, but processing results in real time was not so convenient (although it could be done). With coroutines the code looks much more pleasant; see multi_cmd.php below.

 

multi_cmd.php:

What we usually see is running multiple tasks concurrently and processing the results all at once only after every task has finished (as with curl's asynchronous multi requests). If processing each result itself also takes time, that wastes time; it works better if each result can be handled as soon as its task completes.
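The original multi_cmd.php listing is not reproduced here; the following is a minimal sketch of the idea described above, one generator-based coroutine per command over non-blocking proc_open pipes (the command strings and overall structure are illustrative, not the original code):

    <?php
    // One "coroutine" (generator) per command: start it with proc_open, read its
    // stdout without blocking, yield while it is still running, return the output.
    function run_cmd($cmd) {
        $proc = proc_open($cmd, [1 => ['pipe', 'w'], 2 => ['pipe', 'w']], $pipes);
        stream_set_blocking($pipes[1], false);   // non-blocking, so one slow task
        stream_set_blocking($pipes[2], false);   // cannot stall the others
        $out = '';
        while (true) {
            $out .= (string)stream_get_contents($pipes[1]);
            if (!proc_get_status($proc)['running']) {
                break;
            }
            yield;                               // hand control back to the scheduler
        }
        fclose($pipes[1]); fclose($pipes[2]); proc_close($proc);
        return $out;
    }

    $tasks = [];
    foreach (['sleep 1; echo a', 'sleep 2; echo b', 'echo c'] as $i => $cmd) {
        $tasks[$i] = run_cmd($cmd);
    }
    // Simple scheduler: keep resuming every unfinished coroutine and handle each
    // result as soon as that task is done -- no waiting for the whole batch.
    while ($tasks) {
        foreach ($tasks as $i => $co) {
            if ($co->valid()) {
                $co->next();
                continue;
            }
            echo "task $i done: " . $co->getReturn();  // process this result right away
            unset($tasks[$i]);
        }
        // NOTE: this tight loop is exactly the CPU-burning flaw pointed out below
    }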

 

The script above has some flaws:

1. To prevent a single stream from blocking the whole process, it uses non-blocking mode; however, the busy loop that follows eats a lot of CPU, so using stream_select would be better.

2. It would also be better to be able to limit the number of concurrent tasks; too much concurrency is not necessarily a good thing.

 

The improved script is as follows:
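The improved listing is likewise not reproduced here; this sketch shows the two changes just described, stream_select instead of a busy loop plus a concurrency cap (the commands, the cap of 2 and the 1-second select timeout are illustrative):

    <?php
    // Improved version: wait on the stdout pipes with stream_select() instead of
    // spinning, and never run more than $limit commands at the same time.
    function spawn($cmd) {
        $proc = proc_open($cmd, [1 => ['pipe', 'w'], 2 => ['pipe', 'w']], $pipes);
        stream_set_blocking($pipes[1], false);
        return ['proc' => $proc, 'stdout' => $pipes[1], 'stderr' => $pipes[2], 'out' => ''];
    }

    $pending = ['sleep 2; echo a', 'sleep 1; echo b', 'echo c', 'sleep 3; echo d'];
    $limit   = 2;                                      // concurrency cap (flaw #2)
    $running = [];

    while ($pending || $running) {
        while ($pending && count($running) < $limit) { // top up to the cap
            $running[] = spawn(array_shift($pending));
        }
        $read = array_column($running, 'stdout');
        $write = $except = null;
        // sleep here until some stdout becomes readable (or 1s passes) -- no busy loop (flaw #1)
        if (stream_select($read, $write, $except, 1) === false) {
            break;
        }
        foreach ($running as $i => $task) {
            $task['out'] .= (string)stream_get_contents($task['stdout']);
            $running[$i] = $task;
            if (!proc_get_status($task['proc'])['running'] && feof($task['stdout'])) {
                echo "done: " . $task['out'];          // handle each result as it completes
                fclose($task['stdout']); fclose($task['stderr']); proc_close($task['proc']);
                unset($running[$i]);
            }
        }
        $running = array_values($running);             // reindex after removals
    }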

 

Note:

When using stream_select, does it no longer matter whether the streams are blocking? Not entirely. If one of the earlier tasks runs for a very long time, then even if most of the tasks in the first batch finish quickly, fread will still block on the long-running task, so the other tasks cannot be drained quickly and further tasks cannot be added. Possible ways around this:

  1. Option 1: add a read timeout to stderr and stdout. However, testing shows that stream_set_timeout fails to set a timeout on these two streams and returns false.
  2. Option 2: keep using non-blocking mode. That does not work either: even though stream_set_blocking(stdout, false) returns true, it has no effect.
  3. Note:

    In other words, the only two applicable functions are both unusable here; that is, it is not that the functions are broken, but that the whole approach is simply not feasible.

Advanced Rate Limiting on H3C Routers

The H3C router in question (an H3C MSR50-40) supports both standard and advanced rate limiting.

Recently there was an advanced rate-limiting requirement: limit bandwidth by time range. The network engineer spent a whole day on it without success.

Looking at it more closely, I found that the time-range definition was in the inactive state.

In addition, display time-range all also showed it as inactive.

Cause: the router's clock was wrong. The rate-limiting time range was set to working hours, but the router's time was off by more than 8 hours.
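The check boils down to two display commands (a hypothetical session; exact syntax and output vary by Comware version):

    display clock             # first confirm the router's own time is correct
    display time-range all    # the rule only takes effect while its time-range is Active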

 

Another issue:

When you are not very familiar with rate limiting, configuring it through the web UI is fairly easy; however, special requirements cannot always be expressed through the UI.

criu Live Process Migration Example

Dump a process

To dump a process, run:
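A minimal sketch matching the flags described below, with /tmp/criu_img and PID 1234 as placeholder values:

    criu dump -D /tmp/criu_img -t 1234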

where,
-D : directory to save image files
-t : PID of process to dump

Convert criu images to core dump

Continuing with the examples above where we dumped the process with PID 1234, we can generate the core dump with the crit utility that comes with criu:
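The exact helper depends on the criu version; with the coredump converter shipped alongside crit, the call is roughly as follows (the command name and directories are assumptions here; only the -i/-o flags come from the description below):

    criu-coredump -i /tmp/criu_img -o /tmp/criu_core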

where,
-i : input directory with criu images
-o : output directory for the core dump

To find the generated core dump file:
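For example (assuming the output directory used above; the core file is typically named after the dumped PID):

    ls -l /tmp/criu_core/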

Check the information with readelf:
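For instance, the ELF header already confirms it is a core file (file name assumed from the PID above):

    readelf -h /tmp/criu_core/core.1234    # shows "Type: CORE (Core file)"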

Start debugging with GDB:
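Pass the original executable together with the core file (paths are placeholders):

    gdb /path/to/the/original/binary /tmp/criu_core/core.1234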

Resume the process

To resume a process from dump files:
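Again a sketch, using the same image directory as above:

    criu restore -D /tmp/criu_img -d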

where,
-d : detach criu from the process after resume

 

 

Test:

Script a.php:
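The original a.php is not preserved; any long-running process will do, for example this hypothetical counter loop whose output makes it obvious where the process resumes:

    <?php
    // endless counter; writing to a file avoids tying the process to a terminal
    for ($i = 0; ; $i++) {
        file_put_contents('/tmp/a.log', "tick $i\n", FILE_APPEND);
        sleep(1);
    }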

Start it:
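For example, detached from the terminal so that criu does not have to deal with the controlling tty (how it was actually started is an assumption):

    setsid php a.php < /dev/null &> /dev/null &
    pgrep -f 'php a.php'    # note the PID for the dump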

dump:
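With a placeholder image directory and the PID found above (run as root):

    mkdir -p /tmp/a_img
    criu dump -D /tmp/a_img -t $(pgrep -f 'php a.php')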

restore:
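Restore from the same image directory and watch the counter continue from where it stopped:

    criu restore -D /tmp/a_img -d
    tail -f /tmp/a.log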

Screenshot:

Reference:

Dump, debug, resume process with criu

Compiling and Installing criu

The criu version in the yum repository is 1.6.1, while the latest release is already 2.x, so let's learn how to compile and install it ourselves.

Compiling is straightforward; the main work is the dependencies: https://criu.org/Installation#Dependencies

  • protobuf-c-compiler
  • libnet-devel
  • protobuf-python
  • protobuf-devel
  • protobuf-c-devel
  • libcap-devel
  • libnl3-devel
  • libaio-devel

Install them with yum:
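One command covering the list above:

    yum install -y protobuf-c-compiler libnet-devel protobuf-python protobuf-devel \
        protobuf-c-devel libcap-devel libnl3-devel libaio-devel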

Python libraries that may be needed when using it: