Kernel downloads: https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/
kernel-lt-* : long-term support builds (lt: long term)
kernel-ml-* : mainline builds (ml: mainline)
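If you want to try one of these kernels on a CentOS 7 host, a minimal install sketch looks like the following. The RPM filename is a placeholder (pick an actual version from the mirror above), and the grub.cfg path assumes BIOS boot; on UEFI systems it lives under /boot/efi/EFI/centos/ instead.
# rpm -ivh kernel-ml-<version>.el7.elrepo.x86_64.rpm    # <version> is a placeholder
# grub2-set-default 0                                   # the newly installed kernel is normally the first menu entry
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot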
docker swarm turns out to be incompatible with --live-restore, which is really annoying:
# docker swarm init --advertise-addr 172.16.18.37
Error response from daemon: --live-restore daemon configuration is incompatible with swarm mode
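If live-restore was enabled in /etc/docker/daemon.json, one possible workaround (a sketch, and mind the trade-off: without live-restore, restarting the daemon also restarts the containers) is to turn it off and restart the daemon before initializing the swarm:
# cat /etc/docker/daemon.json          # look for "live-restore": true
# vi /etc/docker/daemon.json           # delete the entry or change it to "live-restore": false
# systemctl restart docker
# docker swarm init --advertise-addr 172.16.18.37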
Reference:
Quoted from: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver-option
The Docker daemon has support for several different image layer storage drivers: aufs, devicemapper, btrfs, zfs, overlay and overlay2.

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. These are also known to cause some serious kernel crashes. However, aufs allows containers to share executable and shared library memory, so is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location – typically /var/lib/docker/devicemapper – a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for a way how to customize this setup. ~jpetazzo/Resizing Docker containers with the Device Mapper plugin article explains how to tune your existing setup without the use of options.

The btrfs driver is very fast for docker build – but like devicemapper does not share executable memory between devices. Use dockerd -s btrfs -g /mnt/btrfs_partition.

The zfs driver is probably not as fast as btrfs but has a longer track record on stability. Thanks to Single Copy ARC shared blocks between clones will be cached only once. Use dockerd -s zfs. To select a different zfs filesystem set the zfs.fsname option as described in Storage driver options.

The overlay is a very fast union filesystem. It is now merged in the main Linux kernel as of 3.18.0. overlay also supports page cache sharing, this means multiple containers accessing the same file can share a single page cache entry (or entries), it makes overlay as efficient with memory as the aufs driver. Call dockerd -s overlay to use it.

Note: As promising as overlay is, the feature is still quite young and should not be used in production. Most notably, using overlay can cause excessive inode consumption (especially as the number of images grows), as well as being incompatible with the use of RPMs.

The overlay2 uses the same fast union filesystem but takes advantage of additional features added in Linux kernel 4.0 to avoid excessive inode consumption. Call dockerd -s overlay2 to use it.

Note: Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.
The overlay implementation only supports a 2-layer filesystem, while overlay2 supports multiple layers (Docker images are all multi-layered).
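A small sketch of switching to overlay2, assuming a new-enough kernel (the quoted text says 4.0+; the kernel-ml packages linked at the top of this post are one way to get there) and that losing access to images stored under the old driver is acceptable:
# uname -r                                    # check the running kernel version
# vi /etc/docker/daemon.json                  # e.g. { "storage-driver": "overlay2" }
# systemctl restart docker
# docker info | grep -i "storage driver"      # should now report overlay2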
More references:
https://wenku.baidu.com/view/3c669b57bcd126fff6050b5b.html
https://wenku.baidu.com/view/27c2edf2f90f76c661371a1d.html
http://jingyan.baidu.com/article/ca41422ffb521e1eae99eda1.html
https://wenku.baidu.com/view/9f742b50f01dc281e53af0aa.html
https://wenku.baidu.com/view/9e723734f111f18583d05acb.html?re=view
The following is the authentication gateway software running on the server (not the one on the network device):
Before PHP had coroutines we also played with running tasks concurrently, but processing each result as it arrived was not that convenient (although it could be done). With coroutines (generators) the code reads much more comfortably; see multi_cmd.php below.
multi_cmd.php:
<?php
$arr_task = array(
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 1 && echo -n succ 1"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 2 && echo -n succ 2"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 3 && echo -n succ 3"'),
);

$start = microtime(1);
$arr_result = multi_cmd($arr_task);
foreach($arr_result as $k=>$v) {
    echo "$k: {$v['cmd']}\n";
    echo "\tstdout: ".$v['output']['stdout']."\n";
    echo "\tstderr: ".$v['output']['stderr']."\n";
}
$end = microtime(1);
echo "time_used:". ($end - $start) ."\n";

// generator: yields each task's result as soon as both of its output streams reach EOF
// 'result' tracks progress: +1 = stdout finished, +2 = stderr finished, 3 = task done
function multi_cmd($arr_task) {
    $arr_result = array();
    foreach ($arr_task as $k => $arr) {
        $cmd = $arr['cmd'];
        $arr_result[$k] = array("cmd"=>$cmd, "result"=>0, "output"=>array("stdout"=>null, "stderr"=>null), "pipes"=>array());
        $descriptorspec[$k] = array(
            0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
            1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
            2 => array("pipe", "w")   // stderr is a pipe that the child will write to
        );
        $process[$k] = proc_open($cmd, $descriptorspec[$k], $arr_result[$k]["pipes"]);
        if (!is_resource($process[$k])) {
            $arr_result[$k]['result'] = 3;  // mark as finished so it is yielded and skipped below
            continue;
        }
        stream_set_blocking($arr_result[$k]["pipes"][1], 0);
        stream_set_blocking($arr_result[$k]["pipes"][2], 0);
    }
    while(!empty($arr_result)) {
        foreach($arr_result as $k=>&$v) {
            if ($v['result'] == 3) {
                yield $k=>$v;
                unset($arr_result[$k]);
                continue;
            }
            $stdout = $arr_result[$k]["pipes"][1];
            $stderr = $arr_result[$k]["pipes"][2];
            if ($v['result'] != 1) {
                $str = @fread($stdout, 1024);
                $arr_result[$k]["output"]["stdout"] .= $str;
                if (@feof($stdout)) { $v['result'] += 1; fclose($stdout); }
            }
            if ($v['result'] != 2) {
                $str = @fread($stderr, 1024);
                $arr_result[$k]["output"]["stderr"] .= $str;
                if (@feof($stderr)) { $v['result'] += 2; fclose($stderr); }
            }
        }
    }
}
What we see more often is running multiple tasks concurrently and only processing the results once all tasks have finished (e.g. curl async multi requests). If processing a result also takes time, that wastes time; it is better if each result can be processed as soon as its task completes.
The script above has some flaws:
1. To keep a single stream from blocking the whole process, the streams are set to non-blocking; but the busy loop that follows then burns a lot of CPU, so using stream_select would be better.
2. It would also be nice to be able to limit the concurrency; too much concurrency is not necessarily a good thing.
The improved script is as follows:
<?php
$arr_task = array(
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 2 && echo -n succ 1"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 2 && echo -n succ 2"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 3 && echo -n succ 3"'),
);

$start = microtime(1);
$arr_result = multi_cmd($arr_task, 2);
foreach($arr_result as $k=>$v) {
    echo "$k: {$v['cmd']}\n";
    echo "\tretcode: ".$v['retcode']."\n";
    echo "\tstdout: ".$v['output']['stdout']."\n";
    echo "\tstderr: ".$v['output']['stderr']."\n";
}
$end = microtime(1);
echo "time_used:". ($end - $start) ."\n";

// start up to $concurent tasks from $arr_task, moving them into $arr_result
function add_task(&$arr_task, &$arr_result, $concurent) {
    if (empty($arr_task)) return;
    foreach ($arr_task as $k => $arr) {
        if ($concurent != -1 && count($arr_result) >= $concurent) return;
        echo "add task $k\n";
        $cmd = $arr['cmd'];
        unset($arr_task[$k]);
        $arr_result[$k] = array("cmd"=>$cmd, "result"=>0, "output"=>array("stdout"=>null, "stderr"=>null), "pipes"=>array());
        $arr_result[$k]['filedesc'] = array(
            0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
            1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
            2 => array("pipe", "w")   // stderr is a pipe that the child will write to
        );
        $arr_result[$k]['handle'] = proc_open($cmd, $arr_result[$k]['filedesc'], $arr_result[$k]["pipes"]);
        if (!is_resource($arr_result[$k]['handle'])) {
            $arr_result[$k]['result'] = 3;  // mark as finished so it is yielded and skipped below
            continue;
        }
        //stream_set_blocking($arr_result[$k]["pipes"][1], 0);
        //stream_set_blocking($arr_result[$k]["pipes"][2], 0);
    }
}

// generator: run at most $concurent commands in flight (-1 = unlimited),
// yield each task as soon as it finishes, then top the pool back up
function multi_cmd($arr_task, $concurent = -1) {
    $arr_result = array();
    add_task($arr_task, $arr_result, $concurent);
    while(!empty($arr_result)) {
        $arr_read_fds = array();
        foreach($arr_result as $k=>&$v) {
            if ($v['result'] == 3) {  // both streams done: collect exit code, refill pool, yield
                unset($arr_result[$k]);
                $v['retcode'] = @proc_close($v['handle']);
                add_task($arr_task, $arr_result, $concurent);
                yield $k=>$v;
                continue;
            }
            if (!($v['result'] & 1)) { $arr_read_fds[] = $v["pipes"][1]; }
            if (!($v['result'] & 2)) { $arr_read_fds[] = $v["pipes"][2]; }
        }
        if (empty($arr_read_fds)) break;
        $arr_write_fds = $arr_expect_fds = array();
        $arr_fds = $arr_read_fds;
        while (0 === stream_select($arr_fds, $arr_write_fds, $arr_expect_fds, 0, 500000)) {
            $arr_fds = $arr_read_fds;  // stream_select modifies $arr_fds, so reset it before retrying
        }
        foreach($arr_result as $k=>&$v) {
            $stdout = $arr_result[$k]["pipes"][1];
            $stderr = $arr_result[$k]["pipes"][2];
            if (!($v['result'] & 1) && in_array($stdout, $arr_fds)) {
                $str = @fread($stdout, 1024);
                $arr_result[$k]["output"]["stdout"] .= $str;
                if (@feof($stdout)) { $v['result'] |= 1; @fclose($stdout); }
            }
            if (!($v['result'] & 2) && in_array($stderr, $arr_fds)) {
                $str = @fread($stderr, 1024);
                $arr_result[$k]["output"]["stderr"] .= $str;
                if (@feof($stderr)) { $v['result'] |= 2; @fclose($stderr); }
            }
        }
    }
}
Note:
When using stream_select, does it no longer matter whether the streams are blocking? Not entirely: if one of the earlier tasks runs for a very long time, then even if most of the first batch finish quickly, fread can still block on the long-running task, so the other tasks cannot be wrapped up quickly and further tasks cannot be added. Possible remedies here include:
The H3C router (H3C MSR50-40) supports both standard rate limiting and advanced rate limiting.
Recently there was an advanced rate-limiting requirement: limit bandwidth by time period. The network engineer spent a whole day on it without success.
I took a careful look and found that the defined time range was in the inactive state:
[MSR5040-JF]display acl 3970
Advanced ACL 3970, named -none-, 1 rule,
ACL's step is 5
 rule 0 permit ip destination 172.16.161.16 0 time-range tr3970 (Inactive)
另外,display time-range all 也显示inactive 的
原因: 路由器时间不对,设置的限速时间段是上班时间,然而,路由器时间错了8小时多
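A quick sketch of checking and correcting the clock on a Comware device; the timestamp and NTP server address below are placeholders, not the values from the actual router:
<MSR5040-JF>display clock                           # verify the system time first
<MSR5040-JF>clock datetime 10:30:00 06/01/2017      # set it manually (placeholder values), or sync via NTP:
<MSR5040-JF>system-view
[MSR5040-JF]ntp-service unicast-server 172.16.0.1   # placeholder NTP server
[MSR5040-JF]display time-range all                  # should show Active once the clock falls inside the window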
Another note:
When you are not very familiar with rate limiting, configuring it through the web interface is easier, but special requirements cannot necessarily be expressed through the interface.
To dump a process, run:
$ criu dump -D checkpoint -t 1234 |
where,
-D : directory to save image files
-t : PID of process to dump
Continuing with the examples above where we dumped the process with PID 1234, we can generate the core dump with the crit utility that comes with criu:
$ crit core-dump -i checkpoint -o checkpoint |
where,
-i : input directory with criu images
-o : output directory for the core dump
To find the generated core dump file:
$ ls checkpoint/core.*
core.45678
Check the information with readelf:
$ readelf -a core.45678 |
Start debugging with GDB:
$ gdb loop core.45678 |
To resume a process from dump files:
$ criu restore -d -D checkpoint |
where,
-d : detach criu from the process after resume
Test:
Script a.php:
<?php
$i = 0;
while(1) {
    echo $i++, "\n";
    sleep(1);
}
Start it:
php a.php >/tmp/a.log 2>&1 0>/dev/null |
dump:
criu dump --shell-job -t 4774 |
restore:
criu restore -d -D . |
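A quick check that the restore worked (a sketch; it assumes the counter script above is still redirected to /tmp/a.log):
$ ps aux | grep "[a]\.php"      # the restored php process should be running again (detached, thanks to -d)
$ tail -f /tmp/a.log            # the counter should continue from the value it had at dump time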