- vim /etc/default/grub
- grub2-mkconfig -o /boot/grub2/grub.cfg (what used to be update-grub)
(or edit /boot/grub2/grub.cfg directly)
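The two-step flow above can be sketched as follows. To keep it safe to run, the `GRUB_CMDLINE_LINUX` edit is shown on a throwaway copy rather than the real `/etc/default/grub`, and `net.ifnames=0` is only an example parameter:

```shell
# Sketch of the flow: edit GRUB_CMDLINE_LINUX, then regenerate grub.cfg.
# Done on a temporary copy here; on a real system edit /etc/default/grub
# and run grub2-mkconfig as root.
grub_default=$(mktemp)
cat > "$grub_default" <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
EOF
# append an example parameter to the kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 net.ifnames=0"/' "$grub_default"
grep '^GRUB_CMDLINE_LINUX' "$grub_default"
# on the real system the next step would be:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```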
Only when building my own vagrant box did I discover how many things need attention - or, put differently, only then did I actually learn something. Today I did a casual install of CentOS 7.3 and then ran (note: `vagrant package` takes the VM name via `--base`):

```shell
vagrant package --base centos7.3 --output centos7.3.vbox
vagrant box add centos7.3 centos7.3.vbox
```

Reusing an old Vagrantfile with a few tweaks, `vagrant up` could not get through the boot flow, and `vagrant ssh` could not get in either. Entering the VM another way, I found that none of the NICs had been configured. From past experience, at least one NIC should be configured with the IP 10.0.2.15, and vagrant presumably uses that NIC to get into the VM and do the rest of its configuration. So, the question is:
Analysis: vagrant is able to attach a NIC to the VM and connect it to its own network, which provides a DHCP service. So as long as that NIC starts automatically and is configured for DHCP, the VM will get an IP.

Attempt: entering the VM, I found the NAT NIC was indeed configured for DHCP, only with ONBOOT=no. Change ONBOOT to yes, as follows:
```
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=f1577ffc-c647-46e9-ab88-74a23335901b
DEVICE=enp0s3
ONBOOT=yes
```
Then, driving the VM with vagrant again, everything went smoothly.

One more thing: with vagrant you will notice that even after the machine's boot-complete screen is already visible, it still takes quite a while before you can connect. Could the DHCP step be delayed? Testing showed DHCP obtains an IP quickly.
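If you bake this fix into the box-building step, the ONBOOT flip can be scripted. A minimal sketch, shown on a sample file (on the guest the real path would be the ifcfg file of the NAT NIC, e.g. /etc/sysconfig/network-scripts/ifcfg-enp0s3):

```shell
# Flip ONBOOT=no to ONBOOT=yes, demonstrated on a sample ifcfg file.
ifcfg=$(mktemp)
printf 'TYPE=Ethernet\nBOOTPROTO=dhcp\nDEVICE=enp0s3\nONBOOT=no\n' > "$ifcfg"
sed -i 's/^ONBOOT=no$/ONBOOT=yes/' "$ifcfg"
grep '^ONBOOT' "$ifcfg"
```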
UDP port 4789 for overlay network traffic
Kernel downloads: https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/
kernel-lt-* : long-term support versions (lt: long term)
kernel-ml-* : mainline versions (ml: mainline)
docker swarm turns out to be incompatible with --live-restore - how annoying:

```shell
# docker swarm init --advertise-addr 172.16.18.37
Error response from daemon: --live-restore daemon configuration is incompatible with swarm mode
```
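So before `docker swarm init`, it is worth checking that live-restore is not turned on in the daemon config. A rough pre-flight check, shown against a sample file (on a real host the path would be /etc/docker/daemon.json, and dockerd must be restarted after changing it):

```shell
# Pre-flight check: warn while live-restore is still enabled.
conf=$(mktemp)
echo '{"live-restore": true}' > "$conf"   # sample daemon.json content
if grep -q '"live-restore"[[:space:]]*:[[:space:]]*true' "$conf"; then
    echo "live-restore is enabled; remove it and restart dockerd before swarm init"
fi
```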
Reference, quoted from: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver-option
The Docker daemon has support for several different image layer storage drivers: `aufs`, `devicemapper`, `btrfs`, `zfs`, `overlay` and `overlay2`.

The `aufs` driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. These are also known to cause some serious kernel crashes. However, `aufs` allows containers to share executable and shared library memory, so is a useful choice when running thousands of containers with the same program or libraries.

The `devicemapper` driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location - typically `/var/lib/docker/devicemapper` - a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for a way to customize this setup. The ~jpetazzo article "Resizing Docker containers with the Device Mapper plugin" explains how to tune your existing setup without the use of options.

The `btrfs` driver is very fast for `docker build` - but like `devicemapper` does not share executable memory between devices. Use `dockerd -s btrfs -g /mnt/btrfs_partition`.

The `zfs` driver is probably not as fast as `btrfs` but has a longer track record on stability. Thanks to Single Copy ARC, shared blocks between clones will be cached only once. Use `dockerd -s zfs`. To select a different zfs filesystem set the `zfs.fsname` option as described in Storage driver options.

The `overlay` driver is a very fast union filesystem. It is now merged in the main Linux kernel as of 3.18.0. `overlay` also supports page cache sharing: multiple containers accessing the same file can share a single page cache entry (or entries), which makes `overlay` as efficient with memory as the `aufs` driver. Call `dockerd -s overlay` to use it.

Note: As promising as `overlay` is, the feature is still quite young and should not be used in production. Most notably, using `overlay` can cause excessive inode consumption (especially as the number of images grows), as well as being incompatible with the use of RPMs.

The `overlay2` driver uses the same fast union filesystem but takes advantage of additional features added in Linux kernel 4.0 to avoid excessive inode consumption. Call `dockerd -s overlay2` to use it.

Note: Both `overlay` and `overlay2` are currently unsupported on `btrfs` or any Copy on Write filesystem and should only be used over `ext4` partitions.
In its implementation, overlay only supports a 2-layer filesystem, while overlay2 supports multiple layers (and docker images are multi-layered).
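To pin the driver without touching dockerd's command line, the storage driver can also be set in daemon.json, which is equivalent to starting the daemon as `dockerd -s overlay2`. A sketch on a sample file (the real path is /etc/docker/daemon.json, and dockerd must be restarted afterwards):

```shell
# Select overlay2 via daemon.json instead of a dockerd -s flag.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
grep 'storage-driver' "$conf"
```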
More references:
https://wenku.baidu.com/view/3c669b57bcd126fff6050b5b.html
https://wenku.baidu.com/view/27c2edf2f90f76c661371a1d.html
http://jingyan.baidu.com/article/ca41422ffb521e1eae99eda1.html
https://wenku.baidu.com/view/9f742b50f01dc281e53af0aa.html
https://wenku.baidu.com/view/9e723734f111f18583d05acb.html?re=view
Before PHP had coroutines, we also did concurrent multi-task work, but handling results in real time as they arrived was not so convenient (although it could be done). With coroutines, the code reads much more pleasantly; see multi_cmd.php below.

multi_cmd.php :
```php
<?php
$arr_task = array(
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 1 && echo -n succ 1"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 2 && echo -n succ 2"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 3 && echo -n succ 3"'),
);

$start = microtime(1);
$arr_result = multi_cmd($arr_task);
foreach ($arr_result as $k => $v) {
    echo "$k: {$v['cmd']}\n";
    echo "\tstdout: " . $v['output']['stdout'] . "\n";
    echo "\tstderr: " . $v['output']['stderr'] . "\n";
}
$end = microtime(1);
echo "time_used:" . ($end - $start) . "\n";

function multi_cmd($arr_task) {
    $arr_result = array();
    foreach ($arr_task as $k => $arr) {
        $cmd = $arr['cmd'];
        $arr_result[$k] = array("cmd"=>$cmd, "result"=>0, "output"=>array("stdout"=>null, "stderr"=>null), "pipes"=>array());
        $descriptorspec[$k] = array(
            0 => array("pipe", "r"), // stdin is a pipe that the child will read from
            1 => array("pipe", "w"), // stdout is a pipe that the child will write to
            2 => array("pipe", "w")  // stderr is a pipe that the child will write to
        );
        $process[$k] = proc_open($cmd, $descriptorspec[$k], $arr_result[$k]["pipes"]);
        if (!is_resource($process[$k])) {
            $arr_result[$k]['result'] = 3; // mark as finished: the command failed to start
            continue;
        }
        stream_set_blocking($arr_result[$k]["pipes"][1], 0);
        stream_set_blocking($arr_result[$k]["pipes"][2], 0);
    }
    while (!empty($arr_result)) {
        foreach ($arr_result as $k => &$v) {
            if ($v['result'] == 3) { // both stdout and stderr are done: hand the result out
                yield $k => $v;
                unset($arr_result[$k]);
                continue;
            }
            $stdout = $arr_result[$k]["pipes"][1];
            $stderr = $arr_result[$k]["pipes"][2];
            if ($v['result'] != 1) {
                $str = @fread($stdout, 1024);
                $arr_result[$k]["output"]["stdout"] .= $str;
                if (@feof($stdout)) {
                    $v['result'] += 1;
                    fclose($stdout);
                }
            }
            if ($v['result'] != 2) {
                $str = @fread($stderr, 1024);
                $arr_result[$k]["output"]["stderr"] .= $str;
                if (@feof($stderr)) {
                    $v['result'] += 2;
                    fclose($stderr);
                }
            }
        }
    }
}
```
What we usually see is running multiple tasks concurrently and only processing the results once every task has completed (as with curl's async multi-requests). If processing each result itself takes time, that wastes time; being able to process each result as soon as its task finishes works out better.

The script above has some flaws:

1. To keep one stream from blocking the whole process, it uses non-blocking reads; but the busy loop that follows burns a lot of CPU, so using stream_select would be better.
2. It would also be nice to be able to cap the concurrency; too much concurrency is not necessarily a good thing.

The improved script:
```php
<?php
$arr_task = array(
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 2 && echo -n succ 1"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 2 && echo -n succ 2"'),
    array('cmd'=>'ssh -i /home/phpor/.ssh/id_rsa phpor@localhost "sleep 3 && echo -n succ 3"'),
);

$start = microtime(1);
$arr_result = multi_cmd($arr_task, 2);
foreach ($arr_result as $k => $v) {
    echo "$k: {$v['cmd']}\n";
    echo "\tretcode: " . $v['retcode'] . "\n";
    echo "\tstdout: " . $v['output']['stdout'] . "\n";
    echo "\tstderr: " . $v['output']['stderr'] . "\n";
}
$end = microtime(1);
echo "time_used:" . ($end - $start) . "\n";

function add_task(&$arr_task, &$arr_result, $concurent) {
    if (empty($arr_task)) return;
    foreach ($arr_task as $k => $arr) {
        if ($concurent != -1 && count($arr_result) >= $concurent) return;
        echo "add task $k\n";
        $cmd = $arr['cmd'];
        unset($arr_task[$k]);
        $arr_result[$k] = array("cmd"=>$cmd, "result"=>0, "output"=>array("stdout"=>null, "stderr"=>null), "pipes"=>array());
        $arr_result[$k]['filedesc'] = array(
            0 => array("pipe", "r"), // stdin is a pipe that the child will read from
            1 => array("pipe", "w"), // stdout is a pipe that the child will write to
            2 => array("pipe", "w")  // stderr is a pipe that the child will write to
        );
        $arr_result[$k]['handle'] = proc_open($cmd, $arr_result[$k]['filedesc'], $arr_result[$k]["pipes"]);
        if (!is_resource($arr_result[$k]['handle'])) {
            $arr_result[$k]['result'] = 3; // mark as finished: the command failed to start
            continue;
        }
        // non-blocking mode is no longer needed: stream_select tells us what is readable
        //stream_set_blocking($arr_result[$k]["pipes"][1], 0);
        //stream_set_blocking($arr_result[$k]["pipes"][2], 0);
    }
}

function multi_cmd($arr_task, $concurent = -1) {
    $arr_result = array();
    add_task($arr_task, $arr_result, $concurent);
    while (!empty($arr_result)) {
        $arr_read_fds = array();
        foreach ($arr_result as $k => &$v) {
            if ($v['result'] == 3) { // both streams done: reap, refill the pool, hand the result out
                unset($arr_result[$k]);
                $v['retcode'] = @proc_close($v['handle']);
                add_task($arr_task, $arr_result, $concurent);
                yield $k => $v;
                continue;
            }
            if (!($v['result'] & 1)) {
                $arr_read_fds[] = $v["pipes"][1];
            }
            if (!($v['result'] & 2)) {
                $arr_read_fds[] = $v["pipes"][2];
            }
        }
        if (empty($arr_read_fds)) break;
        $arr_write_fds = $arr_expect_fds = array();
        $arr_fds = $arr_read_fds;
        while (0 === stream_select($arr_fds, $arr_write_fds, $arr_expect_fds, 0, 500000)) {
            $arr_fds = $arr_read_fds; // stream_select modifies the array; retry with a fresh copy
        }
        foreach ($arr_result as $k => &$v) {
            $stdout = $arr_result[$k]["pipes"][1];
            $stderr = $arr_result[$k]["pipes"][2];
            if (!($v['result'] & 1) && in_array($stdout, $arr_fds)) {
                $str = @fread($stdout, 1024);
                $arr_result[$k]["output"]["stdout"] .= $str;
                if (@feof($stdout)) {
                    $v['result'] |= 1;
                    @fclose($stdout);
                }
            }
            if (!($v['result'] & 2) && in_array($stderr, $arr_fds)) {
                $str = @fread($stderr, 1024);
                $arr_result[$k]["output"]["stderr"] .= $str;
                if (@feof($stderr)) {
                    $v['result'] |= 2;
                    @fclose($stderr);
                }
            }
        }
    }
}
```
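As an aside, when the tasks are plain shell commands, the concurrency cap from point 2 can also be had with `xargs -P`, without any of the bookkeeping above. A sketch (the task list and the cap of 2 are arbitrary):

```shell
# Run 4 tasks with at most 2 in flight at a time; xargs does the scheduling.
out=$(mktemp)
printf '1\n2\n3\n4\n' | xargs -n1 -P2 sh -c 'echo "task $0 done"' > "$out"
cat "$out"
```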
Note:

Once stream_select is in use, does it no longer matter whether the streams are blocking? Not entirely. If one of the earlier tasks runs for a long time, then even when most of the first batch finishes quickly, fread can still block on the long-running task, so the other tasks cannot be wrapped up quickly, which in turn delays adding more tasks. Possible ways to deal with this: