vagrant package centos7

It was only while building my own vagrant vbox that I discovered there are actually quite a few things to watch out for; or, put another way, that this is where you really learn something.

Today I did a quick install of CentOS 7.3, and then:

Reusing a Vagrantfile from before with a few casual edits, vagrant up could not get through the boot sequence, and vagrant ssh could not connect either. Getting into the VM by other means, I found that not a single NIC had been configured. From past experience, at least one NIC should be configured with the IP 10.0.2.15, and vagrant presumably logs in through that NIC to do the rest of its configuration. So the question is:

  1. Before vagrant can get into the VM, how does the IP 10.0.2.15 get configured in the first place?

Analysis: vagrant is able to attach a virtual NIC to the VM and connect it to its own NAT network, which provides a DHCP service. So all that is needed is for that NIC to come up automatically and be configured for DHCP, and it will obtain an IP.

Attempt: I went into the VM and found that the NIC attached to the NAT network was indeed configured for DHCP, only with ONBOOT=no. Change ONBOOT to yes, like so:
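For example (the interface name eth0 here is an assumption; on CentOS 7 the NAT NIC often shows up as enp0s3 or similar):

    # /etc/sysconfig/network-scripts/ifcfg-eth0   (interface name assumed)
    TYPE=Ethernet
    BOOTPROTO=dhcp
    DEVICE=eth0
    ONBOOT=yes    # was "no"; "yes" brings the NIC up at boot so DHCP can run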

Then I drove the VM with vagrant again, and everything went smoothly.

One more observation: with vagrant, even once the console already shows that the machine has booted successfully, it still takes quite a while before you can connect in. Could the DHCP step be running late? Testing shows DHCP obtains an IP very quickly.
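With the NIC fixed, the step the title refers to is exporting the VM as a reusable box; a minimal sketch of that, where the VM name centos7-vm and the box name centos7 are placeholders:

    # export the prepared VM into a reusable box file
    vagrant package --base centos7-vm --output centos7.box
    # register it locally; a Vagrantfile can then set config.vm.box = "centos7"
    vagrant box add --name centos7 centos7.box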

Learning docker swarm

docker swarm, unbelievably, is incompatible with --live-restore. How annoying.
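A minimal way to see the conflict (the daemon.json path is the standard one; the exact error text varies by Docker version):

    # /etc/docker/daemon.json enabling live-restore:
    #     { "live-restore": true }
    # restart the daemon, then try to enable swarm mode:
    docker swarm init
    # the daemon refuses, because --live-restore is incompatible with swarm mode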

Reference:

http://www.jianshu.com/p/9eb9995884a5

Comparing Docker storage drivers

Excerpted from: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver-option

The Docker daemon has support for several different image layer storage drivers: aufs, devicemapper, btrfs, zfs, overlay and overlay2.

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. These are also known to cause some serious kernel crashes. However, aufs allows containers to share executable and shared library memory, so is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location (typically /var/lib/docker/devicemapper) a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for a way to customize this setup. The ~jpetazzo/Resizing Docker containers with the Device Mapper plugin article explains how to tune your existing setup without the use of options.

The btrfs driver is very fast for docker build, but, like devicemapper, does not share executable memory between devices. Use dockerd -s btrfs -g /mnt/btrfs_partition.

The zfs driver is probably not as fast as btrfs but has a longer track record on stability. Thanks to Single Copy ARC, shared blocks between clones will be cached only once. Use dockerd -s zfs. To select a different zfs filesystem, set the zfs.fsname option as described in Storage driver options.

The overlay driver is a very fast union filesystem. It is merged in the mainline Linux kernel as of 3.18.0. overlay also supports page cache sharing: multiple containers accessing the same file can share a single page cache entry (or entries), which makes overlay as efficient with memory as the aufs driver. Call dockerd -s overlay to use it.

Note: As promising as overlay is, the feature is still quite young and should not be used in production. Most notably, using overlay can cause excessive inode consumption (especially as the number of images grows), as well as being incompatible with the use of RPMs.

The overlay2 driver uses the same fast union filesystem but takes advantage of additional features added in Linux kernel 4.0 to avoid excessive inode consumption. Call dockerd -s overlay2 to use it.

Note: Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.

In its implementation, overlay only supports a two-layer filesystem, while overlay2 supports multiple layers (Docker images are all multi-layered).
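In practice the storage driver is usually pinned in the daemon's config file rather than passed on the command line; this is the daemon.json equivalent of dockerd -s overlay2 (the standard path is /etc/docker/daemon.json):

    {
        "storage-driver": "overlay2"
    }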

More references:

https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#overlayfs-and-docker-performance

Learning portal authentication

https://wenku.baidu.com/view/3c669b57bcd126fff6050b5b.html

https://wenku.baidu.com/view/27c2edf2f90f76c661371a1d.html

http://jingyan.baidu.com/article/ca41422ffb521e1eae99eda1.html

https://wenku.baidu.com/view/9f742b50f01dc281e53af0aa.html

https://wenku.baidu.com/view/9e723734f111f18583d05acb.html?re=view

The following one covers authentication-gateway software that runs on a server (not on a network device):

https://wenku.baidu.com/view/22f2b2d2c1c708a1284a449f.html

PHP coroutine example

Back before PHP had coroutines we also played with concurrent multi-threading, but processing the results in real time was not that convenient (even though it could be done). With coroutines, the code reads much more comfortably; see multi_cmd.php below.

multi_cmd.php:

What we see more often is multiple tasks executed concurrently, with the results processed in one batch only after every task has finished (e.g. concurrent curl multi requests). If processing a result itself also takes time, that wastes time; it works out better if each result can be processed as soon as its task completes, as in the sketch below.
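A minimal sketch of the multi_cmd.php idea, assuming the tasks are shell commands driven through proc_open; one generator acts as the coroutine for each command, and run_cmd, the read size, and the sample commands are all illustrative:

    <?php
    // Run several shell commands concurrently; handle each result as
    // soon as its command finishes, instead of waiting for all of them.
    function run_cmd($cmd) {
        $proc = proc_open($cmd, [1 => ['pipe', 'w'], 2 => ['pipe', 'w']], $pipes);
        // Non-blocking, so one slow command cannot stall the whole loop.
        stream_set_blocking($pipes[1], false);
        $out = '';
        while (!feof($pipes[1])) {
            $out .= fread($pipes[1], 8192);
            yield;                       // hand control back to the scheduler
        }
        fclose($pipes[1]); fclose($pipes[2]);
        proc_close($proc);
        return $out;                     // retrieved via getReturn()
    }

    $tasks = [];
    foreach (['sleep 2; echo slow', 'echo quick'] as $cmd) {
        $tasks[$cmd] = run_cmd($cmd);    // nothing runs until first resume
    }

    // Scheduler: keep resuming every coroutine until all are finished.
    // (This busy loop is the CPU-burning flaw discussed below.)
    while ($tasks) {
        foreach ($tasks as $cmd => $gen) {
            if ($gen->valid()) {
                $gen->next();            // let this command read a bit more
            } else {                     // command done: handle its result now
                echo "[$cmd] => " . $gen->getReturn();
                unset($tasks[$cmd]);
            }
        }
    }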

The script above has a couple of flaws:

1. To keep any one stream from blocking the whole process, it uses non-blocking streams; but the busy loop that follows eats a lot of CPU, so using stream_select would be better.

2. It would also be better to be able to cap the concurrency; too much concurrency is not necessarily a good thing.

The improved script follows:
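Again a sketch rather than the original: stream_select puts the process to sleep until some pipe actually has data, and MAX_PROCS (an illustrative name, like spawn and $queue) caps how many commands run at once:

    <?php
    const MAX_PROCS = 2;

    function spawn($cmd, &$running) {
        $proc = proc_open($cmd, [1 => ['pipe', 'w'], 2 => ['pipe', 'w']], $pipes);
        stream_set_blocking($pipes[1], false);
        $running[(int)$pipes[1]] = [
            'proc' => $proc, 'out' => $pipes[1], 'err' => $pipes[2],
            'cmd'  => $cmd,  'buf' => '',
        ];
    }

    $queue   = ['sleep 2; echo a', 'sleep 1; echo b', 'echo c', 'echo d'];
    $running = [];

    while ($queue || $running) {
        // Admit new commands only while below the concurrency cap.
        while ($queue && count($running) < MAX_PROCS) {
            spawn(array_shift($queue), $running);
        }
        $read = array_column($running, 'out');
        $write = $except = null;
        // Sleep here until at least one stdout pipe is readable.
        if (stream_select($read, $write, $except, null) === false) {
            break;
        }
        foreach ($read as $pipe) {
            $id = (int)$pipe;
            $running[$id]['buf'] .= fread($pipe, 8192);
            if (feof($pipe)) {           // finished: handle the result now
                $t = $running[$id];
                echo "[{$t['cmd']}] => {$t['buf']}";
                fclose($t['out']); fclose($t['err']); proc_close($t['proc']);
                unset($running[$id]);    // frees a slot for the queue
            }
        }
    }

Note that this sketch only selects on stdout; a command that floods stderr could still stall, which is the kind of corner the note below runs into.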

Note:

When using stream_select, does it no longer matter whether the streams block? Not entirely. If one of the earlier tasks runs for a very long time, then even if most of the first batch of tasks finish quickly, fread can still block on the long-running task, so the other tasks cannot be wrapped up quickly and further tasks cannot be admitted. So the possible remedies are:

  1. Option 1: add a read timeout to stdout and stderr. But testing shows that using stream_set_timeout to put a timeout on these two streams fails: stream_set_timeout returns false.
  2. Option 2: keep using non-blocking mode. That does not work either: even though stream_set_blocking(stdout, false) returns true, it has no effect.
  3. Note: in other words, the only two candidate functions are both unusable; which is to say, it is not that the functions are broken, it is that this approach is simply not feasible.