vagrant package centos7

Only when building my own vagrant vbox did I realize how many details need attention, or put another way, that is when you actually learn something.

Today I casually installed a CentOS 7.3 and then:

Reusing a Vagrantfile from before with a few quick edits, vagrant up could not get through the boot flow, and vagrant ssh could not get in either. Entering the VM by other means, I found that not a single NIC had been configured. From past usage, at least one NIC should end up configured with the IP 10.0.2.15, and vagrant presumably also goes through that NIC into the VM to do the rest of its configuration. So the question is:

  1. Before vagrant can get into the VM, how does the IP 10.0.2.15 get configured in the first place?

Analysis: vagrant is able to attach a virtual NIC to the VM and connect it to its own NAT network, which provides a DHCP service. So as long as that NIC starts automatically and is configured for DHCP, it will obtain an IP.

Attempt: Entering the VM, I found that the NAT-attached NIC was indeed configured for DHCP; it just had ONBOOT=no. Change ONBOOT to yes as follows:
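A minimal sketch of the fix, demoed on a throwaway copy of the file (in the VM the real file lives under /etc/sysconfig/network-scripts/, and the NAT NIC name, e.g. eth0 or enp0s3, depends on the box):

```shell
# Throwaway copy of a typical CentOS 7 ifcfg for the NAT NIC.
cfg=$(mktemp)
printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=no\n' > "$cfg"

# The one-line fix: bring the NIC up at boot so DHCP can run.
sed -i 's/^ONBOOT=no/ONBOOT=yes/' "$cfg"

grep '^ONBOOT' "$cfg"
```

With ONBOOT=yes and BOOTPROTO=dhcp, the NIC picks up 10.0.2.15 from the NAT network's DHCP at boot, and vagrant can then ssh in.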

Then, driving the VM through vagrant again, everything went smoothly.


One more thing: with vagrant you will notice that even after the VM shows its boot-complete screen, it still takes quite a while before you can connect in. Is the DHCP step perhaps executed late? Testing shows DHCP obtains an IP very quickly.

docker swarm notes

docker swarm turns out to be incompatible with --live-restore. How annoying.
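For context, live-restore is turned on in the daemon config; a sketch of the conflicting setup (written to a local file here, the real path being /etc/docker/daemon.json):

```shell
# Daemon config with live-restore enabled (real path: /etc/docker/daemon.json).
cat > daemon.json <<'EOF'
{
  "live-restore": true
}
EOF

# While this option is set, the daemon will not enter swarm mode,
# so commands like the following fail:
#   docker swarm init
# To use swarm, remove the option and restart the daemon:
#   systemctl restart docker
```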


Reference:

http://www.jianshu.com/p/9eb9995884a5

Docker storage driver comparison

Excerpted from: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver-option

The Docker daemon has support for several different image layer storage drivers: aufs, devicemapper, btrfs, zfs, overlay and overlay2.

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. These are also known to cause some serious kernel crashes. However, aufs allows containers to share executable and shared library memory, so is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location – typically /var/lib/docker/devicemapper – a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for a way to customize this setup. The article ~jpetazzo/Resizing Docker containers with the Device Mapper plugin explains how to tune your existing setup without the use of options.
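As an illustration of the customization the excerpt mentions, devicemapper can be pointed at a real LVM thin pool instead of the default loopback sparse files. This is a launch sketch only, not a runnable demo: the pool name is made up and would have to be created with LVM first.

```shell
# Sketch: assumes an LVM thin pool named "docker-thinpool" already exists.
# Requires root; shown here only to illustrate the storage options.
dockerd -s devicemapper \
  --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool \
  --storage-opt dm.use_deferred_removal=true
```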

The btrfs driver is very fast for docker build, but like devicemapper does not share executable memory between devices. Use dockerd -s btrfs -g /mnt/btrfs_partition.

The zfs driver is probably not as fast as btrfs but has a longer track record on stability. Thanks to Single Copy ARC shared blocks between clones will be cached only once. Use dockerd -s zfs. To select a different zfs filesystem set zfs.fsname option as described in Storage driver options.

The overlay is a very fast union filesystem. It is now merged in the main Linux kernel as of 3.18.0. overlay also supports page cache sharing: multiple containers accessing the same file can share a single page cache entry (or entries), which makes overlay as memory-efficient as the aufs driver. Call dockerd -s overlay to use it.

Note: As promising as overlay is, the feature is still quite young and should not be used in production. Most notably, using overlay can cause excessive inode consumption (especially as the number of images grows), as well as being incompatible with the use of RPMs.

The overlay2 uses the same fast union filesystem but takes advantage of additional features added in Linux kernel 4.0 to avoid excessive inode consumption. Call dockerd -s overlay2 to use it.

Note: Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.


At the implementation level, overlay supports only a two-layer filesystem, while overlay2 supports multiple layers (and docker images are multi-layered).
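The difference shows up in the mount options: overlay2 passes each image layer as its own entry in a colon-separated lowerdir list (multiple lowerdirs need kernel 4.0+), whereas the original overlay driver must collapse everything into a single lowerdir. A sketch with made-up directory names (the actual mount needs root, so it is left commented):

```shell
# Three read-only image layers plus the container's writable layer.
mkdir -p layer1 layer2 layer3 upper work merged

# overlay2 style: one lowerdir entry per image layer, topmost first.
lower="layer3:layer2:layer1"
echo "lowerdir=$lower"

# The actual mount (requires root):
#   mount -t overlay overlay \
#     -o lowerdir=$lower,upperdir=upper,workdir=work merged
```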


More references:

https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#overlayfs-and-docker-performance