Docker Machine

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean.

Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.

Why should I use it?
Docker Machine enables you to provision multiple remote Docker hosts on various flavors of Linux.
Additionally, Machine allows you to run Docker on older Mac or Windows systems, as described in the previous topic.

[root@jenkins ~]# curl -L https://github.com/docker/machine/releases/download/v0.12.2/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine &&
> chmod +x /tmp/docker-machine &&
> sudo cp /tmp/docker-machine /usr/local/bin/docker-machine

[root@jenkins ~]# docker-machine version
docker-machine version 0.12.2, build 9371605
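
A quick sketch of a typical docker-machine workflow (assuming the VirtualBox driver is available on the workstation; the driver and the machine name "default" are only examples):

# create a new Docker host named "default" using the virtualbox driver
docker-machine create --driver virtualbox default

# list managed hosts and point the local docker client at the new one
docker-machine ls
eval "$(docker-machine env default)"
docker version        # now talks to the daemon inside the machine

# day-to-day management
docker-machine ssh default
docker-machine stop default
docker-machine start default
docker-machine rm default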

 

Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.

Install Docker Compose

https://github.com/docker/compose/releases

[root@jenkins ~]# curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.15.0/docker-compose-$(uname -s)-$(uname -m)"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0     36      0 --:--:--  0:00:16 --:--:--   142
100 8650k  100 8650k    0     0  93774      0  0:01:34  0:01:34 --:--:--   551k

[root@jenkins ~]# chmod +x /usr/local/bin/docker-compose

[root@jenkins ~]# docker-compose --version
docker-compose version 1.15.0, build e12f3b9

[root@jenkins ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

[root@jenkins ~]# mkdir hello-world/

[root@jenkins ~]# cd hello-world/

[root@jenkins hello-world]# vim docker-compose.yml
my-test:
  image: hello-world

[root@jenkins hello-world]# docker-compose up
Pulling my-test (hello-world:latest)…
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
Status: Downloaded newer image for hello-world:latest
Creating helloworld_my-test_1 …
Creating helloworld_my-test_1 … done
Attaching to helloworld_my-test_1
my-test_1 |
my-test_1 | Hello from Docker!
my-test_1 | This message shows that your installation appears to be working correctly.
my-test_1 |
my-test_1 | To generate this message, Docker took the following steps:
my-test_1 | 1. The Docker client contacted the Docker daemon.
my-test_1 | 2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
my-test_1 | 3. The Docker daemon created a new container from that image which runs the
my-test_1 | executable that produces the output you are currently reading.
my-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1 | to your terminal.
my-test_1 |
my-test_1 | To try something more ambitious, you can run an Ubuntu container with:
my-test_1 | $ docker run -it ubuntu bash
my-test_1 |
my-test_1 | Share images, automate workflows, and more with a free Docker ID:
my-test_1 | https://cloud.docker.com/
my-test_1 |
my-test_1 | For more examples and ideas, visit:
my-test_1 | https://docs.docker.com/engine/userguide/
my-test_1 |
helloworld_my-test_1 exited with code 0

[root@jenkins hello-world]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest 1815c82652c0 2 months ago 1.84kB

[root@jenkins hello-world]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

[root@jenkins hello-world]# docker run -it hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

[root@jenkins hello-world]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c6f6b4742d96 hello-world “/hello” 26 seconds ago Exited (0) 24 seconds ago hungry_blackwell
413afefd5a74 hello-world “/hello” About a minute ago Exited (0) About a minute ago helloworld_my-test_1
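
The hello-world Compose file above defines a single one-shot service. A slightly richer (purely illustrative) docker-compose.yml sketch, using the same version-1 syntax as above, might define a long-running web service plus a cache; the images, port and paths below are assumptions, not taken from this setup:

web:
  image: nginx:latest          # pulled from Docker Hub
  ports:
    - "8080:80"                # host:container port mapping
  volumes:
    - ./html:/usr/share/nginx/html:ro
cache:
  image: redis:latest

Running "docker-compose up -d" in the directory containing this file would start both containers in the background; "docker-compose ps" and "docker-compose down" manage them afterwards.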

 

Docker Deep Dive

[root@jenkins ~]# docker

Usage: docker COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/root/.docker")
  -D, --debug              Enable debug mode
      --help               Print usage
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
config Manage Docker configs
container Manage containers
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
volume Manage volumes

Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container’s changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container’s filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container’s filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

[root@jenkins ~]# docker run --help

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG…]

Run a command in a new container

Options:
–add-host list Add a custom host-to-IP mapping (host:ip)
-a, –attach list Attach to STDIN, STDOUT or STDERR
–blkio-weight uint16 Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
–blkio-weight-device list Block IO weight (relative device weight) (default [])
–cap-add list Add Linux capabilities
–cap-drop list Drop Linux capabilities
–cgroup-parent string Optional parent cgroup for the container
–cidfile string Write the container ID to the file
–cpu-period int Limit CPU CFS (Completely Fair Scheduler) period
–cpu-quota int Limit CPU CFS (Completely Fair Scheduler) quota
–cpu-rt-period int Limit CPU real-time period in microseconds
–cpu-rt-runtime int Limit CPU real-time runtime in microseconds
-c, –cpu-shares int CPU shares (relative weight)
–cpus decimal Number of CPUs
–cpuset-cpus string CPUs in which to allow execution (0-3, 0,1)
–cpuset-mems string MEMs in which to allow execution (0-3, 0,1)
-d, –detach Run container in background and print container ID
–detach-keys string Override the key sequence for detaching a container
–device list Add a host device to the container
–device-cgroup-rule list Add a rule to the cgroup allowed devices list
–device-read-bps list Limit read rate (bytes per second) from a device (default [])
–device-read-iops list Limit read rate (IO per second) from a device (default [])
–device-write-bps list Limit write rate (bytes per second) to a device (default [])
–device-write-iops list Limit write rate (IO per second) to a device (default [])
–disable-content-trust Skip image verification (default true)
–dns list Set custom DNS servers
–dns-option list Set DNS options
–dns-search list Set custom DNS search domains
–entrypoint string Overwrite the default ENTRYPOINT of the image
-e, –env list Set environment variables
–env-file list Read in a file of environment variables
–expose list Expose a port or a range of ports
–group-add list Add additional groups to join
–health-cmd string Command to run to check health
–health-interval duration Time between running the check (ms|s|m|h) (default 0s)
–health-retries int Consecutive failures needed to report unhealthy
–health-start-period duration Start period for the container to initialize before starting health-retries countdown (ms|s|m|h)
(default 0s)
–health-timeout duration Maximum time to allow one check to run (ms|s|m|h) (default 0s)
–help Print usage
-h, –hostname string Container host name
–init Run an init inside the container that forwards signals and reaps processes
-i, –interactive Keep STDIN open even if not attached
–ip string IPv4 address (e.g., 172.30.100.104)
–ip6 string IPv6 address (e.g., 2001:db8::33)
–ipc string IPC namespace to use
–isolation string Container isolation technology
–kernel-memory bytes Kernel memory limit
-l, –label list Set meta data on a container
–label-file list Read in a line delimited file of labels
–link list Add link to another container
–link-local-ip list Container IPv4/IPv6 link-local addresses
–log-driver string Logging driver for the container
–log-opt list Log driver options
–mac-address string Container MAC address (e.g., 92:d0:c6:0a:29:33)
-m, –memory bytes Memory limit
–memory-reservation bytes Memory soft limit
–memory-swap bytes Swap limit equal to memory plus swap: ‘-1’ to enable unlimited swap
–memory-swappiness int Tune container memory swappiness (0 to 100) (default -1)
–mount mount Attach a filesystem mount to the container
–name string Assign a name to the container
–network string Connect a container to a network (default “default”)
–network-alias list Add network-scoped alias for the container
–no-healthcheck Disable any container-specified HEALTHCHECK
–oom-kill-disable Disable OOM Killer
–oom-score-adj int Tune host’s OOM preferences (-1000 to 1000)
–pid string PID namespace to use
–pids-limit int Tune container pids limit (set -1 for unlimited)
–privileged Give extended privileges to this container
-p, –publish list Publish a container’s port(s) to the host
-P, –publish-all Publish all exposed ports to random ports
–read-only Mount the container’s root filesystem as read only
–restart string Restart policy to apply when a container exits (default “no”)
–rm Automatically remove the container when it exits
–runtime string Runtime to use for this container
–security-opt list Security Options
–shm-size bytes Size of /dev/shm
–sig-proxy Proxy received signals to the process (default true)
–stop-signal string Signal to stop a container (default “SIGTERM”)
–stop-timeout int Timeout (in seconds) to stop a container
–storage-opt list Storage driver options for the container
–sysctl map Sysctl options (default map[])
–tmpfs list Mount a tmpfs directory
-t, –tty Allocate a pseudo-TTY
–ulimit ulimit Ulimit options (default [])
-u, –user string Username or UID (format: <name|uid>[:<group|gid>])
–userns string User namespace to use
–uts string UTS namespace to use
-v, –volume list Bind mount a volume
–volume-driver string Optional volume driver for the container
–volumes-from list Mount volumes from the specified container(s)
-w, –workdir string Working directory inside the container
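
As a practical illustration of the options above, a typical invocation combining the most common flags might look like the following sketch; the image name, container name and paths are only placeholders:

# run an nginx container in the background, name it, map a port,
# bind-mount content read-only and restart it automatically
docker run -d \
  --name web01 \
  -p 8080:80 \
  -v /srv/www:/usr/share/nginx/html:ro \
  --restart unless-stopped \
  nginx:latest

# follow its logs, exec into it, then clean up
docker logs -f web01
docker exec -it web01 bash
docker stop web01 && docker rm web01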

Dockerfiles

Each Dockerfile is a script, composed of various commands (instructions) and arguments listed successively to automatically perform actions on a base image in order to create (or form) a new one. They are used for organizing things and greatly help with deployments by simplifying the process start-to-finish.

A Dockerfile begins by defining the base image FROM which the build process starts. The instructions that follow act, one after another, on the result of the previous step and together produce a new image that is then used for creating Docker containers.

Dockerfile Syntax
Dockerfile syntax consists of two kinds of main line blocks: comments and commands + arguments.
# Print "Hello docker!"
RUN echo "Hello docker!"

Command      Description
ADD          Copies a file from the host system onto the container
CMD          The command that runs when the container starts
ENTRYPOINT   Sets the default application that is executed every time a container is created from the image
ENV          Sets an environment variable in the new container
EXPOSE       Opens a port for linked containers
FROM         The base image to use in the build. This is mandatory and must be the first command in the file.
MAINTAINER   An optional value for the maintainer of the script
ONBUILD      A command that is triggered when the image in the Dockerfile is used as a base for another image
RUN          Executes a command and saves the result as a new layer
USER         Sets the default user within the container
VOLUME       Creates a shared volume that can be shared among containers or by the host machine
WORKDIR      Sets the default working directory for the container

ADD
The ADD command gets two arguments: a source and a destination. It basically copies the files from the source on the host into the container’s own filesystem at the set destination.
ADD [source directory or URL] [destination directory]

CMD
The command CMD, similarly to RUN, can be used for executing a specific command. However, unlike RUN it is not executed during build, but when a container is instantiated using the image being built. Therefore, it should be considered as an initial, default command that gets executed (i.e. run) with the creation of containers based on the image.
CMD "echo" "Hello docker!"

ENTRYPOINT
ENTRYPOINT argument sets the concrete default application that is used every time a container is created using the image. For example, if you have installed a specific application inside an image and you will use this image to only run that application, you can state it with ENTRYPOINT and whenever a container is created from that image, your application will be the target.
ENTRYPOINT echo

ENV
The ENV command is used to set the environment variables (one or more). These variables consist of “key = value” pairs which can be accessed within the container by scripts and applications alike. This functionality of docker offers an enormous amount of flexibility for running programs.
ENV SERVER_WORKS 4

EXPOSE
The EXPOSE command is used to associate a specified port to enable networking between the running process inside the container and the outside world (i.e. the host).
EXPOSE 8080

FROM
The FROM directive is probably the most crucial of all. It defines the base image to use to start the build process. It can be any image, including ones you have created previously. If a FROM image is not found on the host, Docker will try to find it (and download it) from the Docker Hub image index. FROM needs to be the first command declared inside a Dockerfile.
FROM ubuntu

MAINTAINER
MAINTAINER is a non-executing command that declares the author, setting the author field of the image. It can appear anywhere in the file, but it is conventionally declared near the top, right after FROM.
MAINTAINER authors_name

RUN
The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on top of the previous one which is committed).
RUN aptitude install -y riak

USER
The USER directive is used to set the UID (or username) which is to run the container based on the image being built.
USER 751

VOLUME
The VOLUME command is used to enable access from your container to a directory on the host machine (i.e. mounting it).
VOLUME [“/my_files”]

WORKDIR
The WORKDIR directive is used to set the working directory in which the commands defined with RUN, CMD and ENTRYPOINT are executed.
WORKDIR ~/
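
Putting the directives together, a minimal illustrative Dockerfile sketch for a small web image could look like this; the package, file names and paths below are assumptions for the example, not part of the descriptions above:

# base image the build starts from
FROM ubuntu

# author of the image
MAINTAINER authors_name

# install the web server while building the image (creates a new layer)
RUN apt-get update && apt-get install -y nginx

# environment variable available inside containers created from the image
ENV SERVER_WORKS 4

# copy site content from the build context into the image
ADD index.html /usr/share/nginx/html/index.html

# port the service listens on, and the default working directory
EXPOSE 80
WORKDIR /usr/share/nginx/html

# default command when a container is started from this image
CMD ["nginx", "-g", "daemon off;"]

Building and running it would be something like "docker build -t my-nginx ." followed by "docker run -d -p 8080:80 my-nginx".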

https://rafishaikblog.wordpress.com/2017/05/04/creating-docker-images/

https://rafishaikblog.wordpress.com/2017/05/05/latest-docker-version-on-centos-7/

https://github.com/kstaken/dockerfile-examples

 

Linux namespaces

A namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. Changes to the global resource are visible to other processes that are members of the namespace, but are invisible to other processes. One use of namespaces is to implement containers.

Namespaces are a feature of the Linux kernel that isolates and virtualizes system resources of a collection of processes. Examples of resources that can be virtualized include process IDs, hostnames, user IDs, network access, interprocess communication, and filesystems.

Linux provides the following namespaces:

Namespace   Constant          Isolates
Cgroup      CLONE_NEWCGROUP   Cgroup root directory
IPC         CLONE_NEWIPC      System V IPC, POSIX message queues
Network     CLONE_NEWNET      Network devices, stacks, ports, etc.
Mount       CLONE_NEWNS       Mount points
PID         CLONE_NEWPID      Process IDs
User        CLONE_NEWUSER     User and group IDs
UTS         CLONE_NEWUTS      Hostname and NIS domain name

As of kernel version 4.10, there are 7 kinds of namespaces. Namespace functionality is the same across all kinds: each process is associated with a namespace and can only see or use the resources associated with that namespace, and descendant namespaces where applicable. This way each process (or group thereof) can have a unique view on the resource. Which resource is isolated depends on the kind of namespace that has been created for a given process group.
##############################
Mount (mnt)
Mount namespaces control mount points. Upon creation the mounts from the current mount namespace are copied to the new namespace, but mount points created afterwards do not propagate between namespaces (using shared subtrees, it is possible to propagate mount points between namespaces).

The clone flag CLONE_NEWNS – short for “NEW NameSpace” – was used because the mount namespace kind was the first to be introduced. At the time nobody thought of other namespaces but the name has stuck for backwards compatibility.
##############################
Process ID (pid)
The PID namespace provides processes with an independent set of process IDs (PIDs) from other namespaces. PID namespaces are nested, meaning that when a new process is created it will have a PID for each namespace from its current namespace up to the initial PID namespace. Hence the initial PID namespace is able to see all processes, albeit with different PIDs than those seen from within other namespaces.

The first process created in a PID namespace is assigned the process id number 1 and receives most of the same special treatment as the normal init process, most notably that orphaned processes within the namespace are attached to it. This also means that the termination of this PID 1 process will immediately terminate all processes in its PID namespace and any descendants.
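
A quick way to see this from a shell (a sketch that assumes util-linux's unshare is installed and the commands are run as root):

# start a shell in new PID and mount namespaces, remounting /proc
# so that ps only shows processes of the new namespace
unshare --fork --pid --mount-proc bash

ps aux     # shows only this bash (PID 1) and ps itself
echo $$    # prints 1
exit       # terminating PID 1 tears down the whole namespace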
##############################
Network (net)
Network namespaces virtualize the network stack. On creation a network namespace contains only a loopback interface.
Each network interface (physical or virtual) is present in exactly 1 namespace and can be moved between namespaces.
Each namespace will have a private set of IP addresses, its own routing table, socket listing, connection tracking table, firewall, and other network-related resources.
On its destruction, a network namespace will destroy any virtual interfaces within it and move any physical interfaces back to the initial network namespace.
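
The iproute2 "ip netns" commands make this easy to observe (a sketch, run as root; the namespace name is arbitrary):

ip netns add demo              # create a named network namespace
ip netns exec demo ip link     # only the loopback interface exists, and it is down
ip netns exec demo ip link set lo up
ip netns list
ip netns delete demo           # virtual interfaces inside it are destroyed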
##############################
Interprocess Communication (ipc)

IPC namespaces isolate processes from SysV style inter-process communication. This prevents processes in different IPC namespaces from using, for example, the SHM family of functions to establish a range of shared memory between the two processes. Instead each process will be able to use the same identifiers for a shared memory region and produce two such distinct regions.
##############################
UTS
UTS namespaces allow a single system to appear to have different host and domain names to different processes.
##############################
User ID (user)
User namespaces are a feature to provide both privilege isolation and user identification segregation across multiple sets of processes. With administrative assistance it is possible to build a container with seeming administrative rights without actually giving elevated privileges to user processes. Like the PID namespace, user namespaces are nested and each new user namespace is considered to be a child of the user namespace that created it.

A user namespace contains a mapping table converting user IDs from the container’s point of view to the system’s point of view. This allows, for example, the root user to have user id 0 in the container but is actually treated as user id 1,400,000 by the system for ownership checks. A similar table is used for group id mappings and ownership checks.

To facilitate privilege isolation of administrative actions, each namespace type is considered owned by a user namespace based on the active user namespace at the moment of creation. A user with administrative privileges in the appropriate user namespace will be allowed to perform administrative actions within that other namespace type. For example, if a process has administrative permission to change the IP address of a network interface, it may do so as long as its own user namespace is the same as (or ancestor of) the user namespace that owns the network namespace. Hence the initial user namespace has administrative control over all namespace types in the system.
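
Since Linux 3.8 an unprivileged user can try this directly. A sketch, assuming a reasonably recent util-linux and that user namespaces are enabled on the system:

# as a normal user, create a user namespace and map the current user to root inside it
unshare --user --map-root-user bash

id                        # reports uid=0(root) inside the namespace
cat /proc/self/uid_map    # shows the mapping of namespace uid 0 to your real uid
touch /etc/test           # typically still denied: "root" here has no privilege over host-owned files
exit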
##############################
Control Group (cgroup)
To prevent leaking the control group to which a process belongs, a new namespace type was suggested and created to hide the actual control group a process is a member of. A process in such a namespace, checking which control group any process is part of, would see a path that is actually relative to the control group set at creation time, hiding its true control group position and identity. The code creating this namespace was merged into kernel version 4.6.
##############################

The kernel assigns each process a symbolic link per namespace kind in /proc/<pid>/ns/. The inode number pointed to by this symlink is the same for each process in this namespace. This uniquely identifies each namespace by the inode number pointed to by one of its symlinks.

Reading the symlink via readlink returns a string containing the namespace kind name and the inode number of the namespace.

Syscalls
Three syscalls can directly manipulate namespaces:

clone: its flags specify which new namespaces the newly created child process should be placed in.
unshare: its flags specify which new namespaces the calling process should be moved into.
setns: enters the namespace specified by a file descriptor.

Destruction
If a namespace is no longer referenced, it will be deleted; how the contained resources are handled depends on the namespace kind. Namespaces can be referenced in three ways:

a process belonging to the namespace
an open file descriptor to the namespace’s file (/proc/<pid>/ns/<ns-kind>)
a bind mount of the namespace’s file (/proc/<pid>/ns/<ns-kind>)

Namespaces API includes the following system calls:
clone(2)
The clone(2) system call creates a new process. If the flags argument of the call specifies one or more of the CLONE_NEW* flags listed below, then new namespaces are created for each flag, and the child process is made a member of those namespaces. (This system call also implements a number of features unrelated to namespaces.)

setns(2)
The setns(2) system call allows the calling process to join an existing namespace. The namespace to join is specified via a file descriptor that refers to one of the /proc/[pid]/ns files described below.

unshare(2)
The unshare(2) system call moves the calling process to a new namespace. If the flags argument of the call specifies one or more of the CLONE_NEW* flags listed below, then new namespaces are created for each flag, and the calling process is made a member of those namespaces. (This system call also implements a number of features unrelated to namespaces.)

Creation of new namespaces using clone(2) and unshare(2) in most cases requires the CAP_SYS_ADMIN capability. User namespaces are the exception: since Linux 3.8, no privilege is required to create a user namespace.
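
The util-linux wrappers around these calls make them easy to try from a shell. A small sketch exercising unshare(2) and setns(2), run as root; the hostname and <pid> below are placeholders:

# unshare: new UTS namespace, the hostname change stays inside it
unshare --uts bash
hostname demo-container
hostname        # reports demo-container inside the namespace
exit
hostname        # unchanged on the host

# setns: join the namespaces of an existing process (e.g. a container's init PID)
nsenter --target <pid> --uts --net --mount hostname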

Each process has a /proc/[pid]/ns/ subdirectory containing one entry for each namespace that supports being manipulated by setns(2):

root@NOC-RAFI:~# ls -l /proc/$$/ns
total 0
lrwxrwxrwx 1 root root 0 Aug 17 15:37 cgroup -> cgroup:[4026531835]
lrwxrwxrwx 1 root root 0 Aug 17 15:37 ipc -> ipc:[4026531839]
lrwxrwxrwx 1 root root 0 Aug 17 15:37 mnt -> mnt:[4026531840]
lrwxrwxrwx 1 root root 0 Aug 17 15:37 net -> net:[4026531957]
lrwxrwxrwx 1 root root 0 Aug 17 15:37 pid -> pid:[4026531836]
lrwxrwxrwx 1 root root 0 Aug 17 15:37 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Aug 17 15:37 uts -> uts:[4026531838]

root@NOC-RAFI:~# ls -l /proc/$$/ns/cgroup
lrwxrwxrwx 1 root root 0 Aug 17 15:46 /proc/23633/ns/cgroup -> cgroup:[4026531835]

root@NOC-RAFI:~# readlink /proc/$$/ns/uts
uts:[4026531838]

Creating a new cgroup namespace. First, (as superuser) we create a child cgroup in the freezer hierarchy, and put the shell into that cgroup:

[root@localhost ~]# ls -l /sys/fs/cgroup/freezer/
total 0
-rw-r--r-- 1 root root 0 Aug 17 16:01 cgroup.clone_children
--w--w--w- 1 root root 0 Aug 17 16:01 cgroup.event_control
-rw-r--r-- 1 root root 0 Aug 17 16:01 cgroup.procs
-r--r--r-- 1 root root 0 Aug 17 16:01 cgroup.sane_behavior
-rw-r--r-- 1 root root 0 Aug 17 16:01 notify_on_release
-rw-r--r-- 1 root root 0 Aug 17 16:01 release_agent
-rw-r–r– 1 root root 0 Aug 17 16:01 tasks

[root@localhost ~]# mkdir /sys/fs/cgroup/freezer/rafi

[root@localhost ~]# ls -l /sys/fs/cgroup/freezer/rafi/
total 0
-rw-r--r-- 1 root root 0 Aug 17 16:27 cgroup.clone_children
--w--w--w- 1 root root 0 Aug 17 16:27 cgroup.event_control
-rw-r--r-- 1 root root 0 Aug 17 16:28 cgroup.procs
-r--r--r-- 1 root root 0 Aug 17 16:27 freezer.parent_freezing
-r--r--r-- 1 root root 0 Aug 17 16:27 freezer.self_freezing
-rw-r--r-- 1 root root 0 Aug 17 16:27 freezer.state
-rw-r--r-- 1 root root 0 Aug 17 16:27 notify_on_release
-rw-r--r-- 1 root root 0 Aug 17 16:27 tasks

[root@localhost ~]# echo $$
2065

[root@localhost ~]# sh -c 'echo 2065 > /sys/fs/cgroup/freezer/rafi/cgroup.procs'

[root@localhost ~]# cat /proc/self/cgroup |grep freezer
4:freezer:/rafi

[root@localhost ~]# unshare -m bash

[root@localhost ~]# cat /proc/self/cgroup |grep freezer
4:freezer:/rafi

[root@localhost ~]# cat /proc/1/cgroup |grep freezer
4:freezer:/

##############################

Mount namespaces

were the first namespace type added to Linux, appearing in 2002 in Linux 2.4.19. They isolate the list of mount points seen by the processes in a namespace. Or, to put things another way, each mount namespace has its own list of mount points, meaning that processes in different namespaces see and are able to manipulate different views of the single directory hierarchy.

A mount namespace is the set of filesystem mounts that are visible to a process. From clone man page : Every process lives in a mount namespace. The namespace of a process is the data (the set of mounts) describing the file hierarchy as seen by that process.

Docker uses these primarily to make the container look like it has its entire own filesystem namespace. If you’ve ever used a chroot jail, this is its tougher cousin. It looks a lot like a chroot jail but goes all the way down to the kernel, so that even the mount and unmount system calls are namespaced. If you use docker exec or nsenter to get into a container, you’ll see a filesystem rooted on “/”, but we know this isn’t the actual root partition of the system. It’s the mount namespaces that make that possible.

Mount namespaces provide isolation of the list of mount points seen by the processes in each namespace instance. Thus, the processes in each of the mount namespace instances will see distinct single-directory hierarchies.

When the system is first booted, there is a single mount namespace, the so-called “initial namespace”. New mount namespaces are created by using the CLONE_NEWNS flag with either the clone() system call (to create a new child process in the new namespace) or the unshare() system call (to move the caller into the new namespace). When a new mount namespace is created, it receives a copy of the mount point list replicated from the namespace of the caller of clone() or unshare().

Following the clone() or unshare() call, mount points can be independently added and removed in each namespace (via mount() and umount()). Changes to the mount point list are (by default) visible only to processes in the mount namespace where the process resides; the changes are not visible in other mount namespaces.

Mount namespaces serve a variety of purposes. For example, they can be used to provide per-user views of the filesystem. Other uses include mounting a /proc filesystem for a new PID namespace without causing side effects for other processes, and chroot()-style isolation of a process to a portion of the single directory hierarchy. In some use cases, mount namespaces are combined with bind mounts.
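
A minimal sketch of this isolation using unshare (as root; the tmpfs mount on /mnt is an arbitrary example, and on systems where / is a shared mount it is marked private first so nothing propagates back):

unshare --mount bash             # new mount namespace, starts with a copy of the parent's mounts
mount --make-rprivate /          # stop propagation back to the host where / is shared
mount -t tmpfs tmpfs /mnt        # visible only inside this namespace
grep ' /mnt ' /proc/self/mounts  # present here
exit
grep ' /mnt ' /proc/mounts       # not present in the original namespace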

 

Environments usage in puppet

[root@server ~]# puppet module list
/etc/puppetlabs/code/environments/production/modules
├── bashtoni-timezone (v1.0.0)
├── fschaer-omd (v1.0.3)
├── puppetlabs-apache (v1.8.1)
├── puppetlabs-apt (v2.4.0)
├── puppetlabs-concat (v2.2.1)
├── puppetlabs-firewall (v1.9.0)
├── puppetlabs-inifile (v1.6.0)
├── puppetlabs-ntp (v6.2.0)
├── puppetlabs-postgresql (v4.9.0)
├── puppetlabs-puppetdb (v5.1.2)
├── puppetlabs-stdlib (v4.17.1)
├── puppetlabs-xinetd (v2.0.0)
├── saz-ssh (v3.0.1)
├── saz-sudo (v4.2.0)
└── users (???)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules (no modules installed)

[root@server ~]# mkdir /etc/puppetlabs/code/environments/testing/

[root@server ~]# puppet module install stahnma-epel --version 1.2.2 --environment testing
Notice: Preparing to install into /etc/puppetlabs/code/environments/testing/modules …
Notice: Created target directory /etc/puppetlabs/code/environments/testing/modules
Notice: Downloading from https://forgeapi.puppet.com
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/testing/modules
└─┬ stahnma-epel (v1.2.2)
└── puppetlabs-stdlib (v4.18.0)

[root@server ~]# puppet module list --environment testing
/etc/puppetlabs/code/environments/testing/modules
├── puppetlabs-stdlib (v4.18.0)
└── stahnma-epel (v1.2.2)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules (no modules installed)

[root@server ~]# puppet module uninstall stahnma-epel --environment testing
Notice: Preparing to uninstall 'stahnma-epel' ...
Removed ‘stahnma-epel’ (v1.2.2) from /etc/puppetlabs/code/environments/testing/modules

[root@server ~]# puppet module install puppetlabs-apache --version 1.6.0 --environment testing
Notice: Preparing to install into /etc/puppetlabs/code/environments/testing/modules …
Notice: Downloading from https://forgeapi.puppet.com
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/testing/modules
└─┬ puppetlabs-apache (v1.6.0)
├── puppetlabs-concat (v1.2.5)
└── puppetlabs-stdlib (v4.18.0)

[root@server ~]# mkdir /etc/puppetlabs/code/environments/testing/manifests
[root@server ~]# touch /etc/puppetlabs/code/environments/testing/manifests/site.pp

puppet agent --environment=testing

[root@server ~]# puppet module install puppetlabs-firewall --version 1.9.0 --environment testing

[root@server ~]# mkdir -p /etc/puppetlabs/code/environments/testing/hieradata/roles

[root@server ~]# touch /etc/puppetlabs/code/environments/testing/hieradata/common.yaml

[root@server ~]# mkdir -p /etc/puppetlabs/code/environments/testing/modules/{role,profile,my_fw,users}/manifests

[root@server ~]# touch /etc/puppetlabs/code/environments/testing/modules/{role,profile}/manifests/init.pp

[root@server ~]# touch /etc/puppetlabs/code/environments/testing/modules/my_fw/manifests/{pre,post,init,http}.pp

[root@server ~]# echo "alias prod='cd /etc/puppetlabs/code/environments/production'" >> ~/.bashrc

[root@server ~]# echo "alias test='cd /etc/puppetlabs/code/environments/testing'" >> ~/.bashrc

[root@server ~]# test
[root@server testing]# prod
[root@server production]#

[root@server ~]# puppet module list --environment testing
/etc/puppetlabs/code/environments/testing/modules
├── my_fw (???)
├── profile (???)
├── puppetlabs-apache (v1.6.0)
├── puppetlabs-concat (v1.2.5)
├── puppetlabs-firewall (v1.9.0)
├── puppetlabs-stdlib (v4.18.0)
├── role (???)
└── users (???)

[root@server ~]# test
[root@server testing]# vim manifests/site.pp
node default {
  package { 'nmap':
    ensure => present,
  }
}
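
The role and profile module skeletons created above are still empty. One minimal, illustrative way to fill them (the class names and the use of the apache module installed earlier are assumptions, not taken from this setup) could be:

# modules/profile/manifests/apache.pp  (hypothetical)
class profile::apache {
  class { 'apache':
    default_vhost => true,
  }
}

# modules/role/manifests/webserver.pp  (hypothetical)
class role::webserver {
  include profile::apache
}

A node definition in manifests/site.pp would then simply contain "include role::webserver".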

 

agent side

vim /etc/puppetlabs/puppet/puppet.conf

server = server.example.com
environment = testing

[root@spire ~]# puppet agent -t
Info: Using configured environment 'testing'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for spire.example.com
Info: Applying configuration version ‘1503010899’
Notice: /Stage[main]/Main/Node[default]/Package[nmap]/ensure: created
Notice: Applied catalog in 2.70 seconds

 

 

nagios plugins functionality

[root@localhost ~]# locate nagios/plugins |head -n1
/usr/lib64/nagios/plugins

check_swap 

[root@localhost ~]# /usr/lib64/nagios/plugins/check_swap --help
check_swap v2.2.1 (nagios-plugins 2.2.1)
Copyright (c) 2000-2014 Nagios Plugin Development Team
<devel@nagios-plugins.org>

Check swap space on local machine.

Usage:
check_swap [-av] -w <percent_free>% -c <percent_free>%
 -w <bytes_free> -c <bytes_free>

Options:
-h, –help
Print detailed help screen
-V, –version
Print version information
–extra-opts=[section][@file]
Read options from an ini file. See
https://www.nagios-plugins.org/doc/extra-opts.html
for usage and examples.
-w, –warning=INTEGER
Exit with WARNING status if less than INTEGER bytes of swap space are free
-w, –warning=PERCENT%%
Exit with WARNING status if less than PERCENT of swap space is free
-c, –critical=INTEGER
Exit with CRITICAL status if less than INTEGER bytes of swap space are free
-c, –critical=PERCENT%%
Exit with CRITICAL status if less than PERCENT of swap space is free
-a, –allswaps
Conduct comparisons for all swap partitions, one by one
-v, –verbose
Show details for command-line debugging (Nagios may truncate output)

Notes:
Both INTEGER and PERCENT thresholds can be specified, they are all checked.
On AIX, if -a is specified, uses lsps -a, otherwise uses lsps -s.

Send email to help@nagios-plugins.org if you have questions regarding use
of this software. To submit patches or suggest improvements, send email to
devel@nagios-plugins.org

[root@localhost ~]# /usr/lib64/nagios/plugins/check_swap -w 50
SWAP OK – 100% free (819 MB out of 819 MB) |swap=819MB;0;0;0;819

[root@localhost ~]# echo $?
0

[root@localhost ~]# /usr/lib64/nagios/plugins/check_swap -c 1300
check_swap: Warning free space should be more than critical free space
Usage:
check_swap [-av] -w % -c %
-w -c

[root@localhost ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 839676 620 -1

[root@localhost ~]# swapoff /dev/dm-1

[root@localhost ~]# swapon -s

[root@localhost ~]# /usr/lib64/nagios/plugins/check_swap -w 10
SWAP WARNING – 0% free (0 MB out of 0 MB) – Swap is either disabled, not present, or of zero size. |swap=0MB;0;0;0;0

[root@localhost ~]# echo $?
1

[root@localhost ~]# /usr/lib64/nagios/plugins/check_swap -w 1200 -c 1000
SWAP CRITICAL – 0% free (0 MB out of 0 MB) – Swap is either disabled, not present, or of zero size. |swap=0MB;0;0;0;0

check_ping

[root@localhost ~]# /usr/lib64/nagios/plugins/check_ping -h
check_ping v2.2.1 (nagios-plugins 2.2.1)
Copyright (c) 1999 Ethan Galstad <nagios@nagios.org>
Copyright (c) 2000-2014 Nagios Plugin Development Team
<devel@nagios-plugins.org>

Use ping to check connection statistics for a remote host.

Usage:
check_ping -H <host_address> -w <wrta>,<wpl>% -c <crta>,<cpl>%
 [-p packets] [-t timeout] [-4|-6]

Options:
-h, –help
Print detailed help screen
-V, –version
Print version information
–extra-opts=[section][@file]
Read options from an ini file. See
https://www.nagios-plugins.org/doc/extra-opts.html
for usage and examples.
-4, –use-ipv4
Use IPv4 connection
-6, –use-ipv6
Use IPv6 connection
-H, –hostname=HOST
host to ping
-w, –warning=THRESHOLD
warning threshold pair
-c, –critical=THRESHOLD
critical threshold pair
-p, –packets=INTEGER
number of ICMP ECHO packets to send (Default: 5)
-s, –show-resolution
show name resolution in the plugin output (DNS & IP)
-L, –link
show HTML in the plugin output (obsoleted by urlize)
-t, –timeout=INTEGER:
Seconds before connection times out (default: 10)
Optional “:” can be a state integer (0,1,2,3) or a state STRING

THRESHOLD is <rta>,<pl>% where <rta> is the round trip average travel
time (ms) which triggers a WARNING or CRITICAL state, and <pl> is the
percentage of packet loss to trigger an alarm state.

This plugin uses the ping command to probe the specified host for packet loss
(percentage) and round trip average (milliseconds). It can produce HTML output
linking to a traceroute CGI contributed by Ian Cass. The CGI can be found in
the contrib area of the downloads section at http://www.nagios.org/

Send email to help@nagios-plugins.org if you have questions regarding use
of this software. To submit patches or suggest improvements, send email to
devel@nagios-plugins.org

[root@localhost ~]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=63 time=42.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=63 time=41.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=63 time=41.3 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=63 time=39.1 ms
^C
— 8.8.8.8 ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 39.186/41.107/42.282/1.185 ms

Check with a WARNING threshold of 5000 ms round-trip average or 5% packet loss, and a CRITICAL threshold of 10000 ms round-trip average or 10% packet loss:
[root@localhost ~]# /usr/lib64/nagios/plugins/check_ping 8.8.8.8 -w 5000.0,5% -c 10000.0,10%
PING OK – Packet loss = 0%, RTA = 53.40 ms|rta=53.398998ms;5000.000000;10000.000000;0.000000 pl=0%;5;10;0

[root@localhost ~]# /usr/lib64/nagios/plugins/check_ping 192.168.122.133 -w 5000.0,5% -c 10000.0,10%
CRITICAL – Host Unreachable (192.168.122.133)

[root@localhost ~]# /usr/lib64/nagios/plugins/check_ping 127.0.0.1 -w 5000.0,5% -c 10000.0,10%
PING OK – Packet loss = 0%, RTA = 0.09 ms|rta=0.094000ms;5000.000000;10000.000000;0.000000 pl=0%;5;10;0

[root@localhost ~]# /usr/lib64/nagios/plugins/check_ping 8.8.8.8 -w 5.0,5% -c 20.0,10%
PING CRITICAL – Packet loss = 0%, RTA = 48.54 ms|rta=48.536999ms;5.000000;20.000000;0.000000 pl=0%;5;10;0

[root@localhost ~]# /usr/lib64/nagios/plugins/check_ping 8.8.8.8 -w 5.0,5% -c 200.0,10%
PING WARNING – Packet loss = 0%, RTA = 48.29 ms|rta=48.291000ms;5.000000;200.000000;0.000000 pl=0%;5;10;0

check_http

[root@localhost ~]# /usr/lib64/nagios/plugins/check_http -h
check_http v2.2.1 (nagios-plugins 2.2.1)
Copyright (c) 1999 Ethan Galstad <nagios@nagios.org>
Copyright (c) 1999-2014 Nagios Plugin Development Team
<devel@nagios-plugins.org>

This plugin tests the HTTP service on the specified host. It can test
normal (http) and secure (https) servers, follow redirects, search for
strings and regular expressions, check connection times, and report on
certificate expiration times.

Usage:
check_http -H <vhost> | -I <IP-address> [-u <uri>] [-p <port>]
[-J <client certificate file>] [-K <private key>]
[-w <warn time>] [-c <critical time>] [-t <timeout>] [-L] [-E] [-a auth]
[-b proxy_auth] [-f <ok|warning|critcal|follow|sticky|stickyport>]
[-e <expect>] [-d string] [-s string] [-l] [-r <regex> | -R <case-insensitive regex>]
[-P string] [-m <min_pg_size>:<max_pg_size>] [-4|-6] [-N] [-M <age>]
[-A string] [-k string] [-S <version>] [--sni] [-C <warn_age>[,<crit_age>]]
[-T <content-type>] [-j method]
NOTE: One or both of -H and -I must be specified

Options:
-h, –help
Print detailed help screen
-V, –version
Print version information
–extra-opts=[section][@file]
Read options from an ini file. See
https://www.nagios-plugins.org/doc/extra-opts.html
for usage and examples.
-H, –hostname=ADDRESS
Host name argument for servers using host headers (virtual host)
Append a port to include it in the header (eg: example.com:5000)
-I, –IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
-p, –port=INTEGER
Port number (default: 80)
-4, –use-ipv4
Use IPv4 connection
-6, –use-ipv6
Use IPv6 connection
-S, –ssl=VERSION[+]
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents
auto-negotiation (2 = SSLv2, 3 = SSLv3, 1 = TLSv1, 1.1 = TLSv1.1,
1.2 = TLSv1.2). With a ‘+’ suffix, newer versions are also accepted.
–sni
Enable SSL/TLS hostname extension support (SNI)
-C, –certificate=INTEGER[,INTEGER]
Minimum number of days a certificate has to be valid. Port defaults to 443
(when this option is used the URL is not checked.)
-J, –client-cert=FILE
Name of file that contains the client certificate (PEM format)
to be used in establishing the SSL session
-K, –private-key=FILE
Name of file containing the private key (PEM format)
matching the client certificate
-e, –expect=STRING
Comma-delimited list of strings, at least one of them is expected in
the first (status) line of the server response (default: HTTP/1.)
If specified skips all other status line logic (ex: 3xx, 4xx, 5xx processing)
-d, –header-string=STRING
String to expect in the response headers
-s, –string=STRING
String to expect in the content
-u, –uri=PATH
URI to GET or POST (default: /)
–url=PATH
(deprecated) URL to GET or POST (default: /)
-P, –post=STRING
URL encoded http POST data
-j, –method=STRING (for example: HEAD, OPTIONS, TRACE, PUT, DELETE, CONNECT)
Set HTTP method.
-N, –no-body
Don’t wait for document body: stop reading after headers.
(Note that this still does an HTTP GET or POST, not a HEAD.)
-M, –max-age=SECONDS
Warn if document is more than SECONDS old. the number can also be of
the form “10m” for minutes, “10h” for hours, or “10d” for days.
-T, –content-type=STRING
specify Content-Type header media type when POSTing

-l, –linespan
Allow regex to span newlines (must precede -r or -R)
-r, –regex, –ereg=STRING
Search page for regex STRING
-R, –eregi=STRING
Search page for case-insensitive regex STRING
–invert-regex
Return CRITICAL if found, OK if not

-a, –authorization=AUTH_PAIR
Username:password on sites with basic authentication
-b, –proxy-authorization=AUTH_PAIR
Username:password on proxy-servers with basic authentication
-A, –useragent=STRING
String to be sent in http header as “User Agent”
-k, –header=STRING
Any other tags to be sent in http header. Use multiple times for additional headers
-E, –extended-perfdata
Print additional performance data
-L, –link
Wrap output in HTML link (obsoleted by urlize)
-f, –onredirect=<ok|warning|critical|follow|sticky|stickyport>
How to handle redirected pages. sticky is like follow but stick to the
specified IP address. stickyport also ensures port stays the same.
-m, –pagesize=INTEGER<:INTEGER>
Minimum page size required (bytes) : Maximum page size required (bytes)
-w, –warning=DOUBLE
Response time to result in warning status (seconds)
-c, –critical=DOUBLE
Response time to result in critical status (seconds)
-t, –timeout=INTEGER:
Seconds before connection times out (default: 10)
Optional “:” can be a state integer (0,1,2,3) or a state STRING
-v, –verbose
Show details for command-line debugging (Nagios may truncate output)

Notes:
This plugin will attempt to open an HTTP connection with the host.
Successful connects return STATE_OK, refusals and timeouts return STATE_CRITICAL
other errors return STATE_UNKNOWN. Successful connects, but incorrect reponse
messages from the host result in STATE_WARNING return values. If you are
checking a virtual server that uses ‘host headers’ you must supply the FQDN
(fully qualified domain name) as the [host_name] argument.
You may also need to give a FQDN or IP address using -I (or –IP-Address).

This plugin can also check whether an SSL enabled web server is able to
serve content (optionally within a specified time) or whether the X509
certificate is still valid for the specified number of days.

Please note that this plugin does not check if the presented server
certificate matches the hostname of the server, or if the certificate
has a valid chain of trust to one of the locally installed CAs.

Examples:
CHECK CONTENT: check_http -w 5 -c 10 –ssl -H http://www.verisign.com

When the ‘www.verisign.com’ server returns its content within 5 seconds,
a STATE_OK will be returned. When the server returns its content but exceeds
the 5-second threshold, a STATE_WARNING will be returned. When an error occurs,
a STATE_CRITICAL will be returned.

CHECK CERTIFICATE: check_http -H http://www.verisign.com -C 14

When the certificate of ‘www.verisign.com’ is valid for more than 14 days,
a STATE_OK is returned. When the certificate is still valid, but for less than
14 days, a STATE_WARNING is returned. A STATE_CRITICAL will be returned when
the certificate is expired.

CHECK CERTIFICATE: check_http -H http://www.verisign.com -C 30,14

When the certificate of ‘www.verisign.com’ is valid for more than 30 days,
a STATE_OK is returned. When the certificate is still valid, but for less than
30 days, but more than 14 days, a STATE_WARNING is returned.
A STATE_CRITICAL will be returned when certificate expires in less than 14 days
CHECK SSL WEBSERVER CONTENT VIA PROXY USING HTTP 1.1 CONNECT:

check_http -I 192.168.100.35 -p 80 -u https://www.verisign.com/ -S -j CONNECT -H http://www.verisign.com
all these options are needed: -I -p -u -S(sl) -j CONNECT -H
a STATE_OK will be returned. When the server returns its content but exceeds
the 5-second threshold, a STATE_WARNING will be returned. When an error occurs,
a STATE_CRITICAL will be returned.

Send email to help@nagios-plugins.org if you have questions regarding use
of this software. To submit patches or suggest improvements, send email to
devel@nagios-plugins.org

[root@localhost ~]# /usr/lib64/nagios/plugins/check_http -I 127.0.0.1

HTTP WARNING: HTTP/1.1 403 Forbidden – 5179 bytes in 0.001 second response time |time=0.001248s;;;0.000000 size=5179B;;;0

[root@localhost ~]# /usr/lib64/nagios/plugins/check_http -I 8.8.8.8
CRITICAL – Socket timeout

[root@localhost ~]# /usr/lib64/nagios/plugins/check_http -H google.com
HTTP OK: HTTP/1.1 302 Found – 526 bytes in 0.434 second response time |time=0.433613s;;;0.000000 size=526B;;;0

[root@localhost ~]# dig +short google.com
216.58.196.110

[root@localhost ~]# /usr/lib64/nagios/plugins/check_http -I 216.58.196.110
HTTP OK: HTTP/1.0 302 Found – 507 bytes in 0.108 second response time |time=0.108401s;;;0.000000 size=507B;;;0

[root@localhost ~]# /usr/lib64/nagios/plugins/check_http -I localhost
HTTP WARNING: HTTP/1.1 403 Forbidden – 5179 bytes in 0.007 second response time |time=0.006610s;;;0.000000 size=5179B;;;0
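
To turn one of these checks into an actual Nagios service, the plugin has to be wired into the Nagios configuration. A minimal, illustrative sketch, assuming a standard Nagios Core layout, a "generic-service" template and a host object named "webserver01" already defined (all of these are assumptions):

# commands.cfg (hypothetical)
define command {
    command_name    check_http_site
    command_line    /usr/lib64/nagios/plugins/check_http -I $HOSTADDRESS$ -w $ARG1$ -c $ARG2$
}

# services.cfg (hypothetical)
define service {
    use                     generic-service
    host_name               webserver01
    service_description     HTTP
    check_command           check_http_site!5!10
}

After reloading Nagios, the check runs on schedule and the exit codes seen above (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) drive the service state.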

 

How Containers Work

What is a Linux container?
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. LXC combines the kernel’s cgroups and support for isolated namespaces to provide an isolated environment for applications.

What is a chroot?
Chroot is an operation that changes the apparent root directory for the currently running process and its children. A program that is run in such a modified environment cannot access files and commands outside that directory tree. This modified environment is called a chroot jail.

How does a docker container work?
Docker is basically a container engine which uses Linux kernel features like namespaces and control groups to create containers on top of an operating system, and automates application deployment on the container. It provides a lightweight environment to run your application code.

What is Docker client?
Docker Engine is a client-server application with these major components: a server, which is a type of long-running program called a daemon process (the dockerd command); a REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do; and a command line interface (CLI) client (the docker command), which uses the REST API to control or interact with the daemon.

What is a container docker?
Docker is an open source software development platform. Its main benefit is to package applications in “containers,” allowing them to be portable among any system running the Linux operating system (OS).

What is the LXD?
LXD is a container “hypervisor” and a new user experience for LXC. Specifically, it’s made of three components: a system-wide daemon (lxd), a command line client (lxc), and an OpenStack Nova plugin (nova-compute-lxd).

What is Docker machine?
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like AWS or Digital Ocean.

 

How to check yaml syntax

[mshaik@server ~]$ sudo yum install gem

[mshaik@server ~]$ sudo gem install yaml-lint

[mshaik@server ~]$ yaml-lint /etc/puppetlabs/code/environments/production/hieradata/common.yaml
Checking the content of [“/etc/puppetlabs/code/environments/production/hieradata/common.yaml”]
File : /etc/puppetlabs/code/environments/production/hieradata/common.yaml, Syntax OK
Done.

 

Apache deep dive

Apache Hypertext Transfer Protocol Server (httpd)

Apache:
Open source, cross-platform web server software
Serves 57% of all active websites
The name comes from respect for the Apache Native American tribe (and is also read as “a patchy server”)

http://httpd.apache.org/

https://en.wikipedia.org/wiki/List_of_Apache_modules

https://httpd.apache.org/docs/2.4/mod/

Background

Most of the functionality of the Apache web server is provided by modules. A module can be either:

static, meaning that it is built into the Apache executable at compile time (and is therefore always available).
shared, meaning that it is loaded at run time by a LoadModule directive within the Apache configuration file.

It is thus possible for a shared module to be installed on a machine, but not loaded by Apache and therefore not usable. This is one of the more common reasons for Apache failing to start or failing to behave as expected.

In short, the modules extend the Apache server. An administrator can easily configure Apache by adding and removing modules according to requirements. Apache comes with a set of pre-installed modules.

[root@ansible ~]# yum install httpd httpd-manual 

[root@ansible ~]# service httpd restart

http://192.168.183.128/manual/

[root@ansible ~]# rpm -qa httpd
httpd-2.4.6-45.el7.centos.4.x86_64

[root@ansible ~]# grep LoadModule /etc/httpd/conf/httpd.conf
# have to place corresponding `LoadModule’ lines at this location so the
# LoadModule foo_module modules/mod_foo.so

Commonly used Apache modules

The below list shows few commonly used Apache modules.
1) mod_security
2) mod_rewrite
3) mod_deflate
4) mod_cache
5) mod_proxy
6) mod_ssl

Find list of compiled modules in Apache
[root@ansible ~]# httpd -l
Compiled in modules:
core.c
mod_so.c
http_core.c

Find list of loaded modules in Apache
[root@ansible ~]# httpd -M
Loaded Modules:
core_module (static)
so_module (static)
http_module (static)
access_compat_module (shared)
actions_module (shared)
alias_module (shared)
allowmethods_module (shared)
auth_basic_module (shared)
auth_digest_module (shared)
authn_anon_module (shared)
authn_core_module (shared)
authn_dbd_module (shared)
authn_dbm_module (shared)
authn_file_module (shared)
authn_socache_module (shared)
authz_core_module (shared)
authz_dbd_module (shared)
authz_dbm_module (shared)
authz_groupfile_module (shared)
authz_host_module (shared)
authz_owner_module (shared)
authz_user_module (shared)
autoindex_module (shared)
cache_module (shared)
cache_disk_module (shared)
data_module (shared)
dbd_module (shared)
deflate_module (shared)
dir_module (shared)
dumpio_module (shared)
echo_module (shared)
env_module (shared)
expires_module (shared)
ext_filter_module (shared)
filter_module (shared)
headers_module (shared)
include_module (shared)
info_module (shared)
log_config_module (shared)
logio_module (shared)
mime_magic_module (shared)
mime_module (shared)
negotiation_module (shared)
remoteip_module (shared)
reqtimeout_module (shared)
rewrite_module (shared)
setenvif_module (shared)
slotmem_plain_module (shared)
slotmem_shm_module (shared)
socache_dbm_module (shared)
socache_memcache_module (shared)
socache_shmcb_module (shared)
status_module (shared)
substitute_module (shared)
suexec_module (shared)
unique_id_module (shared)
unixd_module (shared)
userdir_module (shared)
version_module (shared)
vhost_alias_module (shared)
dav_module (shared)
dav_fs_module (shared)
dav_lock_module (shared)
lua_module (shared)
mpm_prefork_module (shared)
proxy_module (shared)
lbmethod_bybusyness_module (shared)
lbmethod_byrequests_module (shared)
lbmethod_bytraffic_module (shared)
lbmethod_heartbeat_module (shared)
proxy_ajp_module (shared)
proxy_balancer_module (shared)
proxy_connect_module (shared)
proxy_express_module (shared)
proxy_fcgi_module (shared)
proxy_fdpass_module (shared)
proxy_ftp_module (shared)
proxy_http_module (shared)
proxy_scgi_module (shared)
proxy_wstunnel_module (shared)
systemd_module (shared)
cgi_module (shared)
php5_module (shared)

[root@ansible ~]# httpd -M |wc -l
84

[root@ansible ~]# ls -l /usr/lib64/httpd/modules/ |wc -l
102

How to add new module ex: mod_ssl

[root@ansible ~]# yum install mod_ssl -y

[root@ansible ~]# httpd -M |wc -l
85

[root@ansible ~]# httpd -M |grep ssl
ssl_module (shared)

[root@ansible ~]# ls -l /usr/lib64/httpd/modules/ |grep -i ssl
-rwxr-xr-x 1 root root 219464 Apr 13 02:34 mod_ssl.so

[root@ansible ~]# ls -l /usr/lib64/httpd/modules/ |wc -l
103

####################################
[root@ansible ~]# cat /etc/httpd/conf/httpd.conf |wc -l
353
[root@ansible ~]# grep “#” /etc/httpd/conf/httpd.conf |wc -l
259

[root@ansible ~]# grep -v "#" /etc/httpd/conf/httpd.conf
ServerRoot "/etc/httpd"
Listen 80
Include conf.modules.d/*.conf
User apache
Group apache
ServerAdmin root@localhost

<Directory />
    AllowOverride none
    Require all denied
</Directory>

DocumentRoot "/var/www/html"

<Directory "/var/www">
    AllowOverride None
    Require all granted
</Directory>

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

DirectoryIndex index.html

<Files ".ht*">
    Require all denied
</Files>

ErrorLog "logs/error_log"
LogLevel warn

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio

CustomLog "logs/access_log" combined
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Require all granted
</Directory>

TypesConfig /etc/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

AddDefaultCharset UTF-8

MIMEMagicFile conf/magic

EnableSendfile on
IncludeOptional conf.d/*.conf


 What is a module in Apache?
Apache, an open-source HTTP server, consists of a small core for HTTP request/response processing plus Multi-Processing Modules (MPMs), which dispatch request handling to threads and/or processes. Many additional modules (or "mods") are available to extend the core functionality for special purposes.

What is a directive in Apache?
Apache directives are the rules that define how your server should run: which address and port it listens on, how many clients can access it, and so on. You can change them by editing httpd.conf and the related files to meet your requirements. Example: Listen.

Loading Modules
The default configuration loads a large number of modules; on CentOS 7 the LoadModule lines live in files under /etc/httpd/conf.modules.d/ (pulled in via Include conf.modules.d/*.conf). For example:

LoadModule ldap_module modules/mod_ldap.so
LoadModule status_module modules/mod_status.so
LoadModule proxy_module modules/mod_proxy.so
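
A quick way to see which conf.modules.d file loads a particular module (mod_status is just an example):

# find the file that loads a given module
grep -r "LoadModule status_module" /etc/httpd/conf.modules.d/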

Defining Multi-Process Settings
To improve response times, Apache manages a pool of "spare" server processes.

These numbers control the size of the pool:
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
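
On Apache 2.4 these prefork settings are usually wrapped in an <IfModule> container; a minimal sketch (the numbers are illustrative, not tuned values, and note that MaxClients was renamed MaxRequestWorkers in 2.4, though the old name still works as an alias):

<IfModule mpm_prefork_module>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers       20
    ServerLimit           256
    MaxRequestWorkers     256
</IfModule>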

Containers
Container directives use XML-style opening/closing tags.
They restrict the scope of the directives they contain to the specified target, for example:

<Directory "/var/www/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>

The directives above apply only within the specified directory. (Order/Allow is the Apache 2.2 style of access control; Apache 2.4 uses Require instead.)

Other containers include <Location> and <VirtualHost>
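
A minimal name-based <VirtualHost> container, as a sketch (the hostname and paths are placeholders, not from this setup), would go into a file under /etc/httpd/conf.d/:

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot "/var/www/example"
    ErrorLog "logs/example_error_log"
    CustomLog "logs/example_access_log" combined
</VirtualHost>

On a default CentOS 7 layout, any *.conf file dropped into /etc/httpd/conf.d/ is picked up via the IncludeOptional directive shown earlier.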

 

Boot procedure

The stages involved in the Linux booting process are:
BIOS/UEFI boot
Boot Loader
– MBR
– GRUB
Kernel
Init
Runlevel scripts
BIOS
This is the first thing that loads once you power on your machine.
When you press the power button, the CPU looks into ROM for further instructions.
The ROM contains a JUMP instruction that tells the CPU to bring up the BIOS.
The BIOS determines the list of bootable devices available in the system.
It can prompt you to select the bootable device, which can be a hard disk, CD/DVD-ROM, floppy drive, USB flash memory stick, etc. (optional).
The operating system then tries to boot from the hard disk, where the MBR contains the primary boot loader.
Boot Loader
Very briefly, this phase loads the boot loader (MBR and GRUB/LILO) into memory in order to bring up the kernel.
MBR (Master Boot Record)
It is the first sector of the Hard Disk with a size of 512 bytes.
The first 446 bytes hold the primary boot loader, the next 64 bytes the partition table, and the final 2 bytes the boot signature used to validate the MBR.

NOTE: The MBR cannot load the kernel directly because it is unaware of the file-system concept; a boot loader needs a file-system driver for each supported file system so that it can understand and access that file system itself.
To overcome this, GRUB is used, with the file-system details in /boot/grub/grub.conf and the required file-system drivers.

GRUB (GRand Unified Boot loader)
This loads the kernel in 3 stages

GRUB stage 1:
The primary boot loader takes up less than 512 bytes of disk space in the MBR – too small a space to contain the instructions necessary to load a complex operating system.
Instead the primary boot loader performs the function of loading either the stage 1.5 or stage 2 boot loader.

GRUB Stage 1.5:
Stage 1 can load the stage 2 directly, but it is normally set up to load the stage 1.5.
This is normally the case when the /boot partition is situated beyond the 1024th cylinder of the hard drive.
GRUB Stage 1.5 is located in the first 30 KB of Hard Disk immediately after MBR and before the first partition.
This space is utilized to store file system drivers and modules.
This enables stage 1.5 to load stage 2 from a known location on the file system, i.e. /boot/grub.

GRUB Stage 2:
This stage is responsible for loading the kernel listed in /boot/grub/grub.conf, along with any other modules needed.
It loads a graphical menu (the splash image at /grub/splash.xpm.gz on the boot partition) with the list of available kernels; you can select a kernel manually, or the default entry boots after the configured timeout.

The actual file is /boot/grub/grub.conf; /etc/grub.conf is a symlink pointing to it.
Sample /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-194.26.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.26.1.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
initrd /initrd-2.6.18-194.26.1.el5.img
title Red Hat Enterprise Linux Server (2.6.18-194.11.4.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.11.4.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
initrd /initrd-2.6.18-194.11.4.el5.img
title Red Hat Enterprise Linux Server (2.6.18-194.11.3.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.11.3.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
initrd /initrd-2.6.18-194.11.3.el5.img
Kernel
This can be considered the heart of the operating system, responsible for handling all system processes.
The kernel is loaded in the following stages:
As soon as it is loaded, the kernel configures the hardware and the memory allocated to the system.
Next it uncompresses the initrd image (compressed with zlib into the zImage or bzImage format), mounts it and loads all the necessary drivers from it.
Loading and unloading of kernel modules is done with programs such as insmod and rmmod present in the initrd image (see the sketch after this list).
It looks out for the hard disk type, be it LVM or RAID.
It then unmounts the initrd image and frees up all the memory occupied by the disk image.
The kernel mounts the root partition specified in grub.conf as read-only.
Next it runs the init process.
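
A few standard commands for poking at kernel modules and the initramfs on a running RHEL/CentOS box (the module name is only an example):

# list currently loaded kernel modules
lsmod | head

# load / unload a module by name (modprobe resolves dependencies, unlike insmod)
modprobe loop
modprobe -r loop

# list the contents of the current initramfs on dracut-based systems
lsinitrd /boot/initramfs-$(uname -r).img | less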

Init Process
init boots the system into the runlevel specified in /etc/inittab.
Sample output defining the default boot runlevel inside /etc/inittab
# Default runlevel. The runlevels used by RHS are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
#
id:5:initdefault:
As per the above output, the system will boot into runlevel 5.
You can check the current runlevel of your system with the following command:
# who -r
run-level 3 Jan 28 23:29 last=S
Next, as per the /etc/fstab entries, the file systems' integrity is checked and the root partition is re-mounted read-write (up to this point it was mounted read-only).
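
The same remount that init performs at this point can be done by hand, which is useful in single-user or rescue mode (a generic command, not specific to any host):

# remount the root file system read-write
mount -o remount,rw /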
Runlevel scripts
A number of runlevel scripts are defined inside /etc/rc.d/rcX.d:
Runlevel Directory
0 /etc/rc.d/rc0.d
1 /etc/rc.d/rc1.d
2 /etc/rc.d/rc2.d
3 /etc/rc.d/rc3.d
4 /etc/rc.d/rc4.d
5 /etc/rc.d/rc5.d
6 /etc/rc.d/rc6.d
Based on the selected runlevel, the init process then executes startup scripts located in subdirectories of the /etc/rc.d directory.
Scripts used for runlevels 0 to 6 are located in subdirectories /etc/rc.d/rc0.d through /etc/rc.d/rc6.d, respectively.
Lastly, init runs whatever it finds in /etc/rc.d/rc.local (regardless of runlevel). rc.local is rather special in that it is executed every time you change runlevels.
NOTE: rc.local is not used in all distros, for example Debian.
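
The entries in these rcX.d directories are symlinks whose names start with S (start) or K (kill); on RHEL/CentOS they are managed with chkconfig. A small sketch (httpd is just an example service):

# see the start/kill links for runlevel 3
ls /etc/rc.d/rc3.d/

# show and change the runlevels in which a service starts
chkconfig --list httpd
chkconfig --level 35 httpd on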
If everything goes fine, you should now see the login screen on your system.

########################################################

Introducing process basics
A running instance of a program is called a process. A program stored on a hard disk or pen drive is not a process; when that stored program starts executing, we say a process has been created and is running. Let's very briefly walk through the Linux operating system boot-up sequence:

1. In PCs, the BIOS chip first initializes system hardware, such as the PCI bus, the display device, and so on.
2. Then the BIOS executes the boot loader program.
3. The boot loader program copies the kernel into memory and, after basic checks, calls a kernel function called start_kernel().
4. The kernel then initializes the OS and creates the first process, called init.
5. You can check the presence of this process with the following command:
$ ps -ef
6. Every process in the OS has a numerical identification associated with it, called a process ID. The process ID of the init process is 1. This process is the parent process of all user-space processes.
7. In the OS, every new process is created by a system call called fork().
8. Therefore, every process has a process ID as well as a parent process ID (see the sketch after this list).
9. We can see the complete process tree using the following command:
    $ pstree
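
To see the PID/PPID relationship described above for your own shell (generic commands, not tied to any particular host):

# PID of the current shell and of its parent
echo $$
ps -o pid,ppid,comm -p $$

# full process tree with PIDs
pstree -p | head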

https://rafishaikblog.wordpress.com/introduction-to-linux-and-os-installations/

https://rafishaikblog.wordpress.com/2017/02/13/list-block-device-information/

https://linoxide.com/booting/boot-process-of-linux-in-detail/

###################################################################

Linux Boot process

As soon as you switch on your machine, the first thing that happens is that the BIOS is loaded. It resides in ROM (read-only memory) on a chipset on the motherboard.

The first role of the BIOS is to perform POST, making sure the hardware attached to the system is working fine, and to load the generic system drivers.

The BIOS can boot from any of the available devices, in the boot sequence you specify.

The BIOS then hands over control to the first sector of the disk; that first sector holds the information about how to start booting.

The first sector contains the master boot record (MBR, 512 bytes). The MBR knows how to boot from that particular hard disk.
In a multi-boot environment you could have LILO or GRUB installed on it.

If GRUB is installed in the MBR, it is what gets loaded first. Once loaded, GRUB presents a menu showing the operating systems installed on the machine.
The grub.conf section corresponding to an entry contains lines like the following:
root (hd0,0)   # hd0 = first hard disk, 0 = first partition, i.e. (hd0,0) is the boot partition
kernel /vmlinuz-x.x.x ro   # ro = mounted read-only initially
The kernel is statically compiled with various drivers, but these are not sufficient to mount the real root partition, so initrd (the initial RAM disk) supplies the device drivers needed to mount the actual root file system.
initrd /initramfs-x.x.x
Dracut is the infrastructure that actually builds the initrd/initramfs image.

After checking the file system with fsck, init remounts the root file system in read-write mode, consults /etc/fstab to mount the remaining partitions, and then starts the services of the configured runlevel. The first process started once the kernel has finished booting is /sbin/init.
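
Since dracut builds the initrd/initramfs, a common maintenance task is regenerating the image for the running kernel after adding drivers; a hedged sketch for RHEL/CentOS 7 (back up the existing image first):

# rebuild the initramfs for the currently running kernel
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

# confirm what is running as PID 1
ps -p 1 -o pid,comm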