limits.conf with Ansible

[root@localhost ~]# ansible-galaxy init limits.conf
- limits.conf was created successfully

[root@localhost ~]# ansible-doc pam_limits

[root@localhost ~]# vim limits.conf/tasks/main.yml

# tasks file for limits.conf
- pam_limits:
    domain: "{{ item.domain }}"
    limit_type: "{{ item.limit_type }}"
    limit_item: "{{ item.limit_item }}"
    value: "{{ item.value }}"
  with_items: "{{ limits_conf_settings }}"

[root@localhost ~]# vim limits_conf.yml

- hosts: all
  roles:
    - limits.conf
  vars:
    limits_conf_settings:
      - domain: joe
        limit_type: soft
        limit_item: nofile
        value: 64000
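Each item in limits_conf_settings becomes one "<domain> <type> <item> <value>" line in /etc/security/limits.conf. As a quick sketch of that mapping (the variable names simply mirror the item keys above):

```shell
# Hypothetical rendering of one limits_conf_settings item into the
# line format that pam_limits writes to /etc/security/limits.conf.
domain=joe
limit_type=soft
limit_item=nofile
value=64000
printf '%s %s %s %s\n' "$domain" "$limit_type" "$limit_item" "$value"
```

The printed line matches what tail -n1 /etc/security/limits.conf shows after the play runs.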

[root@localhost ~]# ansible-playbook limits_conf.yml -C

PLAY [all] *************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************
ok: [localhost]

TASK [limits.conf : pam_limits] ****************************************************************************************************************
skipping: [localhost] => (item={u'domain': u'joe', u'limit_item': u'nofile', u'limit_type': u'soft', u'value': 64000})

PLAY RECAP *************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0

[root@localhost ~]# ansible-playbook limits_conf.yml

PLAY [all] *************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************
ok: [localhost]

TASK [limits.conf : pam_limits] ****************************************************************************************************************
changed: [localhost] => (item={u'domain': u'joe', u'limit_item': u'nofile', u'limit_type': u'soft', u'value': 64000})

PLAY RECAP *************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

[root@localhost ~]# tail -n1 /etc/security/limits.conf
joe soft nofile 64000

[root@localhost ~]# su - joe
Last login: Wed Sep 13 09:05:21 IST 2017 on pts/0
[joe@localhost ~]$ ulimit -Sn
64000

 


How to use sysctl with Ansible

[root@localhost ~]# sysctl -a |grep vm.swappiness
vm.swappiness = 30

[root@localhost ~]# ansible-galaxy init sysctl
- sysctl was created successfully

[root@localhost ~]# ansible-doc sysctl

[root@localhost ~]# vim test.yml

- hosts: localhost
  roles:
    - sysctl
  vars:
    sysctl_settings:
      - name: vm.swappiness
        value: 90

[root@localhost ~]# vim sysctl/tasks/main.yml

# tasks file for sysctl
- name: sysctl settings
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    reload: true
    state: "{{ item.state | default('present') }}"
  with_items: "{{ sysctl_settings }}"

[root@localhost ~]# ansible-playbook test.yml

PLAY [localhost] *******************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************
ok: [localhost]

TASK [sysctl : sysctl settings] ****************************************************************************************************************
changed: [localhost] => (item={u'state': u'present', u'name': u'vm.swappiness', u'value': 90})

PLAY RECAP *************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

[root@localhost ~]# sysctl -a |grep vm.swappiness
vm.swappiness = 90

 

How to set ulimit values

1. Soft limit: the value that the kernel actually enforces for the corresponding resource.
2. Hard limit: acts as a ceiling for the soft limit; an unprivileged process may raise its soft limit only up to the hard limit.

[root@localhost ~]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means for example that setting a limit for wildcard domain here
#can be overriden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overriden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain> <type> <item> <value>
#

#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4

# End of file
#############################

Exit and re-login from the terminal for the change to take effect.

#############################

[root@localhost ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1874
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1874
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

To change the open-files limit (nofile) to 10000 for the current shell session, run:

[root@localhost ~]# ulimit -n 10000

[root@localhost ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1874
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1874
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

[root@localhost ~]# logout
Connection to 192.168.57.101 closed.
mohammedrafi@NOC-RAFI:~$ ssh root@192.168.57.101
root@192.168.57.101's password:
Last login: Wed Sep 13 08:32:57 2017 from 192.168.57.1

[root@localhost ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1874
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1874
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

[root@localhost ~]# vim /etc/security/limits.conf
joe soft nofile 64000
root soft nofile 75000
root hard nofile 85000

[root@localhost ~]# logout
Connection to 192.168.57.101 closed.
mohammedrafi@NOC-RAFI:~$ ssh root@192.168.57.101
root@192.168.57.101's password:
Last login: Wed Sep 13 08:59:14 2017 from 192.168.57.1
[root@localhost ~]# ulimit -Sn
75000
[root@localhost ~]# ulimit -Hn
85000
[root@localhost ~]# reboot
Connection to 192.168.57.101 closed by remote host.
Connection to 192.168.57.101 closed.

mohammedrafi@NOC-RAFI:~$ ssh root@192.168.57.101
root@192.168.57.101's password:
Last login: Wed Sep 13 09:03:04 2017 from 192.168.57.1

[root@localhost ~]# ulimit -Sn
75000
[root@localhost ~]# ulimit -Hn
85000

[root@localhost ~]# su - joe
Last login: Wed Sep 13 08:55:59 IST 2017 on pts/0
[joe@localhost ~]$ ulimit -Sn
64000
[joe@localhost ~]$ ulimit -Hn
85000

 

Learning PHP

Note: install Vagrant on your physical machine (laptop or desktop).

mohammedrafi@NOC-RAFI:~$ mkdir phplearning

mohammedrafi@NOC-RAFI:~$ cd phplearning/

mohammedrafi@NOC-RAFI:~/phplearning$

mohammedrafi@NOC-RAFI:~/phplearning$ vagrant --version
Vagrant 1.8.1

mohammedrafi@NOC-RAFI:~/phplearning$ vim Vagrantfile
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty32"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", path: "provisioner.sh"
end

mohammedrafi@NOC-RAFI:~/phplearning$ vim provisioner.sh

#!/bin/bash
sudo apt-get install python-software-properties -y
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php -y
sudo apt-get update
sudo apt-get install php7.0 php7.0-fpm php7.0-mysql -y
sudo apt-get --purge autoremove -y
sudo service php7.0-fpm restart
sudo debconf-set-selections <<< 'mysql-server mysql-server/root_password password root'
sudo debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password root'
sudo apt-get -y install mysql-server mysql-client
sudo service mysql start
sudo apt-get install nginx -y
sudo cat > /etc/nginx/sites-available/default <<- EOM
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /vagrant;
    index index.php index.html index.htm;
    server_name server_domain_or_IP;
    location / {
        try_files \$uri \$uri/ /index.php?\$query_string;
    }
    location ~ \.php\$ {
        try_files \$uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)\$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
        include fastcgi_params;
    }
}
EOM
sudo service nginx restart
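A side note on the script above: because the heredoc delimiter EOM is unquoted, the shell expands $variables inside it, so nginx's own variables are written as \$uri, \$query_string, and so on to keep them literal. A minimal illustration:

```shell
# With an unquoted heredoc the shell would expand $uri and $query_string,
# so they are escaped to reach the output verbatim.
cat <<EOM
try_files \$uri \$uri/ /index.php?\$query_string;
EOM
```

The output is the literal nginx directive, dollar signs intact.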

###############################################################

mohammedrafi@NOC-RAFI:~/phplearning$ vagrant up

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty32'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty32' is up to date...
==> default: Setting the name of the VM: phplearning_default_1505133613502_71479
==> default: Clearing any previously set forwarded ports…
==> default: Clearing any previously set network interfaces…
==> default: Preparing network interfaces based on configuration…
default: Adapter 1: nat
==> default: Forwarding ports…
default: 80 (guest) => 8080 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM…
==> default: Waiting for machine to boot. This may take a few minutes…
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying…
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest…
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key…
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM…
==> default: Mounting shared folders…
default: /vagrant => /home/mohammedrafi/phplearning
==> default: Running provisioner: shell…
default: Running: /tmp/vagrant-shell20170911-16246-hno98l.sh
==> default: stdin: is not a tty
==> default: Reading package lists…
==> default: Building dependency tree…
==> default: Reading state information…
==> default: The following NEW packages will be installed:
==> default: python-software-properties

.
.
.

###############################################################

mohammedrafi@NOC-RAFI:~/phplearning$ vagrant ssh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-129-generic i686)

* Documentation: https://help.ubuntu.com/

System information disabled due to load higher than 1.0

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

New release '16.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

vagrant@vagrant-ubuntu-trusty-32:~$ netstat -tlpn
(No info could be read for "-p": geteuid()=1000 but you should be root.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:50175 0.0.0.0:* LISTEN –
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN –
tcp6 0 0 :::22 :::* LISTEN –
tcp6 0 0 :::111 :::* LISTEN –
tcp6 0 0 :::80 :::* LISTEN –
tcp6 0 0 :::33328 :::* LISTEN –

vagrant@vagrant-ubuntu-trusty-32:~$ service nginx status
* nginx is running

vagrant@vagrant-ubuntu-trusty-32:~$ service mysql status
mysql start/running, process 12639

vagrant@vagrant-ubuntu-trusty-32:~$ curl -I localhost
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Mon, 11 Sep 2017 12:50:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Mar 2014 11:46:45 GMT
Connection: keep-alive
ETag: "5315bd25-264"
Accept-Ranges: bytes

###############################################################

vagrant@vagrant-ubuntu-trusty-32:~$ cat /etc/nginx/nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
worker_connections 768;
# multi_accept on;
}

http {

##
# Basic Settings
##

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;

# server_names_hash_bucket_size 64;
# server_name_in_redirect off;

include /etc/nginx/mime.types;
default_type application/octet-stream;

##
# Logging Settings
##

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

##
# Gzip Settings
##

gzip on;
gzip_disable "msie6";

# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##

#include /etc/nginx/naxsi_core.rules;

##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##

#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;

##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}

#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}

vagrant@vagrant-ubuntu-trusty-32:~$ cat /etc/nginx/sites-enabled/default
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /vagrant;
    index index.php index.html index.htm;
    server_name server_domain_or_IP;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

###############################################################

Verify the URL on your base machine: http://localhost:8080/

HTTP stands for HyperText Transfer Protocol. As with any other protocol, the goal is to allow two entities or nodes to communicate with each other. To achieve this, the messages must be formatted in a way that both sides understand, and the entities must follow some pre-established rules.

HTTP is stateless; that is, it treats each request independently, unrelated to any previous one. This means that with this request and response sequence, the communication is finished. Any new requests will not be aware of this specific interchange of messages.
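To make this concrete, here is roughly what a request for http://localhost:8080/ looks like on the wire (the header values are illustrative):

```shell
# A minimal HTTP/1.1 request message: a request line, headers, and a blank
# line. Each line is terminated by CRLF (\r\n).
printf 'GET / HTTP/1.1\r\nHost: localhost:8080\r\nAccept: text/html\r\nConnection: close\r\n\r\n'
```

The server answers with a message of the same shape: a status line (for example HTTP/1.1 200 OK), headers, a blank line, and then the body.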

Parts of the message
An HTTP message contains several parts. We will define only the most important of them.
##########
URL: The URL of the message is the destination of the message. The request will contain the receiver's URL, and the response will contain the sender's.

As you might know, the URL can contain extra parameters, known as a query string. This is used when the sender wants to add extra data. For example, consider this URL: http://myserver.com/greeting?name=Alex. This URL contains one parameter, name, with the value Alex.
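Extracting such a parameter is plain string handling; here is a sketch using shell parameter expansion on the example URL:

```shell
# Split the query string off the URL, then split the single key=value pair.
url='http://myserver.com/greeting?name=Alex'
query=${url#*\?}                     # -> name=Alex
echo "${query%%=*} = ${query#*=}"    # -> name = Alex
```

Real query strings can hold several &-separated, percent-encoded pairs, which server-side languages decode for you.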
##########
The HTTP method: The HTTP method is the verb of the message. It identifies what kind of action the sender wants to perform with this message. The most common ones are GET and POST.

GET: This asks the receiver about something, and the receiver usually sends this information back. The most common example is asking for a web page, where the receiver will respond with the HTML code of the requested page.

POST: This means that the sender wants to perform an action that will update the data that the receiver is holding. For example, the sender can ask the receiver to update his profile name.

There are other methods, such as PUT, DELETE, or OPTIONS, but they are less used in everyday web development, although they play a crucial role in REST APIs.
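For contrast, a POST message carries its data in the body, and a Content-Length header tells the receiver how many body bytes follow (the path and host below are made up for illustration):

```shell
# A minimal POST message: after the blank line comes the body ("name=Alex",
# 9 bytes, which is what Content-Length declares).
printf 'POST /profile HTTP/1.1\r\nHost: myserver.com\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: 9\r\n\r\nname=Alex'
```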

##########
Body: The body part is usually present in response messages even though a request message can contain it too. The body of the message contains the content of the message itself; for example, if the user requested a web page, the body of the response would consist of the HTML code that represents this page.

The body can contain text in any format; it can be HTML text that represents a web page, plain text, the content of an image, JSON, and so on.
##########
Headers: The headers of an HTTP message are the metadata that the receiver needs in order to understand the content of the message. There are many headers.

Headers consist of a map of key-value pairs. The following could be the headers of a request:
Accept: text/html
Cookie: name=Richard

This request tells the receiver, which is a server, that the sender accepts HTML text, the common way of representing a web page, and that it is sending a cookie called name with the value Richard.
##########
The status code: The status code is present in responses. It identifies the status of the request with a numeric code so that browsers and other tools know how to react. For example, if we try to access a URL that does not exist, the server should reply with a status code 404. In this way, the browser knows what happened without even looking at the content of the response.
Common status codes are:
• 200: The request was successful
• 401: Unauthorized; the user does not have permission to see this resource
• 404: Page not found
• 500: Internal server error; something went wrong on the server side and it could not be recovered
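A tiny sketch of how a client might map these codes to human-readable text (real HTTP responses already carry a reason phrase next to the code; the function name here is made up):

```shell
# Map a numeric status code to a reason phrase for the four codes above.
status_text() {
  case "$1" in
    200) echo "OK" ;;
    401) echo "Unauthorized" ;;
    404) echo "Not Found" ;;
    500) echo "Internal Server Error" ;;
    *)   echo "Unknown" ;;
  esac
}
status_text 404   # -> Not Found
```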
##########


vagrant@vagrant-ubuntu-trusty-32:~$ sudo mv /usr/share/nginx/html/index.html /usr/share/nginx/html/index.html.bak

vagrant@vagrant-ubuntu-trusty-32:~$ sudo vim /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Your first app</title>
</head>
<body>
    <a id="special" class="link" href="http://yourpage.com">Your page</a>
    <a class="link" href="http://theirpage.com">Their page</a>
</body>
</html>

Let's focus on the two anchor elements. As you can see, we are describing two links with some properties: both have a class, a destination, and a text, and the first one also has an ID. Save the file and open the page in your browser; you will see a very simple page with two links.

If we want to add some styles, or change the color, size, and position of the links, we need to add CSS. CSS describes how elements of the HTML are displayed. There are several ways to include CSS, but the best approach is to keep it in a separate file and reference it from the HTML. Let's update our <head> section as shown in the following code:

vagrant@vagrant-ubuntu-trusty-32:~$ sudo vim /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Your first app</title>
    <link rel="stylesheet" type="text/css" href="mystyle.css">
</head>
<body>
    <a id="special" class="link" href="http://yourpage.com">Your page</a>
    <a class="link" href="http://theirpage.com">Their page</a>
</body>
</html>

Now, let’s create a new mystyle.css file in the same folder with the following content:

vagrant@vagrant-ubuntu-trusty-32:~$ sudo vim /usr/share/nginx/html/mystyle.css
.link {
    color: green;
    font-weight: bold;
}
#special {
    font-size: 30px;
}

This CSS file contains two style definitions: one for the link class and one for the special ID. The class style will be applied to both the links as they both define this class, and it sets them as green and bold. The ID style that increases the font of the link is only applied to the first link.

Finally, in order to add behavior to our web page, we need JavaScript (JS). JS is a full programming language; if you want to give it a proper chance, we recommend the free online book Eloquent JavaScript by Marijn Haverbeke, which you can find at http://eloquentjavascript.net/. As with CSS, the best approach is to keep the code in a separate file and reference it from our HTML. Update the <body> section with the following script reference:

vagrant@vagrant-ubuntu-trusty-32:~$ sudo vim /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Your first app</title>
    <link rel="stylesheet" type="text/css" href="mystyle.css">
</head>
<body>
    <a id="special" class="link" href="http://yourpage.com">Your page</a>
    <a class="link" href="http://theirpage.com">Their page</a>
    <script src="myactions.js"></script>
</body>
</html>

vagrant@vagrant-ubuntu-trusty-32:~$ sudo vim /usr/share/nginx/html/myactions.js
document.getElementById("special").onclick = function() {
    alert("You clicked me?");
};

The JS file adds a function that will be called when the special link is clicked on. This function just pops up an alert. You can save all your changes and refresh the browser to see how it looks now and how the links behave.

 

 

Allow ICMP with specific message types

The --icmp-type option accepts either the symbolic name or the numeric type of the ICMP message.

By name:

ex: iptables -I INPUT -p icmp --icmp-type echo-request -s 1.2.3.4 -j ACCEPT

By numeric type:

ex: iptables -I INPUT -p icmp --icmp-type 8 -s 1.2.3.4 -j ACCEPT

ICMP Message Types

The type field identifies the type of the message sent by the host or gateway. Many of the type fields contain more specific information about the error condition. Table 3.2 lists the ICMP message types.

Table 3.2 ICMP Message Types

Type Description
0 Echo Reply (Ping Reply, used with Type 8, Ping Request)
3 Destination Unreachable
4 Source Quench
5 Redirect
8 Echo Request (Ping Request, used with Type 0, Ping Reply)
9 Router Advertisement (used with Type 10, Router Solicitation)
10 Router Solicitation (used with Type 9, Router Advertisement)
11 Time Exceeded
12 Parameter Problem
13 Timestamp Request (used with Type 14)
14 Timestamp Reply (used with Type 13)
15 Information Request (obsolete) (used with Type 16)
16 Information Reply (obsolete) (used with Type 15)
17 Address Mask Request (used with Type 18)
18 Address Mask Reply (used with Type 17)

 

Installing PHP


PHP originally stood for Personal Home Page, but it now stands for PHP: Hypertext Preprocessor. It is a server-side scripting language designed for web development and one of the core pillars of the LAMP (Linux, Apache, MySQL, PHP) and LEMP (Linux, Nginx, MySQL, PHP) stacks. Keep in mind that there are multiple options for running PHP; the most common are mod_php, FastCGI, and PHP-FPM.

=> mod_php is the built-in version available only for Apache. Installing it is easy, and its ease of use coupled with tight integration is probably the most common reason to deploy mod_php. However, it forces every Apache child to use more memory and needs a restart of Apache to read an updated php.ini file.

=> FastCGI is a fairly generic protocol available on most platforms, including Windows IIS. It is an improvement over the earlier Common Gateway Interface (CGI): CGI spawned one process per request, which did not scale for extremely busy sites, while FastCGI reuses one process for multiple requests. FastCGI has a smaller memory footprint than mod_php and more configuration options.

=> PHP-FPM is an alternative PHP FastCGI implementation and the newest of the three. It can be used with any web server that is compatible with FastCGI and plays well with Nginx. It offers many configuration options and really shines in areas related to availability: you can start different pools with different settings and different php.ini options, which means separate processes can serve different PHP versions in case your application is not compatible with a specific one.
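For example, a second pool can run under its own user and socket. The fragment below is a hypothetical /etc/php-fpm.d/blog.conf (the pool name, user, and port are invented for illustration):

```ini
; A hypothetical second pool: own user, own TCP socket, own process limits.
[blog]
user = blog
group = blog
listen = 127.0.0.1:9001
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
```

After dropping in such a file and restarting php-fpm, a web server could route a second site to 127.0.0.1:9001 (for nginx, via fastcgi_pass).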

[root@localhost ~]# service php-fpm status
Redirecting to /bin/systemctl status php-fpm.service
Unit php-fpm.service could not be found.

[root@localhost ~]# yum install php-fpm

[root@localhost ~]# service php-fpm start

[root@localhost ~]# service php-fpm status
Redirecting to /bin/systemctl status php-fpm.service
● php-fpm.service – The PHP FastCGI Process Manager
Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2017-09-06 19:11:52 IST; 1s ago
Main PID: 2953 (php-fpm)
Status: “Ready to handle connections”
CGroup: /system.slice/php-fpm.service
├─2953 php-fpm: master process (/etc/php-fpm.conf)
├─2954 php-fpm: pool www
├─2955 php-fpm: pool www
├─2956 php-fpm: pool www
├─2957 php-fpm: pool www
└─2958 php-fpm: pool www

[root@localhost ~]# ps -ef –forest |grep php-fpm
root 2972 2669 0 19:12 pts/0 00:00:00 \_ grep --color=auto php-fpm
root 2953 1 0 19:11 ? 00:00:00 php-fpm: master process (/etc/php-fpm.conf)
apache 2954 2953 0 19:11 ? 00:00:00 \_ php-fpm: pool www
apache 2955 2953 0 19:11 ? 00:00:00 \_ php-fpm: pool www
apache 2956 2953 0 19:11 ? 00:00:00 \_ php-fpm: pool www
apache 2957 2953 0 19:11 ? 00:00:00 \_ php-fpm: pool www
apache 2958 2953 0 19:11 ? 00:00:00 \_ php-fpm: pool www

[root@localhost ~]# php-fpm -v
PHP 5.4.16 (fpm-fcgi) (built: Nov 6 2016 00:30:57)
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies

[root@localhost ~]# cat /etc/php-fpm.conf
;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;

; All relative paths in this configuration file are relative to PHP's install
; prefix.

; Include one or more files. If glob(3) exists, it is used to include a bunch of
; files from a glob(3) pattern. This directive can be used everywhere in the
; file.
include=/etc/php-fpm.d/*.conf

;;;;;;;;;;;;;;;;;;
; Global Options ;
;;;;;;;;;;;;;;;;;;

[global]
; Pid file
; Default Value: none
pid = /run/php-fpm/php-fpm.pid

; Error log file
; Default Value: /var/log/php-fpm.log
error_log = /var/log/php-fpm/error.log

; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
;log_level = notice

; If this number of child processes exit with SIGSEGV or SIGBUS within the time
; interval set by emergency_restart_interval then FPM will restart. A value
; of '0' means 'Off'.
; Default Value: 0
;emergency_restart_threshold = 0

; Interval of time used by emergency_restart_interval to determine when
; a graceful restart will be initiated. This can be useful to work around
; accidental corruptions in an accelerator's shared memory.
; Available Units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;emergency_restart_interval = 0

; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;process_control_timeout = 0

; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
; Default Value: yes
daemonize = no

;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;

; See /etc/php-fpm.d/*.conf

 

[root@localhost ~]# grep pm.start_servers /etc/php-fpm.d/www.conf
; pm.start_servers - the number of children created on startup.
pm.start_servers = 5

[root@localhost ~]# netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 2953/php-fpm: maste
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1353/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 845/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1959/master
tcp6 0 0 :::80 :::* LISTEN 1353/nginx: master
tcp6 0 0 :::22 :::* LISTEN 845/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1959/master

[root@localhost ~]# cat /etc/php-fpm.d/www.conf
; Start a new pool named 'www'.
[www]

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses on a
; specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 127.0.0.1:9000

; Set listen(2) backlog. A value of '-1' means unlimited.
; Default Value: -1
;listen.backlog = -1

; List of ipv4 addresses of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
listen.allowed_clients = 127.0.0.1

; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions.
; Default Values: user and group are set as the running user
; mode is set to 0666
;listen.owner = nobody
;listen.group = nobody
;listen.mode = 0666

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user’s group
; will be used.
; RPM: apache Choosed to be able to access some dir as httpd
user = apache
; RPM: Keep a group allowed to write in log dir.
group = apache

; Choose how the process manager will control the number of child processes.
; Possible Values:
; static - a fixed number (pm.max_children) of child processes;
; dynamic - the number of child processes are set dynamically based on the
; following directives:
; pm.max_children - the maximum number of children that can
; be alive at the same time.
; pm.start_servers - the number of children created on startup.
; pm.min_spare_servers - the minimum number of children in 'idle'
; state (waiting to process). If the number
; of 'idle' processes is less than this
; number then some children will be created.
; pm.max_spare_servers - the maximum number of children in 'idle'
; state (waiting to process). If the number
; of 'idle' processes is greater than this
; number then some children will be killed.
; Note: This value is mandatory.
pm = dynamic

; The number of child processes to be created when pm is set to ‘static’ and the
; maximum number of child processes to be created when pm is set to ‘dynamic’.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI.
; Note: Used when pm is set to either 'static' or 'dynamic'
; Note: This value is mandatory.
pm.max_children = 50

; The number of child processes created on startup.
; Note: Used only when pm is set to ‘dynamic’
; Default Value: min_spare_servers + (max_spare_servers – min_spare_servers) / 2
pm.start_servers = 5

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to ‘dynamic’
; Note: Mandatory when pm is set to ‘dynamic’
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to ‘dynamic’
; Note: Mandatory when pm is set to ‘dynamic’
pm.max_spare_servers = 35

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
;pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. By default, the status page shows the following
; information:
; accepted conn – the number of request accepted by the pool;
; pool – the name of the pool;
; process manager – static or dynamic;
; idle processes – the number of idle processes;
; active processes – the number of active processes;
; total processes – the number of idle + active processes.
; The values of 'idle processes', 'active processes' and 'total processes' are
; updated each second. The value of 'accepted conn' is updated in real time.
; Example output:
; accepted conn: 12073
; pool: www
; process manager: static
; idle processes: 35
; active processes: 65
; total processes: 100
; By default the status page output is formatted as text/plain. Passing either
; 'html' or 'json' as a query string will return the corresponding output
; syntax. Example:
; http://www.foo.bar/status
; http://www.foo.bar/status?json
; http://www.foo.bar/status?html
; Note: The value must start with a leading slash (/). The value can be
; anything, but it may not be a good idea to use the .php extension or it
; may conflict with a real PHP file.
; Default Value: not set
;pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; – create a graph of FPM availability (rrd or such);
; – remove a server from a group if it is not responding (load balancing);
; – trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
; anything, but it may not be a good idea to use the .php extension or it
; may conflict with a real PHP file.
; Default Value: not set
;ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
;ping.response = pong

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_slowlog_timeout = 0

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
slowlog = /var/log/php-fpm/www-slow.log

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: chrooting is a great security feature and should be used whenever
; possible. However, all PHP paths will be relative to the chroot
; (error_log, sessions.save_path, …).
; Default Value: not set
;chroot =

; Chdir to this directory at the start. This value must be an absolute path.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Default Value: no
;catch_workers_output = yes

; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users to use other extensions to
; execute php code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
;security.limit_extensions = .php .php3 .php4 .php5

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
; php_value/php_flag - you can set classic ini defines which can
; be overwritten from PHP call 'ini_set'.
; php_admin_value/php_admin_flag - these directives won't be overwritten by
; PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.

; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.

; Default Value: nothing is defined by default except the values in php.ini and
; specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com
;php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 128M

; Set session path to a directory owned by process user
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session
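As an aside, the status and ping pages defined (but commented out) in this pool only become reachable if Nginx forwards those URIs to the pool. A hedged sketch of such a location block, assuming pm.status_path = /status and ping.path = /ping were uncommented in www.conf:

```nginx
location ~ ^/(status|ping)$ {
    access_log off;
    allow 127.0.0.1;                 # keep these pages private
    deny all;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;     # the listen address from www.conf
}
```

With that in place, requests such as http://127.0.0.1/status?json would be answered by the FPM master rather than a PHP script.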

 

nginx deep dive

What Is a Web Server?
A web server is a server that hosts an application that listens for HTTP requests.

It is the web server’s responsibility to hear (i.e., to understand HTTP) what the browser is saying, and respond appropriately.

Sometimes, it could be as simple as fetching a file from the file system and delivering it to the web browser.

At other times, it delegates the request to a handler that performs complicated logic and returns the processed response to the web server, which in turn transfers it back to the client.

Typically, the server that hosts web server software is termed a web server or a web front-end server.

Although there are quite a few web servers around, three dominate: Apache, Microsoft Internet Information Services (IIS), and Nginx together have captured around 85 percent of the market.

Reasons Why You Should Be Using Nginx

It’s Affordable to Install and Maintain:
Nginx performs pretty well even on servers with a very low hardware footprint. Even with default settings, you can get much more throughput from an Nginx server compared to Apache or IIS.

It’s Easy to Use:
Don’t be intimidated by the lack of a user interface (UI) . Nginx is easy if you understand how to use it. The configuration system is pretty well thought out and once you get up to speed, you will thoroughly enjoy it!

You Can Upgrade It On the Fly:
Nginx gives you the ability to reconfigure and upgrade running instances on the fly without interrupting customer activity.
With Nginx you can patch your production environment reliably without bringing your service levels down.

It’s Fast:
Fast page load times build trust in your site and lead to more returning visitors.

It Can Accelerate Your Application:
The idea is to drop Nginx in front of an existing set of web servers and let it take care of routing traffic to the back end intelligently. This way, you can offload a lot of tasks to Nginx and let your back-end server handle more data intensive tasks.

It Has a Straightforward Load Balancer:
Setting up a hardware load balancer is quite costly and resource intensive.
With Nginx you can set up a pretty straightforward and fast software load balancer. It can immediately help you out by sharing load across your front-end web servers.
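A minimal software load balancer along these lines can be sketched with an upstream block. The host names below are placeholders, not part of this setup:

```nginx
upstream web_front {
    server web1.example.com;
    server web2.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://web_front;   # round-robin is the default method
    }
}
```

Requests arriving on port 80 are distributed across the two back ends; adding capacity is then just another server line in the upstream block.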

It Scales Well:
With Apache and IIS , it is a common pain: The more connections, the more issues. These servers solved a big problem around bringing dynamic content to the web server instead of static files, but scalability has always been a challenge.
Let’s say you have a server that can handle 1000 concurrent connections. As long as the requests are short and the server is able to handle 1000 connections/second, you are good.
If you have large files available for download, your server will most likely choke with a high number of concurrent connections. Apache and IIS servers are not suitable for this kind of load, simply because of the way they have been architected.
They are also prone to denial of service attacks (DoS) .
Nginx is one of the very few servers (along with Node.js) that is capable of addressing this issue, which is often referred to as C10K problem.

Main Features of Nginx

More Than Just a Web Server: At its core, you can consider Nginx to be an event-based reverse proxy server. That may come as a surprise to many, because Nginx is usually described as a web server. A reverse proxy is a type of proxy server that retrieves resources from servers on behalf of a client. It can help offload the number of requests that the actual web server ends up handling. Figure 1-2 illustrates what a proxy server does.

Modular Design: Nginx has a fairly robust way of upgrading its live processes and it can be done without interrupting the service levels.

Asynchronous Web Server: Nginx gains much of its performance due to its asynchronous and event-based architecture whereas Apache and IIS like to spin new threads per connection, which are blocking in nature. Both IIS and Apache handle the threads using multithreaded programming techniques . Nginx differs in the approach completely. It does not create a separate thread for each request. Instead it relies on events.

Reverse Proxy and Load Balancing Capability: Nginx analyzes the request based on its URI and decides how to proceed with the request. In other words, it is not looking at the file system to decide what it has to do with it. Instead, it makes that decision based on the URI. This differentiation enables Nginx to act as a very fast front end that acts as a reverse proxy and helps balance the load on the application servers. It’s no exaggeration to say that Nginx is a reverse proxy first and a web server later.

Low Resource Requirement and Consumption: Small things that go a long way define Nginx. Where other web servers typically allow a simple plug-and-play architecture for plug-ins using configuration files, Nginx requires you to recompile the source with the required modules. Every module it requires is loaded directly inside an Nginx process. Such tweaks, along with smart architectural differences, ensure that Nginx has a very small memory and CPU footprint on the server and yields much better throughput than its competition. You will learn about the Nginx architecture in granular detail in the coming chapters.

Unparalleled Performance: Nginx is probably the best server today when it comes to serving static files. There are situations where it cannot be considered the best (like dynamic files), but even then, the fact that it plays well as a reverse proxy ensures that you get the best of both worlds. If configured well, you can save a lot of cost that you typically incur on caching, SSL termination, hardware load balancing, zipping/unzipping on the fly, and completing many more web-related tasks.

Multiple Protocol Support: HTTP(S), WebSocket , IMAP , POP3 , SMTP: As a proxy server, Nginx can handle not only HTTP and HTTPS requests, but also mail protocols with equal grace. There are modules available that you can use while compiling your build and Nginx will proxy your mail-related traffic too.

SSL Termination: Secure Sockets Layer is a necessity for any website that deals with sensitive data. And, just like any other necessity, there is a cost involved. When it comes to web traffic, SSL also induces an extra processing overhead on the server side where it has to decrypt the request every time. There lies a catch-22 situation: If you remove the SSL, you are opening yourself up for attacks and if you use SSL, you end up losing a little bit on speed.

Since Nginx has the capability of acting as a load balancer, you can give it additional work as well. Essentially, the idea of an SSL termination is that the request will come to the load balancer on a secure channel but will be sent to the other web servers without SSL. This way, your web server acts faster and eventually your requests go out to the clients in a secure manner as well.
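That idea can be sketched as follows. The certificate paths and the back-end address here are assumptions for illustration only:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/pki/nginx/server.crt;
    ssl_certificate_key /etc/pki/nginx/private/server.key;

    location / {
        # TLS terminates here; the upstream hop is plain HTTP.
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The client always sees an encrypted channel, while the back-end web server is spared the decryption overhead entirely.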

HTTP Video Streaming Using MP4/FLV/HDS/HLS: You have already learned that Input/Output (IO) in Nginx doesn’t block if the client is slow. Video streaming is typically a very IO-intensive process, and Nginx does a great job here. It has multiple modules that help you provide streaming services. To give a little perspective as to what is special about video streaming, imagine watching YouTube. You can easily skip from one position in the video to another and it almost immediately starts serving the content. The key here is to not download the entire file in one shot. The request, hence, should be created in such a way that it has certain markers in the query string, like this:

http://www.yoursite.com/yourfile.mp4?start=120.12
The preceding request is asking the server to send the content of yourfile.mp4 starting from (notice the start query string) 120.12 seconds. This allows random seeking of a file in a very efficient way.
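Serving such requests requires the mp4 module (ngx_http_mp4_module) to be compiled in; a hedged configuration sketch (buffer sizes are illustrative, not tuned values):

```nginx
# Assumes Nginx was built with --with-http_mp4_module.
location ~ \.mp4$ {
    mp4;                       # enables ?start= pseudo-streaming seeks
    mp4_buffer_size     1m;
    mp4_max_buffer_size 5m;
}
```

With this in place, Nginx parses the MP4 metadata and begins the response at the requested offset instead of sending the file from byte zero.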

Extended Monitoring and Logging: The more servers you have, and the more traffic you get, the harder monitoring becomes. There are all sorts of nasty people out there with ulterior motives to bring the website down and disrupt your web service. The best way to ensure safety, hence, is to be cautious and alert. Log as much as possible and ensure that you react proactively.

Graceful Restarting: The way Nginx is designed, you can easily upgrade Nginx. You can also update its configuration while the server is running, without losing client connections. This allows you to test your troubleshooting approach, and if something doesn’t work as desired, you can simply revert the settings using nginx -s reload, a command that will simply reload the configuration changes without recycling the worker processes.

Upgrades without Downtime Using Live Binaries: This is probably one of the most powerful features of Nginx. In the IIS or Apache worlds, you can’t upgrade your web server without bringing the service down. Nginx spawns a master process when the service starts. Its main purpose is to read and evaluate configuration files. Apart from that, the master process starts one or more worker processes that do the real work by handling the client connections.
If you need to upgrade the binary, there are simple steps and commands you need to issue in order to make the new worker processes run in tandem with the older ones. The new requests will be sent to the newer worker processes that have the latest configuration loaded. If by any chance you find that the upgrade is causing issues, you can simply issue another set of commands that will gracefully return the requests to the older process, which still has the previous working configuration loaded.
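The documented signal sequence for such a binary upgrade looks roughly like this (a sketch; the .oldbin PID file is created by Nginx itself during the upgrade):

```
kill -USR2 $(cat /var/run/nginx.pid)          # start new master + workers on the new binary
kill -WINCH $(cat /var/run/nginx.pid.oldbin)  # gracefully stop the old workers
kill -QUIT $(cat /var/run/nginx.pid.oldbin)   # all good: retire the old master
# To roll back instead, send HUP to the old master and QUIT to the new one.
```

Until the old master receives QUIT, both generations of workers run side by side, which is what makes the rollback path possible.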

Enterprise Features of Nginx Plus: Nginx has two versions. The basic version is free, and the paid option is called Nginx Plus . Nginx Plus has
quite a few important features that are very helpful for managing busy sites. Choosing Nginx Plus helps you save a lot of time. It has features like load balancing, session persistence, cache control, and even health checks out of the box.

Differences between Apache and Nginx:

Nginx and Apache are both versatile and powerful web servers. Together they serve more than 70 percent of the top million websites. At times they compete, but often they are found complementing each other. One important thing to point out here is that they are not entirely interchangeable. You will need to pick between them carefully according to your workload.

Performance: For Apache users, there is a choice of multiprocessing modules (MPM) that control the way requests are handled. You can choose between mpm_prefork, mpm_worker, and mpm_event. Basically, mpm_prefork spawns a process for every request; mpm_worker spawns processes, which in turn spawn and manage threads; mpm_event is a further optimization of mpm_worker, where Apache juggles keep-alive connections using dedicated threads. If you haven’t already noted, these changes are all for the better and evolutionary. Nginx was created to solve the concurrency problem, and it did so by using a new design altogether. It spawns multiple worker processes that can handle thousands of connections each! It is completely asynchronous, non-blocking, and event-driven. It consumes very few resources and helps reduce the cost of scaling out a web server. The web server can be upgraded on the fly without losing connected visitors, reducing the downtime of your service.

Resource Requirements: Nginx needs fewer resources than Apache because of its new architecture. Fewer resources = Lower cost = More profit.

Proxy and Load Balancing Server: Nginx was designed as a reverse proxy that doubles up as a web server. This is quite different from Apache, since Apache was designed as a general-purpose web server. This gives Nginx an edge, since it is more effective at dealing with a high volume of requests. It also has good load balancing capability. Quite often, Nginx acts as a web accelerator by handling the request in the front end and passing the request to the back-end servers only when required. So, Nginx in the front end and Apache in the back end gives you the best of both worlds. They are more complementary than competing from this perspective.

Static vs. Dynamic Content: Nginx has a clear advantage when serving static content. The dynamic content story is quite different, though. Apache has a clear early-mover advantage here. It has built-in support for PHP, Python, Perl, and many other languages. Nginx almost always requires extra effort to make it work with these languages. If you are a Python or Ruby developer, Apache might be a better choice since it will not need CGI to execute them. Even though PHP has good support on Nginx, you still need to dedicate a little time to get PHP-based solutions working directly on Nginx. For example, installing WordPress on a LAMP stack is super easy, and even though it can be done easily on a LEMP stack too, you will still need to configure some nuts here and some bolts there.

Configuration: Apache’s basic configuration ideology is drastically different from Nginx’s. You can have a .htaccess file in every directory (if you like) through which you can give Apache additional directions about how to respond to requests for that specific directory. Nginx, on the other hand, interprets requests based on the URL instead of a directory structure. It doesn’t even process the .htaccess file. This has both merits (better performance) and demerits (less configuration flexibility). Although for static files the requests are eventually mapped to a file, the core power of parsing the URI comes into play when you use Nginx for scenarios like mail and proxy server roles.

Modules (or Plug-Ins): Both Apache and Nginx have a robust set of modules that extend the platform. There is still a stark difference in the way these extensions are added and configured. In Apache, you can dynamically load/unload modules using configuration, but in Nginx you are supposed to build the binaries using different switches. It may sound limiting and less flexible (and it is), but it has its own advantages. For example, the binaries won’t have any unnecessary code inside them. It forces you to have a prior understanding of what you need the specific web server to do.
It is also good in a way, because administrators of modular software often end up installing far more modules than they need. Any unnecessary module loaded in memory means extra CPU cycles getting wasted. Obviously, if you are wasting those cycles due to lack of planning, it all adds up eventually and you will get poorer performance from the same hardware!
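In concrete terms, module selection happens when you compile Nginx; a sketch with real configure switches (this particular selection is only an example):

```
# from the Nginx source directory:
./configure --with-http_ssl_module --without-http_autoindex_module
make && make install
```

Everything not compiled in simply does not exist in the resulting binary, which is exactly where the memory and CPU savings come from.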

Nginx Core Directives

 

What Are Directives?
A directive is an instruction; directives define how Nginx runs on your server. Directives are of two types: simple directives and block directives.

Simple directive – A simple directive can be as straightforward as a name and its parameters separated by spaces, ending with a semicolon. For example, the directive for worker processes looks like this: worker_processes 1;
This directive tells the master process of Nginx how many worker processes to spawn.

Block directive – As the name suggests, it looks like a block of text enclosed by curly braces { } and contains a set of simple directives.

Context Types

There are quite a few different contexts available in Nginx: for example, main, events, HTTP, server, location, upstream, if, stream, mail, etc. Out of these, HTTP, events, server, and location are most commonly used. The contexts could be nested as well.

main {
    simple_directives parameters;

    events {
        event_directives parameters;
    }

    http {
        http_directives parameters;

        server {
            server_directives parameters;

            location {
                location_directives parameters;
            }
        }
    }
}

###########################################

Understanding the Default Configuration

[root@localhost ~]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;

server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;

# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;

location / {
}

error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }

}

###################################################

Simple Directives

The whole body could be referred to as the main context. There are a few simple directives defined in the main block:

user directive has a default value of nobody. You can add the user directive to define the account under which the Nginx worker processes will be executed on the server. The syntax for the user directive is user <user_name> <group_name>. The user must exist on the server before Nginx starts, or else there will be an error while starting Nginx services.

worker_processes directive has a default value of 1, and it sets the number of worker processes that will be spawned by Nginx. Setting the value to auto is also permitted; in that case Nginx tries to autodetect the number of CPU cores.

error_log directive can be applied in multiple contexts like main, http, mail, stream, server, and location. Here, you see it applied in the main context. The first parameter, /var/log/nginx/error.log, is the path of the file the log will be written to, whereas an optional second parameter, such as warn, means that anything at warning level or above will be logged.

The logging levels are defined in increasing order of severity, as you can see from the list below, and it is important to note that if you set the level to error, then logs of type warning, notice, and info will be ignored. Keeping it at info is not recommended in production, since the logs may become massive if your website is hit very frequently. The logs should be analyzed periodically to verify whether they contain anything alarming.
• info – Information
• notice – Notice
• warn – Warnings
• error – Error
• crit – Critical
• alert – High Alert
• emerg – Emergency
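Putting the directive and the level together, an explicit main-context line that keeps warnings and above would look like this (the path is the one used in the default configuration):

```nginx
error_log /var/log/nginx/error.log warn;
```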

pid directive defines the file name (/var/run/nginx.pid) that stores the process ID of the master process. You may be thinking: why does Nginx log the PID to a file? Glad you asked! Imagine a scenario where you are supposed to check the uptime of a process. Running a command like ps -ax | grep nginx will get you the current status and process ID (PID), but you cannot really tell how long the process has been alive.

[root@localhost ~]# ps -ax | grep nginx
2421 ? Ss 0:00 nginx: master process /usr/sbin/nginx
2422 ? S 0:00 nginx: worker process
2826 pts/0 R+ 0:00 grep --color=auto nginx
[root@localhost ~]# ps -p 2421 -o etime=
55:43
[root@localhost ~]# ps -p `cat /var/run/nginx.pid` -o etime=
57:10
[root@localhost ~]# cat /var/run/nginx.pid
2421

##########################################################

Events Context

After the simple directives in the default configuration, you will find a context called events. The events context can be declared only in the main context and there can be only a single events context defined within the Nginx configuration. With the use of directives in the event context, you can fine-tune the way Nginx behaves. There are just six different event directives.

worker_connections directive allows a maximum of 1024 concurrent worker connections. The defaults in Nginx usually suffice. The number of concurrent connections you may get on a web server can be calculated roughly using the following (N = average number of connections per request): (worker_processes x worker_connections x N) / Average Request Time
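As a worked example of the concurrency part of this formula (the numbers here are illustrative assumptions, not measurements of any particular server; dividing the result by the average request time then yields requests per second):

```python
def max_concurrent_connections(worker_processes: int,
                               worker_connections: int,
                               n: int) -> int:
    """Rough upper bound on concurrent client connections.

    n is the average number of connections per request, e.g. 2 when
    Nginx acts as a reverse proxy (one client-side plus one upstream
    connection per request).
    """
    return worker_processes * worker_connections // n

# 4 worker processes x 1024 worker_connections, reverse-proxy mode:
print(max_concurrent_connections(4, 1024, 2))  # 2048
```

The takeaway is that worker_connections is a per-worker budget shared between client-side and upstream connections, which is why the proxy case halves the effective capacity.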

use directive does not need to be applied explicitly since Nginx tries to use the most efficient method automatically. Basically, the use directive allows Nginx to support a variety of connection methods depending on the platform.

multi_accept is set to off by default. This means that a worker process will accept only one new connection at a time. It is generally a good idea to enable multi_accept so that Nginx can accept as many connections as possible.

accept_mutex is set to on by default, and it is generally left untouched. Basically, it means that the worker processes will get the requests one by one. This implies that the worker processes will not jump up for every request and go back to sleep if the number of requests is low.

accept_mutex_delay comes into effect only when accept_mutex is enabled. As the name implies it is the maximum time to wait for the existing worker process to accept the new connection before initiating a new process to execute the request.
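A tuned events block using the directives above might look like this (the values are illustrative, not recommendations for any specific workload):

```nginx
events {
    worker_connections 2048;   # per worker process
    multi_accept       on;     # drain the whole accept queue at once
    # 'use' is normally left for Nginx to auto-detect, e.g.:
    # use epoll;
}
```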

################################################

HTTP Context

The HTTP context (or block) can be considered the heart of the configuration system for an Nginx web server. In the default configuration you will notice the following directives.

include /etc/nginx/mime.types – The include directive keeps the core configuration file clean. You can use this directive to keep related configurations in a separate file. Nginx will ensure that the file is loaded in place when loading the configuration. At the same time, it keeps the main configuration readable and manageable.

If you view /etc/nginx/mime.types you will find a block of text that is nothing but another directive, called types. It maps file name extensions to the MIME types of responses. The extensions are case insensitive, and many extensions can map to one type. The following snippet shows the structure of this file. Notice how the html, htm, and shtml extensions are all mapped to the text/html MIME type.

[root@localhost ~]# cat /etc/nginx/mime.types

types {
    text/html                 html htm shtml;
    text/css                  css;
    text/xml                  xml;
    image/gif                 gif;
    image/jpeg                jpeg jpg;
    application/javascript    js;
    application/atom+xml      atom;
    application/rss+xml       rss;
    ...
}

The default_type directive has a value of application/octet-stream. It specifies the default MIME type used when Nginx fails to find a specific one in /etc/nginx/mime.types. It is this MIME type that tells the browser it has to download the file directly rather than try to render it.

The log_format directive configures the ngx_http_log_module, which writes the log in a specified format. The first parameter is the name of the format, in this case main. The second parameter is a series of variables (you will learn about them in detail soon) that take a different value for every request. Once you have named a log_format, you will need to reference it by name to use it.

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

The access_log directive requires a path (/var/log/nginx/access.log) and the name of a format (main). There is much more to access_log that you will learn in the upcoming chapters, but for now simply understand that every request made to the server can be logged to a file so that you can analyze it later. A good web administrator takes very good care of these logs and analyzes them periodically to find issues that would otherwise go unnoticed. The logs also prove helpful in troubleshooting scenarios.

The default value for the sendfile directive is off if the directive is not present; the Nginx default configuration hence turns it on. It is generally a good idea to enable it, since it ensures that the function is called with SF_NODISKIO. In simple words, the call will not block on disk I/O: the data is loaded in chunks and sent appropriately to the client. As you can guess, this is a huge advantage and enables Nginx to scale very well, especially while serving large files.

The tcp_nopush directive is commented out by default, and its default value is off. It comes into effect only when you are using sendfile, and it basically directs the Nginx server to send the packets in full. Typically, you can leave it disabled.

The keepalive_timeout directive has a value of 65. Normally, when a connection is made to the server, you need not disconnect it straightaway. That is because a web page normally comprises a lot of assets, and it would not be very efficient to create a new connection for every asset sent to the client.

The gzip directive compresses the output so that less bandwidth is consumed per request. By default it is turned off, and it is recommended to turn it on.
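The directives discussed above fit into the http context roughly as follows. This is a sketch of the default configuration with gzip switched on as recommended, not a drop-in replacement:

```nginx
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile           on;
    tcp_nopush         on;   # only takes effect together with sendfile
    keepalive_timeout  65;
    gzip               on;   # off by default; recommended on

    include /etc/nginx/conf.d/*.conf;
}
```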

The last line in the configuration is yet another include, and it is an interesting one! You can see that it accepts wildcards ( include /etc/nginx/conf.d/*.conf; ), which means it will load all the configuration files at once from the folder /etc/nginx/conf.d. In the next section you will see what is included in the conf.d folder.

################################################################

Server Context

The server block can be set in multiple contexts to configure various modules.

#################################################################

Nginx Core Architecture

The Master Process: Think of the master process as the owner of the restaurant. The master process of Nginx is the one that performs privileged operations like reading the configuration files, binding to ports, and spawning child processes when required. The worker processes are almost analogous to waiters in the restaurant. They do the running around and manage the show. Notice that the guests at the restaurant don’t come to visit the owner. They are there to have the food that the chefs make in the kitchen. The guests don’t need to know who does the hard work behind the scenes. The chefs work in a dedicated manner to make the dish, and play a role analogous to slow input/output (I/O) or long-running networking calls in this story.

[root@localhost ~]# ps -ef --forest |grep nginx
root 2181 2097 0 10:39 pts/0 00:00:00 \_ grep --color=auto nginx
root 2166 1 0 10:39 ? 00:00:00 nginx: master process /usr/sbin/nginx
nginx 2167 2166 0 10:39 ? 00:00:00 \_ nginx: worker process

[root@localhost ~]# grep worker_processes /etc/nginx/nginx.conf
worker_processes auto;

[root@localhost ~]# vim /etc/nginx/nginx.conf
worker_processes 2;

[root@localhost ~]# ps -ef --forest |grep nginx
root 2210 2097 0 10:42 pts/0 00:00:00 \_ grep --color=auto nginx
root 2166 1 0 10:39 ? 00:00:00 nginx: master process /usr/sbin/nginx
nginx 2195 2166 0 10:42 ? 00:00:00 \_ nginx: worker process
nginx 2196 2166 0 10:42 ? 00:00:00 \_ nginx: worker process

[root@localhost ~]# ps -ax -o command |grep nginx
nginx: master process /usr/sbin/nginx
nginx: worker process
nginx: worker process
grep --color=auto nginx

[root@localhost ~]# grep worker_processes /etc/nginx/nginx.conf
worker_processes 5;

[root@localhost ~]# ps -ef --forest |grep nginx
root 2223 2097 0 10:43 pts/0 00:00:00 \_ grep --color=auto nginx
root 2166 1 0 10:39 ? 00:00:00 nginx: master process /usr/sbin/nginx
nginx 2215 2166 0 10:43 ? 00:00:00 \_ nginx: worker process
nginx 2216 2166 0 10:43 ? 00:00:00 \_ nginx: worker process
nginx 2217 2166 0 10:43 ? 00:00:00 \_ nginx: worker process
nginx 2218 2166 0 10:43 ? 00:00:00 \_ nginx: worker process
nginx 2219 2166 0 10:43 ? 00:00:00 \_ nginx: worker process

Processes vs. Threads: Fundamentally, from the OS perspective, the work is done inside a process using one or many threads. A process can be considered a boundary created by its memory space. Threads reside inside a process; they are the objects that load instructions and are scheduled to run on a CPU core. Most server applications run multiple threads or processes in parallel so that they can use the CPU cores effectively. As you can guess, both processes and threads consume resources, and having too many of either leads to problems.

A typical web server often creates the pages quickly. Unfortunately, it has no control over the clients’ network speed. This means that in a blocking architecture the server resources get tied down by slow clients. Bring in a lot of slow clients, and eventually you will find a client complaining that the server is slow. What an irony! Nginx handles requests in such a way that its resources are not blocked.

The Worker Process: Each worker process in Nginx is single threaded and runs independently. Its core job is to grab new connections and process them as quickly as possible. When the worker processes are launched, they are initialized with the configuration, and the master process tells them to listen on the configured sockets. Once active, they read and write content to disk and communicate with the upstream servers.

nginx.png

Since they are all forked from the master process, they can use the shared memory for cached data, session persistence data, and other shared resources.

State Machines: Nginx has different state machines. A state machine is nothing but a set of instructions that tells Nginx how to handle a particular request. The HTTP state machine is the most commonly used, but there are also state machines for processing streams (TCP traffic), mail (POP3, SMTP, IMAP), and so on.
When incoming requests hit the server, the kernel triggers events. The worker processes wait for these events on the listen sockets and happily assign each one to an appropriate state machine.
Processing an HTTP request is a complicated process, and every web server has a different way of handling its own state machines. With Nginx, the server might have to decide whether it has to process the page locally or send it to an upstream or authentication server. Third-party modules go one step further by bending or extending these rules.

Primarily, one worker process can cater to hundreds (even thousands!) of requests at the same time, even though it has just one thread internally. This is all made possible by a never-ending event loop that is non-blocking in nature. Unlike other web servers (like IIS and Apache), a thread in Nginx does not wait until the end of a request. It accepts the request on the listen socket, and the moment it finds a new request, it creates a connection socket.

nginx2.png

Traditional web server request processing (left) and Nginx (right)

Notice that in the traditional way (Figure, left), the thread or worker process is not freed up until the client consumes the data completely. If the connection is kept alive by the keepalive setting, the resources allocated to that thread/process remain tied up until the connection times out. Compare this to Nginx, and you will find that the newly created connection socket keeps listening for events of the ongoing request at its own pace. The kernel lets Nginx know that the partial data sent to the client has been received and that the server can send additional data. This non-blocking event mechanism is what achieves high scalability on the web server. In the meantime, the listen sockets are free to serve additional requests!

Update Configuration: You just found out that there is an issue with the worker process and the worker process needs to be restarted. Or maybe you just want the worker processes to be aware of the new configuration change you just made. One way would be to kill the worker processes and respawn them so that the configuration is loaded again. Updating a configuration in Nginx is a very simple, lightweight, and reliable operation. All you need to do is run nginx -s reload . This command will ensure that the configuration is correct, and if it is all set, it will send the master process a SIGHUP signal.

The master process obliges by doing two things:
1. It reloads the configuration and forks a new set of worker processes. This means that if you have two worker processes running by default, it will spawn two more! These new worker processes will start listening for connections and process them with the new configuration settings applied.

2. It will signal the old worker processes to gracefully exit. This implies that the older worker processes will stop taking new requests. They will continue working on the requests they are already handling and, once done, will gracefully shut down after all their connections are closed. Notice that due to the new worker processes being spawned, there will be additional load on the server for a few seconds, but the key idea here is to ensure that there is no disruption in service at all.

[root@localhost ~]# kill -l

 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

Upgrade: Here, instead of spawning new worker processes with a new configuration, Nginx starts the newer version of the web server, which shares the resources with the older version. Both keep running in parallel, and their worker processes continue to handle traffic. If you find that your application is doing well with the newer version, you can send signals to kill the older version, or vice versa!

HTTP Request Processing in Nginx: Now that you know the overall architecture of Nginx, it will be easier to understand how a typical request
is served end-to-end. Figure should give you an overall idea about the request processing in Nginx. Consider that you have a website that requires a valid user to access the site and wants to compress every request that is served by the web server. You will see how different components of Nginx work together to serve a request.

 

nginx3.png

1. After reading the main context from the nginx.conf, the request is passed to http context.
2. The first step is to parse the Request URI to a filename.
3. Read the location configuration and determine the configuration of a requested resource.
4. All modules parse the header and gather module specific information.
5. Checks if the client can access the requested resource. It is at this step that Nginx determines whether specific IP addresses are allowed or denied, etc.
6. Checks if the credentials supplied by the client are valid. This involves looking at the back-end system for validation. It can be a back-end database or accounts configured elsewhere.
7. Checks if the client credentials validated in the earlier step are authorized to access the resource.
8. Determines the MIME type of the requested resources. This step helps to determine the content handler.
9. Inserts modules filters in the output filter chain.
10. Inserts the content handler; in this example, the FastCGI and gzip handlers. This generates the response for the requested resource, which is forwarded to the output filter chain for further manipulation.
11. Each module logs a message after processing the request.
12. The response is served to the client or any other resource in the chain (load balancer or proxy).
13. If there is any error in either of the processing cycles, an HTTP error message is generated and the client is responded to with that message.
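A hedged sketch of a server block that exercises the phases above: access control, basic authentication, a FastCGI content handler, and the gzip output filter. The server name, allowed subnet, upstream address, and password file are illustrative assumptions, not values from the text:

```nginx
server {
    listen      80;
    server_name example.com;                          # illustrative name

    location ~ \.php$ {
        allow 192.168.0.0/24;                         # access phase: IP allow/deny
        deny  all;

        auth_basic           "Restricted";            # authentication phase
        auth_basic_user_file /etc/nginx/.htpasswd;    # assumed credentials file

        gzip on;                                      # output filter chain

        include      fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;                  # content handler: assumed FastCGI upstream
    }
}
```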

Nginx Modules: Modules are the little pieces of code that provide a specific feature or functionality in Nginx. It is because of these modules that Nginx can behave as a web server, reverse proxy server, or load balancing server. Hence, it is important to understand what modules are and how they formulate the Nginx HTTP request processing structure.

How Modules Fit in Nginx:
1. Start Nginx web server.
2. Nginx master process gets initiated.
3. Read nginx.conf.
4. Creates worker process(es), allocates memory, and performs other architecture-specific configuration as per the CPU architecture.
5. Based on the context (HTTP, MAIL, or STREAM), it creates a list of module handlers and maps them as per their location in nginx.conf.
6. If a request is http://abc.com , the request is processed in http context.
7. It checks which content module handler is needed to process the request, and the respective handler grabs the request and starts working on it.
8. Once the request is processed, the output is handed over to the filters like gzip, headers, rewrite, etc. The filters will manipulate the output further depending on their order of execution.
9. If there is any load balancer or proxy module, the respective module will further handle the output.
10. Finally, the response is sent over to the client.

module.png

Core Module:

Command: user
Syntax user <username>;
user <username> <groupname>;
Default value user nobody nobody;
Context main
Description This defines the identity under which the Nginx processes are started. It is recommended to use a least-privileged user.

Command: worker_processes
Syntax worker_processes <number>;
worker_processes auto;
Default value worker_processes 1;
Context main
Description This defines the number of worker processes started by Nginx. It is recommended to set the value to the number of CPU cores on the server. You can also use the value auto, which lets Nginx select an appropriate value. The optimal value depends on multiple factors and you should test the performance impact in your setup before and after making changes.

Command: error_log
Syntax error_log <path/filename> <level>;
error_log memory:size debug;
Default value error_log logs/error.log error;
Context main, http, mail, stream, server, location
Description This defines the location and level of the error logs that are captured. The different levels of error logging, from most detailed to most specific, are as follows:

debug : detailed information; used for debugging.
info : informational messages with lots of detail; not very useful.
notice : only notices are logged; not very useful.
warn : warning messages; indicate some kind of problem.
error : error logs; errors while serving pages.
crit : critical problems that need attention.
alert : alert messages for important issues.
emerg : emergency messages when the system is unstable.
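Following the pattern of the other directives, error_log might be set like this (the paths and sizes are illustrative assumptions; the in-memory variant is normally used with debug-enabled builds):

```nginx
error_log /var/log/nginx/error.log warn;   # warnings and worse go to a file
error_log memory:32m debug;                # debug log kept in a memory buffer
```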

Command: pid
Syntax pid <path/filename>;
Default value pid logs/nginx.pid;
Context main
Description This stores the process ID of the master process. You may think: why save the value of a process identifier in a file? It serves multiple purposes, especially signaling that the process has at least started successfully. It is also cheaper to poll a file than to parse the output of a ps -ax | grep command. However, be mindful that this approach is not fail-safe: the process may have been dead for a long time while the PID file contains stale information.
In general, PID files are created by daemons that should run only once on a system. When the process starts, it also creates a lock file. As long as the lock file exists, it won’t start another process. If the lock file exists, but the process ID mentioned in the PID file is not running, the daemon can be considered dead. It may also imply a crash or improper shutdown of the daemon, in which case it might initiate a special startup or restart scenario.

Command: worker_rlimit_nofile
Syntax worker_rlimit_nofile <number>;
Default value none
Context main
Description This defines the maximum number of open files for the worker processes.

Events Module:

Command: worker_connections
Syntax worker_connections <number>;
Default value worker_connections 512;
Context events
Description This defines the maximum number of simultaneous connections that can be handled by a worker process. Keep in mind that worker_connections cannot exceed worker_rlimit_nofile, if that is configured.
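Since worker_connections cannot usefully exceed the per-worker open-file limit, the two directives are often raised together. A sketch with illustrative values:

```nginx
worker_rlimit_nofile 8192;     # main context: open-file limit per worker

events {
    worker_connections 4096;   # kept below worker_rlimit_nofile
}
```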

Command: debug_connections
Syntax debug_connections <address>;
debug_connections <CIDR>;
Default value none
Context events
Description This enables debug logging for selected client connections. You can specify the IPv4 or IPv6 address of a client, or a CIDR range.

HTTP Module:

Command:include
Syntax include <path/filename>;
include <mask>;
Default value none
Context any
Description This includes other syntactically correct configuration files, or files matching a mask. Instead of maintaining a long and cluttered nginx.conf file, you can define each virtual server in a specific configuration file and include them directly in the nginx.conf file.
Example:
include conf/mime.types;
include /etc/nginx/proxy.conf;
include vhost/abc.com.conf;
include /etc/nginx/vhosts/*.conf;

Command: default_type
Syntax default_type <mime.types>;
Default value default_type text/plain;
Context http, server, location
Description This defines the default mime type of a response.
Example:
default_type text/plain;
default_type application/octet-stream;

Command: log_format
Syntax log_format <name> <string>;
Default value log_format combined '$remote_addr - $remote_user [$time_local]
"$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
Context http
Description This names a log format and defines it in terms of variables that are evaluated at the time the log is written. Some of the variables that can be used are as follows:

$body_bytes_sent : number of bytes sent to a client as the response body; this does not include the response header.
$http_referer : identifies the URL of the page that is requested.
$http_user_agent : identifies the agent or browser that requested the resource.
$remote_addr : IP address of the client making the request.
$remote_user : Username specified if basic authentication is used.
$request : the full original request line.
$time_local : local server time that served the request.
$status : the numeric HTTP status code of the response.
Example:
log_format combined '$remote_addr - $remote_user [$time_local]
"$request" $status $bytes_sent "$http_referer" "$http_user_agent"
"$gzip_ratio"';

Command: access_log
Syntax access_log <path/filename> [format];
access_log off;
Default value access_log logs/access.log combined;
Context http, server, location
Description This defines the path where logs are captured for the requests served by the server. When set to off, no logs are captured. The name combined refers to the log format to use while logging; in the log_format section previously mentioned, you saw how a format is named combined with the appropriate fields.
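Following the same pattern, access_log can point different contexts at different logs, or disable logging for noisy resources. A sketch; the paths and the /health location are illustrative assumptions:

```nginx
http {
    access_log /var/log/nginx/access.log combined;   # http-wide log, combined format

    server {
        location /health {
            access_log off;   # skip logging for health-check requests
        }
    }
}
```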

Command: sendfile
Syntax sendfile on | off;
Default value sendfile off;
Context http, server, location
Description This enables Nginx to send static content directly from the kernel instead of reading the resource from disk again. This prevents context switching and enables speedy delivery of resources. If you are serving static content, enabling this is essential, whereas if you are using the server as a reverse proxy, this setting does not make any sense.

Command: tcp_nopush
Syntax tcp_nopush on | off;
Default value tcp_nopush off;
Context http, server, location
Description This enables sending the response header and the beginning of the file in one packet rather than sending them in separate chunks. This parameter should be enabled together with the sendfile option.

Command: tcp_nodelay
Syntax tcp_nodelay on | off;
Default value tcp_nodelay on;
Context http, server, location
Description This enables Nginx to send data in chunks as it becomes available, hence avoiding network congestion. This option works with the keepalive option, which allows you to send data without initiating a new connection. It is effectively the opposite of the tcp_nopush option you saw earlier.

Command: keepalive_timeout
Syntax keepalive_timeout <number>;
Default value keepalive_timeout 75s;
Context http, server, location
Description This option sets a timeout value for a connection to stay alive. Setting keepalive_timeout to zero disables keep-alive connections.

Command: listen
Syntax listen <address>;
listen <ip_address>:<port>;
Default value listen *:80;
Context server
Description This option sets the address, IP address, or port on which the server will accept requests. You can use an IP address with a port, a port alone, or an address (i.e., a hostname) alone. If you add the default_server parameter to a listen directive, that server becomes the default server for the specified address or port.
Example:
listen www.abc.com;
listen 127.0.0.1:8080;
listen localhost default_server;

Command: server_name
Syntax server_name <address>;
server_name *.<address>;
server_name _;
Default value server_name “”;
Context server
Description This option allows you to set the server name for the listen port using either an exact name or a wildcard. Server names are stored in a hash table, which enables quick processing of static data like server names, MIME types, and request header strings (hash tables are also used by the ngx_http_map module). Using a wildcard enables having multiple URLs under the same domain name. The catch-all server name “_” is used when no valid domain name matches.
Example:
server_name www.abc.com one.abc.com;
server_name *.abc.com;
server_name _;

Command: root
Syntax root <path>;
Default value root html;
Context http, server, location
Description This option specifies the root location of the web content.

Command: error_page
Syntax error_page code uri;
Default value none
Context http, server, location
Description This option allows you to set the URI for a specific error.
Example:
error_page 403 http://www.abc.com/forbidden.html ;
error_page 500 501 502 /50x.html;

Command: try_files
Syntax try_files file uri;
Default value none
Context server, location
Description This option checks for the existence of files in the specified order and serves the first one it finds; the last parameter is a fallback URI, resolved against the path in the root directive, that is used internally when none of the files exist. In the example below, try_files first looks for the literal file matching $uri under the document root ($document_root, set by the root directive), then for a directory index, and finally falls back to /index.html to serve the request.
Example: try_files $uri $uri/ /index.html;
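A common shape for try_files is file, then directory, then a front-controller fallback. A sketch in context; the root path and the PHP fallback are illustrative assumptions:

```nginx
location / {
    root /website/root_directory;
    # serve the file if it exists, else the directory index,
    # else hand the request internally to index.php
    try_files $uri $uri/ /index.php;
}
```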

 

Managing Containers in Docker

Make sure you have already installed Vagrant and VirtualBox:

https://rafishaikblog.wordpress.com/2017/01/13/vagrant/

mohammedrafi@NOC-RAFI:~$ mkdir practise && cd practise

mohammedrafi@NOC-RAFI:~/practise$

mohammedrafi@NOC-RAFI:~/practise$ vim Vagrantfile
VAGRANTFILE_API_VERSION = "2"

$bootstrap = <<SCRIPT
apt-get update
apt-get -y install wget
wget -qO- https://get.docker.com/ | sh
gpasswd -a vagrant docker
service docker restart
SCRIPT

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", "1024"]
  end
  config.vm.provision :shell, inline: $bootstrap
end

mohammedrafi@NOC-RAFI:~/practise$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: A newer version of the box 'ubuntu/trusty64' is available! You currently
==> default: have version '20170424.0.0'. The latest is version '20170811.0.1'. Run
==> default: `vagrant box update` to update.
==> default: Setting the name of the VM: practise_default_1503821207852_39192
==> default: Clearing any previously set forwarded ports…
==> default: Clearing any previously set network interfaces…
==> default: Preparing network interfaces based on configuration…
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports…
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running ‘pre-boot’ VM customizations…
==> default: Booting VM…
==> default: Waiting for machine to boot. This may take a few minutes…
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest…
default: Removing insecure key from the guest if it’s present…
default: Key inserted! Disconnecting and reconnecting using new SSH key…
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM…
==> default: Configuring and enabling network interfaces…
==> default: Mounting shared folders…
default: /vagrant => /home/mohammedrafi/practise
==> default: Running provisioner: shell…
default: Running: inline script

mohammedrafi@NOC-RAFI:~/practise$ vagrant ssh

vagrant@vagrant-ubuntu-trusty-64:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

vagrant@vagrant-ubuntu-trusty-64:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:73:9e:83 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe73:9e83/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:c0:2b:e4 brd ff:ff:ff:ff:ff:ff
inet 192.168.33.10/24 brd 192.168.33.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fec0:2be4/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:78:d4:c7:e3 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever

vagrant@vagrant-ubuntu-trusty-64:~$ service docker status
docker start/running, process 5715

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -t -i ubuntu:14.04 /bin/bash
Unable to find image 'ubuntu:14.04' locally
14.04: Pulling from library/ubuntu
48f0413f904d: Pull complete
2bd2b2e92c5f: Pull complete
06ed1e3efabb: Pull complete
a220dbf88993: Pull complete
57c164185602: Pull complete
Digest: sha256:6a3e01207b899a347115f3859cf8a6031fdbebb6ffedea6c2097be40a298c85d
Status: Downloaded newer image for ubuntu:14.04
root@d576280af11a:/#

You see that Docker pulled the ubuntu:14.04 image, composed of several layers, and you got a session as root within a container. The prompt shows you the ID of the container.

vagrant@vagrant-ubuntu-trusty-64:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
vagrant@vagrant-ubuntu-trusty-64:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d576280af11a ubuntu:14.04 “/bin/bash” About a minute ago Exited (0) 6 seconds ago compassionate_galileo

Running a Docker Container in Detached Mode
vagrant@vagrant-ubuntu-trusty-64:~$ docker run -d -p 1234:1234 python:2.7 python -m SimpleHTTPServer 1234
Unable to find image 'python:2.7' locally
2.7: Pulling from library/python
ad74af05f5a2: Pull complete
2b032b8bbe8b: Pull complete
a9a5b35f6ead: Pull complete
3245b5a1c52c: Pull complete
032924b710ba: Pull complete
fd37168d02bb: Pull complete
8804355f8371: Pull complete
a19b278d4ccd: Pull complete
Digest: sha256:2cba1410697972e0148cbc547ac39e89ff85240f0d4d07db58091bc46e0ed0f2
Status: Downloaded newer image for python:2.7
1acc1263f6b53f25258fe3f0f1849b393719709557847d04fb8547c817f7416b
vagrant@vagrant-ubuntu-trusty-64:~$

vagrant@vagrant-ubuntu-trusty-64:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1acc1263f6b5 python:2.7 “python -m SimpleH…” 4 minutes ago Up 4 minutes 0.0.0.0:1234->1234/tcp infallible_brattain

http://192.168.33.10:1234/

Use the create , start , stop , kill , and rm commands of the Docker CLI.

Docker Image with a Dockerfile

A Dockerfile is a text file that describes the steps that Docker needs to take to prepare an image: installing packages, creating directories, and defining environment variables, among other things.

Create the following text file named Dockerfile in an empty working directory:
vagrant@vagrant-ubuntu-trusty-64:~$ vim Dockerfile

FROM busybox
ENV foo=bar

Then to build a new image called busybox2, you use the docker build command like so:
docker build -t busybox2 .

vagrant@vagrant-ubuntu-trusty-64:~$ docker build -t busybox2 .
Sending build context to Docker daemon 13.31kB
Step 1/2 : FROM busybox
latest: Pulling from library/busybox
add3ddb21ede: Pull complete
Digest: sha256:b82b5740006c1ab823596d2c07f081084ecdb32fd258072707b99f52a3cb8692
Status: Downloaded newer image for busybox:latest
 ---> d20ae45477cb
Step 2/2 : ENV foo bar
 ---> Running in 9586e12112e9
 ---> 62b708e0e76d
Removing intermediate container 9586e12112e9
Successfully built 62b708e0e76d
Successfully tagged busybox2:latest
vagrant@vagrant-ubuntu-trusty-64:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox2 latest 62b708e0e76d 8 minutes ago 1.13MB
busybox latest d20ae45477cb 3 days ago 1.13MB

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -it busybox2
/ # env |grep foo
foo=bar

vagrant@vagrant-ubuntu-trusty-64:~$ docker run busybox2 env |grep foo
foo=bar

Run WordPress in a Single Container

Use Supervisor to monitor and run both MySQL and HTTPD. Supervisor is not an init system, but is meant to control multiple processes and is run like any other program.

To run WordPress, you will need to install MySQL, Apache 2 (i.e., httpd), and PHP, and grab the latest WordPress release. You will also need to create a database for WordPress.

vagrant@vagrant-ubuntu-trusty-64:~$ vim Dockerfile
FROM ubuntu:14.04

RUN apt-get update && apt-get -y install apache2 php5 php5-mysql supervisor wget
RUN echo 'mysql-server mysql-server/root_password password root' | debconf-set-selections && echo 'mysql-server mysql-server/root_password_again password root' | debconf-set-selections
RUN apt-get install -qqy mysql-server
RUN wget http://wordpress.org/latest.tar.gz && tar xzvf latest.tar.gz && cp -R ./wordpress/* /var/www/html && rm /var/www/html/index.html

RUN (/usr/bin/mysqld_safe &); sleep 5; mysqladmin -u root -proot create wordpress
COPY wp-config.php /var/www/html/wp-config.php
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
CMD ["/usr/bin/supervisord"]

vagrant@vagrant-ubuntu-trusty-64:~$ vim supervisord.conf
[supervisord]
nodaemon=true

[program:mysqld]
command=/usr/bin/mysqld_safe
autostart=true
autorestart=true
user=root

[program:httpd]
command=/bin/bash -c "rm -rf /run/httpd/* && /usr/sbin/apachectl -D FOREGROUND"

vagrant@vagrant-ubuntu-trusty-64:~$ vim wp-config.php

/**
* The base configurations of the WordPress.
*
* This file has the following configurations: MySQL settings, Table Prefix,
* Secret Keys, and ABSPATH. You can find more information by visiting
* {@link http://codex.wordpress.org/Editing_wp-config.php Editing wp-config.php}
* Codex page. You can get the MySQL settings from your web host.
*
* This file is used by the wp-config.php creation script during the
* installation. You don't have to use the web site, you can just copy this file
* to "wp-config.php" and fill in the values.
*
* @package WordPress
*/

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');

/** MySQL database username */
define('DB_USER', 'root');

/** MySQL database password */
define('DB_PASSWORD', 'root');

/** MySQL hostname */
define('DB_HOST', 'localhost');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

/**#@+
* Authentication Unique Keys and Salts.
*
* Change these to different unique phrases!
* You can generate these using the {@link https://api.wordpress.org/secret-key/1.1/salt/ WordPress.org secret-key service}
* You can change these at any point in time to invalidate all existing cookies. This will force all users to have to log in again.
*
* @since 2.6.0
*/
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');

/**#@-*/

/**
* WordPress Database Table prefix.
*
* You can have multiple installations in one database if you give each a unique
* prefix. Only numbers, letters, and underscores please!
*/
$table_prefix = 'wp_';

/**
* For developers: WordPress debugging mode.
*
* Change this to true to enable the display of notices during development.
* It is strongly recommended that plugin and theme developers use WP_DEBUG
* in their development environments.
*/
define('WP_DEBUG', false);

/* That's all, stop editing! Happy blogging. */

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
    define('ABSPATH', dirname(__FILE__) . '/');

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');

vagrant@vagrant-ubuntu-trusty-64:~$ docker build -t wordpress .
c69811d4e993
Step 2/10 : RUN apt-get update && apt-get -y install apache2 php5 php5-mysql supervisor wget
---> Using cache
---> e60dd8009735
Step 3/10 : RUN echo 'mysql-server mysql-server/root_password password root' | debconf-set-selections && echo 'mysql-server mysql-server/root_password_again password root' | debconf-set-selections
---> Using cache
---> 7fdf6f470aa2
Step 4/10 : RUN apt-get install -qqy mysql-server
---> Using cache
---> 66a3a4462138
Step 5/10 : RUN wget http://wordpress.org/latest.tar.gz && tar xzvf latest.tar.gz && cp -R ./wordpress/* /var/www/html && rm /var/www/html/index.html
---> Using cache
---> 37772ea1e4b4
Step 6/10 : RUN (/usr/bin/mysqld_safe &); sleep 5; mysqladmin -u root -proot create wordpress
---> Using cache
---> 04cf1e84fd24
Step 7/10 : COPY wp-config.php /var/www/html/wp-config.php
---> 714039437acb
Removing intermediate container 55c236462fc8
Step 8/10 : COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
---> 3cc59b6f5fa1
Removing intermediate container bd300e4f8cec
Step 9/10 : EXPOSE 80
---> Running in 05fb5fb0f058
---> 37ab444b93b3
Removing intermediate container 05fb5fb0f058
Step 10/10 : CMD /usr/bin/supervisord
---> Running in 7733ace6279b
---> 576eb043a6f8
Removing intermediate container 7733ace6279b
Successfully built 576eb043a6f8
Successfully tagged wordpress:latest

vagrant@vagrant-ubuntu-trusty-64:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
wordpress latest 576eb043a6f8 41 seconds ago 466MB

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -d -p 80:80 wordpress
37130c1afeed71132444f23b3b3923732ca032934c4e650d472acb66ede3263a

http://192.168.33.10

https://github.com/how2dock/docbook/tree/master/ch01/supervisor

Backing Up a Database Running in a Container

vagrant@vagrant-ubuntu-trusty-64:~$ docker run --name mysqlwp -e MYSQL_ROOT_PASSWORD=wordpressdocker -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpresspwd -v /home/docker/mysql:/var/lib/mysql -d mysql
da575a84ea6a187bd989409f4db3e45b4cb9268b249c121eb306a9e95075264b

vagrant@vagrant-ubuntu-trusty-64:~$ ls
Dockerfile supervisord.conf wp-config.php

vagrant@vagrant-ubuntu-trusty-64:~$ docker exec mysqlwp mysqldump --all-databases --password=wordpressdocker > wordpress.backup
mysqldump: [Warning] Using a password on the command line interface can be insecure.

vagrant@vagrant-ubuntu-trusty-64:~$ ls
Dockerfile supervisord.conf wordpress.backup wp-config.php
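The dump can be restored the same way in reverse, by piping the file back into a mysql client inside the container; a minimal sketch, assuming the same mysqlwp container and root password:

```shell
# docker exec -i keeps stdin open so the dump can be piped in
docker exec -i mysqlwp mysql --password=wordpressdocker < wordpress.backup
```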

Copying Data to and from Containers

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -d --name testcopy ubuntu:14.04 sleep 360
25acb54acf7154702c8999c965ba612d5f63b3b16aa3c97f0d195e80d0ea1d3b

vagrant@vagrant-ubuntu-trusty-64:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25acb54acf71 ubuntu:14.04 "sleep 360" About a minute ago Up About a minute testcopy

vagrant@vagrant-ubuntu-trusty-64:~$ docker exec -ti testcopy /bin/bash

root@25acb54acf71:/# cd /root/

root@25acb54acf71:~# echo 'I am in the container' > file.txt

root@25acb54acf71:~# exit
exit

vagrant@vagrant-ubuntu-trusty-64:~$ docker cp testcopy:/root/file.txt .

vagrant@vagrant-ubuntu-trusty-64:~$ cat file.txt
I am in the container

Keeping Changes Made to a Container by Committing to an Image

vagrant@vagrant-ubuntu-trusty-64:~$ docker commit 25acb54acf71 ubuntu:update

vagrant@vagrant-ubuntu-trusty-64:~$ docker diff 25acb54acf71
C /root
A /root/.bash_history
A /root/file.txt

Saving Images and Containers as Tar Files for Sharing

vagrant@vagrant-ubuntu-trusty-64:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25acb54acf71 ubuntu:14.04 "sleep 360" 2 hours ago Exited (0) 2 hours ago testcopy

vagrant@vagrant-ubuntu-trusty-64:~$ docker export 25acb54acf71 > update.tar

vagrant@vagrant-ubuntu-trusty-64:~$ ls -lh update.tar
-rw-rw-r-- 1 vagrant vagrant 176M Aug 28 12:06 update.tar

Writing Your First Dockerfile

The FROM instruction tells Docker which image to base the new image on. Here you choose the ubuntu:14.04 image from the official Ubuntu repository on Docker Hub.

The ENTRYPOINT instruction specifies the command to run when a container based on this image is started. To build the image, issue docker build . at the prompt.

vagrant@vagrant-ubuntu-trusty-64:~$ vim Dockerfile
FROM ubuntu:14.04
ENTRYPOINT ["/bin/echo"]

vagrant@vagrant-ubuntu-trusty-64:~$ docker build .
Sending build context to Docker daemon 250.7MB
Step 1/2 : FROM ubuntu:14.04
---> c69811d4e993
Step 2/2 : ENTRYPOINT /bin/echo
---> Running in 6493e28293cf
---> d4d1bcf8983b
Removing intermediate container 6493e28293cf
Successfully built d4d1bcf8983b

vagrant@vagrant-ubuntu-trusty-64:~$ docker run d4d1bcf8983b Hi Docker !
Hi Docker !

vagrant@vagrant-ubuntu-trusty-64:~$ docker run d4d1bcf8983b

vagrant@vagrant-ubuntu-trusty-64:~$

vagrant@vagrant-ubuntu-trusty-64:~$ vim Dockerfile
FROM ubuntu:14.04
CMD ["/bin/echo", "Hello added content while building image itself !"]

vagrant@vagrant-ubuntu-trusty-64:~$ docker build .
Sending build context to Docker daemon 250.7MB
Step 1/2 : FROM ubuntu:14.04
---> c69811d4e993
Step 2/2 : CMD /bin/echo Hello added content while building image itself !
---> Running in 5882ba26a61a
---> 2a54fadd69bd
Removing intermediate container 5882ba26a61a
Successfully built 2a54fadd69bd

vagrant@vagrant-ubuntu-trusty-64:~$ docker run 2a54fadd69bd
Hello added content while building image itself !

vagrant@vagrant-ubuntu-trusty-64:~$ docker build -t cookbook:hello .
Sending build context to Docker daemon 250.7MB
Step 1/2 : FROM ubuntu:14.04
---> c69811d4e993
Step 2/2 : CMD /bin/echo Hello added content while building image itself !
---> Using cache
---> 2a54fadd69bd
Successfully built 2a54fadd69bd
Successfully tagged cookbook:hello

vagrant@vagrant-ubuntu-trusty-64:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
cookbook hello 2a54fadd69bd About a minute ago 188MB

You could also use the CMD instruction in a Dockerfile. This has the advantage that you can overwrite the CMD behavior when you launch a container, by passing a new CMD as an argument to docker run .

Remember that CMD can be overwritten by an argument to docker run, while ENTRYPOINT can be overwritten only by using the --entrypoint option of docker run.
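The two instructions can also be combined: ENTRYPOINT fixes the executable while CMD supplies default arguments, which any arguments passed to docker run replace. A minimal sketch (the default message is illustrative):

```dockerfile
FROM ubuntu:14.04
# the fixed executable; only --entrypoint can override this
ENTRYPOINT ["/bin/echo"]
# default arguments; replaced by anything passed after the image name
CMD ["Hello by default"]
```

Running the image with no arguments prints the default message, while docker run with trailing arguments (e.g. Hi Docker !) echoes those instead.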

Packaging a Flask Application Inside a Container

vagrant@vagrant-ubuntu-trusty-64:~$ vim hello.py
#!/usr/bin/env python
from flask import Flask

app = Flask(__name__)

@app.route('/hi')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

vagrant@vagrant-ubuntu-trusty-64:~$ vim Dockerfile
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y python
RUN apt-get install -y python-pip
RUN apt-get clean all
RUN pip install flask
ADD hello.py /tmp/hello.py
EXPOSE 5000
CMD ["python", "/tmp/hello.py"]

vagrant@vagrant-ubuntu-trusty-64:~$ docker build -t flask .
Sending build context to Docker daemon 250.7MB
Step 1/9 : FROM ubuntu:14.04
---> c69811d4e993
Step 2/9 : RUN apt-get update
---> Using cache
---> fa3d4260c858
Step 3/9 : RUN apt-get install -y python
---> Using cache
---> 66cf6b785aeb
Step 4/9 : RUN apt-get install -y python-pip
---> Using cache
---> 05f778da6ea4
Step 5/9 : RUN apt-get clean all
---> Using cache
---> 8b030994d0b9
Step 6/9 : RUN pip install flask
---> Using cache
---> 19df885ecbc6
Step 7/9 : ADD hello.py /tmp/hello.py
---> Using cache
---> ea1a4989b57f
Step 8/9 : EXPOSE 5000
---> Using cache
---> 01ef8bd8ef06
Step 9/9 : CMD python /tmp/hello.py
---> Using cache
---> 7b14cc15e557
Successfully built 7b14cc15e557
Successfully tagged flask:latest

vagrant@vagrant-ubuntu-trusty-64:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
flask latest 7b14cc15e557 About a minute ago 363MB

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -t -i -P flask /bin/bash

root@69e2a2a1811b:/# ls -l /tmp/hello.py
-rw-rw-r-- 1 root root 189 Aug 28 13:18 /tmp/hello.py

root@69e2a2a1811b:/# cat /tmp/hello.py
#!/usr/bin/env python
from flask import Flask

app = Flask(__name__)

@app.route('/hi')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
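Instead of opening a shell inside the container, it can also be run detached with the exposed port published, and the route checked from the host; a sketch (the container name flasktest is illustrative):

```shell
docker run -d --name flasktest -p 5000:5000 flask
curl http://localhost:5000/hi   # requests the /hi route defined in hello.py
docker rm -f flasktest          # clean up the test container
```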

 

Types of Modules (roles & profiles) and file structure in Puppet

Types of Modules

Component Modules: modules managing specific technologies.

Profile Modules: wrapper modules which use multiple component modules to create a technology stack.
ex: creating an app-server profile by combining web-server and Tomcat component modules

Role Modules: wrapper modules which use multiple profiles to create a complete system configuration.
ex: creating a web-server role by combining the web-server and host-config profile modules
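In Puppet code the wrapper pattern above looks roughly like this; the class and module names are illustrative, not part of this setup:

```puppet
# a profile wraps component modules into one technology stack
class profile::appserver {
  include apache    # component module
  include tomcat    # component module
}

# a role wraps profiles into a complete system configuration
class role::webserver {
  include profile::base       # e.g. host config: users, ntp, firewall
  include profile::appserver
}

# each node then gets exactly one role
node 'web01.example.com' {
  include role::webserver
}
```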

Directory & file structure for puppet modules

manifests: contains Puppet logic written in the Puppet language
files: contains static Puppet-managed files required for configuration
templates: contains dynamic Puppet-managed files whose content differs between agent nodes
lib: contains custom plugins such as facts or resource types
facts.d: contains external facts in the form of yaml, json, or txt files
examples: contains smoke-test files for the configured manifests
spec: contains unit and other tests for the module

###########################

Files:

Gemfile: contains the list of Ruby gems required for module activities.
metadata.json: contains module-specific information, especially what is required for publishing modules on the Puppet Forge
Rakefile: contains a series of tasks specific to testing the module
README.md: contains documentation for the module
CHANGELOG.md: contains release-specific change information such as bug fixes, features, known issues, and so on.
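A minimal metadata.json, of the kind puppet module generate creates, looks roughly like this (all values are illustrative):

```json
{
  "name": "rafi494-samplemodule",
  "version": "0.1.0",
  "author": "rafi494",
  "summary": "A sample module",
  "license": "Apache-2.0",
  "source": "https://github.com/rafi494/samplemodule",
  "dependencies": [
    { "name": "puppetlabs-stdlib", "version_requirement": ">= 1.0.0" }
  ]
}
```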

mohammedrafi@NOC-RAFI:~$ mkdir mymodule && cd mymodule
mohammedrafi@NOC-RAFI:~/mymodule$ puppet module generate --skip-interview rafi494-samplemodule

Notice: Generating module at /home/mohammedrafi/mymodule/rafi494-samplemodule...
Notice: Populating templates...
Finished; module generated in rafi494-samplemodule.
rafi494-samplemodule/README.md
rafi494-samplemodule/Gemfile
rafi494-samplemodule/manifests
rafi494-samplemodule/manifests/init.pp
rafi494-samplemodule/spec
rafi494-samplemodule/spec/spec_helper.rb
rafi494-samplemodule/spec/classes
rafi494-samplemodule/spec/classes/init_spec.rb
rafi494-samplemodule/metadata.json
rafi494-samplemodule/Rakefile
rafi494-samplemodule/tests
rafi494-samplemodule/tests/init.pp

mohammedrafi@NOC-RAFI:~/mymodule$ ls -l
total 4
drwxrwxr-x 5 mohammedrafi mohammedrafi 4096 Aug 24 07:45 rafi494-samplemodule

mohammedrafi@NOC-RAFI:~/mymodule$ tree rafi494-samplemodule/
rafi494-samplemodule/
|-- Gemfile
|-- manifests
|   `-- init.pp
|-- metadata.json
|-- Rakefile
|-- README.md
|-- spec
|   |-- classes
|   |   `-- init_spec.rb
|   `-- spec_helper.rb
`-- tests
    `-- init.pp

4 directories, 8 files

 

 

Validating syntax

bundle exec rake validate
puppet-lint manifests/
bundle exec rake
bundle exec rake lint

http://puppet-lint.com/
http://puppet-lint.com/checks/
http://puppet-lint.com/checks/2sp_soft_tabs/

vim Rakefile
PuppetLint.configuration.send('disable_2sp_soft_tabs')

Writing Unit Tests with rspec-puppet

http://rspec-puppet.com/
http://rspec-puppet.com/tutorial/
https://github.com/rodjek/rspec-puppet/

Writing Tests with Beaker Using Serverspec
bundle install
bundle update

https://github.com/puppetlabs/beaker
http://serverspec.org/
http://serverspec.org/resource_types.html

 

Managing Puppet Environment with r10k

Managing Environment with r10k

r10k is a code management tool that allows you to manage your environment configurations (such as production, testing, and development) in a source control repository. Based on the code in your control repo branches, r10k creates environments on your master and installs and updates the modules you want in each environment.

[root@www ~]# vim /etc/puppetlabs/puppet/r10k.yaml
sources:
  operations:
    remote: 'git@github.com:rafi494/environments.git'
    basedir: '/etc/puppetlabs/code/environments'
    prefix: false

Create the repository on GitHub and add an SSH key to your GitHub account so that we can also push code from the remote node.

[root@www ~]# cd /opt/

[root@www opt]# git clone git@github.com:rafi494/environments.git
Cloning into 'environments'...
The authenticity of host 'github.com (192.30.253.112)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.253.112' (RSA) to the list of known hosts.
warning: You appear to have cloned an empty repository.

[root@www opt]# ll
total 0
drwxr-xr-x 3 root root 18 Aug 23 13:12 environments
drwxr-xr-x 8 root root 95 Aug 15 10:40 puppetlabs

[root@www opt]# cd environments/

[root@www environments]# librarian-puppet init
create Puppetfile

[root@www environments]# cat Puppetfile
#!/usr/bin/env ruby
#^syntax detection

forge "https://forgeapi.puppetlabs.com"

mod 'puppetlabs-stdlib'

mod 'puppetlabs-ntp',
  :git => 'git://github.com/puppetlabs/puppetlabs-ntp.git'

mod 'puppetlabs-apt',
  :git => 'https://github.com/puppetlabs/puppetlabs-apt.git',
  :ref => '1.4.x'

[root@www environments]# mkdir manifests
[root@www environments]# vim manifests/site.pp
node /agent/ {
  include testrepo
}

[root@www environments]# vim environments.conf
modulepath = site:modules:$basemodulepath
manifest = manifests/site.pp

[root@www environments]# ls -l
total 8
-rw-r--r-- 1 root root 71 Aug 23 13:40 environments.conf
drwxr-xr-x 2 root root 21 Aug 23 13:37 manifests
-rw-r--r-- 1 root root 292 Aug 23 13:19 Puppetfile

[root@www environments]# cd /opt/

[root@www opt]# git clone git@github.com:rafi494/testrepo.git
Cloning into 'testrepo'...
Warning: Permanently added the RSA host key for IP address '192.30.255.112' to the list of known hosts.
warning: You appear to have cloned an empty repository.

[root@www opt]# cd testrepo/
[root@www testrepo]# mkdir manifests
[root@www testrepo]# touch manifests/init.pp
[root@www testrepo]# vim manifests/init.pp
class testrepo {
  if $environment == 'production' {
    notify { 'default-message':
      message => "This is the production environment",
    }
  } else {
    notify { 'default-message':
      message => "This is not production",
    }
  }
}

[root@www testrepo]# git add -A
[root@www testrepo]# git commit -m "Initial commit"
[master (root-commit) 921b85e] Initial commit
Committer: root <root@www.server.com>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly:

git config --global user.name "Your Name"
git config --global user.email you@example.com

After doing this, you may fix the identity used for this commit with:

git commit --amend --reset-author

1 file changed, 11 insertions(+)
create mode 100644 manifests/init.pp
[root@www testrepo]# git push origin master
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Counting objects: 4, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (4/4), 376 bytes | 0 bytes/s, done.
Total 4 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/testrepo.git
* [new branch] master -> master
[root@www testrepo]#

[root@www testrepo]# vim ../environments/Puppetfile
#!/usr/bin/env ruby
#^syntax detection

forge "https://forgeapi.puppetlabs.com"

mod 'puppetlabs-stdlib'

mod 'puppetlabs-ntp',
  :git => 'git://github.com/puppetlabs/puppetlabs-ntp.git'

mod 'puppetlabs-apt',
  :git => 'https://github.com/puppetlabs/puppetlabs-apt.git',
  :ref => '1.4.x'

mod 'testrepo',
  :git => 'https://github.com/rafi494/testrepo.git',
  :branch => 'master'

[root@www testrepo]# cd ../environments/
[root@www environments]# git add -A
[root@www environments]# git commit -m "adding env files"
[master (root-commit) 875c19c] adding env files
Committer: root <root@www.server.com>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly:

git config --global user.name "Your Name"
git config --global user.email you@example.com

After doing this, you may fix the identity used for this commit with:

git commit --amend --reset-author

4 files changed, 26 insertions(+)
create mode 100644 .gitignore
create mode 100644 Puppetfile
create mode 100644 environments.conf
create mode 100644 manifests/site.pp
[root@www environments]# git push origin master
Counting objects: 7, done.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (7/7), 682 bytes | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/environments.git
* [new branch] master -> master

[root@www environments]# git branch production
[root@www environments]# git branch staging
[root@www environments]# git branch test
[root@www environments]# git checkout production
Switched to branch 'production'
[root@www environments]# git push origin production
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/environments.git
* [new branch] production -> production
[root@www environments]# git checkout staging
Switched to branch 'staging'
[root@www environments]# git push origin staging
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/environments.git
* [new branch] staging -> staging
[root@www environments]# git checkout test
Switched to branch 'test'
[root@www environments]# git push origin test
Warning: Permanently added the RSA host key for IP address '192.30.255.113' to the list of known hosts.
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/environments.git
* [new branch] test -> test

[root@www environments]# gem install r10k --no-rdoc --no-ri
ERROR: While executing gem ... (Gem::DependencyError)
Unable to resolve dependencies: r10k requires semantic_puppet (~> 0.1.0)

[root@www environments]# gem install puppet_forge:2.2.6 r10k
Fetching: semantic_puppet-0.1.4.gem (100%)
Successfully installed semantic_puppet-0.1.4
Fetching: puppet_forge-2.2.6.gem (100%)
Successfully installed puppet_forge-2.2.6
Parsing documentation for semantic_puppet-0.1.4
Installing ri documentation for semantic_puppet-0.1.4
Parsing documentation for puppet_forge-2.2.6
Installing ri documentation for puppet_forge-2.2.6
Fetching: colored-1.2.gem (100%)
Successfully installed colored-1.2
Fetching: cri-2.6.1.gem (100%)
Successfully installed cri-2.6.1
Fetching: log4r-1.1.10.gem (100%)
Successfully installed log4r-1.1.10
Fetching: multi_json-1.12.1.gem (100%)
Successfully installed multi_json-1.12.1
Fetching: r10k-2.5.5.gem (100%)
Successfully installed r10k-2.5.5
Parsing documentation for colored-1.2
Installing ri documentation for colored-1.2
Parsing documentation for cri-2.6.1
Installing ri documentation for cri-2.6.1
Parsing documentation for log4r-1.1.10
Installing ri documentation for log4r-1.1.10
Parsing documentation for multi_json-1.12.1
Installing ri documentation for multi_json-1.12.1
Parsing documentation for r10k-2.5.5
Installing ri documentation for r10k-2.5.5
7 gems installed

-p processes the Puppetfile in each environment
-c specifies the r10k config file
[root@www environments]# r10k deploy environment -p -c /etc/puppetlabs/puppet/r10k.yaml

[root@www ~]# r10k deploy environment -p -c /etc/puppetlabs/puppet/r10k.yaml -v debug2 --trace

[root@www ~]# ls -l /etc/puppetlabs/code/environments/
total 0
drwxr-xr-x 5 root root 136 Aug 24 00:09 master
drwxr-xr-x 5 root root 136 Aug 24 00:10 production
drwxr-xr-x 5 root root 136 Aug 24 00:12 staging
drwxr-xr-x 5 root root 136 Aug 24 00:13 test
[root@www ~]# ls -l /etc/puppetlabs/code/environments/master/
total 8
-rw-r--r-- 1 root root 72 Aug 24 00:07 environments.conf
drwxr-xr-x 2 root root 21 Aug 24 00:07 manifests
drwxr-xr-x 6 root root 58 Aug 24 00:09 modules
-rw-r--r-- 1 root root 384 Aug 24 00:07 Puppetfile
[root@www ~]# ls -l /etc/puppetlabs/code/environments/master/modules/
total 8
drwxr-xr-x 7 root root 296 Aug 24 00:09 apt
drwxr-xr-x 11 root root 4096 Aug 24 00:09 ntp
drwxr-xr-x 9 root root 4096 Aug 24 00:08 stdlib
drwxr-xr-x 4 root root 35 Aug 24 00:09 testrepo
[root@www ~]# ls -l /etc/puppetlabs/code/environments/master/manifests/
total 4
-rw-r--r-- 1 root root 35 Aug 24 00:07 site.pp

[root@www ~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for agent
Info: Applying configuration version '1503514188'
Notice: This is the production environment
Notice: /Stage[main]/Testrepo/Notify[default-message]/message: defined 'message' as 'This is the production environment'
Notice: Applied catalog in 0.22 seconds

[root@www ~]# puppet agent -t --environment staging
Info: Using configured environment 'staging'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for agent
Info: Applying configuration version '1503514258'
Notice: This is not production
Notice: /Stage[main]/Testrepo/Notify[default-message]/message: defined 'message' as 'This is not production'
Notice: Applied catalog in 0.16 seconds