AWS Certified Developer – Associate Level

AWS SDK Boto Configuration

1) If you’re executing code against AWS on an EC2 instance that is assigned an IAM role, which of the following is a true statement?

The code will assume the same permissions as the EC2 role

2) An IAM role, when assigned to an EC2 instance, will allow code to be executed on that instance without API access keys.

True

Explanation
An EC2 instance can assume an IAM role and inherit that role's permissions. Any code executed on the instance can then make whichever API calls the role's policies allow, and the app or CLI running on the instance does not need API access keys.
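You can confirm this from the instance itself: the role's temporary credentials are served by the instance metadata service (a standard AWS endpoint; substitute your role name):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>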

3) You need to already know Python in order to take this course.

False

Explanation
The AWS Certified Developer Associate Level certification focuses on developer concepts. In this course we use Python to demonstrate certain concepts; however, knowing Python is not a requirement for taking the certification. Being familiar with REST APIs and the core API calls for common AWS services is required. You do not have to be a developer to take the certification, but it is highly suggested.

4) If you are connecting to AWS from a computer, not an EC2 instance, you need to create an AWS user, attach permissions, and use the API access key and secret access key in your Python code.

True

Explanation
AWS is API-driven: almost all services and resources are available via an API, and you can make those API calls without running your application on AWS. With an access key and secret access key, you can connect to the AWS API from any server or instance, even one not running on AWS.
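As an illustration of how those credentials typically reach boto (the key values below are placeholders, not real credentials): boto reads the standard environment variables or the shared credentials file, while on an EC2 instance with an IAM role neither is needed, since temporary credentials come from the instance metadata service.

export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=wJalrEXAMPLESECRET

# or keep them in the shared credentials file instead:
cat >> ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = wJalrEXAMPLESECRET
EOF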

 

Managing kernel parameters

To list all kernel parameters

[root@instance-1 ~]# sysctl -a
abi.vsyscall32 = 1
crypto.fips_enabled = 0
debug.exception-trace = 1
debug.kprobes-optimization = 1
dev.hpet.max-user-freq = 64
dev.mac_hid.mouse_button2_keycode = 97
dev.mac_hid.mouse_button3_keycode = 100

ex:

[root@instance-1 ~]# cat /proc/sys/fs/file-max
373992

[root@instance-1 ~]# sysctl -a |grep fs.file-max
fs.file-max = 373992

[root@instance-1 ~]# sysctl -w fs.file-max=380000
fs.file-max = 380000

[root@instance-1 ~]# sysctl -q fs.file-max
fs.file-max = 380000

[root@instance-1 ~]# sysctl -a |grep fs.file-max
fs.file-max = 380000

[root@instance-1 ~]# sysctl -p

[root@instance-1 ~]# sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 60999

sysctl -p /etc/sysctl.conf
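Note that sysctl -w only changes the running kernel; the value is lost on reboot. A minimal sketch of persisting it (assuming the classic /etc/sysctl.conf location; newer systems also read /etc/sysctl.d/*.conf):

echo "fs.file-max = 380000" >> /etc/sysctl.conf   # persist the setting
sysctl -p /etc/sysctl.conf                        # re-read the file and apply it now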

Jenkins (CJE) Exercise Guide

#######################################################################
Exercise: Install a Jenkins Master and Prerequisites

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Basic Instructions

Download the appropriate java jdk version from the Oracle website. For this course we use jdk-8u121.
Copy the package from your local environment to the target server.
Install and configure Java (jdk-8u121)
Install Jenkins (version 2.19.4)
Complete the Install Wizard.

Prerequisite Install

Go to: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Download the appropriate java jdk version from the Oracle website. For this course we use jdk-8u121.
Copy the package from your local environment to the target server.
Below is an example using scp:
scp jdk-8u121-linux-x64.rpm user@your-server:/home/user/
Install the jdk package.
rpm -Uvh jdk-8u121-linux-x64.rpm
Setup Alternatives for Java:
alternatives --install /usr/bin/java java /usr/java/latest/bin/java 200000
alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000
alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000
Note: Check out Terry’s nugget on this for more detail:
“Setting local and global java environment variables”
Set JAVA_HOME environmental variable in rc.local.
vi /etc/rc.local

export JAVA_HOME="/usr/java/latest"
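A quick sanity check after the alternatives and JAVA_HOME setup (standard commands; the version shown assumes the jdk-8u121 install above):

alternatives --display java   # should list /usr/java/latest/bin/java
java -version                 # should report the 1.8.0_121 JDK
echo $JAVA_HOME               # should print /usr/java/latest after a re-login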
Jenkins Install

Add the Jenkins repo to your yum sources on the CentOS node.
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
Import the Jenkins rpm signing key.
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
Install the Jenkins package.
We will be using version 2.19.4 which is the target version for the certification test.
yum install -y jenkins-2.19.4-1.1
Check for services running on 8080 before starting the Jenkins service.
netstat -tulpn | grep 8080
If nothing is running on 8080, go ahead and start the service via systemctl.
systemctl start jenkins
Also, enable the Jenkins service so it starts on system startup.
systemctl enable jenkins
Check again for services running on port 8080.
There will be a slight delay, so we’ll use watch to wait for the signal.
watch -n 1 "netstat -tulpn | grep 8080"
Press ctrl-c to break the watch once the service shows as running on 8080.
Visit the web gui at:
http://your-server-fqdn:8080/
You’ll be prompted for the initialAdminPassword which is located in /var/lib/jenkins/secrets/initialAdminPassword on the system being configured. You’ll want to cat that and copy and paste it into the browser.
cat /var/lib/jenkins/secrets/initialAdminPassword
Paste the password into Install Wizard.
Choose “Install Suggested Plugins.”
Set admin settings, user, password, etc…
Press “Enter.”
Click “Start Using Jenkins.”
Now you have your Jenkins Master up and running!
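As an optional sanity check once the wizard completes (standard systemctl and curl usage):

systemctl status jenkins         # should report active (running)
curl -I http://localhost:8080/   # the UI should respond; a 403 for anonymous is normal once security is enabled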

#######################################################################
Exercise: Configuring Matrix-based Security

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Basic Instructions

Add a ‘developer’ user
Setup Matrix-based security in “Configure Global Security”
Give all permissions to the admin user.
Give read permissions to Anonymous.
Give all permissions except “delete” and “administer” to the “developer” user.
Prerequisite

Must have a configured Jenkins Master
Procedure

Go to the Jenkins Dashboard.
http://your-server:8080
Click “Manage Jenkins.”
Click “Manage Users”
Click “Create User”
Create the ‘developer’ user.
Click back to “Manage Jenkins.”
Click “Configure Global Security.”
Check “Enable Security.”
Select “Jenkins’ own user database.”
Place a check mark next to “Allow users to sign up.”
Select “Matrix-based Security.”
Select the admin user and check all the permissions.
Set “Read” permissions for “Anonymous.”
Set the ‘developer’ user and select all permissions except Administer and “Delete” permissions.
Click Save.
###################################################
Exercise: Add a Jenkins Slave

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisites:

Spin up another lab instance for a slave
Have an RSA key pair generated for the Jenkins User on the Master
Basic Instructions

Set up Java (jdk-8u121) like you did on the master previously.
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Add a Jenkins User and home to the slave node.
Copy the Jenkins Master’s public key to the slave node’s authorized_keys.
Add a Node in the Jenkins Dashboard.
Set the appropriate configuration items, including the node’s fqdn.
Use “Private key” “From Jenkins Master” for credentials.
Ensure the agent is available for use from the Node view. (no red ‘x’)

 

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisites

Spin up the node (cloud instance, or hardware).
Install and configure Java as described in the Install Prerequisite section.
Generate an RSA key pair for the Jenkins user on the Jenkins Master.
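A minimal sketch of that key generation (assuming the jenkins user’s home is /var/lib/jenkins, as created by the package install):

# run as root on the Jenkins master
mkdir -p /var/lib/jenkins/.ssh
ssh-keygen -t rsa -f /var/lib/jenkins/.ssh/id_rsa -N ""   # empty passphrase
chown -R jenkins:jenkins /var/lib/jenkins/.ssh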
JAVA SETUP ON SLAVE

Go to: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Download the appropriate java jdk version from the Oracle website. For this course we use jdk-8u121.
Copy the package from your local environment to the target server.
Below is an example using scp:
scp jdk-8u121-linux-x64.rpm user@your-server:/home/user/
Install the jdk package.
rpm -Uvh jdk-8u121-linux-x64.rpm
Setup Alternatives for Java:
alternatives --install /usr/bin/java java /usr/java/latest/bin/java 200000
alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000
alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000
Note: Check out Terry’s nugget on this for more detail:
“Setting local and global java environment variables”
Set JAVA_HOME environmental variable in rc.local.
vi /etc/rc.local
export JAVA_HOME="/usr/java/latest"
Jenkins Slave Setup

FROM THE TARGET SLAVE NODE’S CONSOLE

Switch to the “root” user.
sudo su
Add a jenkins user with the home “/var/lib/jenkins”.
useradd -d /var/lib/jenkins jenkins
FROM THE JENKINS MASTER

Copy the id_rsa.pub key from the Jenkins user on the master.
cat /var/lib/jenkins/.ssh/id_rsa.pub
FROM THE TARGET SLAVE NODE’S CONSOLE

Create an authorized_keys file for the Jenkins user.
mkdir /var/lib/jenkins/.ssh
vi /var/lib/jenkins/.ssh/authorized_keys
Paste the key from the Jenkins master into the file in vim. Save with ":wq".
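sshd is strict about ownership and permissions on these files, so it’s worth tightening them (standard OpenSSH requirements, assuming the paths above):

chown -R jenkins:jenkins /var/lib/jenkins/.ssh
chmod 700 /var/lib/jenkins/.ssh
chmod 600 /var/lib/jenkins/.ssh/authorized_keys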
FROM THE JENKINS DASHBOARD

Click “Manage Jenkins” from the left panel.
Click “Manage Nodes.”
Click “Add Node.”
Set a name for your node (e.g. “Slave 1”).
Select “Permanent Node.”
Set “Remote root directory” to “/var/lib/jenkins”.
Set “Usage” to “Use this node as much as possible.”
Set “Launch Method” to “Launch slave agents via SSH.”
Set “Host” to your node’s FQDN (e.g. brandon4232.mylabserver.com).
Select “Add” under “Credentials.”
Set “Kind” to “SSH Username with private key.”
Set “Username” to “jenkins.”
Set “Private key” to “From the Jenkins Master.”
Click “Add.”
Choose the new credential from the “Credentials” dropdown.
Click “Save.”
The agent should now be available for use.

##################################################################
Exercise: Working with the Plugin Manager

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisites:

You must have a configured Jenkins Master.
Basic Instructions

Install the thinBackup plugin from an hpi file.
http://updates.jenkins-ci.org/download/plugins/thinBackup/1.8/thinBackup.hpi
Use the Plugin Manager to Upgrade thinBackup to the latest version.
Use the Plugin Manager to Downgrade thinBackup back to version 1.8.
Use the Plugin Manager to remove thinBackup.

Installing thinBackup 1.8 from a .hpi File

Download the file from this link:
http://updates.jenkins-ci.org/download/plugins/thinBackup/1.8/thinBackup.hpi
From the Dashboard, click “Manage Jenkins.”
Click “Manage Plugins.”
Click the “Advanced” tab.
In the “Upload Plugin” section, use the “Choose File” button to navigate to your hpi file.
Install it.
Upgrade thinBackup to the Latest Version

From the Dashboard, click “Manage Jenkins.”
Click “Manage Plugins.”
Click the “Updates” tab.
Check the box next to thinBackup.
Click “Download now and install after restart.”
Downgrade thinBackup Back to 1.8

From the Dashboard, click “Manage Jenkins.”
Click “Manage Plugins.”
Click the “Installed” tab.
Click the “Downgrade to 1.8” button for thinBackup.
Uninstall thinBackup

From the Dashboard, click “Manage Jenkins.”
Click “Manage Plugins.”
Click the “Installed” tab.
Click the “Uninstall” button for thinBackup.
#########################################################################
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Basic Instructions

Add a new Freestyle Project called “Parameterized Project Madlibs”.
Add three string parameters:
NAME
ADJECTIVE
PLURAL_NOUN
Add a Build step that echoes the environment variable “JOB_NAME”, and then echoes the sentence “${NAME} has ${ADJECTIVE} ${PLURAL_NOUN}!” (e.g. “Brandon has bad jokes!”)
Trigger the build with parameters.
############
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Procedure

From the Jenkins Dashboard, click “New Item” in the left panel.
Add the name “Parameterized Project Madlibs”.
Select “Freestyle Project”
Click “OK”
Check the “This project is parameterized” check box in the “General” config section.
Add three string parameters by clicking “Add Parameter” and selecting “String Parameter” from the list three times.
Set the name of the three parameters as follows:
NAME
ADJECTIVE
PLURAL_NOUN
Add a Build step that looks like this:
echo $JOB_NAME
echo "${NAME} has ${ADJECTIVE} ${PLURAL_NOUN}"
Click “Save”
Click “Build with Parameters”
Make up your own words to pass. (e.g. “Brandon”, “bad”, “jokes”)
Click “Build”
############################################################
Exercise: Build a Simple Pipeline without SCM

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Basic Instructions

Add a new Pipeline project called “Pipeline without an SCM”
Use the Jenkinsfile syntax in the Pipeline definition field to accomplish the following:
Use the ‘master’ agent
Add three stages; “PRINT”, “WRITE”, “READ”
In “PRINT”, add a step to echo the job name
In “WRITE”, write the build number to a file called “build_number” in the workspace.
In “READ”, read the build number from the file.
Make a post section and archive and fingerprint the file created in the “WRITE” stage in it.
Trigger a Project Build and review the console output and generated Artifact.

####
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Procedure

From the Jenkins Dashboard, select “New Item”.
Enter “Pipeline without an SCM” for the name.
Select “Pipeline”
Click “OK”
Scroll down to the Pipeline section.
Your Jenkinsfile should look like the following:
pipeline {
    agent {
        label 'master'
    }
    stages {
        stage('PRINT') {
            steps {
                sh 'echo $JOB_NAME'
            }
        }
        stage('WRITE') {
            steps {
                sh 'echo $BUILD_NUMBER >> build_number'
            }
        }
        stage('READ') {
            steps {
                sh 'cat build_number'
            }
        }
    }
    post {
        success {
            archiveArtifacts artifacts: 'build_number', fingerprint: true
        }
    }
}
Click “Save”
Click “Build Now”
Click the build number when it shows in the Executor view on the left.
Follow the console output to view the output of your pipeline.
Click back on the “Build View” and ensure your artifact has been created.
##########################################################
Exercise: Configure Notifications in a Pipeline
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Basic Instructions

Configure your SMTP server for “aspmx.l.google.com”.
Add a new pipeline project called Notification Pipeline.
Use the pipeline definition field to add a one stage/one step pipeline that prints “Notification Time!”.
Add a post directive to the Jenkinsfile that sends a notification to yourself whenever the stages complete successfully.
Make the Notification look like this:
Subject: “Notification Pipeline [Build Number] Ran!”
body: “Notification Pipeline [Build Number] Ran!”
“Check console output at Notification Pipeline [Build Number]”
Run the build and see if you get an email.
###########
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Procedure

From the Jenkins Dashboard, select “Manage Jenkins”.
Select “Configure System”
Scroll to “Extended E-mail Notification”
Set “SMTP Server” to “aspmx.l.google.com”
Click “Advanced”
Set “SMTP Port” to “25”.
Click “Save.”
From the Jenkins Dashboard, select “New Item”.
Enter “Notification Pipeline” for the name.
Select “Pipeline”
Click “OK”
Scroll down to the Pipeline section.
Your Jenkinsfile should look like the following:
pipeline {
    agent {
        label 'master'
    }
    stages {
        stage('Greeting') {
            steps {
                sh 'echo "Notification Time!"'
            }
        }
    }
    post {
        success {
            emailext(
                subject: "${env.JOB_NAME} [${env.BUILD_NUMBER}] Ran!",
                body: """<p>'${env.JOB_NAME} [${env.BUILD_NUMBER}]' Ran!</p>
                <p>Check console output at <a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a></p>""",
                to: "your@email.com"
            )
        }
    }
}
Click “Save”
Click “Build Now”
Click the build number when it shows in the Executor view on the left.
Follow the console output to view the output of your pipeline.
Check for an email (it may have been filtered to spam).
########################################################
Exercise: Using the Jenkins CLI
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Basic Instructions

Setup an RSA key pair if necessary
Download the jenkins-cli.jar file from the Jenkins master
Set an alias and environment variable for the Jenkins CLI.
Run the following commands:
who-am-i
help
version
Install the thinBackup plugin via the Jenkins CLI.
View the console output for any job via the Jenkins CLI.
###########
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Setup a Pub Key for Your User

Generate an RSA key pair for your user.
Go to:
http://yourserver.com/me/configure
Copy the contents of your id_rsa.pub file into the “SSH Public Keys” field.
Download the CLI Jar file

wget -P /var/lib/jenkins/ http://localhost:8080/jnlpJars/jenkins-cli.jar
Setting Up your Environment

Set the JENKINS_URL environment variable.
echo "JENKINS_URL='http://localhost:8080'" >> /etc/environment
Set an alias for your bash shell.
echo "alias jenkins-cli='java -jar /var/lib/jenkins/jenkins-cli.jar'" >> ~/.bashrc
Logout and log back in.
The Jenkins CLI can now be executed via jenkins-cli
Run a Few Commands

Run ‘who-am-i’
jenkins-cli who-am-i
Run ‘help’
jenkins-cli help
Run ‘version’
jenkins-cli version
Install ‘thinBackup’
jenkins-cli install-plugin thinBackup
View the console output from a build.
//varies, here’s an example
jenkins-cli console “Freestyles/My Freestyle Project” 51
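Plugins installed from the CLI generally take effect only after a restart; the CLI’s safe-restart command handles this, waiting for running builds to finish:

jenkins-cli safe-restart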

###################################

Exercise: Using the Jenkins REST API

NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Basic Instructions

Retrieve the Administrative user’s API token
Use curl and Jenkins’ REST API for the following actions:
Trigger a job build
Retrieve the config.xml file for a Project
Disable a Project
Enable a Project
##########
NOTE:

Be sure you are using the same server version and distribution from the course for these topics.
Prerequisite

Must have a configured Jenkins Master
Procedure

From the Jenkins Dashboard, click “Manage Jenkins.”
Click “Manage Users”.
Click the configure button (gear icon) next to your administrative user.
Click the “API Token” button.
With your username, and the api token, the basic curl syntax will look as follows:
curl http://<your-server>:8080/<some REST endpoint> --user <username>:<api token>
Trigger a build:
//example
curl -X POST http://brandon4231.mylabserver.com:8080/job/Freestyles/job/My%20Freestyle%20Project/build --user brandon:acaab63811fab6a5990eb7c6904bb6bf
Retrieve a project config.xml file
//example
curl http://brandon4231.mylabserver.com:8080/job/Freestyles/job/My%20Freestyle%20Project/config.xml --user brandon:acaab63811fab6a5990eb7c6904bb6bf
Disable a project
//example
curl -X POST http://brandon4231.mylabserver.com:8080/job/Freestyles/job/My%20Freestyle%20Project/disable --user brandon:acaab63811fab6a5990eb7c6904bb6bf
Enable a project
//example
curl -X POST http://brandon4231.mylabserver.com:8080/job/Freestyles/job/My%20Freestyle%20Project/enable --user brandon:acaab63811fab6a5990eb7c6904bb6bf
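For a parameterized project, the analogous endpoint is buildWithParameters; a sketch reusing the same credential pattern (the job name and parameters are just the Madlibs example from earlier):

//example
curl -X POST "http://brandon4231.mylabserver.com:8080/job/Parameterized%20Project%20Madlibs/buildWithParameters" --user brandon:acaab63811fab6a5990eb7c6904bb6bf --data-urlencode NAME=Brandon --data-urlencode ADJECTIVE=bad --data-urlencode PLURAL_NOUN=jokes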

 

Upgrading kernel version

[root@mypc ~]# egrep ^menuentry /etc/grub2.cfg | cut -f 2 -d \'
CentOS Linux (3.10.0-514.6.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-123.4.2.el7.x86_64) 7 (Core)
CentOS Linux, with Linux 3.10.0-123.el7.x86_64
CentOS Linux, with Linux 0-rescue-11264912be38456483e63dfd21d402f4

[root@mypc ~]# yum update |grep kernel-tools
---> Package kernel-tools.x86_64 0:3.10.0-514.6.1.el7 will be updated
---> Package kernel-tools.x86_64 0:3.10.0-514.21.1.el7 will be an update
---> Package kernel-tools-libs.x86_64 0:3.10.0-514.6.1.el7 will be updated
---> Package kernel-tools-libs.x86_64 0:3.10.0-514.21.1.el7 will be an update
kernel-tools             x86_64 3.10.0-514.21.1.el7        updates       4.0 M
kernel-tools-libs        x86_64 3.10.0-514.21.1.el7        updates       3.9 M

[root@mypc ~]# yum update -y

[root@mypc ~]# egrep ^menuentry /etc/grub2.cfg | cut -f 2 -d \'
CentOS Linux 7 Rescue d0faf3b598364cbfadf9ac867b7b6979 (3.10.0-514.21.1.el7.x86_64)
CentOS Linux (3.10.0-514.21.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-514.6.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-123.4.2.el7.x86_64) 7 (Core)
CentOS Linux, with Linux 3.10.0-123.el7.x86_64
CentOS Linux, with Linux 0-rescue-11264912be38456483e63dfd21d402f4

 

Terminal shortcuts

ctrl+a to move to the start of the line
ctrl+b to move backward one character
ctrl+c to terminate the running process
ctrl+d to terminate the session
ctrl+e to move to the end of the line
ctrl+f to move forward one character
ctrl+h to erase one character to the left of the cursor
ctrl+k to delete everything from the cursor to the end of the line
ctrl+l to clear the screen
ctrl+m to confirm the command (same as Enter)
ctrl+n to show the next command (like the down arrow)
ctrl+p to show the previous command
ctrl+r to search for a matching command you executed in the past
ctrl+t to swap the two characters before the cursor
ctrl+u to delete everything to the left of the cursor
ctrl+w to delete the last word

 

Nagios xi

https://assets.nagios.com/downloads/nagiosxi/docs/Installing-Nagios-XI-Manually-on-Linux.pdf

[root@jenkinsslave ~]# curl https://assets.nagios.com/downloads/nagiosxi/install.sh | sh

http://130.211.135.148/nagiosxi/install.php


Nagios XI Installer
Welcome to the Nagios XI installation. Just answer a few simple questions and you’ll be ready to go.
General Program Settings
Program URL:
http://130.211.135.148/nagiosxi/
Administrator Name:
Nagios Administrator
Administrator Email Address:
root@localhost
Administrator Username:
nagiosadmin
Administrator Password:
ojFvdaq5rnXjkLqkSXNM
Timezone Settings
Timezone:
Install

[root@jenkinsslave ~]# cat install.sh 
#!/bin/sh
DIR=nagiosxi
DLDIR=nagiosxi
FILE=xi-latest.tar.gz
# Check whether we have sufficient privileges
if [ $(id -u) -ne 0 ]; then
    echo "This script needs to be run as root/superuser." >&2
    exit 1
fi
if ! which wget &>/dev/null; then
    yum install wget -y
fi
cd /tmp
echo "check if /tmp/nagiosxi exists"
if [ -d /tmp/$DIR ]; then
    rm -rf /tmp/$DIR
fi
echo "Downloading latest Nagios XI release"
wget https://assets.nagios.com/downloads/$DLDIR/$FILE   # assumed download step (URL built from $DLDIR); the copied script omitted this line
tar xzf /tmp/$FILE
cd /tmp/$DIR
chmod +x ./fullinstall
./fullinstall

CPU & CORE & THREAD

CPU
A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions.

CORE
A core is usually the basic computation unit of the CPU – it can run a single program context (or multiple ones if it supports hardware threads such as hyperthreading on Intel CPUs).

THREAD
A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

Hyper-Threading
Hyper-Threading is a technology used by some Intel microprocessors that allows a single physical processor to appear as two logical processors to the operating system.

[Images: diagrams showing the difference between a CPU and a core, and quad-core vs. octa-core chip layouts]

 

mohammedrafi@NOC-RAFI:~$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 42
Stepping: 7
CPU MHz: 930.761
BogoMIPS: 4988.78
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3

################################

1)CPU(s): 4
2)Core(s) per socket: 2
3)Thread(s) per core: 2
4)Socket(s):   1

(No. of sockets) × (cores per socket) × (threads per core) = 1 × 2 × 2 = 4 threads
Here CPU(s) = 4 while physical cores = 2, so each core runs 2 threads: Hyper-Threading is enabled.
If the number of threads (logical CPUs) is double the number of physical cores, that is Hyper-Threading.

nproc command shows the number of processing units available:

mohammedrafi@NOC-RAFI:~$ nproc
4

mohammedrafi@NOC-RAFI:~$ cat /proc/cpuinfo |grep processor

processor : 0
processor : 1
processor : 2
processor : 3

################################

mohammedrafi@NOC-RAFI:~$ sudo dmidecode |grep -i cpu

Socket Designation: CPU 1
Version: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz
###################

In general, an i3 has 2 cores / 4 threads (Hyper-Threading),

an i5 has 4 cores / 4 threads (no Hyper-Threading),

and an i7 has 4 cores / 8 threads (Hyper-Threading). (Mobile parts differ: this laptop’s i5-2450M has 2 cores / 4 threads, so Hyper-Threading is enabled.)

###################

mohammedrafi@NOC-RAFI:~$ lscpu | grep -i 'socket'
Core(s) per socket:    2
Socket(s):             1

CPUs = Threads per core X cores per socket X sockets

mohammedrafi@NOC-RAFI:~$ lscpu | egrep '^Thread|^Core|^Socket|^CPU\('
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1

mohammedrafi@NOC-RAFI:~$ egrep 'processor|core id' /proc/cpuinfo
processor : 0
core id : 0
processor : 1
core id : 0
processor : 2
core id : 1
processor : 3
core id : 1

mohammedrafi@NOC-RAFI:~$ grep -m 1 'cpu cores' /proc/cpuinfo
cpu cores : 2

mohammedrafi@NOC-RAFI:~$ echo Cores = $(( $(lscpu | awk '/^Socket/{ print $2 }') * $(lscpu | awk '/^Core/{ print $4 }') ))
Cores = 2

mohammedrafi@NOC-RAFI:~$ sudo dmidecode -t 4 | egrep 'Socket Designation|Count'
Socket Designation: CPU 1
Core Count: 2
Thread Count: 4

mohammedrafi@NOC-RAFI:~$ sudo dmidecode -t 4 | egrep -i "Designation|Intel|core|thread"
Socket Designation: CPU 1
Family: Core i5
Manufacturer: Intel(R) Corporation
HTT (Multi-threading)
Version: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz
Asset Tag: Intel(R) Genuine processor
Core Count: 2
Core Enabled: 2
Thread Count: 4

mohammedrafi@NOC-RAFI:~$ sudo dmidecode | grep CPU
Socket Designation: CPU 1
Version: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz

mohammedrafi@NOC-RAFI:~$ sudo dmidecode | grep -i product
Product Name: HP ProBook 4430s
Product Name: 167E

You need to be aware of sockets, cores, and threads. Be careful with the term CPU: it means different things in different contexts.

 

 

Certified Jenkins Engineer (CJE) – 2017 Objective sample

1) You are tasked by management to explain Continuous Deployment. Your explanation will be used in the company’s annual report, so you need to ensure you understand it properly. Which is the best definition?
A software development practice where contributors are integrating their work very frequently to production in an automated fashion.

Explanation
Continuous Deployment is an extension of Continuous Integration and Continuous Delivery. It means that changes are deployed automatically to production after passing the automated continuous integration process.

2) You have now been given charge of an existing Jenkins install that was previously set up by a colleague who has gone on a business trip. He left it in the install wizard phase. Unfortunately, your colleague did not note the initialAdminPassword. Where on the system would this be located?
/var/lib/jenkins/secrets/initialAdminPassword

Explanation
You’ll find the initialAdminPassword that is asked for during the install wizard at this path.

3) What type of views can be configured in Jenkins?
All of the others

Explanation
All of these are view options.

4) What does the Build Queue contain?
Builds waiting for an executor

Explanation
The Build Queue shows builds that are waiting for an available executor.

5) What’s not true about Build History?
Super Stable shows as gold

Explanation
There is no concept of “gold” or Super Stable in Jenkins.

6) Which is a valid agent declaration?
All of these

Explanation
These are all valid invocations of agents in a Jenkinsfile.

7) You have a friend who wants fame and fortune from being a developer working with Jenkins. He wants to write code for it but doesn’t know anything about it yet. He wants to know what language Jenkins is written in. You, of course, know the answer and are delighted to tell him. Which of the following do you tell him it’s written in?
Java

Explanation
Jenkins core is written in Java.

8) What’s the difference between Pushing and Pulling code from a CI perspective?
When the source informs the build system of a code change, that’s pushing. When the build system asks if there are changes to the source code, that’s pulling.

Explanation
Pushing is generally preferred as it assures the action is only performed as changes are made.

9) You are interested in integrating CI/CD in your work environment, but your team lead & several of your team members do not understand the concept. Your boss believes that Continuous Delivery will mean that someone (namely, you) will need to be on-call around the clock for constant monitoring and deployment of new code. You have a 2-minute window to quickly explain what Continuous Delivery means. Out of the following answers, what is the clearest, and most succinct correct answer?
Continuous Delivery is a software development discipline where software is built so that it can be released to production at any time, while continuous deployment means it is released into production continuously. CI/CD is meant to shorten the amount of time needed for QA while empowering developers to keep code as up-to-date as possible.

Explanation
Continuous Delivery means software is integrated continuously and is ready to be released at any time.

10) What best describes DevOps?
An organizational culture where developers and operations are working in harmony

Explanation
DevOps is a cultural movement that helps facilitate Continuous Integration concepts by bringing Dev and Ops into harmony, where they may have been separate previously.

11) Jenkins, along with several other automation platforms, provides developers & operators with tools that allow them to automate the deployment of environments with preconfigured source code, including the CI Pipeline. What is this called?
Infrastructure as Code

Explanation
“Infrastructure as code” means some or all of the state of your infrastructure is defined in source code. This is achieved through configuration management tools, like Puppet and Chef. The Continuous Integration component of infrastructure is defined in the Jenkinsfile, which lives amongst the project source code, illustrating infrastructure as code.

12) Which is a common branching strategy?
All are common

Explanation
These are all common branching strategies.

13) What is Jenkins Auditing?
Tracking who did what on your Jenkins server

Explanation
Auditing is the practice of determining what a user has been doing in Jenkins.

14) How can you install plugins?
Both the CLI and the Plugin Manager

Explanation
There are multiple ways to install plugins with the CLI and the Plugin Manager being the most common.

15) How do you use the Jenkins CLI?
java -jar /var/lib/jenkins/jenkins-cli.jar -s http://localhost:8080/

16) How do you archive artifacts from a Jenkinsfile?
archiveArtifacts

Explanation
archiveArtifacts can be used to set an archiving strategy for a project in a Jenkinsfile.

17) Which is an invalid Matrix-based Security Realm?
Drive

Explanation
The Drive security realm does not exist. All of the others do.

18) What’s a benefit of incremental updates over a clean checkout?
It’s faster.

Explanation
Instead of deleting and re-cloning a project repository into a workspace, you can save time by just pulling any changes. This can be problematic when files produced during the build may persist, and cause issues in future builds.

19) What’s the difference between an Upstream Project and a Downstream Project?
An Upstream Project triggers a Downstream Project.

Explanation
The relationship between the projects is what determines Downstream versus Upstream: the Upstream job triggers the Downstream job.

20) You are working on a project that will build a Docker image. When the job completes it needs to trigger a build to deploy a container to your development environment for integration testing. How would you go about triggering the second build?
Both “Triggered by another project” and the “Parameterized Trigger Plugin”

Explanation
Both are options for triggering upstream and downstream projects.

21) Which is a valid Environment variable string interpolation?
Both “${VARIABLE}” and $VARIABLE

Explanation
These are both valid forms of interpolation.

22) You’re setting up a folder config in Jenkins and you set a couple of items called “child item with worst health” and also recursive. A colleague turns to you, since you know the answers to the rest of his questions, and asks you, “how do I determine the health of the folder with those items set?” What’s your answer?
Items in nested sub-folders are used to calculate the folder’s health.

Explanation
When a folder is configured to recursive, and “child item with worst health” is selected, the item in the recursive folder structure will be used to determine the folder’s health.

23) Your team is working on a new project to build a CI/CD pipeline for your Docker containers. You have discovered a new plugin to help with your image build. Where would you navigate to install this new plugin?
Dashboard Left Panel -> Manage Jenkins -> Manage Plugins

Explanation
This is how you can reach the Plugin Manager from the dashboard.

24) How do you add a new folder?
Click “New Item” in the left dashboard panel, then select “Folder”.

Explanation
This is how you could add a new folder using the dashboard.

25) You have a stage called “Promote to Green”, and you only want to run it when “development” is the current branch in a Multibranch Pipeline. How would you use the “when” declarative to accomplish this?
when { branch 'development' }

Explanation
Example: stage('Promote to Green') { agent { label 'apache' } when { branch 'development' } steps { sh "cp /var/www/html/rectangles/all/${env.BRANCH_NAME}/rectangle_${env.MAJOR_VERSION}.${env.BUILD_NUMBER}.jar /var/www/html/rectangles/green/rectangle_${env.MAJOR_VERSION}.${devBuild}.jar" } }

26) Your supervisor has asked you to explain to the company’s development team how using Jenkins will implement continuous integration. You want to make sure the developers understand what that means to them. Which of the following is the most succinct explanation?
The software development team will increase the frequency at which code changes are committed to the build, test and deployment cycle.

Explanation
A central tenet of continuous integration is integrating work frequently.

27) What types of Notification Integrations are there?
All of these

Explanation
All of these notification integrations are available.

28) Your company’s DevOps staff is new to the continuous integration methodology. Some users are unclear on best practices, and you’re hearing all kinds of feedback from them, some of it wrong. Which of the following statements is NOT a CI best practice?
Build everything manually, because that prevents errors.

Explanation
CI relies heavily on automation, especially build automation, to enable the sped-up pace of code commits. If things are built manually, the pipeline is slowed down, and the ability to maintain a fast change pace is inhibited.

29) You can set permissions for Anonymous users with Matrix-based Security.
True

Explanation
Anonymous users can receive permissions in the Jenkins configuration.

30) How do you set a freestyle project to be parameterized?
From Project Configuration, select “This project is parameterized.”

31) You’re a DevOps engineer in charge of your team’s Jenkins server. You have a particular stage in your pipeline that you want to run on a particular build node. You have to have Apache installed on this node. Assuming the node has been appropriately labeled, “apache”, how would you ensure this stage ran on that node?
agent { label 'apache' }

Explanation
Example: stage('deploy') { agent { label 'apache' } steps { sh "if ! [ -d '/var/www/html/rectangles/all/${env.BRANCH_NAME}' ]; then mkdir /var/www/html/rectangles/all/${env.BRANCH_NAME}; fi" sh "cp dist/rectangle_${env.MAJOR_VERSION}.${env.BUILD_NUMBER}.jar /var/www/html/rectangles/all/${env.BRANCH_NAME}/" } }

32) You want to archive and track a build artifact, “build/mybuilt.jar”, with a fingerprint from your team’s build. How would you invoke the archiveArtifacts function to accomplish this?
archiveArtifacts artifacts: 'build/mybuilt.jar', fingerprint: true

Explanation
Example with context: post { success { archiveArtifacts artifacts: 'build/mybuilt.jar', fingerprint: true } }

33) How would you invoke a Docker agent?
agent { docker 'openjdk' }

Explanation
Docker-based agents can be invoked this way. You’d get the latest 'openjdk' docker image with this syntax.

34) Where does the Jenkinsfile live?
With your source code

Explanation
The Jenkinsfile typically lives in the root of your project SCM repository.

35) What type of file extension do plugin files have?
.hpi

Explanation
Jenkins plugins exist in .hpi files.

36) You are working on a project that will build a Docker image. When the job completes it needs to trigger a build to deploy a container to your development environment for integration testing. How would you go about triggering the second build?
Both “Triggered by another project” and the “Parameterized Trigger Plugin”

Explanation
Both are options for triggering upstream and downstream projects.

37) What’s a fingerprint?
An md5sum for an artifact that’s tracked between projects

Explanation
Fingerprints are an md5sum that is used to track artifacts.

38) Where is the initialAdminPassword stored on the system?
/var/lib/jenkins/secrets/

Explanation
The initialAdminPassword file, referenced in the install wizard, is stored in /var/lib/jenkins/secrets/.

39) How do you configure a junit report in a Pipeline?
junit "path to xml file"

40) You’ve just become the administrator for the Jenkins server and the feedback you’ve received from the users is they’re having trouble locating their jobs to run them. You’ve determined that folders are the most logical solution and wish to move the jobs into the new folders you’ve created. How would you accomplish this?
All of the above

Explanation
All of these options are ways to add projects to folders.

41) Which of the following is not a default environment variable in a Jenkins project?
START_TIME

Explanation
The other options are all default environment variables.

42) How do you setup the GitHub Plugin Git hook from a Project Configuration Perspective?
Set the “GitHub hook trigger for GITScm polling” trigger

43) What programming language is Jenkins built in, and is required to be installed as a prerequisite to install?
Java

Explanation
Jenkins is built with Java, which is required as a prerequisite to install.

44) Your organization has been carefully and painstakingly performing builds on specific commits which the development team deems as releases or release candidates, and subsequently only testing major releases. You have been placed in charge of a new project in which continuous integration is the primary goal. Which part of your organization’s existing process do you need to modify in furtherance of the goal of continuous integration?
Building everything manually with great care.

Explanation
A good CI process retains all of the above except the manual build, which should be automated.

45) What’s the SDLC?
Software Development Life Cycle

Explanation
CI/CD concepts span the SDLC.

46) Which answer best describes Continuous Integration?
A software development practice where contributors are integrating their work very frequently.

Explanation
Continuous Integration doesn’t mean that software can or will be released or pushed to production continuously.

47) Which answer best describes Continuous Deployment?
A software development discipline where software is released continuously as part of an automated pipeline.

Explanation
Continuous Deployment is an extension of Continuous Integration and Continuous Delivery. It’s important to know the difference: Deployment indicates that the code is continuously deployed to production.

48) You want to deploy a Jenkins pipeline but you are concerned about the total amount of time it will take for the deployment to complete, and you aren’t concerned with files left in the working directory after a build. What should you do?
Choose not to use a clean checkout.

Explanation
You can save time by not choosing to have a clean checkout for each build.

49) What directive do you use for a declarative pipeline?
pipeline {..}

Explanation
Declarative pipelines use the “pipeline” directive.

50) How do you configure notifications in a Jenkinsfile?
emailext

51) What’s an Integration Test?
It tests components coming together.

52) What’s a Unit Test?
It tests a small piece of functionality, usually at the class method level.

Explanation
Unit testing is used to test very small units of functionality.

53) What determines what can be seen in a “My View”?
It shows all items a user has permission to view in Jenkins.

Explanation
“My View” shows items that the user has permission to view.

54) How can you organize Jenkins Projects (or Jobs)?
Folders and Views

Explanation
Projects can be organized in either Folders or Views.

55) You’re at a job interview and the interviewer looks at you, trying to make you nervous. He looks down at his paper, looks up at you and asks, “How would you describe continuous integration?” You think to yourself. Which of the following answers is best?
A software development practice where contributors are integrating their work very frequently.

Explanation
Continuous integration means contributors are integrating work frequently, but not necessarily releasing or promoting to production.

56) In a JUnit report, if we set the “Health report amplification factor” to 2 and there’s a 1% failure, what’s the health score?
98%

Explanation
You would take 100% then subtract (1%*2), thus 98%.

57) What’s a benefit of clean checkout over an incremental update?
It ensures the working directory is clean and not tampered with.

Explanation
A user, or previous build, could create or change a file in a workspace which could cause problems with future builds.

58) Which is not a common stage of CI/CD?
Kill

Explanation
Build, Deploy, and Test are all common stages of a CI/CD process.

59) How can you assign Projects to specific agent nodes?
Using an expression that matches a node’s name or label

Explanation
You can configure a project to run on a node, or subset of nodes, based on expression matching of a node’s labels.

60) Which is not a function provided by the Jenkins CLI?
db-dump

Explanation
The others are all common functions of the CLI.

61) What’s Infrastructure as Code?
Infrastructure is defined in source code, like the Jenkinsfile for a pipeline.

Explanation
This is also shown in config management tools, like Puppet and Chef. The Jenkinsfile represents a “pipeline as code” concept, specifically.

62) Which plugin provides Git hook functionality?
Both the Git and GitHub Plugins

Explanation
Both of these very common plugins provide hook functionality.

63) Which answer best describes Continuous Delivery?
A software development discipline where software is built so that it can be released to production at any time.

Explanation
Continuous Delivery is an extension of Continuous Integration. It means that the software CAN be released at any time.

64) What is an SCM?
Source Code (or Control) Management

Explanation
The use of Source Code Management is a central tenet of Continuous Integration.

65) You can set project-based security for users.
True

Explanation
You can set security on a per-project basis.

66) During a proposal, you need to quickly retrieve configuration for a demo of a project named “My Freestyle Project”, which is in the “Freestyles” folder, in a neat & easily-readable .xml format using a ReST API call. What is the correct syntax for the call?
http://<your-server>:8080/job/Freestyles/job/My%20Freestyle%20Project/config.xml

Explanation
You can retrieve a config.xml with all of the project configuration with a “GET” on that url. Two are incorrect because they don’t show the containing folder in the path.

67) What’s the difference between Continuous Integration and Continuous Delivery?
Continuous Integration is just the practice of integrating code continuously. It doesn’t necessarily mean that it can be released at any time like Continuous Delivery.

Explanation
Continuous Integration doesn’t mean that builds are available for release as Continuous Delivery implies.

68) You can use Jenkins’ database to manage users.
True

Explanation
This is the default mode for user management.

69) What is Jenkins Matrix-based Security?
Allows the administrator to grant very specific permissions to users and groups.

Explanation
Jenkins permissions are defined in a matrix-like structure, in which the administrator can set security based on a user or user group.

70) What interval syntax could I use to trigger a build 30 minutes after every hour?
30 * * * *

71) What’s a Functional Test?
It tests the functionality of the project as a whole.

72) You’ve been promoted at work and are now in charge of the system administrators that look after the Jenkins platform for your company. You no longer want to get notifications from Jenkins, but your subordinates don’t know what settings to change in a Jenkinsfile. You give them a look, and say one of the following is what needs to be changed:
emailext

Explanation
The emailext function can be used from a Jenkinsfile to configure notifications.

73) How do you setup the Git Plugin Git hook from a Project Configuration Perspective?
Set the “Poll scm” trigger, but you don’t have to specify an interval

74) You are given the task to install Jenkins and its prerequisites as part of an initiative to implement CI/CD. After installing the operating system you will need to make sure what language is available for Jenkins to utilize before you can successfully install it?
Java

Explanation
Jenkins requires Java be installed on the system.

75) What interval syntax could I use to trigger a build every 15 minutes?
H/15 * * * *
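For reference, Jenkins build triggers use the standard five cron fields, plus H, which hashes a per-job value to spread load:

# MINUTE HOUR DOM MONTH DOW
H/15 * * * *   # roughly every 15 minutes (H offsets the start minute per job)
30 * * * *     # at minute 30 of every hour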

76) Which is NOT a Continuous Integration best practice?
Do everything manually with great care.

Explanation
Automation is a central tenet of Continuous Integration. Any manual process should be automated to the extent possible.

77) If you set “Child item with worst health” and recursive in the Folder config, how is the health determined for the folder?
Items in nested sub-folders are used to calculate the folder’s health.

Explanation
Nested items determine folder health in this configuration.

78) You are a DevOps engineer in charge of your team’s Jenkins server. Your project is on major version “1”. You want to ensure that this variable, MAJOR_VERSION, is available through out the your Pipeline that is defined in a Jenkinsfile. Which of the following ways could you accomplish that using the “environment” directive?
environment { MAJOR_VERSION = 1 }

Explanation
Example: pipeline { agent none environment { MAJOR_VERSION = 1 }

79) How can you trigger a downstream build?
Both “Triggered by another project” and the “Parameterized Trigger Plugin”

Explanation
Both of these options will work. “Triggered by another project” is configured by the downstream project, and the “Parameterized Trigger Plugin” is invoked from the upstream project. You’d only want to use one of these options.

80) The QA department has been having issues locating their work on the Jenkins server. As a result their manager has asked for the jobs (projects) to be better organized. Which most correct answer can you use to organize projects?
Folders and Views

Explanation
You can organize projects in Jenkins with Folders and Views.

81) What do you need to configure an SSH agent?
SSH keys, with the Master’s pub key set as an authorized key on the agent node

Explanation
You’ll need to configure key-based auth to utilize the SSH agent.

82) How do you navigate to the Matrix-based Security section from Jenkins?
Manage Jenkins -> Configure Global Security

Explanation
This is how you would use the dashboard to navigate to the Security section.

83) How can you move Items to Folders?
All of the others

Explanation
You can move an item into a folder in all of these ways.

84) Which is an invalid default project parameter?
md5

Explanation
“md5” is not a common project parameter.

85) Which step is NOT part of a traditional Continuous Integration workflow?
Ask your manager for permission to commit code.

Explanation
Asking for permission is against the central tenet of Continuous Integration: “Automate Everything.”

86) Which is an example of a Jenkins plugin?
All of the others

Explanation
There are hundreds of Jenkins plugins available, and they can provide a multitude of functionality.

87) What’s the Jenkinsfile?
It contains the definition for a pipeline

Explanation
The Jenkinsfile lives with project source code and portrays the “Pipeline as code” concept.

88) What is Ant?
A Java build tool

Explanation
Ant is a lightweight build platform for Java projects.

89) You have started work on a project with Docker integration. What is the proper syntax to invoke a Docker agent?
agent { docker 'openjdk' }

Explanation
"agent { docker 'openjdk' }" is the correct syntax to invoke a docker agent with the 'openjdk' image. This syntax will provide the latest version.

90) What types of credentials does Jenkins support?
All of these

Explanation
Jenkins supports all of these credential types.

91) Which isn’t a common Jenkins build tool?
Excel

Explanation
Excel is not used to build things.

92) Which is not a Git Plugin provided environment variable?
GIT_ORIGIN_URL

Explanation
The others are all default Git Plugin provided Environment variables.

93) Which is not an option in the Install Wizard?
Add SSH Credentials

Explanation
By default, you are only prompted to add user/password credentials for the administrator user.

94) What’s the difference between Continuous Delivery and Continuous Deployment?
Continuous Delivery means the code CAN be released at any time, while Continuous Deployment means it is released continuously.

Explanation
Deployment means the promotion to production is automatic.

95) Your co-worker was installing a Jenkins instance and went home for the night. When they returned the next day, they noticed the machine had rebooted and the scrollback of the installation was unretrievable, so they’ve come to you for help. In what location will you need to look to retrieve the initialAdminPassword?
/var/lib/jenkins/secrets/

Explanation
You’ll find it in /var/lib/jenkins/secrets/.

96) Your team is responsible for maintaining a Java base application. Over the last few months you’ve been adding unit test to ensure that the application is as solid as possible before going to QA. To ensure that everything is being tested properly you decided to add in code coverage. What is the commonly used Java Code Coverage Plugin?
Cobertura

Explanation
Cobertura is the most common Java code coverage tool.

97) What is the Plugin Manager?
The UI to install and configure plugins in the Jenkins WebGUI

Explanation
The Plugin Manager can be used to add/remove/upgrade/downgrade plugins in Jenkins. These can be used to add functionality to Jenkins.

98) You have taken over a build environment where Jenkins is responsible for scanning a Git repository for changes every hour and then applying that build to a few dozen web servers. As part of that job, Jenkins is also responsible for completing some scripted performance-based tests and, depending on the defined results, marking the build as complete and notifying the team, OR rolling back the build, notifying the team, and marking it as failed. You notice that the build itself is now taking longer than an hour to complete, which occasionally affects the push of new changes. What changes could you make to your Jenkins environment to reduce the time it takes to complete and test a build, so that the hourly schedule is more likely to be kept?

Add one or more slave nodes to the environment; this will allow us to deploy code to multiple endpoints at one time. Break up the single job into multiple jobs so that each one can be triggered by the successful completion of the previous job, in order to eliminate scheduling problems when subsequent steps run longer than the intended time period.

Explanation
As the load of your Jenkins setup expands, you can distribute the burden by adding one or more slave nodes.

99) What’s an Acceptance Test?
It tests against the product specifications.

100) What’s an example of a Cloud-based SCM?
All of the others

Explanation
All of these options are cloud-based SCM providers.

101) What can a Plugin do?
All of the others

Explanation
Plugins can really be used to add any functionality to Jenkins.

102) Where do you configure a shared global library?
Configure System -> Global Pipeline Libraries

Explanation
You can establish inter-project functionality in a shared global library.

103) You were down in the datacenter working on something else and received a phone call to make a change on the Jenkins server. As the rack does not have sufficient power or internet for you to plug in your laptop you decide to just use the console on the server itself. How would you access the CLI?
java -jar /var/lib/jenkins/jenkins-cli.jar -s http://localhost:8080/

Explanation
You can utilize the jenkins cli for system administration with this command.

104) What is a code coverage test?
It tests how well your code is tested.

105) What’s the difference between Authentication and Authorization?
Authentication identifies a user, while Authorization dictates what a user is allowed to do.

Explanation
Authorization and Authentication are different, but related concepts. They don’t vary or indicate a level of strictness.

106) Which menu option do you select to add a Pipeline or other Project?
New Item

Explanation
Selecting “New Item” from the dashboard will take you to a menu where you can add a new project.

107) How can you downgrade a plugin?
You can downgrade a plugin with any of these methods.

108) How do you Navigate to the Plugin Manager?
Dashboard Left Panel -> Manage Jenkins -> Manage Plugins

Explanation
This is the most common way to navigate to the Plugin Manager.

109) What’s a big priority of Continuous Integration?
Fixing broken builds quickly

Explanation
When a build breaks, fixing it should be the priority of everyone on the project.

110) What interval syntax could I use to trigger a build every day?
All the others

111) What’s the term for an item produced and retained from a build?
artifact

Explanation
Artifacts are items produced by a build.

 

112) What’s not true about a Build Executor?
The default number of executors on a node is 4.

Explanation
The default number of executors on a node is 2.

113) How do you configure the distribution of builds on each node?
It happens largely automatically, although you can configure the number of executors on a node.

Explanation
Distribution of tasks will happen automatically, but could depend on the configuration of projects and project-based node assignment.

114) How do you retrieve the config in an xml format for a project using the REST API?
http://jenkins-server.com:8080/job/Freestyles/job/My%20Freestyle%20Project/config.xml

Explanation
You can view the config in xml format by hitting this endpoint.

115) You’re a DevOps engineer in charge of configuring Jenkins for your team’s CI/CD Pipeline. You’re utilizing the Jenkins Pipeline and the Jenkinsfile. You want to define a stage for your build, that utilizes the locally installed Ant and build.xml in the root of your project repository. Which is the correct syntax?
stage('build') { steps { sh 'ant -f build.xml -v' } }

Explanation
The steps directive must exist inside the corresponding stage. stage('build') { steps { sh 'ant -f build.xml -v' } }

116) Over the last 6 months a large number of projects have been added to Jenkins. You have been asked by your manager to do some cleanup by creating some list views. How would you go about creating these list views?
All of the others

Explanation
You can add projects to list views with all of these options.

117) What’s an example of SCM software?
Both Git and Subversion

Explanation
Git and Subversion are common SCM software choices.

118) You have a series of tasks which require execution across different software components to prove the compatibility and functionality of your build and the components as a whole. Which of the following describes the type of testing you will perform?
Integration Testing
Explanation
Testing the integrated functions of multiple components is integration testing.

119) Your team installed an update to one of your plugins. Suddenly several of your builds start to fail. After a few minutes of troubleshooting you discover that the issue was caused by upgrading the plugin. What would you do to downgrade the plugin?
All of these

Explanation
All of these options can be used to downgrade a plugin.

120) In a JUnit report, if we set the “Health report amplification factor” to 3 and there’s a 10% failure, what’s the health score?
70%
Explanation
You would take 100% then subtract (10%*3), thus 70%.

121) What’s the commonly used Java Code Coverage Plugin?
Cobertura

122) You need to grant permissions, via matrix-based security, to run a specific command in a Jenkins pipeline. The project parameters will not allow you to create a specific user to run this task, and you have no relevant groups. Which of the following is an option?
Set permissions in Jenkins to allow anonymous users to run the command

Explanation
You can set permissions for anonymous users with matrix-based security. Since this requires the least impact on existing workflows, it’s the best solution.

123) You are in charge of a new development process and find that the current branching strategies may have room for improvement. Which types of common strategies might you recommend?
Developer Branching, Feature Branching, and Release Branching are common branching strategies.

Explanation
All of these (Developer, Feature and Release) are common branching strategies.

124) What type of Agent Nodes are there?
All of the others

Explanation
All of these are common types of agents.

125) What’s the Jenkins Hook URL for the Git Plugin?
http://your-jenkins-server:8080/

126) When would you use distributed builds?
When you want to preserve resources on the master node

Explanation
You can distribute the resource load produced by your builds by adding one or more Jenkins slaves.

127) What’s a Jenkins Job (or Project)?
Can be scoped to any typical IT task, defined in Jenkins

Explanation
A project is really any unit of work in a typical IT environment that’s configured to run in Jenkins.

128) Which pipeline syntax runs a shell command?
sh

Explanation
“sh” can be used to run the specified shell command. It’s one of the most common Pipeline functions.

129) How can you filter List Views?
All of the others

Explanation
All of these are options for filtering items for views.

130) What’s the Jenkins Hook URL for the GitHub Plugin?
http://your-jenkins-server:8080/github-webhook/

 

Git Quick Start


[root@ansible ~]# yum install epel-release -y

[root@ansible ~]# yum install git -y

[root@ansible ~]# git --version
git version 1.8.3.1

[root@ansible ~]# git help
usage: git [--version] [--help] [-c name=value]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]

The most commonly used git commands are:
add Add file contents to the index
bisect Find by binary search the change that introduced a bug
branch List, create, or delete branches
checkout Checkout a branch or paths to the working tree
clone Clone a repository into a new directory
commit Record changes to the repository
diff Show changes between commits, commit and working tree, etc
fetch Download objects and refs from another repository
grep Print lines matching a pattern
init Create an empty Git repository or reinitialize an existing one
log Show commit logs
merge Join two or more development histories together
mv Move or rename a file, a directory, or a symlink
pull Fetch from and merge with another repository or a local branch
push Update remote refs along with associated objects
rebase Forward-port local commits to the updated upstream head
reset Reset current HEAD to the specified state
rm Remove files from the working tree and from the index
show Show various types of objects
status Show the working tree status
tag Create, list, delete or verify a tag object signed with GPG

'git help -a' and 'git help -g' lists available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.
[root@ansible ~]# useradd mshaik
[root@ansible ~]# passwd mshaik
Changing password for user mshaik.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

[root@ansible ~]# su – mshaik
Last login: Sun May 14 18:11:25 IST 2017 on pts/0
[mshaik@ansible ~]$

[mshaik@ansible ~]$ which vim
/bin/vim
[mshaik@ansible ~]$ git config --system core.editor "/bin/vim"
error: could not lock config file /etc/gitconfig: Permission denied

[mshaik@ansible ~]$ sudo git config --system core.editor "/bin/vim"

[mshaik@ansible ~]$ cat /etc/gitconfig
[core]
editor = /bin/vim

[mshaik@ansible ~]$ git config --global user.name rafi494
[mshaik@ansible ~]$ git config --global user.email mohammedrafi494@gmail.com
[mshaik@ansible ~]$ ls -la
total 24
drwx------. 3 mshaik mshaik 4096 Jun 2 01:09 .
drwxr-xr-x. 8 root root 83 May 21 17:07 ..
drwxrwxr-x 2 mshaik mshaik 37 May 15 00:21 .aws
-rw-------. 1 mshaik mshaik 113 May 14 19:55 .bash_history
-rw-r--r--. 1 mshaik mshaik 18 Nov 20 2015 .bash_logout
-rw-r--r--. 1 mshaik mshaik 193 Nov 20 2015 .bash_profile
-rw-r--r--. 1 mshaik mshaik 231 Nov 20 2015 .bashrc
-rw-rw-r-- 1 mshaik mshaik 58 Jun 2 01:09 .gitconfig

[mshaik@ansible ~]$ cat .gitconfig
[user]
name = rafi494
email = mohammedrafi494@gmail.com

[mshaik@ansible ~]$ mkdir repo
[mshaik@ansible ~]$ cd repo/
[mshaik@ansible repo]$ ls -la
total 4
drwxrwxr-x 2 mshaik mshaik 6 Jun 2 01:22 .
drwx------. 4 mshaik mshaik 4096 Jun 2 01:22 ..

[mshaik@ansible repo]$ git init .
Initialized empty Git repository in /home/mshaik/repo/.git/
[mshaik@ansible repo]$ ls -la
total 4
drwxrwxr-x 3 mshaik mshaik 17 Jun 2 01:23 .
drwx------. 4 mshaik mshaik 4096 Jun 2 01:22 ..
drwxrwxr-x 7 mshaik mshaik 111 Jun 2 01:23 .git
[mshaik@ansible repo]$ ls -la .git/
total 16
drwxrwxr-x 7 mshaik mshaik 111 Jun 2 01:23 .
drwxrwxr-x 3 mshaik mshaik 17 Jun 2 01:23 ..
drwxrwxr-x 2 mshaik mshaik 6 Jun 2 01:23 branches
-rw-rw-r-- 1 mshaik mshaik 92 Jun 2 01:23 config
-rw-rw-r-- 1 mshaik mshaik 73 Jun 2 01:23 description
-rw-rw-r-- 1 mshaik mshaik 23 Jun 2 01:23 HEAD
drwxrwxr-x 2 mshaik mshaik 4096 Jun 2 01:23 hooks
drwxrwxr-x 2 mshaik mshaik 20 Jun 2 01:23 info
drwxrwxr-x 4 mshaik mshaik 28 Jun 2 01:23 objects
drwxrwxr-x 4 mshaik mshaik 29 Jun 2 01:23 refs

####################################################################

[mshaik@ansible repo]$ echo "This is a Readme file" > README.MD
[mshaik@ansible repo]$ echo "/* This is a readme file*/" > source.c
[mshaik@ansible repo]$ ls -la
total 12
drwxrwxr-x 3 mshaik mshaik 48 Jun 2 01:30 .
drwx------. 4 mshaik mshaik 4096 Jun 2 01:22 ..
drwxrwxr-x 7 mshaik mshaik 111 Jun 2 01:23 .git
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 01:29 README.MD
-rw-rw-r-- 1 mshaik mshaik 27 Jun 2 01:30 source.c
[mshaik@ansible repo]$ git status
# On branch master
#
# Initial commit
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# README.MD
# source.c
nothing added to commit but untracked files present (use "git add" to track)

[mshaik@ansible repo]$ git add .

[mshaik@ansible repo]$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
#
# new file: README.MD
# new file: source.c
#

[mshaik@ansible repo]$ echo "Test file" > file1

[mshaik@ansible repo]$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
#
# new file: README.MD
# new file: source.c
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# file1

[mshaik@ansible repo]$ git add file1

[mshaik@ansible repo]$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
#
# new file: README.MD
# new file: file1
# new file: source.c
#

[mshaik@ansible repo]$ git commit -m "this is my first commit"
[master (root-commit) d1ce129] this is my first commit
3 files changed, 3 insertions(+)
create mode 100644 README.MD
create mode 100644 file1
create mode 100644 source.c

[mshaik@ansible repo]$ git status
# On branch master
nothing to commit, working directory clean

[mshaik@ansible repo]$ cat >> README.MD
this is newly added data

[mshaik@ansible repo]$ git status
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: README.MD
#
no changes added to commit (use "git add" and/or "git commit -a")

[mshaik@ansible repo]$ echo "This is second file in repo" > file2

[mshaik@ansible repo]$ git status
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: README.MD
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# file2
no changes added to commit (use "git add" and/or "git commit -a")

[mshaik@ansible repo]$ git commit -a
saving changes to README.MD
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: README.MD
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# file2

[master 9679830] saving changes to README.MD
1 file changed, 1 insertion(+)

[mshaik@ansible repo]$ git status
# On branch master
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# file2
nothing added to commit but untracked files present (use "git add" to track)

[mshaik@ansible repo]$ git add .

[mshaik@ansible repo]$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# new file: file2
#

[mshaik@ansible repo]$ git commit -m "New text file"
[master 73e953c] New text file
1 file changed, 1 insertion(+)
create mode 100644 file2

[mshaik@ansible repo]$ rm -rf file2
[mshaik@ansible repo]$ git status
# On branch master
# Changes not staged for commit:
# (use "git add/rm <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# deleted: file2
#
no changes added to commit (use "git add" and/or "git commit -a")
[mshaik@ansible repo]$ git commit -m "Removed file2"
# On branch master
# Changes not staged for commit:
# (use "git add/rm <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# deleted: file2
#
no changes added to commit (use "git add" and/or "git commit -a")

[mshaik@ansible repo]$ git status
# On branch master
# Changes not staged for commit:
# (use "git add/rm <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# deleted: file2
#
no changes added to commit (use "git add" and/or "git commit -a")

[mshaik@ansible repo]$ ls -l
total 12
-rw-rw-r-- 1 mshaik mshaik 10 Jun 2 01:34 file1
-rw-rw-r-- 1 mshaik mshaik 47 Jun 2 01:39 README.MD
-rw-rw-r-- 1 mshaik mshaik 27 Jun 2 01:30 source.c

[mshaik@ansible repo]$ git rm file2
rm 'file2'
[mshaik@ansible repo]$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# deleted: file2
#

[mshaik@ansible repo]$ git commit -m "Removed file2"
[master 0eb35f8] Removed file2
1 file changed, 1 deletion(-)
delete mode 100644 file2

####################################################################

[mshaik@ansible repo]$ git log
commit 0eb35f82d91fc231b0735a3a85c6b53c379332bc
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 02:00:46 2017 +0530

Removed file2

commit 73e953c0643061c55344bb3839e39c58547b3472
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:48:11 2017 +0530

New text file

commit 967983011f6533cfdb4cc9450292224a3306011b
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:42:59 2017 +0530

saving changes to README.MD

commit d1ce1299710f0bec5f4ecca7dc3f7e5cd52c4edb
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:37:33 2017 +0530

this is my first commit

[mshaik@ansible repo]$ git log --oneline
0eb35f8 Removed file2
73e953c New text file
9679830 saving changes to README.MD
d1ce129 this is my first commit

[mshaik@ansible repo]$ git log -p
commit 0eb35f82d91fc231b0735a3a85c6b53c379332bc
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 02:00:46 2017 +0530

Removed file2

diff --git a/file2 b/file2
deleted file mode 100644
index 5413ef1..0000000
--- a/file2
+++ /dev/null
@@ -1 +0,0 @@
-This is second file in repo

commit 73e953c0643061c55344bb3839e39c58547b3472
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:48:11 2017 +0530

New text file

diff --git a/file2 b/file2
new file mode 100644
index 0000000..5413ef1
--- /dev/null
+++ b/file2
@@ -0,0 +1 @@
+This is second file in repo

commit 967983011f6533cfdb4cc9450292224a3306011b
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:42:59 2017 +0530

saving changes to README.MD

diff --git a/README.MD b/README.MD
index adacbf9..278bd10 100644
--- a/README.MD
+++ b/README.MD
@@ -1 +1,2 @@
This is a Readme file
+this is newly added data

commit d1ce1299710f0bec5f4ecca7dc3f7e5cd52c4edb
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:37:33 2017 +0530

this is my first commit

diff --git a/README.MD b/README.MD
new file mode 100644
index 0000000..adacbf9
--- /dev/null
+++ b/README.MD
@@ -0,0 +1 @@
+This is a Readme file
diff --git a/file1 b/file1
new file mode 100644
index 0000000..524acff
--- /dev/null
+++ b/file1
@@ -0,0 +1 @@
+Test file
diff --git a/source.c b/source.c
new file mode 100644
index 0000000..1f47dc4
--- /dev/null
+++ b/source.c
@@ -0,0 +1 @@
+/* This is a readme file*/

[mshaik@ansible repo]$ git log -- file2
commit 0eb35f82d91fc231b0735a3a85c6b53c379332bc
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 02:00:46 2017 +0530

Removed file2

commit 73e953c0643061c55344bb3839e39c58547b3472
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:48:11 2017 +0530

New text file

[mshaik@ansible repo]$ git log -- file1
commit d1ce1299710f0bec5f4ecca7dc3f7e5cd52c4edb
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:37:33 2017 +0530

this is my first commit

[mshaik@ansible repo]$ git log -- file3

[mshaik@ansible repo]$ git log --oneline -- file2
0eb35f8 Removed file2
73e953c New text file

[mshaik@ansible repo]$ git log --oneline -- file1
d1ce129 this is my first commit

[mshaik@ansible repo]$ git log --author="rafi494"
commit 0eb35f82d91fc231b0735a3a85c6b53c379332bc
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 02:00:46 2017 +0530

Removed file2

commit 73e953c0643061c55344bb3839e39c58547b3472
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:48:11 2017 +0530

New text file

commit 967983011f6533cfdb4cc9450292224a3306011b
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:42:59 2017 +0530

saving changes to README.MD

commit d1ce1299710f0bec5f4ecca7dc3f7e5cd52c4edb
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:37:33 2017 +0530

this is my first commit

[mshaik@ansible repo]$ git log --grep="change"
commit 967983011f6533cfdb4cc9450292224a3306011b
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:42:59 2017 +0530

saving changes to README.MD

[mshaik@ansible repo]$ git log --graph
* commit 0eb35f82d91fc231b0735a3a85c6b53c379332bc
| Author: rafi494 <mohammedrafi494@gmail.com>
| Date: Fri Jun 2 02:00:46 2017 +0530
|
| Removed file2
|
* commit 73e953c0643061c55344bb3839e39c58547b3472
| Author: rafi494 <mohammedrafi494@gmail.com>
| Date: Fri Jun 2 01:48:11 2017 +0530
|
| New text file
|
* commit 967983011f6533cfdb4cc9450292224a3306011b
| Author: rafi494 <mohammedrafi494@gmail.com>
| Date: Fri Jun 2 01:42:59 2017 +0530
|
| saving changes to README.MD
|
* commit d1ce1299710f0bec5f4ecca7dc3f7e5cd52c4edb
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 01:37:33 2017 +0530

this is my first commit

[mshaik@ansible repo]$ git log --graph --decorate

[mshaik@ansible repo]$ man git-log

############################################################################

[mshaik@ansible repo]$ cd
[mshaik@ansible ~]$ mkdir workingdir
[mshaik@ansible ~]$ cd workingdir/
[mshaik@ansible workingdir]$

[mshaik@ansible workingdir]$ git clone /home/mshaik/repo/ .
Cloning into '.'...
done.
[mshaik@ansible workingdir]$ ls -la
total 16
drwxrwxr-x 3 mshaik mshaik 60 Jun 2 02:44 .
drwx------. 5 mshaik mshaik 4096 Jun 2 02:41 ..
-rw-rw-r-- 1 mshaik mshaik 10 Jun 2 02:44 file1
drwxrwxr-x 8 mshaik mshaik 152 Jun 2 02:44 .git
-rw-rw-r-- 1 mshaik mshaik 47 Jun 2 02:44 README.MD
-rw-rw-r-- 1 mshaik mshaik 27 Jun 2 02:44 source.c

[mshaik@ansible workingdir]$ git add .

[mshaik@ansible workingdir]$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: file1
#

[mshaik@ansible workingdir]$ git commit -m "modified local repo"
[master b0b648b] modified local repo
1 file changed, 5 insertions(+)
[mshaik@ansible workingdir]$ git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
# (use "git push" to publish your local commits)
#
nothing to commit, working directory clean
[mshaik@ansible workingdir]$ cat file1
Test file
2
3
4
5
6

[mshaik@ansible workingdir]$ cat ../repo/file1
Test file

mohammedrafi@NOC-RAFI:~/remote$ git clone mshaik@192.168.183.128:repo .
Cloning into '.'...
mshaik@192.168.183.128's password:
remote: Counting objects: 12, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 12 (delta 2), reused 0 (delta 0)
Receiving objects: 100% (12/12), done.
Resolving deltas: 100% (2/2), done.
Checking connectivity... done.

mohammedrafi@NOC-RAFI:~/remote$ ls -la
total 24
drwxrwxr-x 3 mohammedrafi mohammedrafi 4096 Jun 2 02:55 .
drwxr-xr-x 48 mohammedrafi mohammedrafi 4096 Jun 2 02:53 ..
-rw-rw-r-- 1 mohammedrafi mohammedrafi 10 Jun 2 02:55 file1
drwxrwxr-x 8 mohammedrafi mohammedrafi 4096 Jun 2 02:55 .git
-rw-rw-r-- 1 mohammedrafi mohammedrafi 47 Jun 2 02:55 README.MD
-rw-rw-r-- 1 mohammedrafi mohammedrafi 27 Jun 2 02:55 source.c
mohammedrafi@NOC-RAFI:~/remote$ cat file1
Test file

#######################################################################################

[mshaik@ansible repo]$ git status
# On branch master
nothing to commit, working directory clean

[mshaik@ansible repo]$ echo "new file" > newfile

[mshaik@ansible repo]$ git status
# On branch master
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# newfile
nothing added to commit but untracked files present (use "git add" to track)

[mshaik@ansible repo]$ git add .

[mshaik@ansible repo]$ git commit -m "newly added file into dir"
[master 750062c] newly added file into dir
1 file changed, 1 insertion(+)
create mode 100644 newfile

[mshaik@ansible repo]$ git status
# On branch master
nothing to commit, working directory clean

[mshaik@ansible repo]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mshaik/.ssh/id_rsa):
Created directory '/home/mshaik/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mshaik/.ssh/id_rsa.
Your public key has been saved in /home/mshaik/.ssh/id_rsa.pub.
The key fingerprint is:
25:80:b5:4a:de:b2:c9:21:fa:ba:9c:91:06:95:0d:c4 mshaik@ansible
The key's randomart image is:
+--[ RSA 2048]----+
| oo oo |
| E+. o |
| o o . . . |
| . o o o |
|. . = . S |
|...o = |
|.+ + |
|o.o |
|o=. |
+-----------------+
[mshaik@ansible repo]$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDs/d8g0HSrf1KyBIbxB/C/3I2czC7npZ5XSKj3rttOZLUzH7dy1LFXFkt1TvXBO5p47yrGn1dI6ZBPDS2UQLwocFUdE1xV+t9s0lGXWThR/V5GfEDD5+fnM50VJggeOb3iA75XnaOCy/MY9rP/5PSLfdCV43Z8y1/8upc5qut+VaakgeBIBLzmpRTpspA6kD9MjPlIV/Aa5VDcL3HKWqH4Laaakyf1GVUZxoAugZAjm+Q3mXGoBDX7jbzB8M5m1pxob9SQGDhBYO5u3a0lEly0ysVNA8KoOXpGMH/UlHoDY6TZvNtpjAVj80K5bsgRGUlIMEvlDOdXlNEHFBSWdaaH mshaik@ansible

https://github.com/settings/ssh

[mshaik@ansible repo]$ ssh -T git@github.com
The authenticity of host 'github.com (192.30.253.112)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.253.112' (RSA) to the list of known hosts.
Hi rafi494! You've successfully authenticated, but GitHub does not provide shell access.

https://github.com/new

https://github.com/rafi494/quickstart

[mshaik@ansible repo]$ git clone git@github.com:rafi494/quickstart.git
Cloning into 'quickstart'...
warning: You appear to have cloned an empty repository.

[mshaik@ansible repo]$ git add .

[mshaik@ansible repo]$ git commit -m "pushing to quick start repo"

[mshaik@ansible repo]$ cd quickstart/
[mshaik@ansible quickstart]$ ls -la
total 0
drwxrwxr-x 3 mshaik mshaik 17 Jun 2 04:03 .
drwxrwxr-x 4 mshaik mshaik 91 Jun 2 04:03 ..
drwxrwxr-x 7 mshaik mshaik 111 Jun 2 04:04 .git
[mshaik@ansible quickstart]$ ls -la .git/
total 16
drwxrwxr-x 7 mshaik mshaik 111 Jun 2 04:04 .
drwxrwxr-x 3 mshaik mshaik 17 Jun 2 04:03 ..
drwxrwxr-x 2 mshaik mshaik 6 Jun 2 04:03 branches
-rw-rw-r-- 1 mshaik mshaik 262 Jun 2 04:04 config
-rw-rw-r-- 1 mshaik mshaik 73 Jun 2 04:03 description
-rw-rw-r-- 1 mshaik mshaik 23 Jun 2 04:03 HEAD
drwxrwxr-x 2 mshaik mshaik 4096 Jun 2 04:03 hooks
drwxrwxr-x 2 mshaik mshaik 20 Jun 2 04:03 info
drwxrwxr-x 4 mshaik mshaik 28 Jun 2 04:03 objects
drwxrwxr-x 4 mshaik mshaik 29 Jun 2 04:03 refs

[mshaik@ansible quickstart]$ git remote
origin

[mshaik@ansible quickstart]$ echo "into quick start repo" >> first_file
[mshaik@ansible quickstart]$ git status
# On branch master
#
# Initial commit
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# first-fileinto quick start repo
# first_file
nothing added to commit but untracked files present (use "git add" to track)

[mshaik@ansible quickstart]$ git add .

[mshaik@ansible quickstart]$ git commit -m "Initial commit"
[master (root-commit) a75d028] Initial commit
2 files changed, 1 insertion(+)
create mode 100644 first-fileinto quick start repo
create mode 100644 first_file

[mshaik@ansible quickstart]$ git remote
origin

[mshaik@ansible quickstart]$ git push origin master
Warning: Permanently added the RSA host key for IP address '192.30.255.113' to the list of known hosts.
Counting objects: 4, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (4/4), 296 bytes | 0 bytes/s, done.
Total 4 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/quickstart.git
* [new branch] master -> master

[mshaik@ansible quickstart]$ git status
# On branch master
nothing to commit, working directory clean

[mshaik@ansible quickstart]$ git log
commit a75d02817be2c77002760f5d6e920230f95e1c4f
Author: rafi494 <mohammedrafi494@gmail.com>
Date: Fri Jun 2 04:09:43 2017 +0530

Initial commit
[mshaik@ansible quickstart]$ git log --oneline
a75d028 Initial commit

[mshaik@ansible quickstart]$ cat .git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = git@github.com:rafi494/quickstart.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master

[mshaik@ansible quickstart]$ git config --global core.excludesfile '/etc/gitignore'

[mshaik@ansible quickstart]$ cat /etc/gitignore
cat: /etc/gitignore: No such file or directory
[mshaik@ansible quickstart]$ sudo vim /etc/gitignore
#globally ignore compiled .out binary files
*.out

[mshaik@ansible quickstart]$ cat /etc/gitignore
#globally ignore compiled .out binary files
*.out
[mshaik@ansible quickstart]$ cp first_file second.out
[mshaik@ansible quickstart]$ ll
total 8
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
[mshaik@ansible quickstart]$ git status
# On branch master
nothing to commit, working directory clean

[mshaik@ansible quickstart]$ cp first_file third.txt
[mshaik@ansible quickstart]$ git status
# On branch master
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# third.txt
nothing added to commit but untracked files present (use "git add" to track)

[mshaik@ansible quickstart]$ git add .
[mshaik@ansible quickstart]$ git commit -m "new .txt file"
[master d29c67c] new .txt file
1 file changed, 1 insertion(+)
create mode 100644 third.txt
[mshaik@ansible quickstart]$ git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
# (use "git push" to publish your local commits)
#
nothing to commit, working directory clean

[mshaik@ansible quickstart]$ mkdir dir1
[mshaik@ansible quickstart]$ cat >> dir1/test
hello added successfully new file in dir
[mshaik@ansible quickstart]$ git status
# On branch master
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# dir1/
nothing added to commit but untracked files present (use "git add" to track)
[mshaik@ansible quickstart]$ git add .
[mshaik@ansible quickstart]$ git commit -m "new file in dir1 added to repo"
[master 023b5f9] new file in dir1 added to repo
1 file changed, 1 insertion(+)
create mode 100644 dir1/test
[mshaik@ansible quickstart]$ git push origin master
Counting objects: 5, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (4/4), 354 bytes | 0 bytes/s, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To git@github.com:rafi494/quickstart.git
8a2415c..023b5f9 master -> master

[mshaik@ansible quickstart]$ git branch
* master

[mshaik@ansible quickstart]$ git branch dev
[mshaik@ansible quickstart]$ git branch
dev
* master
[mshaik@ansible quickstart]$ git checkout dev
Switched to branch 'dev'
[mshaik@ansible quickstart]$ git branch
* dev
master

[mshaik@ansible quickstart]$ git status
# On branch dev
nothing to commit, working directory clean

[mshaik@ansible quickstart]$ ls -l
total 16
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 11 Jun 2 04:36 new file in dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt

[mshaik@ansible quickstart]$ rm -rf new\ file\ in\ dir1

[mshaik@ansible quickstart]$ ls -l
total 12
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt

[mshaik@ansible quickstart]$ git rm new\ file\ in\ dir1
rm 'new file in dir1'

[mshaik@ansible quickstart]$ git commit -m "removed one file"
[dev 7b9e765] removed one file
1 file changed, 1 deletion(-)
delete mode 100644 new file in dir1

[mshaik@ansible quickstart]$ git branch
* dev
master

[mshaik@ansible quickstart]$ git push origin dev
Counting objects: 3, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 221 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To git@github.com:rafi494/quickstart.git
* [new branch] dev -> dev
[mshaik@ansible quickstart]$ git checkout -b production
Switched to a new branch 'production'
[mshaik@ansible quickstart]$ git status
# On branch production
nothing to commit, working directory clean
[mshaik@ansible quickstart]$ git branch
dev
master
* production

[mshaik@ansible quickstart]$ echo "file one" > one
[mshaik@ansible quickstart]$ echo "file two" > two
[mshaik@ansible quickstart]$ echo "file three" > three

[mshaik@ansible quickstart]$ git add .
[mshaik@ansible quickstart]$ git commit -m "added production branch and couple of new files"
[production 29c11fb] added production branch and couple of new files
3 files changed, 3 insertions(+)
create mode 100644 one
create mode 100644 three
create mode 100644 two
[mshaik@ansible quickstart]$ git branch
dev
master
* production
[mshaik@ansible quickstart]$ git push origin production
Counting objects: 6, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (5/5), 435 bytes | 0 bytes/s, done.
Total 5 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/quickstart.git
* [new branch] production -> production
[mshaik@ansible quickstart]$ git status
# On branch production
nothing to commit, working directory clean

[mshaik@ansible quickstart]$ git branch
dev
master
* production
[mshaik@ansible quickstart]$ ll
total 24
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 9 Jun 2 05:07 one
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt
-rw-rw-r-- 1 mshaik mshaik 11 Jun 2 05:08 three
-rw-rw-r-- 1 mshaik mshaik 9 Jun 2 05:08 two
[mshaik@ansible quickstart]$ git checkout master
Switched to branch 'master'
[mshaik@ansible quickstart]$ ll
total 16
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 11 Jun 2 05:16 new file in dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt
[mshaik@ansible quickstart]$ git checkout dev
Switched to branch 'dev'
[mshaik@ansible quickstart]$ ll
total 12
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt

################################################################

[mshaik@ansible quickstart]$ git push origin --all
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/quickstart.git
* [new branch] qa -> qa
* [new branch] tmp -> tmp
[mshaik@ansible quickstart]$ git branch
* dev
master
production
qa
tmp

[mshaik@ansible quickstart]$ git checkout master
Switched to branch 'master'

[mshaik@ansible quickstart]$ git merge qa
Updating 023b5f9..7b9e765
Fast-forward
new file in dir1 | 1 -
1 file changed, 1 deletion(-)
delete mode 100644 new file in dir1

[mshaik@ansible quickstart]$ git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
# (use "git push" to publish your local commits)
#
nothing to commit, working directory clean

[mshaik@ansible quickstart]$ git push origin master
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/quickstart.git
023b5f9..7b9e765 master -> master
[mshaik@ansible quickstart]$ git branch
dev
* master
production
qa
tmp
[mshaik@ansible quickstart]$ ll
total 12
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt

[mshaik@ansible quickstart]$ git merge production
Updating 7b9e765..29c11fb
Fast-forward
one | 1 +
three | 1 +
two | 1 +
3 files changed, 3 insertions(+)
create mode 100644 one
create mode 100644 three
create mode 100644 two
[mshaik@ansible quickstart]$ git push origin master
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rafi494/quickstart.git
7b9e765..29c11fb master -> master

[mshaik@ansible quickstart]$ ll
total 24
drwxrwxr-x 2 mshaik mshaik 17 Jun 2 04:39 dir1
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:09 first_file
-rw-rw-r-- 1 mshaik mshaik 9 Jun 2 06:12 one
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:25 second.out
-rw-rw-r-- 1 mshaik mshaik 22 Jun 2 04:26 third.txt
-rw-rw-r-- 1 mshaik mshaik 11 Jun 2 06:12 three
-rw-rw-r-- 1 mshaik mshaik 9 Jun 2 06:12 two

###################################################################

git config --global core.editor vim
git config --global core.pager 'more'
git config --global core.excludesfile ~/.gitignore_global
git config --list
git log --stat
git log -p -2
git log --pretty=oneline
git log --pretty=format:"%h: %an, %ae, %cn, %ce, %cd - %s" --graph

git tag tag1
git tag
echo "for tagging concept" > posttag1
git add posttag1
git commit -m "tag1 committed"
git show tag1
git describe --tags
git tag -a v1 -m "version one released"
git tag
echo "this is tag2" >> posttag2
git add posttag2
git commit -m "commit after tag2"
git show v1
git describe --tags

git merge development --no-ff

AWS Certified SysOps Administrator – Associate Level Objectives

 

1) Which of the following metrics do not get automatically reported to Amazon CloudWatch from Amazon EC2? (Choose 3)
The amount of memory being used, The amount of swap space used, How much disk space is available

Explanation
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/mon-scripts.html
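
As a minimal sketch of the underlying mechanism (the namespace, metric name, and value below are assumptions, not what the monitoring scripts actually publish), a custom metric can be pushed with the AWS CLI:

aws cloudwatch put-metric-data --namespace "Custom/System" --metric-name MemoryUtilization --unit Percent --value 42.5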

2) What are the two different kinds of status checks when it comes to Amazon EC2 instances?
System status check and Instance status check

Explanation
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html

3) What best describes burstable performance for t2.micro instances?
Burstable performance gives you a baseline performance and CPU credits that allow you to burst above this baseline if needed.

Explanation
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-cpu-credits

4) You currently have Nginx webservers on EC2 instances which receive requests from your ELB. Those Nginx webservers return results from your PHP application. This application connects to an RDS database instance to read and write data. However, a few months ago, you realized that ElastiCache with the Redis caching engine could reduce the load on your RDS database by caching some of the popular data. Fast-forward to today, and your ElastiCache Redis cluster is under a lot of load and needs to scale. Which of these is the best way to scale your cluster?
If the load is read-heavy, scale by adding read replicas to your cache cluster. If the load is write-heavy, scale vertically by increasing the node size.

Explanation
The Redis engine in ElastiCache does not support scaling horizontally for write-heavy workloads because data is not partitioned across nodes (unlike Memcached). All of the data needs to fit in the master node. However, we can add read replicas to our cluster in order to scale for read-heavy workloads. http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Scaling.RedisStandalone.html
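
For illustration (the replication group ID is a placeholder, and this assumes a reasonably recent AWS CLI), adding read replicas to a Redis replication group might look like:

aws elasticache increase-replica-count --replication-group-id my-redis-group --new-replica-count 2 --apply-immediately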

5) Which of these are true when it comes to the differences between EBS-backed storage and SSD-backed instance store?
SSD-backed instance store is usually faster because it is physically attached to the host computer, while EBS volumes transfer data over the network which adds latency. SSD-backed instance store is ephemeral while EBS-backed storage is persistent.

Explanation
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html

1) You’ve created a CloudWatch alarm to monitor ElastiCache evictions. The CloudWatch alarm begins to alert you that the number of evictions has surpassed your application’s requirements. How might you go about resolving the high eviction amount issue?
Increasing the size of the ElastiCache instance, Adding another node to the ElastiCache cluster

Explanation
Increased evictions generally means there is a low amount of free memory in order to cache new information. Evictions mean that the caching engine must remove old data to make room for new data. In order to resolve this issue you will need to increase capacity or add nodes to the caching cluster.

2) One of your instances is not responding. After investigation you see that the instance system status checks indicates a problem. What would be the best method for attempting to fix a failing system status check?
Stop and then start the instance so it can be launched on a new host

Explanation
The best way to resolve system check failure issues is to stop the instance and start it. By stopping and starting the instance, the instance will most likely be launched on a different physical host. Since system failures are related to the physical host then this is the best method for resolving the failure. Just rebooting the instance will not cause the instance to be launched on a different host.
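
A sketch of that stop/start cycle with the AWS CLI (the instance ID is a placeholder):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0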

3) AWS allows billing metrics across all consolidated billing accounts to be viewed from the payer account.
True

4) You’ve been tasked with optimizing costs in your company’s AWS environment. After logging in, you discover that there are 3 unused elastic IP addresses, 6 RDS instances that have not had a DB connection for over 7 days, 5 instances that are running at an average CPU utilization of < 5% and one EC2 instance running at 80% utilization. Your company has not purchased any reserved instances but is highly concerned over AWS costs. As a SysOps administrator you know that you can easily help reduce costs and make the company happy again; select all of the statements below that you might do in order to optimize costs quickly.
Remove all unassigned Elastic IP addresses and create snapshots of all unused EBS volumes and terminate the volumes, Reduce instance size for underutilized instances or combine the instances and terminate the unused ones, Create a snapshot of RDS instances that have had 0 DB connections after 7 days and terminate the RDS instances

Explanation
Cost optimization includes the process of terminating or stopping unused resources such as idle Elastic Load Balancers that do not have any backing instances, removing unassociated Elastic IP Addresses, resizing instances, and purchasing reserved instances. There are many more ways of optimizing costs; please review the linked video.
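
As a rough sketch of that cleanup with the AWS CLI (all identifiers are placeholders), you might release an unused Elastic IP and remove an idle RDS instance while keeping a final snapshot:

aws ec2 release-address --allocation-id eipalloc-0123abcd
aws rds delete-db-instance --db-instance-identifier idle-db --final-db-snapshot-identifier idle-db-final-snapshot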

 

5) In order to monitor operating system-level metrics such as disk usage, swap usage, and memory usage, you must install EC2 monitoring scripts. These scripts put custom metric data into Amazon CloudWatch. What do you need to do in order to give the instance permissions to put those custom metrics in CloudWatch?
Assign a role to the EC2 instance which will be sending custom metrics to CloudWatch

Explanation
The question asks what permissions are required to give the “instance” permissions to put metric data on CloudWatch. In order for an instance to have this permission, you would need to assign a role to the EC2 instance. The role needs to have permissions to “put” data on Amazon CloudWatch. The answer “Create a user with access keys and secret access keys for the script to put the custom metrics onto CloudWatch” will allow the script to communicate with CloudWatch and put custom metrics. However, the question specifically asks for instance permissions, and best practice recommends using roles with instances instead of access keys.
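
A minimal sketch of granting that permission with the AWS CLI (the role name, policy name, and single-action policy document are assumptions of the least the scripts would need):

aws iam put-role-policy --role-name EC2-CloudWatch-Role --policy-name AllowPutMetricData --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "cloudwatch:PutMetricData", "Resource": "*"}]
}'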

1) Your organization is running an application on EC2 instances which transfers large amounts of data to their respective EBS volumes. You’ve noticed that the data being transferred from some instances is exceeding bandwidth capacity which is causing performance issues. Which of these solutions would help the most?

Change the instance size and type, since bandwidth capacity is dependent upon the instance size and the instance type. Change to an EBS-optimized instance type and enable EBS Optimization if it is not already enabled.

2) You just joined an established company and are in charge of finding ways to optimize costs without compromising performance or causing downtime. You notice that the company is using Auto Scaling to keep a minimum amount of instances running at all times, while also providing the possibility to add more instances if the ELB latency metric increases over a period of 1 minute. At that point, the Auto Scaling group will add 2 more instances in order to ensure that there are plenty of extra resources to handle more load. This is great, but it is not optimized for cost yet. What can you do to reduce costs without losing elasticity or causing downtime?

Purchase reserved instances for the minimum amount of instances and then use on-demand instances for instances launched by Auto Scaling beyond the minimum requirement.

Explanation
This is the best option to both meet the requirements (elasticity and no downtime) and also to lower costs over time.

3) Your current application architecture is not scalable. You’re running one large EC2 instance for your application, webserver, and database. Whenever that instance has an issue or receives more traffic than it can handle, your users are unable to access your e-commerce platform, which is costing thousands of dollars each time there is an outage. So, to solve this, you break apart the database and setup Amazon RDS. You then configure Auto Scaling for your backend instances with an Elastic Load Balancer to distribute the load, and you soon realize that users are constantly losing their sessions and shopping cart information while browsing your e-commerce website. What is causing this issue and how can you solve the problem?

You realize that each time a page loads (and a request is sent), the ELB sends the request to a different backend instance, and since sessions are stored on each instance, the user gets logged out and loses their session. You fix the issue by storing the session information in ElastiCache since this gives us a central location for the application to get and set sessions.

Explanation
Setting this up with ElastiCache is the best option, because that way we can still spread the load evenly across backend instances without losing session data, and we also don’t put a lot more load on our RDS database. Storing session data and retrieving it from the database is usually not recommended. Setting up ELB stickiness would not evenly distribute the load, and so it is best to avoid it if we have other options.

1) What are conditions that can trigger an Amazon RDS failover to happen?
Loss of network connectivity to the primary instance, Storage failure on the primary database, Some kind of resource failure with the underlying virtual resources

2) We currently have one Bastion host instance in one of our public subnets, but we’re worried about availability if the bastion host or the availability zone that it’s in goes down. What can we do?
Set up two Bastion hosts in two separate Availability Zones and assign an Elastic IP Address to the main Bastion host.

Explanation
Using an Elastic IP address makes it easy to switch to the failover Bastion host if the main one becomes unavailable.
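
A sketch of the failover step (the allocation ID and instance ID are placeholders), re-pointing the Elastic IP at the standby Bastion:

aws ec2 associate-address --allocation-id eipalloc-0123abcd --instance-id i-0fedcba9876543210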

3) Which of these services give us access to the underlying Operating System?
Amazon EMR, EC2

4) What is the best practice to setup and implement a Bastion host in a VPC?
Create the instance in your public subnet and assign it a public IP address. Then, use ssh-agent forwarding or OpenSSH ProxyCommand to connect to your private instances.

Explanation
The Bastion Host needs to be in your public subnet otherwise you won’t be able to assign it a public IP address to connect to it and access the rest of your infrastructure hidden in private subnets. Using ssh-agent forwarding or OpenSSH ProxyCommand is recommended (versus uploading the PEM SSH key to the instance), because if someone gets access to the instance, they would have the key to your infrastructure. Instead, with the recommended methods, we don’t have to upload the private key.
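
For example (the user, key file, hostnames, and private IP are placeholders), ssh-agent forwarding keeps the private key off the Bastion entirely:

ssh-add ~/.ssh/my-key.pem
ssh -A ec2-user@bastion-public-ip
# then, from the Bastion host:
ssh ec2-user@10.0.1.15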

1) What error code does the Elastic Load Balancer return if it does not have enough resources to handle large spikes in traffic?
HTTP 503

2) One of your customers needs business analytics from data stored in your Amazon RDS database. The problem is, this database is also used to serve data to other customers. The business analytics queries require a lot of table joins and expensive calculations that can’t be offloaded to the client, and so you’re worried about degrading performance. What can you do?
Create a read replica specifically for this customer and their business analytics queries. Give them the replica’s endpoint.

3) We’re restoring a volume from a snapshot and we need maximum performance as soon as we put the volume in production. How can we ensure maximum performance?
Initialize (pre-warm) the volume by reading from every single block.
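
As a sketch on Linux (the device name is a placeholder), reading every block of the restored volume once with dd initializes it:

sudo dd if=/dev/xvdf of=/dev/null bs=1M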

#############################################

1) You manage a social media website on EC2 instances in an Auto Scaling group. You have configured your Auto Scaling group to deploy one new EC2 instance when CPU utilization is greater than 90% for 3 consecutive periods of 10 minutes. You notice that between 6:00 pm and 10:00 pm every night, you see a gradual increase in traffic to your website. Although Auto Scaling launches several new instances every night, some users complain they are seeing timeouts when trying to load the index page during those hours. What is the least cost-effective way to resolve this problem?

Increase the minimum number of instances in the AutoScaling group

Explanation
Increasing the minimum number of instances in the AutoScaling group will keep more instances running around the clock, thus making it a very inefficient way to manage cost. The other options all increase the AutoScaling group’s sensitivity to an increase in load and enable it to respond quicker to increased load by spinning up instances as soon as they become necessary.

2) You maintain an application on AWS to provide development and test platforms for your developers. Currently, both environments consist of an m1.small EC2 instance. Your developers notice performance degradation as they increase network load in the test environment. How would you mitigate these performance issues in the test environment?

Upgrade the m1.small to a larger instance type

 

3) How might you assign permissions to an EC2 instance so that the EC2 custom CloudWatch metric scripts can send the required data to Amazon CloudWatch?

Assign an IAM role to the EC2 instance at creation time with permissions to write to CloudWatch

4) A successful systems administrator does not need to create a script for:
Automating backups of RDS databases

Explanation
AWS offers automated backups of RDS, thus it is not a requirement to script this task.

5) You have enabled a CloudWatch metric on your Redis ElastiCache cluster. Your alarm is triggered due to an increased amount of evictions. How might you go about solving the increased eviction errors from the ElastiCache cluster?
Increase the size of your node

6) You see an increased load on an EC2 instance that is used as a web server. You decide to place the server behind an Elastic Load Balancer and deploy an additional instance to help meet this increased demand. You deploy the ELB, configure it to listen for traffic on port 80, bring up a second EC2 instance, move both instances behind the load balancer, and provide customers with the ELB’s URL – https://mywebapp-1234567890.us-west-2.elb.amazonaws.com. You immediately begin receiving complaints that customers cannot connect to the web application via the ELB’s URL. Why?

You specified https:// in the ELB’s URL, but the ELB is not configured to listen on port 443.

Explanation
Specifying https:// directs web traffic to port 443. If you only configured a listener for port 80 on the ELB, traffic on port 443 will not be accepted.
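
One hedged way to fix this on a Classic ELB (the load balancer name and certificate ARN are placeholders) is to add an HTTPS listener on port 443:

aws elb create-load-balancer-listeners --load-balancer-name mywebapp --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"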

7) Your company has decided to deploy a “Pilot Light” AWS environment to keep minimal resources in AWS with the intention of rapidly expanding the environment in the event of a disaster in your on-premises Datacenter. Which of the following services will you likely not make use of?

A Gateway-Cached implementation of Storage Gateway for storing snapshot copies of on-premises data

Explanation
A Gateway-Cached implementation of Storage Gateway stores all of your data in AWS and caches your frequently-accessed data on premises. Keeping all data in AWS is not a minimal AWS implementation. A Gateway-Stored implementation of Storage Gateway would be preferred for a “Pilot Light” AWS environment, as it would allow you retain your data on-premises but take snapshot copies of the data to AWS, so it could be accessed in the event of an on-premises disaster.

8) In order for reserved instances to reduce the cost of running instances, those instances must match the exact specifications of the reserved instance including: Region, Availability Zone, and instance type.

True

Explanation
AWS announced late in 2016 that you could now apply a reserved instance to a region in order to get cost benefits across all AZs. Before this announcement, that was not the case. Because they do not update certification exams with every new feature announcement, and the SysOps course is training for the exam, we need to keep the question the way it is until they update it. With that being said, this is no longer true for “Availability Zone.”

9) Your company is setting up an application that is used to share files. Because these files are important to the sales team, the application must be highly available. Which AWS-specific storage option would you set up for low cost, reliability, and security?

Use Amazon S3, which can be accessed by end users with signed URLs.

10) You have been tasked by your manager to build a tiered storage setup for database backups and their logs. These backups must be archived to a durable solution. After 10 days, the backups can then be archived to a lower priced storage tier. The data, however, must be retained for compliance policies. Which tiered storage solution would help you save cost, and still meet this compliance policy?

Set up an independent EBS volume where we can store daily backups and then copy these files over to S3, where we configure a bucket that has a lifecycle policy to archive files older than 10 days to AWS Glacier
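
A sketch of that lifecycle rule with the AWS CLI (the bucket name and prefix are assumptions):

aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "ArchiveBackups",
    "Filter": {"Prefix": "backups/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 10, "StorageClass": "GLACIER"}]
  }]
}'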

11) You manage a popular blog website on EC2 instances in an Auto Scaling group. You notice that between 8:00 am and 8:00 pm, you see a 50% increase in traffic to your website. In addition, there are occasional random 1 to 2-hour spikes in traffic and some users are seeing timeouts when trying to load the index page during those spikes. What is the least cost-effective way to manage this Auto Scaling group?

Use reserved instances for the instances needed to handle the load during traffic spikes

Explanation
Reserved instances become cost-effective when they are in use for greater than 30% of the time. Using reserved instances to handle the brief spikes in traffic would not be cost effective.

12) In your LAMP application, you have some developers that say they would like access to your logs. However, since you are using an AWS Auto Scaling group, your instances are constantly being re-created. What would you do to make sure that these developers can access these log files?

Set up a central logging server that you can use to archive your logs; archive these logs to an S3 bucket for developer-access.

13) What is the most likely reason you are being charged for an instance you launched from a free-tier eligible AMI?
Your account has passed the one-year trial period

14) Your website is hosted on 10 EC2 instances in five regions around the globe, with two instances per region. How could you configure your site to maintain availability with minimum downtime if one of the five regions was to lose network connectivity for an extended period?

Create a Route 53 Latency Based Routing Record Set that resolves to an Elastic Load Balancer in each region and has the Evaluate Target Health flag set to true.

15) Your company is ready to start migrating its application over to the cloud, but you cannot afford any downtime. Your manager asks you to come up with a plan of action. She also wants a solution that offers the flexibility to test the application on AWS with only a subset of users, but with the ability to increase the number of users over time. Which of these options are you most likely to recommend?

Implement a Route53 weighted routing policy that distributes the traffic between your on-premises application and the AWS application depending on weight.

Explanation
This option works great because we can modify the weight of one record set over the other to increase or decrease the amount of traffic. If the application on AWS is behaving properly, we can slowly increase the number of users that get routed to that application and slowly phase out the on-premises application. Otherwise, we can revert back to the on-premises application.
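
A sketch of one of the two weighted record sets (the zone ID, record name, weight, and ELB DNS name are placeholders; a second record set with its own SetIdentifier and weight would point at the on-premises endpoint):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "CNAME",
      "SetIdentifier": "aws-app",
      "Weight": 10,
      "TTL": 60,
      "ResourceRecords": [{"Value": "my-elb-123.us-east-1.elb.amazonaws.com"}]
    }
  }]
}'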

16) Best practice is to pre-warm:

EBS volumes newly created from snapshots. Pre-warm by accessing each block once.

Explanation
The read and write back method is used to pre-warm EBS volumes created from a snapshot. Fresh EBS volumes do not require a read or write back pre-warming step. Elastic load balancers should be pre-warmed prior to an anticipated large spike in traffic, but this is done by contacting AWS to provision additional back-end resources, not by a read and write back command.

17) What AWS services give you access to the underlying operating system? (Choose three)

EC2, Amazon EMR, Elastic Beanstalk

18) Which of the following is a security best practice for an AWS environment?

Enable MFA on the root user for your AWS account and use IAM users rather than the root user for administrative tasks.

Explanation
IAM user accounts should not be used for executing automated scheduled tasks on EC2 instances, and automated tasks do not use MFA. The default VPC is built for ease of use, not security. IAM user credentials should not be stored on AMIs; EC2 instances that need permission to perform actions on AWS resources should use IAM roles.

19) You want to run a web application in which application servers on EC2 instances are in an Auto Scaling group spread across two Availability Zones. After monitoring for six months, we notice that only one of our web servers is needed to handle our minimum load. During our core utilization hours (8:00am-8:00pm Monday-Friday), five to six web servers are needed to handle the minimum load. Four to five days a year, the number of web servers required can go up to 18 servers. What choice would reduce our costs the most while providing the highest availability?

Five Reserved Instances (heavy utilization), the rest covered by on-demand instances

Explanation
Different levels of utilization for reserved instances (heavy, medium, light) have been phased out. This might still show up on the exam, however, so it’s a good idea to be familiar with the concept.

20) What item, when attached to a subnet, will allow the internal subnet to communicate to external networks? (Choose two)

Internet Gateway (IGW), Virtual Private Gateway

21) You patch the operating system on an EC2 instance and issue a reboot command from inside the instance’s OS. After disconnecting from the instance and waiting several minutes, you notice that you still cannot successfully ping the instance’s public IP address. What is the most likely reason for this?

Changes made during OS patching caused a problem with the instance’s NIC driver.

22) Which option below is part of the failover process for a Multi-AZ deployment with an RDS instance?

The DNS for the primary DB instance is switched to the standby DB instance

23) Which of the following statements is true?

You can customize your AWS deployments using the Ruby programming language with OpsWorks templates, and you can customize your AWS deployments using JSON templates in CloudFormation.

24) In your infrastructure, you are running a corporate application using a T2.Small instance. You are also using a NAT instance so that your private instances can reach out to the internet without being publicly available. What is one thing that we should do to speed up bandwidth and performance?

Increase your T2.Small instance to a M3.Small or M3.Medium

Explanation
Instance size has a direct influence on the amount of data your instance can send and receive. If many instances in your AWS environment route traffic through a single NAT instance, a network bottleneck can occur. Increasing the instance size will increase the available network throughput.

25) Your AWS application is set up to use Auto Scaling with an ELB. To be sure that your application is performing its best and the page loads quickly, what, precisely, could you monitor in CloudWatch?

Monitor your ELB latency using CloudWatch metrics

Explanation
CloudWatch provides latency metrics which monitor the time it takes for the request to go from the Elastic Load Balancer to the instance and back. Latency is a good metric to determine if our Elastic Load Balancer is healthy.
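
For example, the average Latency for a classic ELB over the last hour could be pulled with boto3 (the load balancer name is a placeholder):

import datetime

import boto3

cloudwatch = boto3.client('cloudwatch')
now = datetime.datetime.utcnow()

# Average round-trip time between the ELB and its back-end instances.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/ELB',
    MetricName='Latency',
    Dimensions=[{'Name': 'LoadBalancerName', 'Value': 'my-elb'}],  # placeholder
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,              # 5-minute data points
    Statistics=['Average'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], round(point['Average'], 3), 'seconds')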

26) We have developed a mobile application that gets downloaded several hundred times a week. What authentication method should we enable so that mobile clients can access images stored in an AWS S3 bucket, while giving us the highest flexibility and rotating credentials?

Identity Federation based on AWS STS using an AWS IAM policy for the respective S3 bucket
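
As a rough sketch of the idea, a back-end service could hand mobile clients temporary, automatically expiring credentials scoped to the bucket via STS (the bucket name and federation name below are placeholder assumptions):

import json

import boto3

sts = boto3.client('sts')

# Temporary credentials limited to read access on the images bucket.
# GetFederationToken must be called with long-term IAM user credentials.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetObject'],
        'Resource': 'arn:aws:s3:::my-images-bucket/*',   # placeholder bucket
    }],
}
token = sts.get_federation_token(
    Name='mobile-client',            # placeholder federated user name
    Policy=json.dumps(policy),
    DurationSeconds=3600,            # credentials expire (rotate) hourly
)
creds = token['Credentials']         # AccessKeyId, SecretAccessKey, SessionToken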

27) What might be the cause of an EC2 instance not launching in an Auto Scaling group?

The Availability Zone is no longer supported, Invalid EBS device mapping, The key pair associated with the EC2 instance does not exist

28) You have decided to extend your on-site data center to Amazon Web Services by creating a VPC. You already have multiple DNS servers on-premises. You are using these DNS servers to host DNS records for your internal applications. You have a corporate security network policy that says that a DNS name for an internal application can only be resolved internally and never publicly over the internet. Your existing on-premises data center is already connected to your VPC using IPSec VPN. You are deploying new applications within your AWS service that need to resolve these new applications by name. How might you set up the scalable DNS architecture?

Create a DHCP options set whose domain-name-servers option includes both AmazonProvidedDNS and your internal DNS servers
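
A minimal boto3 sketch of that options set, assuming an internal DNS server at 10.0.0.10 and a placeholder VPC ID:

import boto3

ec2 = boto3.client('ec2')

# Hand out both the Amazon-provided resolver and the on-premises DNS server
# to instances in the VPC via a DHCP options set.
options = ec2.create_dhcp_options(DhcpConfigurations=[
    {'Key': 'domain-name-servers',
     'Values': ['AmazonProvidedDNS', '10.0.0.10']},   # internal DNS is an assumption
])
ec2.associate_dhcp_options(
    DhcpOptionsId=options['DhcpOptions']['DhcpOptionsId'],
    VpcId='vpc-0123456789abcdef0',                    # placeholder VPC ID
)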

29) You have enabled a CloudWatch metric on your Memcached ElastiCache cluster. Your alarm is triggered due to an increased amount of evictions. How might you go about solving the increased eviction errors from the ElastiCache cluster? (Choose Two)

Increase the node size, Add a node to the cluster
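
Adding a node can be done through the ElastiCache API, as in the boto3 sketch below (the cluster ID is a placeholder; note that increasing the node size of a Memcached cluster means launching a new cluster with a larger node type):

import boto3

elasticache = boto3.client('elasticache')

# Grow the Memcached cluster to three nodes to add cache memory
# and reduce evictions.
elasticache.modify_cache_cluster(
    CacheClusterId='my-memcached',   # placeholder cluster ID
    NumCacheNodes=3,
    ApplyImmediately=True,
)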

30) A colleague noticed that CloudWatch was reporting that there has not been any connections to one of your MySQL databases for several months. You decided to terminate the database. Two months after the database was terminated, you get a phone call from a very upset user who needs information from that database to run end-of-year reports. You are hopeful that you can restore the database to full functionality from a snapshot, but your database administrator is not quite as confident. Why?

The MySQL database was not using a transactional database engine such as InnoDB and may not restore properly.

31) Which of the following can be overridden at the EC2 instance level?

The choice to not use dedicated tenancy at the VPC level., An IAM policy explicitly allowing a user the right to terminate all EC2 instances.

Explanation
The default option for a VPC is to not use dedicated tenancy, but that can be overridden at the instance level. If the option to use dedicated tenancy is explicitly set at the VPC level, however, it cannot be overridden at the instance level. Explicit denies in IAM policies always trump explicit allows, so a user who is allowed to terminate all EC2 instances in an account can be denied the permission to terminate a particular instance.

32) Your RDS database is experiencing high levels of read requests during the business day and performance is slowing down. You have already verified that the source of the congestion is not from backups taking place during the business day, as automatic backups are not enabled. Which of the following is the first step you can take toward resolving the issue?

Enable automated backups of the database.

Explanation
A read replica of the database cannot be created until automated backups are enabled, so your first step should be to enable automated backups. Once they are enabled, you can create a read replica of the database and offload some client read requests to it.
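
In boto3 terms, the two steps might look like this (the instance identifiers are placeholders):

import boto3

rds = boto3.client('rds')

# Step 1: enable automated backups by setting a non-zero retention period.
rds.modify_db_instance(
    DBInstanceIdentifier='mydb',         # placeholder instance ID
    BackupRetentionPeriod=7,             # days of automated backups
    ApplyImmediately=True,
)

# Step 2: once backups are enabled, create a read replica and point
# read-heavy clients at its endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='mydb-replica', # placeholder replica ID
    SourceDBInstanceIdentifier='mydb',
)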

33) True or False: Multi-AZ RDS replications use asynchronous data replication.

False

Explanation
Multi-AZ RDS data replication is synchronous, not asynchronous.

34) You configure a VPC with an Internet Gateway and a private and a public subnet, with each subnet in a different Availability Zone. The VPC also has a dual-tunnel VPN between the Virtual Private Gateway and the router in your private data center. You want to make sure that you do not have a potential single point of failure in this design. What could you do to achieve this?

You set up a secondary router in your private data center to establish another dual-tunnel VPN connection with your Virtual Private Gateway.

35) Select all that apply: Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:

may be performed by the customer against their own instances with prior authorization from AWS

36) Your RDS instance is consistently maxed out on its resource utilization. What are multiple ways to solve this issue? (Choose three)

Fire up an ElastiCache cluster in front of your RDS instance., Increase RDS instance size., Offload read-only activity to a read replica if the application is read-intensive.

37) True or False: If Multi-AZ is enabled and automated backups occur on your instance, your application will experience performance issues due to the increased I/O operations caused by the automated backup.

False

Explanation
With Multi-AZ enabled, automated backups are performed on the standby instance instead of the source database instance in order to avoid this performance degradation.

38) Your company’s compliance department mandates that, within your multi-national organization, all data for customers in the UK must never leave UK servers and networks. Similarly, US data must never leave US servers and networks without explicit authorization first. What do you have to do to comply with this requirement for your web-based applications running on AWS in EC2? Each user has already set up a profile that states their geographic location.

We can run EC2 instances in multiple regions and leverage a third-party data provider to determine whether a user should be redirected to the appropriate region based on that user’s profile.

39) True or False: By default, there is no route between the subnets in a VPC.

False

40) You are running an EC2 instance serving a website with an SSL certificate. Your CPU utilization is constantly high. How might you resolve this issue?

Offload the SSL cert from the EC2 instance and configure it on the Elastic Load Balancer
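
Terminating SSL at a classic ELB might be configured like this with boto3, assuming the certificate has already been uploaded to IAM (the names and ARN are placeholders):

import boto3

elb = boto3.client('elb')

# HTTPS terminates at the load balancer; traffic to the instances is plain
# HTTP, so the instances no longer spend CPU on SSL encryption/decryption.
elb.create_load_balancer_listeners(
    LoadBalancerName='my-elb',           # placeholder ELB name
    Listeners=[{
        'Protocol': 'HTTPS',
        'LoadBalancerPort': 443,
        'InstanceProtocol': 'HTTP',
        'InstancePort': 80,
        'SSLCertificateId': 'arn:aws:iam::123456789012:server-certificate/my-cert',  # placeholder
    }],
)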

41) Assuming you have kept the default settings and have taken manual snapshots, which of the following manual snapshots will be retained?

A snapshot of an EBS root volume when the EC2 instance is terminated, A snapshot of an RDS database when the RDS instance is terminated

Explanation
Manual snapshots of RDS databases and EBS volumes persist after instance termination. You cannot snapshot an EC2 instance store volume.

42) Which of the following will cause a noticeable performance impact on an RDS Multi-AZ deployment?

None of these

43) True or False: In a Network ACL an explicit Deny always overrides an explicit Allow.

False

Explanation
Rules are evaluated in order depending on the rule number. As soon as a matching rule is found, it is applied, even if there is another rule contradicting the first rule.
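
To illustrate, the boto3 sketch below creates an allow rule numbered 100 and a deny rule numbered 200 on the same ACL (the ACL ID is a placeholder); inbound HTTPS traffic matches rule 100 first, so the later deny never applies to it:

import boto3

ec2 = boto3.client('ec2')

ACL_ID = 'acl-0123456789abcdef0'   # placeholder network ACL ID

# Rule 100: allow inbound HTTPS. Evaluated before rule 200.
ec2.create_network_acl_entry(
    NetworkAclId=ACL_ID, RuleNumber=100, Egress=False,
    Protocol='6',                  # TCP
    RuleAction='allow', CidrBlock='0.0.0.0/0',
    PortRange={'From': 443, 'To': 443},
)

# Rule 200: deny all inbound TCP. Only applies to traffic that did not
# already match rule 100.
ec2.create_network_acl_entry(
    NetworkAclId=ACL_ID, RuleNumber=200, Egress=False,
    Protocol='6',
    RuleAction='deny', CidrBlock='0.0.0.0/0',
    PortRange={'From': 0, 'To': 65535},
)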

44) What is the result of the following bucket policy?

{
  "Statement": [
    {
      "Sid": "Sid2",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "ArnEquals": {
          "s3:prefix": "finance_"
        }
      },
      "Principal": {
        "AWS": ["*"]
      }
    }
  ]
}

It will allow all actions only against objects with the prefix finance_

45) A deny overrides an allow in which circumstances?

An explicit allow is set in an IAM policy governing S3 access and an explicit deny is set on an S3 bucket via an S3 bucket policy.

46) You need to establish a secure backup and archiving solution for your company, using AWS. Documents should be immediately accessible for three months and available for five years for compliance reasons. Which AWS service fulfills these requirements in the most cost-effective way?

Upload data to S3 and use lifecycle policies to move the data into Glacier for long-term archiving.
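
A lifecycle configuration expressing those requirements might look like the boto3 sketch below (the bucket name is a placeholder; five years is approximated as 1825 days):

import boto3

s3 = boto3.client('s3')

# Objects stay in S3 (immediately accessible) for 90 days, move to Glacier
# for archival, and are deleted once the five-year retention period ends.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-archive-bucket',      # placeholder bucket name
    LifecycleConfiguration={'Rules': [{
        'ID': 'archive-then-expire',
        'Prefix': '',                # apply to every object in the bucket
        'Status': 'Enabled',
        'Transitions': [{'Days': 90, 'StorageClass': 'GLACIER'}],
        'Expiration': {'Days': 1825},
    }]},
)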

47) Your infrastructure does not have an Internet Gateway attached to any of the subnets. What might you do in order to SSH into your EC2 instances? All other configurations are correct.

Create a VPN connection

48) What is the result of the following bucket policy?

{
  "Statement": [
    {
      "Sid": "SID1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "50.97.0.0/32"
        }
      }
    }
  ]
}

It will deny all access to the S3 mybucket bucket except for requests coming from the IP 50.97.0.0

49) You have created an application that utilizes Auto Scaling behind an Elastic Load Balancer. You notice that users’ sessions are not evenly distributed across the newly spun-up instances. What could be a reason that your users’ web sessions are stuck on one instance and not using others?

Because of sticky sessions, your ELB keeps sending existing sessions to the old instances instead of evenly distributing sessions across the new instances spun up during Auto Scaling

Explanation
If sticky sessions are enabled on the Elastic Load Balancer, the load balancer “remembers” which instance a user’s session was sent to and continues to send that user’s requests to the same instance.

50) You notice that several of your AWS environment’s CloudWatch metrics consistently have a value of zero. Which of these are you most likely to be concerned about and take action on?

RDS DatabaseConnections

Explanation
Zero connections to a database for a long period of time may mean you are paying for a database that is not in use. If you cannot find anyone with a legitimate use case for the database, you may want to consider taking a snapshot of it and terminating it. Zero is an ideal value for the other metrics listed.
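
Taking a final snapshot while terminating the instance can be done in a single boto3 call (the identifiers are placeholders):

import boto3

rds = boto3.client('rds')

# Deleting with FinalDBSnapshotIdentifier takes a final snapshot before the
# instance is removed, so the database can be restored later if needed.
rds.delete_db_instance(
    DBInstanceIdentifier='idle-db',                  # placeholder instance ID
    SkipFinalSnapshot=False,
    FinalDBSnapshotIdentifier='idle-db-final-snap',  # placeholder snapshot ID
)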

51) You have been asked to maintain a small AWS environment consisting of five on-demand EC2 web server instances. Traffic from the Internet is distributed to these servers via an Elastic Load Balancer. Your supervisor is not pleased with a recent AWS bill. Assuming a consistent, moderately high load on the web servers, what option should you recommend to reduce the cost of this environment without negatively affecting availability?

Use reserved EC2 instances rather than on-demand instances.

Explanation
Auto Scaling can often save money in environments with variable load, but would likely not help reduce costs in an environment with a consistent high load spread across all servers. Reserved instances are recommended for instances with a consistently high load. Removing the ELB or using spot instances would save money, but could decrease availability.

52) We have a two-tiered application with the following components: an ELB, three web and application servers on EC2, and one MySQL RDS database. When our load grows, database queries take longer and slow down the overall response time for user requests. Which three options would we choose to speed up performance?

We can shard the database and distribute the load between shards, We can create an RDS read-replica and redirect half of the database read requests to it, We can cache our database queries with ElastiCache

53) True or False: Read replicas can be created from a read replica of another read replica.

True

54) For which of the following reasons would you not contact AWS?

Request consolidated billing for multiple AWS accounts owned by your company

55) When working with Amazon RDS, by default, AWS is responsible for implementing which two management-related activities?

Installing and periodically patching the database software, If automated backups are enabled, creating and maintaining automated database backups that allow point-in-time recovery to within the last five minutes

56) You have multiple AWS users with access to an Amazon S3 bucket. These users have permission to add and delete objects. If you wanted to prevent accidental deletions, what might you do to prevent these users from performing accidental deletions of an object?

You can enable MFA Delete on the bucket to prevent accidental deletions of an object
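
MFA Delete is enabled together with versioning on the bucket, and only the root account can enable it. A boto3 sketch, with the MFA device serial and code as placeholders:

import boto3

s3 = boto3.client('s3')

# Once MFA Delete is enabled, deleting an object version (or changing the
# bucket's versioning state) requires a valid MFA code.
s3.put_bucket_versioning(
    Bucket='mybucket',
    MFA='arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456',  # "serial code" (placeholders)
    VersioningConfiguration={'Status': 'Enabled', 'MFADelete': 'Enabled'},
)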

57) Your supervisor sends you a list of several processes in your AWS environment that she would like you to automate via scripts. Which of the following list items should you set as the highest priority?

Implement CloudWatch alerts for EC2 instances’ memory usage
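
Memory usage is not a metric EC2 publishes natively, so a scheduled script on the instance has to push it as a custom metric. A minimal sketch, where the namespace, metric name, and instance ID are assumptions:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Parse /proc/meminfo (values are in kB) and compute percent memory used.
meminfo = {}
with open('/proc/meminfo') as f:
    for line in f:
        key, value = line.split(':')
        meminfo[key] = int(value.split()[0])
used_pct = 100.0 * (1 - meminfo['MemAvailable'] / meminfo['MemTotal'])

# Publish as a custom metric; a CloudWatch alarm can then watch it.
cloudwatch.put_metric_data(
    Namespace='Custom/EC2',          # placeholder namespace
    MetricData=[{
        'MetricName': 'MemoryUtilization',
        'Dimensions': [{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder
        'Value': used_pct,
        'Unit': 'Percent',
    }],
)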

58) Your company’s website is hosted on several EC2 instances behind an Elastic Load Balancer. Every time the development team deploys a new upgrade to the web application, the support desk begins receiving calls from customers being disconnected from their sessions. Customers’ session data is very important, as it contains their shopping cart information, and this information is lost when the customers’ sessions are disconnected. Which of the following steps can be taken to prevent customers’ shopping cart data from being lost without affecting website availability? (Choose Two)

Use ElastiCache to store session state., Enable connection draining and remove instances from the Elastic Load Balancer prior to upgrading the application on those instances.

Explanation
Storing session state in ElastiCache will allow an instance to become unavailable without losing session data. Removing instances from the Elastic Load Balancer prior to upgrading them will prevent users from establishing new sessions on instances that are about to receive the application upgrade.
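
For a classic ELB, connection draining and instance removal might be scripted like this with boto3 (the names and instance IDs are placeholders):

import boto3

elb = boto3.client('elb')

# Enable connection draining so in-flight requests (and their sessions)
# get up to 300 seconds to complete before an instance is removed.
elb.modify_load_balancer_attributes(
    LoadBalancerName='my-elb',       # placeholder ELB name
    LoadBalancerAttributes={'ConnectionDraining': {'Enabled': True, 'Timeout': 300}},
)

# Remove the instance from the ELB before upgrading the application on it.
elb.deregister_instances_from_load_balancer(
    LoadBalancerName='my-elb',
    Instances=[{'InstanceId': 'i-0123456789abcdef0'}],   # placeholder instance ID
)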

59) Your applications in AWS need to authenticate against LDAP credentials that are in your on-premises data center. You need low latency between the AWS app authenticating and your credentials. How can you achieve this?

If you don’t already have a secure tunnel, create a VPN between your on-premises data center and AWS. You can then spin up a secondary LDAP server that replicates from the on-premises LDAP server.

60) You have an Elastic Load Balancer with an Auto Scaling group for your application, and four running instances, some in one Availability Zone and others in a different Availability Zone. Some instances within one of the zones are not available to the ELB. What could be the cause?

The ELB isn’t configured for that Availability Zone
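
For a classic ELB, adding the missing Availability Zone is a single boto3 call (the names are placeholders):

import boto3

elb = boto3.client('elb')

# Once the zone is added, the ELB begins routing to registered, healthy
# instances in that Availability Zone.
elb.enable_availability_zones_for_load_balancer(
    LoadBalancerName='my-elb',            # placeholder ELB name
    AvailabilityZones=['us-east-1b'],     # the zone the ELB was missing (assumption)
)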