© 2016-2019 Tecmint.com – Last revised: January 2019 – All rights reserved


DISCLAIMER:
This book covers the topics most likely to appear on the exam, given its delivery technology,
performance-based questions, and time constraints.

We hope you will enjoy reading this ebook as much as we enjoyed writing it and formatting it for
distribution in PDF format.

You will probably think of other ideas that can enrich this material. If so, feel free to drop us a note
at admin@tecmint.com or one of our social network profiles:

http://twitter.com/tecmint

https://www.facebook.com/TecMint

https://plus.google.com/+Tecmint

In addition, if you find any typos or errors in this book, please let us know so that we can correct
them and improve the material. Questions and other suggestions are appreciated as well – we look
forward to hearing from you!

Important: All the commands used to perform administrative tasks (adding, updating, or removing
users / groups, changing permissions, managing packages, and so forth) should be preceded by
sudo if you are using Ubuntu.
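As a minimal sketch of this convention (the check below is our illustration, not part of the exam material), you can test whether you are already running as root before deciding if the sudo prefix is needed:

```shell
# Administrative commands such as useradd or yum update are run as root
# directly on CentOS; on Ubuntu the same commands are prefixed with sudo.
# id -u prints the numeric user ID; 0 means root.
if [ "$(id -u)" -eq 0 ]; then
    echo "Running as root: no sudo prefix needed"
else
    echo "Not root: prefix administrative commands with sudo"
fi
```

Throughout the rest of the book, commands are shown without the prefix; add sudo yourself where your distribution requires it.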

Last, but not least, please consider buying your exam voucher using the following links to earn us a
small commission. This will help us keep this book updated.

Become a Linux Certified System Administrator at Training.LinuxFoundation.org!


Become a Linux Certified Engineer at Training.LinuxFoundation.org!

Table of Contents
Chapter 1: How to Use Git Version Control System in Linux...........................................................12
Learn Version Control with Git.................................................................................................13
Create a New Git Repository.......................................................................................................14
Clone a Git Repository..................................................................................................................16
Check a Git Status Summary........................................................................................................18
Git Stage Changes and Commit....................................................................................................18
Publish Local Commits to Remote Git Repository......................................................................19
Create a New Git Branch...............................................................................................................20
Merge Changes From One Branch to Another..............................................................................21
Download Changes From Remote Central Repository..................................................................21
Inspect Git Repository and Perform Comparisons.......................................................................22
Summary........................................................................................................................................23

Chapter 2: Processing Text Streams in Linux.....................................................................................24


Using sed........................................................................................................................................24
uniq Command...............................................................................................................................30
grep Command..............................................................................................................................32
tr Command Usage.......................................................................................................................32
cut Command Usage.....................................................................................................................35
Summary........................................................................................................................................37

Chapter 3: How to Run Multiple Commands on Multiple Linux Servers.........................................38


Install PSSH or Parallel SSH on Linux.........................................................................................38
How do I Use pssh?.......................................................................................................................39
Run Command on Multiple Servers using pssh.............................................................................39
Summary........................................................................................................................................41

Chapter 4: How to Monitor System Usage, Outages and Troubleshoot Linux Servers ....................42
Storage space utilization................................................................................................................42
Example 1: Reporting disk space usage in bytes and human-readable format...........42
Example 2: Inspecting inode usage by file system in human-readable format with:. 42

Example 3: Finding and / or deleting empty files and directories..............................43
Example 4: Examining disk usage by directory.........................................................44
Memory and CPU utilization........................................................................................................44
Example 6: Inspecting physical memory usage..........................................................46
A closer look at Linux processes...................................................................................................46
Example 7: Displaying the whole process list in your system with ps (full standard
format).........................................................................................................................46
Example 8: Customizing and sorting the output of ps................................................47
Example 9: Pausing the execution of a running process and resuming it in the
background..................................................................................................................48
Example 10: Killing by force a process “gone wild”.................................................52
So… what happened / is happening?.............................................................................................52
Example 11: Examining logs for errors in processes..................................................53
Example 12: Examining the logs for hardware failures..............................................53
Summary.......................................................................................................................................54

Chapter 5: Network Performance, Security, and Troubleshooting.....................................................55


What services are running and why?.............................................................................................55
Investigating socket connections with ss.......................................................................................55
Example 1: Showing ALL TCP ports (sockets) that are open on our server..............55
Example 2: Displaying ALL active TCP connections with their timers.....................56
Example 3: Filtering connections by socket...............................................................56
Protecting against port scanning with nmap..................................................................................57
Example 4: Displaying information about open ports................................................57
Example 5: Displaying information about a specific port in a local or remote system
.....................................................................................................................................59
Example 7: Scanning several ports or hosts simultaneously......................................59
Reporting usage and performance on your network......................................................................60
1. Nmon Utility...........................................................................................................60
2. Vnstat Utility...........................................................................................................60
Transferring files securely over the network.................................................................................61
Example 8: Transferring files with scp (secure copy)................................................61
Example 9: Receiving files with scp (secure copy)....................................................61

Example 10: Sending and receiving files with SFTP.................................................62
Configuring SSH servers and Clients............................................................................................62
Configuring SSH Passwordless Login...........................................................................................63
Summary........................................................................................................................................64

Chapter 6: Monitor Linux Process Resource Usages.........................................................................65


Reporting Processors Statistics......................................................................................................65
Reporting Linux Processes............................................................................................................69
Setting Resource Limits on a Per-User Basis................................................................................71
Other Linux Process Management Tools.......................................................................................72
Linux Cron Management...............................................................................................................73
Summary........................................................................................................................................75

Chapter 7: Update the Kernel and Ensure the System is Bootable....................................................76


Checking Installed Kernel Version................................................................................................76
Upgrading Linux Kernel Version...................................................................................................77
Upgrading Kernel in CentOS....................................................................................................77
Upgrading Kernel in Ubuntu....................................................................................................77
Set Default Kernel Version...........................................................................................................78
Summary........................................................................................................................................78

Chapter 8: How to Use udev for Device Detection and Management...............................................79


Learn Basics of Udev in Linux......................................................................................................79
How to Work with Udev Rules in Linux......................................................................................84
Summary.......................................................................................................................................86

Chapter 9: SELinux and AppArmor...................................................................................................87


Introduction to SELinux and How to Use it on CentOS 7.............................................................87
EXAMPLE 1: Changing the default port for the sshd daemon..................................88
EXAMPLE 2: Choosing a DocumentRoot outside /var/www/html for a virtual host
.....................................................................................................................................91
Introduction to AppArmor and How to Use it on Ubuntu.............................................................92
Summary........................................................................................................................................95

Chapter 10: User Management, Special Attributes, and PAM...........................................................96


Adding User Accounts...................................................................................................................96

Understanding /etc/passwd.......................................................................................................96
Understanding /etc/group.........................................................................................................97
Modifying User Accounts..............................................................................................................97
Deleting User Accounts..............................................................................................................100
Group Management.....................................................................................................................101
Special File Permissions.........................................................................................................101
SETUID..................................................................................................................................102
SETGID..................................................................................................................................102
STICKY BIT...........................................................................................................................103
Special File Attributes.............................................................................................................104
Accessing the root Account Using sudo......................................................................................105
PAM (Pluggable Authentication Modules)..................................................................................107
Summary......................................................................................................................................110

Chapter 11: Install OpenLDAP Server for Centralized Authentication............................................111


Installing LDAP Server................................................................................................................112
Configuring LDAP Server...........................................................................................................113
Configuring LDAP Database......................................................................................................114
Summary......................................................................................................................................117

Chapter 12: Install OpenLDAP Client for Centralized Authentication...........................................118


Installing LDAP Client in Ubuntu...............................................................................................118
Configure LDAP Client in CentOS 7..........................................................................................127
Summary......................................................................................................................................128

Chapter 13: How to Configure and Use PAM in Linux...................................................................129


How to Check if a Program is PAM-aware....................................................................................129
How to Configure PAM in Linux................................................................................................129
Understanding PAM Management Groups and Control-flags.....................................................130
How to Restrict root Access to SSH Service Via PAM..............................................................131
How to Configure Advanced PAM in Linux...................................................................................132
Summary......................................................................................................................................134

Chapter 14: How to Create SSH Tunneling or Port Forwarding in Linux.......................................135

Local SSH Port Forwarding........................................................................................................135
Remote SSH Port Forwarding.....................................................................................................139
Dynamic SSH Port Forwarding...................................................................................................140
Summary......................................................................................................................................141

Chapter 15: How to Install and Configure Firewalld.......................................................................141


The Basics About Firewalld.........................................................................................................141
Understanding Important Firewalld Features..............................................................................142
Installing Firewalld......................................................................................................................143
Managing Firewalld....................................................................................................................143
Working with Firewalld Zones....................................................................................................144
Enable or Disable Ports in Firewalld...........................................................................................146
Enable or Disable Services in Firewalld......................................................................................146
Enable or Disable IP Masquerading Using Firewalld..................................................................147
Enable or Disable ICMP Requests in Firewalld.............................................................................148
Pass Raw iptables Rules in Firewalld..........................................................................................148
Using Rich Language in Firewalld..............................................................................................149
Enable or Disable Panic Mode in Firewalld................................................................................149
Lockdown Firewalld....................................................................................................................152
Summary.....................................................................................................................................153

Chapter 16: How to Setup Apache with Name-Based Virtual Hosting with SSL Certificate..........153
Installing Apache Web Server......................................................................................................153
Configuring Apache.....................................................................................................................154
Serving Pages in a Standalone Web Server.................................................................................155
Restrict Access to a Web Page with Apache................................................................................156
Setting Up Name-Based Virtual Hosts........................................................................................157
Installing and Configuring SSL with Apache..............................................................................160
Summary......................................................................................................................................164

Chapter 17: How to Setup Nginx with Name-Based Virtual Hosting with SSL Certificate............165
Installing Nginx Web Server........................................................................................................165
Configuring Nginx Web Server...................................................................................................169
Serving Pages in a Standalone Web Server.................................................................................169
Restrict Access to a Web Page with Nginx................................................................................172

Setting Up Name-Based Virtual Hosts........................................................................................173
Installing and Configuring SSL with Nginx................................................................................175
Summary......................................................................................................................................179

Chapter 18: Setting Up Time Synchronization Server NTP.............................................................180


Install and Configure NTP...........................................................................................................180
Summary......................................................................................................................................182

Chapter 19: Setting Up Centralized Log Server with Rsyslog..........................................................182


Installing and Configuring Rsyslog Server..................................................................................183
Installing and Configuring Rsyslog Client..................................................................................186
Monitor Remote Logging on the Rsyslog Server...........................................................................188
Summary......................................................................................................................................189

Chapter 20: Setting Up DHCP Server and Client.............................................................................190


Installing DHCP Server...............................................................................................................190
Configuring DHCP Server...........................................................................................................191
Configuring DHCP Clients..........................................................................................................193
DHCP Client Setup on CentOS................................................................................194
DHCP Client Setup on Ubuntu.................................................................................194
Summary......................................................................................................................................196

Chapter 21: Setting Up Mail Server.................................................................................................197


Installing Mail Server..................................................................................................................197
The Process of Sending and Receiving Email Messages............................................................198
Configuring Postfix Mail Server - SMTP....................................................................................200
Restricting Access to SMTP Server.............................................................................................202
Configuring Dovecot......................................................................................................................202
Configuring Mail Client for Sending and Receiving Emails.......................................................204
Summary......................................................................................................................................208

Chapter 22: Setting Up Squid HTTP Proxy Server..........................................................................209


How Proxy Server Works............................................................................................................209
What is Squid Proxy....................................................................................................................210
Installing Squid Server.................................................................................................................210

Configuring Squid as an HTTP Proxy.........................................................................................211
Add Squid ACLs.....................................................................................................................211
Open Ports in Squid Proxy......................................................................................................212
Squid Proxy Client Authentication.........................................................................................212
Block Websites on Squid Proxy..............................................................................................216
Configure Client to Use Squid Proxy.....................................................................................217
Verifying Client Accessing Internet........................................................................................217
Restricting Access by Client...................................................................................................218
Fine Tuning Squid Proxy........................................................................................................219
Restricting Access by User Authentication.............................................................................220
Using Cache to Speed Up Data Transfer................................................................................221
Configuring Squid Proxy for CLI Browsers...........................................................................223
Summary......................................................................................................................................224

Chapter 23: Setting Up SquidGuard for Squid Proxy......................................................................224


Blacklists – The Basics................................................................................................................225
Installing Blacklists......................................................................................................................226
Removing Restrictions.................................................................................................................229
Whitelisting Specific Domains and URLs...................................................................................230
Summary......................................................................................................................................231

Chapter 24: Implement and Configure a PXE Boot Server on CentOS 7........................................231
Install and Configure DNSMASQ Server...................................................................................232
Installing SysLinux Bootloaders..................................................................................................234
Installing TFTP-Server...............................................................................................................234
Setting Up PXE Configuration....................................................................................................234
Adding CentOS 7 Boot Images to PXE.......................................................................................236
Creating CentOS 7 Local Mirror Installation Source..................................................................236
Testing FTP Installation Source...................................................................................................237
Configure Clients to Boot from PXE Network............................................................................240
Summary......................................................................................................................................243

Chapter 25: Implement and Configure a PXE Boot Server on Ubuntu...........................................244


Install and Configure DNSMASQ Server...................................................................................244
Install TFTP Netboot Files...........................................................................................................245

Prepare Local Installation Source Files......................................................................................246
Setup PXE Server Configuration File..........................................................................................247
Install Ubuntu with Local Sources via PXE................................................................................249
Summary......................................................................................................................................256

Chapter 26: Setting Up a Caching DNS Server................................................................................257


Introducing Name Resolution......................................................................................................257
Installing and Configuring a DNS Server....................................................................................258
Configuring DNS Zones.................................................................................................................260
Testing the DNS Server...............................................................................................................263
Summary......................................................................................................................................266

Chapter 27: Logical Volume Management – LVM..........................................................................267


Creating physical volumes, volume groups, and logical volumes...............................................267
Resizing logical volumes and extending volume groups.............................................................269
Mounting logical volumes on boot and on demand.....................................................................270
Summary .....................................................................................................................................271

Chapter 28: Setting Up Network Share (Samba & NFS) Filesystems.............................................272


Mounting Filesystem...................................................................................................................272
Mount options..............................................................................................................................273
Unmounting Devices...................................................................................................................274
Mounting Networked Filesystems...............................................................................................275
Installing and Mounting Samba Share....................................................................................275
Installing and Mounting NFS Share........................................................................................277
Mounting Filesystems Persistently..............................................................................................277
Mount Examples..........................................................................................................................278
Summary......................................................................................................................................278

Chapter 29: Configure and Maintain High Availability/Clustering..................................................279


Configuring Local DNS Settings on Each Server.......................................................................279
Installing Nginx Web Server........................................................................................................280
Installing and Configuring Corosync and Pacemaker.................................................................281
Creating the Cluster.....................................................................................................................281
Configuring Cluster.....................................................................................................................283
Adding a Cluster Service.............................................................................................................284

© 2016-2019 Tecmint.com – Last revised: January 2019 – All rights reserved


Testing High Availability/Clustering...........................................................................................285
Summary......................................................................................................................................287

Chapter 30: Install, Create and Manage LXC (Linux Containers)...................................................288


Installing LXC Virtualization......................................................................................................288
Create and Manage LXC Containers..........................................................................................289
Summary......................................................................................................................................292

Chapter 31: Installing and Configuring a Database Server..............................................................293


Installing and Securing a MariaDB Server..................................................................................293
Configuring the Database Server.................................................................................................294
Checking and Tuning Database Configuration............................................................................296
Summary......................................................................................................................................297

Chapter 32: Turn a Linux Server into a Router................................................................................299


IP and Network Device Configuration........................................................................................299
Summary......................................................................................................................................305

Chapter 33: Managing and Configuring Virtual Machines and Containers.....................................306


Managing and configuring virtual machines...............................................................................306
CPU extensions.......................................................................................................................306
Virtualization tools..................................................................................................................306
Useful commands....................................................................................................................306
Managing and configuring containers.........................................................................................307
Installing Docker.....................................................................................................................307
Setting up an Apache container...............................................................................................308
Summary......................................................................................................................................309



Chapter 1: How to Use Git Version Control System
in Linux
Version Control (revision control or source control) is a way of recording changes to a file or
collection of files over time so that you can recall specific versions later.

A version control system (or VCS in short) is a tool that records changes to files on a filesystem.

There are many version control systems out there, but Git is currently the most popular and
frequently used, especially for source code management.

Version control can actually be used for nearly any type of file on a computer, not only source code.

Version control systems/tools offer several features that allow individuals or a group of people to:

• create versions of a project.


• track changes accurately and resolve conflicts.
• merge changes into a common version.
• rollback and undo changes to selected files or an entire project.
• access historical versions of a project to compare changes over time.
• see who last modified something that might be causing a problem.
• create a secure offsite backup of a project.
• use multiple machines to work on a single project and so much more.

A project under a version control system such as Git has three main sections, namely:

• a repository: a database for recording the state of or changes to your project files. It contains
all of the necessary Git metadata and objects for the new project. Note that this is normally
what is copied when you clone a repository from another computer on a network or remote
server.
• a working directory or area: stores a copy of the project files which you can work on (make
additions, deletions and other modification actions).
• a staging area: a file (known as the index under Git) within the Git directory that stores
information about changes that you are ready to commit (save the state of a file or set of
files) to the repository.

There are two main types of VCSs, with the main difference being the number of repositories:

• Centralized Version Control Systems (CVCSs): here each project team member gets their
own local working directory, however, they commit changes to just a single central
repository.
• Distributed Version Control Systems (DVCSs): under this, each project team member gets
their own local working directory and Git directory where they can make commits. After an



individual makes a commit locally, other team members can’t access the changes until
he/she pushes them to the central repository. Git is an example of a DVCS.

In addition, a Git repository can be bare (repository that doesn’t have a working directory) or non-
bare (one with a working directory).

Shared (or public or central) repositories should always be bare – all Github repositories are bare.

Learn Version Control with Git

Git is a free and open source, fast, powerful, distributed, easy to use, and popular version control
system that is very efficient with large projects, and has a remarkable branching and merging
system.

It is designed to handle data more like a series of snapshots of a mini filesystem, which is stored in
a Git directory.

The workflow under Git is very simple: you make modifications to files in your working directory,
then selectively add just those files that have changed, to the staging area, to be part of your next
commit.

Once you are ready, you do a commit, which takes the files from staging area and saves that
snapshot permanently to the Git directory.

To install Git in Linux, use the appropriate command for your distribution of choice:

$ sudo apt install git [On Debian/Ubuntu]


$ sudo yum install git [On CentOS/RHEL]

After installing Git, it is recommended that you tell Git who you are by providing your full name
and email address, as follows:

$ git config --global user.name "Aaron Kili"


$ git config --global user.email "aaronkilik@gmail.com"

To check your Git settings, use the following command.

$ git config --list



Create a New Git Repository
Shared repositories or centralized workflows are very common and that is what we will demonstrate
here.
For example, we assume that you have been tasked with setting up a remote central repository for
system administrators/programmers from various departments in your organization, to work on a
project called bashscripts, which will be stored under /projects/scripts/ on the server.
SSH into the remote server and create the necessary directory, create a group called sysadmins (add
all project team members to this group, e.g. user admin), and set the appropriate permissions on this
directory.

# mkdir -p /projects/scripts/
# groupadd sysadmins
# usermod -aG sysadmins admin
# chown :sysadmins -R /projects/scripts/
# chmod 770 -R /projects/scripts/

Then initialize a bare project repository.

# git init --bare /projects/scripts/bashscripts

At this point, you have successfully initialized a bare Git directory which is the central storage
facility for the project.
Try to do a listing of the directory to see all the files and directories in there:

# ls -la /projects/scripts/bashscripts/



Clone a Git Repository
Now clone the remote shared Git repository to your local computer via SSH (you can also clone
via HTTP/HTTPS if you have a web server installed and appropriately configured, as is the case
with most public repositories on Github), for example:

$ git clone ssh://admin@remote_server_ip/projects/scripts/bashscripts

To clone it to a specific directory (~/bin/bashscripts), use the command below.

$ git clone ssh://admin@remote_server_ip/projects/scripts/bashscripts ~/bin/bashscripts

You now have a local instance of the project in a non-bare repository (with a working directory).
You can create the initial structure of the project (i.e. add a README.md file, and sub-directories
for different categories of scripts, e.g. recon to store reconnaissance scripts, sysadmin to store
sysadmin scripts, etc.):

$ cd ~/bin/bashscripts/
$ ls -la



Check a Git Status Summary
To display the status of your working directory, use the status command, which shows you any
changes you have made: which files are not being tracked by Git, which changes have been
staged, and so on.

$ git status

Git Stage Changes and Commit


Next, stage all the changes using the add command with the -A switch, then make the initial commit.
The -a flag instructs commit to automatically stage any tracked files that have been modified, and -m
is used to specify a commit message:

$ git add -A
$ git commit -a -m "Initial Commit"
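The add-and-commit cycle above can be tried safely in a throwaway repository. The sketch below assumes git is installed and uses a temporary directory and a hypothetical hello.sh file:

```shell
# A disposable repository to practice staging and committing (throwaway paths)
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

# Identity for this repository only (hypothetical values)
git config user.email "demo@example.com"
git config user.name "Demo User"

echo 'echo "hello"' > hello.sh   # a sample file to track
git add -A                       # stage all changes
git commit -q -m "Initial Commit"
git log --oneline                # shows the single commit just made
```

Because everything happens under a mktemp directory, you can experiment freely and simply delete the directory afterwards.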



Publish Local Commits to Remote Git Repository
As the project team lead, now that you have created the project structure, you can publish the
changes to the central repository using the push command as shown.

$ git push origin master

Right now, your local git repository should be up-to-date with the project central repository (origin).
You can confirm this by running the status command once more.

$ git status

You can also inform your colleagues to start working on the project by cloning the repository to their
local computers.



Create a New Git Branch
Branching allows you to work on a feature of your project or fix issues quickly without touching the
codebase (master branch). To create a new branch and then switch to it, use
the branch and checkout commands respectively.

$ git branch latest


$ git checkout latest

Alternatively, you can create a new branch and switch to it in one step using the checkout
command with the -b flag.

$ git checkout -b latest

You can also create a new branch based on another branch, for instance.

$ git checkout -b latest master

To check which branch you are in, use the branch command (an asterisk character indicates the active
branch):

$ git branch

After creating and switching to the new branch, make some changes under it and do some commits.

$ vim sysadmin/topprocs.sh
$ git status
$ git add sysadmin/topprocs.sh
$ git commit -a -m 'modified topprocs.sh'



Merge Changes From One Branch to Another
To merge the changes under the latest branch into the master branch, switch to the master branch and
do the merge.

$ git checkout master


$ git merge latest

If you no longer need a particular branch, you can delete it using the -d switch.

$ git branch -d latest

Download Changes From Remote Central Repository


Assuming your team members have pushed changes to the central project repository, you can
download any changes to your local instance of the project using the pull command.

$ git pull origin


OR
$ git pull origin master #if you have switched to another branch



Inspect Git Repository and Perform Comparisons
In this last section, we will cover some useful Git features that keep track of all activities that
happened in your repository, thus enabling you to view the project history.
The first feature is Git log, which displays commit logs:

$ git log

Another important feature is the show command which displays various types of objects (such as
commits, tags, trees etc..):

$ git show



The third vital feature you need to know is the diff command, used to compare or show the difference
between branches, display changes between the working directory and the index, changes between
two files on disk, and so much more.
For instance to show the difference between the master and latest branch, you can run the following
command.

$ git diff master latest

Summary
Git allows a team of people to work together using the same file(s), while recording changes to the
file(s) over time so that they can recall specific versions later.

This way, you can use Git for managing source code, configuration files or any file stored on a
computer. You may want to refer to the Git Online Documentation for further documentation.



Chapter 2: Processing Text Streams in Linux
Linux treats the input to and the output from programs as streams (or sequences) of characters. To
begin understanding redirection and pipes, we must first understand the three most important types
of I/O (Input and Output) streams, which are in fact special files (by convention in UNIX and
Linux, data streams and peripherals, or device files, are also treated as ordinary files).

The difference between > (redirection operator) and | (pipeline operator) is that while the first
connects a command with a file, the latter connects the output of a command with another
command.

# command > file


# command1 | command2

Since the redirection operator creates or overwrites files silently, we must use it with extreme
caution, and never confuse it with a pipeline. One advantage of pipes on Linux and UNIX systems
is that there is no intermediate file involved with a pipe – the stdout of the first command is not
written to a file and then read by the second command.
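To make the difference concrete, here is a minimal, self-contained comparison (using a hypothetical file under /tmp):

```shell
# Redirection: stdout goes to a file, which is created or overwritten silently
printf 'one\ntwo\nthree\n' > /tmp/demo_lines.txt
wc -l < /tmp/demo_lines.txt        # counts the 3 lines from the file

# Pipe: stdout of the first command feeds the second directly, no file involved
printf 'one\ntwo\nthree\n' | wc -l # also reports 3
```

Both report the same count, but only the redirection leaves a file behind on disk.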

For the following practice exercises we will use the poem “A happy child” (anonymous author).

Using sed
The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used
to perform basic text transformations on an input stream (a file or input from a pipeline).
The most basic (and popular) usage of sed is the substitution of characters. We will begin by
changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output
to ahappychild2.txt.



The g flag indicates that sed should perform the substitution for all instances of term on every line
of file. If this flag is omitted, sed will replace only the first occurrence of term on each line.

Basic syntax:

# sed 's/term/replacement/flag' file

Our example:

# sed 's/y/Y/g' ahappychild.txt > ahappychild2.txt

Should you want to search for or replace a special character (such as / , \ , & ) you need to escape
it, in the term or replacement strings, with a backslash.
For example, we will replace the word and with an ampersand. At the same time, we will replace
the word I with You when the former is found at the beginning of a line.

# sed 's/and/\&/g;s/^I/You/g' ahappychild.txt



In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent
the beginning of a line.
As you can see, we can combine two or more substitution commands (and use regular expressions
inside them) by separating them with a semicolon and enclosing the set inside single quotes.

Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we
will display the first 5 lines of /var/log/messages from Jun 8.

# sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p

Note that by default, sed prints every line. We can override this behaviour with the -n option and
then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern
(Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case).
Finally, when inspecting scripts or configuration files, it can be useful to view the code itself and
leave out comments. The following sed one-liner deletes (d) blank lines or those starting
with # (the | character indicates a boolean OR between the two regular expressions).

# sed '/^#\|^$/d' apache2.conf
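Since apache2.conf may not be present on your system, you can verify the one-liner on a small, hypothetical config fragment:

```shell
# Build a sample file containing comments and blank lines
printf '# main settings\n\nKeepAlive On\n\n# timeouts\nTimeout 300\n' > /tmp/demo.conf

# Delete lines that start with # or are empty (GNU sed's \| means OR in a BRE)
sed '/^#\|^$/d' /tmp/demo.conf
# prints:
# KeepAlive On
# Timeout 300
```

Note that \| alternation in a basic regular expression is a GNU sed extension; on other sed implementations you would use sed -E with a plain | instead.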



uniq Command
The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by
default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is
commonly used along with a preceding sort (which is used to sort lines of text files).
By default, sort takes the first field (separated by spaces) as key field. To specify a different key
field, we need to use the -k option.
Examples:
The du -sch /path/to/directory/* command returns the disk space usage per sub-directories and files
within the specified directory in human-readable format (also shows a total per directory), and does
not order the output by size, but by sub-directory and file name. We can use the following command
to sort by size.

# du -sch /var/* | sort -h



You can count the number of events in a log by date by telling uniq to perform the comparison
using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each
output line by the number of occurrences (-c) with the following command.

# cat /var/log/mail.log | uniq -c -w 6

Finally, you can combine sort and uniq (as they usually are). Consider the following file with a list
of donors, donation date, and amount.
Suppose we want to know how many unique donors there are. We will use the following command
to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines.

# cat sortuniq.txt | cut -d: -f1 | sort | uniq
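As a side note, the -k option mentioned earlier combines naturally with a custom delimiter. Assuming sortuniq.txt follows a hypothetical name:date:amount layout, you could sort the donations by amount like this:

```shell
# Build a small sample in the assumed name:date:amount layout
printf 'carol:2018-07-01:50\nalice:2018-06-10:200\nbob:2018-06-15:120\n' > /tmp/sortuniq.txt

# -t: sets the field delimiter, -k3 picks the third field, -n sorts numerically
sort -t: -k3 -n /tmp/sortuniq.txt
# prints:
# carol:2018-07-01:50
# bob:2018-06-15:120
# alice:2018-06-10:200
```

Without -n, sort would compare the amounts lexicographically, placing 120 before 50.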



grep Command
grep searches text files (or command output) for the occurrence of a specified regular expression
and outputs any line containing a match to standard output.
Examples:
Display the information from /etc/passwd for user gacanepa, ignoring case.

# grep -i gacanepa /etc/passwd

Show all entries in /etc whose names begin with rc followed by any single digit.

# ls -l /etc | grep 'rc[0-9]'

tr Command Usage
The tr command can be used to translate (change) or delete characters from stdin, and write the
result to stdout.
Examples:
Change all lowercase to uppercase in sortuniq.txt file.

# cat sortuniq.txt | tr '[:lower:]' '[:upper:]'



Squeeze the delimiter in the output of ls -l to only one space.

# ls -l | tr -s ' '

cut Command Usage


The cut command extracts portions of input lines (from stdin or files) and displays the result on
standard output, based on number of bytes (-b option), characters (-c), or fields (-f).
In this last case (based on fields), the default field separator is a tab, but a different delimiter can be
specified by using the -d option.
Examples:



Extract the user accounts and the default shells assigned to them from /etc/passwd (the -d option
allows us to specify the field delimiter, and the -f switch indicates which field(s) will be extracted):

# cat /etc/passwd | cut -d: -f1,7

Summing up, we will create a text stream consisting of the first and third non-blank fields of the
output of the last command. We will use grep as a first filter to check for sessions of user gacanepa,
then squeeze delimiters to only one space (tr -s ' ').
Next, we'll extract the first and third fields with cut, and finally sort by the second field (IP
addresses in this case), showing unique entries.

# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq

The above command shows how multiple commands and pipes can be combined so as to obtain



filtered data according to our desires. Feel free to also run it by parts, to help you see the output that
is pipelined from one command to the next (this can be a great learning experience, by the way!).

Summary
Although these examples (along with the rest of the examples in the current chapter) may not seem
very useful at first sight, they are a nice starting point to begin experimenting with commands that
are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your
questions and comments below – they will be much appreciated!



Chapter 3: How to Run Multiple Commands on
Multiple Linux Servers
If you are managing multiple Linux servers and want to run multiple commands on all of them but
have no idea how to do it, there is no need to worry: in this simple server management chapter, we
will show you how to run multiple commands on multiple Linux servers simultaneously.
To achieve this, you can use the pssh (parallel ssh) program, a command-line utility for executing
ssh in parallel on a number of hosts. With it, you can send input to all of the ssh processes from a
shell script.

The PSSH tool includes parallel versions of OpenSSH and related tools such as:

• pssh – is a program for running ssh in parallel on multiple remote hosts.


• pscp – is a program for copying files in parallel to a number of hosts.
• prsync – is a program for efficiently copying files to multiple hosts in parallel.
• pnuke – kills processes on multiple remote hosts in parallel.
• pslurp – copies files from multiple remote hosts to a central host in parallel.

These tools are good for System Administrators who find themselves working with large collections
of nodes on a network.

Install PSSH or Parallel SSH on Linux


We shall look at the steps to install the latest version of the PSSH program on distributions such
as CentOS and Ubuntu using the pip command.
The pip command is a small program for installing and managing Python software packages from the
Python Package Index.

On CentOS:

# yum install python-pip


# pip install pssh

On Ubuntu:

$ sudo apt-get install python-pip


$ sudo pip install pssh



How do I Use pssh?
When using pssh you need to create a hosts file containing the IP addresses and SSH port numbers
of the remote systems you need to connect to.

The lines in the host file are in the following form and can also include blank lines and comments.

192.168.0.10:22
192.168.0.11:22
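One quick way to create such a file (the host addresses and filename are the hypothetical ones used in the examples of this chapter):

```shell
# Write the hosts file used by the pssh examples below;
# comments and blank lines are allowed in this format
cat > pssh-hosts <<'EOF'
# web servers
192.168.0.10:22
192.168.0.11:22
EOF

# Sanity check: count the non-comment entries
grep -vc '^#' pssh-hosts
# prints: 2
```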

Run Command on Multiple Servers using pssh


You can execute any single command on different or multiple Linux hosts on a network by running
a pssh command. There are many options to use with pssh as described below:
We shall look at a few ways of executing commands on a number of hosts using pssh with different
options.

1. To read the hosts file, include the -h host_file_name or --hosts host_file_name option.


2. To include a default username on all hosts that do not define a specific user, use the
-l username or --user username option.
3. You can also display standard output and standard error as each host completes, by using
the -i or --inline option.
4. You may wish to make connections time out after the given number of seconds by including
the -t number_of_seconds option.
5. To save standard output to a given directory, you can use the -o /directory/path option.
6. To ask for a password and send it to ssh, use the -A option.

Let’s see a few examples of pssh command usage:

1. To execute echo "Hello TecMint" on the terminal of multiple Linux hosts as the root user, and be
prompted for the root user’s password, run the command below.

Important: Remember all the hosts must be included in the host file.

# pssh -h pssh-hosts -l root -A echo "Hello TecMint"



Note: In the above command, "pssh-hosts" is the file with the list of remote Linux servers’ IP
addresses and SSH port numbers on which you wish to execute commands.

2. To find out the disk space usage on multiple Linux servers on your network, you can run a single
command as follows.

# pssh -h pssh-hosts -l root -A -i "df -hT"

3. If you wish to know the uptime of multiple Linux servers at one go, then you can run the
following command.

# pssh -h pssh-hosts -l root -A -i "uptime"



You can view the manual page for the pssh command to discover many other options and ways of
using pssh.

# pssh --help

Summary
Parallel SSH or PSSH is a good tool to use for executing commands in an environment where
a System Administrator has to work with many servers on a network. It will make it easy for
commands to be executed remotely on different hosts on a network.



Chapter 4: How to Monitor System Usage,
Outages and Troubleshoot Linux Servers
Although Linux is very reliable, wise system administrators should find a way to keep an eye on the
system’s behavior and utilization at all times. Ensuring uptime as close to 100% as possible and the
availability of resources are critical needs in many environments. Examining the past and status of
the system will allow us to foresee and most likely prevent possible issues.

In this chapter we will present a list of a few tools that are available in most upstream distributions
to check on the system status, analyze outages, and troubleshoot ongoing issues. Specifically, of the
myriad of available data, we will focus on CPU, storage space and memory utilization, basic
process management, and log analysis.

Storage space utilization


There are 2 well-known commands in Linux that are used to inspect storage space usage: df and du.

The first one, df (which stands for disk free), is typically used to report overall disk space usage by
file system.
Example 1: Reporting disk space usage in bytes and human-readable format
Without options, df reports disk space usage in 1K blocks. With the -h flag it will display the same
information using MB or GB instead. Note that this report also includes the total size of each file
system, the used and available space, and the mount point of each storage device:

# df
# df -h

That’s certainly nice – but there’s another limitation that can render a file system unusable, and that
is running out of inodes. All files in a file system are mapped to an inode that contains its metadata.

Example 2: Inspecting inode usage by file system in human-readable format with:

# df -hTi



you can see the amount of used and available inodes:

According to the above image, there are 146 used inodes (1%) in /home, which means that you can
still create 226K files in that file system.

Example 3: Finding and / or deleting empty files and directories

Note that you can run out of storage space long before running out of inodes, and vice-versa. For
that reason, you need to monitor not only the storage space utilization but also the number of inodes
used by file system.
Use the following commands to find empty files or directories (which occupy 0B) that are using
inodes without a reason:

# find /home -type f -empty


# find /home -type d -empty

Also, you can add the -delete action at the end of each command if you also want to delete those
empty files and directories:

# find /home -type f -empty -delete


# find /home -type f -empty



As you can see, there are 142 used inodes now (4 fewer than before).

Example 4: Examining disk usage by directory

If the use of a certain file system is above a predefined percentage, you can use du (short for disk
usage) to find out which files are occupying the most space.
The example is given for /var, which as you can see in the first image above, is at 67% of its capacity.

# du -sch /var/*

Note that you can switch to any of the above sub-directories to find out exactly what’s in them and how
much each item occupies. You can then use that information to either delete some files if they are not
needed, or extend the size of the logical volume if necessary.

Memory and CPU utilization


The classic tool in Linux that is used to perform an overall check of CPU / memory utilization and
process management is the top command. In addition, top displays a real-time view of a running
system. There are other tools that could be used for the same purpose, such as htop, but I have settled
for top because it is installed out-of-the-box in virtually any Linux distribution.

Example 5: Displaying a live status of your system with top

To start top, simply type the following command in your command line, and hit Enter.

# top



Let’s examine a typical top output:

In rows 1 through 5 the following information is displayed:

1. The current time (8:41:32 pm) and uptime (7 hours and 41 minutes). Only one user is logged on
to the system, and the load average during the last 1, 5, and 15 minutes is 0.00, 0.01, and 0.05,
respectively. The load average is the average number of processes waiting for, or using, the CPU
over each interval: 0.00 means no processes were waiting for the CPU, while 0.01 means that, on
average, 0.01 processes were waiting for it. On a single-CPU system, a load average below 1 (0.65,
for example) means the CPU was idle roughly 35% of the time during the last 1, 5, or 15 minutes,
depending on where 0.65 appears.

2. Currently there are 121 processes in total (the complete listing starts at row 6 of top’s output).
Only 1 of them is running (top in this case, as you can see in the %CPU column) and the remaining
120 are “sleeping” in the background; they will remain in that state until something wakes them up.
How can you verify this? Open a mysql prompt and execute a couple of queries: you will notice
how the number of running processes increases.
Alternatively, you can open a web browser and navigate to any page that is being served by Apache
and you will get the same result. Of course, these examples assume that both services are installed
on your server.

3. us (time running user processes with unmodified priority), sy (time running kernel processes), ni
(time running user processes with modified priority), wa (time waiting for I/O completion), hi (time
spent servicing hardware interrupts), si (time spent servicing software interrupts), st (time stolen
from the current vm by the hypervisor – only in virtualized environments).

4. Physical memory usage.

5. Swap space usage.



Example 6: Inspecting physical memory usage

To inspect RAM and swap usage you can also use the free command.

# free

Of course, you can also use the -m (MB) or -g (GB) switches to display the same information in a
more readable form:

# free -m

Either way, you need to be aware that the kernel reserves as much memory as possible and makes it
available to processes when they request it. In particular, the “-/+ buffers/cache” line shows the
actual values after this I/O cache is taken into account: that is, the amount of memory used by
processes and the amount available to other processes (in this case, 232 MB used and 270 MB
available, respectively). When processes need this memory, the kernel will automatically decrease
the size of the I/O cache.
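
If a script needs the “available” figure rather than the raw free output, it can read it straight from /proc/meminfo on kernels that expose the MemAvailable field (3.14 and later); a sketch:

```shell
# MemAvailable is reported in kB; convert to MB
awk '/^MemAvailable/ {printf "%d MB available\n", $2 / 1024}' /proc/meminfo
```
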

A closer look at Linux processes


At any given time, there are many processes running on our Linux system. There are two tools that we
will use to monitor processes closely: ps and pstree.
Example 7: Displaying the whole process list in your system with ps (full standard format)
Using the -e and -f options combined into one (-ef) you can list all the processes that are currently
running on your system. You can pipe this output to other tools, such as grep to narrow down the
output to your desired process(es):



# ps -ef | grep -i squid | grep -v grep

The process listing above shows the following information: the owner of the process, the PID, the
PPID (the parent process ID), processor utilization, the time when the command started, the tty (a ?
indicates the process is a daemon), the cumulative CPU time, and the command associated with the process.
Example 8: Customizing and sorting the output of ps
However, perhaps you don’t need all that information, and would like to show the owner of the
process, the command that started it, its PID and PPID, and the percentage of memory it’s currently
using - in that order, and sort by memory use in descending order (note that ps by default is sorted
by PID).

# ps -eo user,comm,pid,ppid,%mem --sort -%mem

where the minus sign in front of %mem indicates sorting in descending order.



If for some reason a process starts taking too many system resources and is likely to jeopardize the
overall functionality of the system, you will want to stop or pause its execution by sending it one of
the following signals using the kill program.

Other reasons why you would consider doing this is when you have started a process in the
foreground but want to pause it and resume in the background.

Signal name  Number  Description
SIGTERM      15      Kill the process gracefully.
SIGINT       2       Sent when we press Ctrl + C. It aims to interrupt the process, but the
                     process may ignore it.
SIGKILL      9       Also interrupts the process, but does so unconditionally (use with
                     care!), since a process cannot ignore it.
SIGHUP       1       Short for “Hang UP”, this signal instructs daemons to reread their
                     configuration file without actually stopping the process.
SIGTSTP      20      Pause execution and wait, ready to continue. Sent when we type the
                     Ctrl + Z key combination.
SIGSTOP      19      The process is paused and gets no more CPU cycles until it receives
                     SIGCONT.
SIGCONT      18      Tells the process to resume execution after having received either
                     SIGTSTP or SIGSTOP. Sent by the shell when we use the fg or bg
                     commands.

Example 9: Pausing the execution of a running process and resuming it in the background
When the normal execution of a certain process implies that no output will be sent to the screen
while it’s running, you may want to either start it in the background by appending an ampersand at
the end of the command:

# process_name &

or, once it has started running in the foreground, pause it and send it to the background with:

# Ctrl + Z
# kill -18 PID   # 18 = SIGCONT; kill -SIGCONT PID also works
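
The same pause-and-resume cycle can be rehearsed with a disposable job; the sleep command below is purely illustrative:

```shell
sleep 300 &            # start a long-running job in the background
pid=$!                 # remember its PID
kill -SIGSTOP "$pid"   # pause it (the same effect Ctrl + Z has on a foreground job)
kill -SIGCONT "$pid"   # resume it, still in the background
kill "$pid"            # finally, terminate it gracefully (SIGTERM)
```
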



Example 10: Killing by force a process that has “gone wild”
Please note that each distribution provides tools to gracefully stop / start / restart / reload common
services, such as service in SysV-based systems or systemctl in systemd-based systems. If a process
does not respond to those utilities, you can kill it by force by sending it the SIGKILL signal.

# ps -ef | grep apache


# kill -9 3821

So… what happened / is happening?


When there has been any kind of outage in the system (be it a power outage, a hardware failure, a
planned or unplanned interruption of a process, or any abnormality at all), the logs in /var/log are
your best friends to determine what happened or what could be causing the issues you’re facing.

# cd /var/log



Some of the items in /var/log are regular text files, others are directories, and yet others are
compressed files of rotated (historical) logs. You will want to check those with the word error in
their name but inspecting the rest can come in handy as well.
Example 11: Examining logs for errors in processes
Picture this scenario. Your LAN clients are unable to print to network printers. The first step to
troubleshoot this situation is going to /var/log/cups directory and see what’s in there. You can use
the tail command to display the last 10 lines of the error_log file, or tail -f error_log for a real-time
view of the log.

# cd /var/log/cups
# ls
# tail error_log

The above screenshot provides some helpful information to understand what could be causing your
issue. Note that following these steps, or correcting the malfunctioning process, may still not solve
the overall problem, but if you get used, right from the start, to checking the logs every time a
problem arises (be it a local or a network one), you will definitely be on the right track.
Example 12: Examining the logs for hardware failures
Although hardware failures can be tricky to troubleshoot, you should check the dmesg and
messages logs and grep for words related to the hardware part presumed faulty.

The image below is taken from /var/log/messages after looking for the word error using the
following command:

# grep -i error /var/log/messages



We can see that we’re having a problem with two storage devices: /dev/sdb and /dev/sdc, which in
turn cause an issue with the RAID array.
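
Keep in mind that rotated logs are usually compressed, so a plain grep will miss them; zgrep reads both plain and gzipped files. A self-contained sketch using throwaway files (the log messages are made up for illustration):

```shell
# Create a plain log and a gzip-compressed "rotated" log in a scratch directory
tmpdir=$(mktemp -d)
echo "kernel: I/O error on device sdb" > "$tmpdir/messages"
echo "old entry: disk error logged" | gzip > "$tmpdir/messages.1.gz"

# zgrep searches both the current and the rotated file transparently
zgrep -i "error" "$tmpdir"/messages*

rm -r "$tmpdir"
```
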

Summary
In this chapter we have explored some of the tools that can help you to always be aware of your
system’s overall status. In addition, you need to make sure that your operating system and installed
packages are updated to their latest stable versions. And never, ever, forget to check the logs!



Chapter 5: Network Performance, Security, and
Troubleshooting
A sound analysis of a computer network begins by understanding what the available tools are to
perform the task, how to pick the right one(s) for each step of the way, and finally, where to begin.

In this chapter we will review some well-known tools to examine the performance and increase the
security of a network, and what to do when things aren’t going as expected. Please note that this list
does not pretend to be comprehensive, so there may well be other useful utilities that could
complement it.

What services are running and why?


One of the first things that a system administrator needs to know about each system is what services
are running and why. With that information in hand, it is wise to disable all those that are not
strictly necessary and to avoid hosting too many services on the same physical machine.

For example, you need to disable your FTP server if your network does not require one (there are
more secure methods to share files over a network, by the way).

In addition, you should avoid having a web server and a database server in the same system. If one
component becomes compromised, the rest run the risk of getting compromised as well.

Investigating socket connections with ss


ss is used to dump socket statistics and shows information similar to netstat, though it can display
more TCP and state information than other tools. In addition, it is listed in man netstat as the
replacement for netstat, which is obsolete.

However, in this chapter we will focus on the information related to network security only.
Example 1: Showing ALL TCP ports (sockets) that are open on our server
All services running on their default ports (i.e. http on 80, mysql on 3306) are indicated by their
respective names. Others (obscured here for privacy reasons) are shown in their numeric form.

# ss -t -a



The first column shows the TCP state, while the second and third column display the amount of data
that is currently queued for reception and transmission. The fourth and fifth columns show the
source and destination sockets of each connection.

On a side note, you may want to check RFC 793 to refresh your memory about possible TCP states
because you also need to check on the number and the state of open TCP connections in order to
become aware of (D)DoS attacks.
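
A quick way to watch for that is to tally connections per TCP state; a sketch using ss with awk (the output depends on your system’s current connections):

```shell
# Count all TCP sockets by state (LISTEN, ESTAB, TIME-WAIT, ...), most common first
ss -tan | awk 'NR > 1 {print $1}' | sort | uniq -c | sort -rn
```
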
Example 2: Displaying ALL active TCP connections with their timers

# ss -t -o

In the output above, you can see that there are 2 established SSH connections. If you look at the
second field of the timer value, you will notice a value of 36 minutes in the first connection. That is
the amount of time until the next keepalive probe will be sent.

Since that connection is only being kept alive, you can safely assume that it is inactive, and you can
therefore kill the process after finding out its PID.

As for the second connection, you can see that it’s currently being used (as indicated by on).
Example 3: Filtering connections by socket
Suppose you want to filter TCP connections by socket. From the server’s point of view, you need to
check for connections where the source port is 80.

# ss -tn sport = :80



Protecting against port scanning with nmap
Port scanning is a common technique used by crackers to identify active hosts and open ports on a
network. Once a vulnerability is discovered, it is exploited to gain access to the system.

A wise sysadmin needs to check how his or her systems are seen by outsiders, and make sure
nothing is left to chance by auditing them frequently. That is called “defensive port scanning”.
Example 4: Displaying information about open ports
You can use the following command to scan which ports are open on your system or in a remote
host:

# nmap -A -sS [IP address or hostname]

The above command will scan the host for OS and version detection, port information, and
traceroute (-A). Finally, -sS performs a TCP SYN scan, preventing nmap from completing the 3-way
TCP handshake and thus typically leaving no logs on the target machine.

Before proceeding with the next example, please keep in mind that port scanning is not an illegal
activity. What IS illegal is using the results for a malicious purpose.

For example, the output of the above command run against the main server of a local university
returns the following (only part of the result is shown for sake of brevity):



As you can see, we discovered several anomalies that we would do well to report to the system
administrators at this local university.

This specific port scan operation provides all the information that can also be obtained by other
commands, such as:
Example 5: Displaying information about a specific port in a local or remote system

# nmap -p [port] [hostname or address]

Example 6: Showing traceroute to, and finding out version of services and OS type, hostname

# nmap -A [hostname or address]

Example 7: Scanning several ports or hosts simultaneously


You can also scan several ports (range) or subnets, as follows:

# nmap -p 21,22,80 192.168.0.0/24

You can check the man page for further details on how to perform other types of port scanning.
Nmap is indeed a very powerful and versatile network mapper utility, and you should be well
acquainted with it in order to defend the systems you’re responsible for against attacks that
originate from a malicious port scan by outsiders.

Reporting usage and performance on your network


Although there are several available tools to analyze and troubleshoot network performance, two of
them are very easy to learn and user friendly. To install both of them on CentOS, you will need to
enable the EPEL repository first.

1. Nmon Utility
nmon is a system performance monitoring and benchmarking tool. It can display CPU, memory,
network, disk, file system, NFS, top process, and resource (Linux version & processors) statistics.
Of course, we’re mainly interested in the network performance feature.

To install nmon, run the following command on your chosen distribution:

# yum update && yum install nmon # CentOS


# aptitude update && aptitude install nmon # Ubuntu



Make it a habit to look at the network traffic in real time to ensure that your system is capable of
supporting normal loads and to watch out for unnecessary traffic or suspicious activity.

2. Vnstat Utility
vnstat is a console-based network traffic monitor that keeps a log of hourly (daily or monthly as
well) network traffic for the selected interface(s).

# yum update && yum install vnstat # CentOS


# aptitude update && aptitude install vnstat # Ubuntu

After installing the package, you need to enable the monitoring daemon as follows:

# service vnstat start # SysV-based systems (Ubuntu)


# systemctl start vnstat # systemd-based systems (CentOS)

Once you have installed and enabled vnstat, you can initialize the database to record traffic for eth0
(or other NIC) as follows:

# vnstat -u -i eth0

As I have just installed vnstat in the machine that I’m using to write this chapter, I still haven’t
gathered enough data to display usage statistics:

The vnstatd daemon will continue running in the background and collecting traffic data. Until it
collects enough data to produce output, you can refer to the project’s web site to see what the traffic
analysis looks like.



Transferring files securely over the network
If you need to ensure security while transferring or receiving files over a network, and especially if
you need to perform that operation over the Internet, you will want to resort to one of these two
secure methods for file transfers (don’t even think about doing it over plain FTP!).
Example 8: Transferring files with scp (secure copy)
Use the -P flag if SSH on the remote host is listening on a port other than the default 22. The -p
switch will preserve the permissions of local_file after the transfer, which is made with the
credentials of remote_user on remote_host. You will need to make sure that
/absolute/path/to/remote/directory is writable by this user.

# scp -P XXXX -p local_file remote_user@remote_host:/absolute/path/to/remote/directory

Example 9: Receiving files with scp (secure copy)


You can also download files with scp from a remote host:

# scp remote_user@remote_host:myFile.txt /absolute/path/to/local/directory

Or even between two remote hosts (in this case, copy the file myFile.txt from remote_host1 to
remote_host2):

# scp remote_user1@remote_host1:/absolute/path/to/remote/directory1/myFile.txt
remote_user1@remote_host2:/absolute/path/to/remote/directory2/

Don’t forget to use the -P switch if SSH is listening on a port other than the default 22.

You can read more about SCP here.


Example 10: Sending and receiving files with SFTP
Unlike scp, sftp does not require knowing the location of the file that we want to download or send
in advance, since it lets you browse the remote file system interactively.

This is the basic syntax to connect to a remote host using SFTP:

# sftp -oPort=XXXX username@host

where XXXX represents the port where SSH is listening on host, which can be either a hostname or
its corresponding IP address. You can disregard the -oPort flag if SSH is listening on its default port
(22).

Once the connection is successful, you can issue the following commands to send or receive files:



get -Pr [remote file or directory] # Receive files
put -r [local file or directory] # Send files

In both cases, the -r switch is used to receive or send files recursively. In the first case, the -P
option will also preserve the original file permissions.

To close the connection, simply type “exit” or “bye”. You can read more about sftp here.

Configuring SSH servers and Clients


As a system administrator you will often have to log on to remote systems to perform a variety of
administration tasks using a terminal emulator. You will rarely sit in front of a real (physical)
terminal, so you need to set up a way to log on remotely to the machines that you will be asked to
manage.

In fact, that may be the last thing that you will have to do in front of a physical terminal. For
security reasons, using Telnet for this purpose is not a good idea, as all traffic goes through the wire
in unencrypted, plain text.

To begin, you will have to install the openssh, openssh-clients and openssh-servers packages. Note
that it’s a good idea to install the server counterparts as you may want to use the same machine as
both client and server at some point or another.

After installation, there are a couple of basic things you need to consider if you want to secure
remote access to your SSH server. The following settings should be present in the
/etc/ssh/sshd_config file.

1. Change the port where the sshd daemon will listen on from 22 (the default value) to a high port
(2000 or greater), but first make sure the chosen port is not being used.

For example, let’s suppose you choose port 2500. Use netstat to check whether the chosen port is
being used or not:

# netstat -npltu | grep 2500

If netstat does not return anything, you can safely use port 2500 for sshd, and you should change the
Port setting in the configuration file as follows:

Port 2500

2. Only allow protocol 2:

Protocol 2



3. Configure the authentication timeout to 2 minutes, do not allow root logins, and restrict to a
minimum the list of users which can login via ssh:

LoginGraceTime 2m
PermitRootLogin no
AllowUsers gacanepa

4. It is strongly recommended to use key-based instead of password authentication:

PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes

At this point you will need to restart the SSH server to apply the above changes.

Configuring SSH Passwordless Login


#4 assumes that you have already created a key pair with your user name on your client machine
and copied it to your server as explained next:

In this example we will setup SSH password-less automatic login from server 192.168.0.12 as user
tecmint to 192.168.0.11 with user sheena.

First login into server 192.168.0.12 with user tecmint and generate a pair of public keys using
following command.

# ssh-keygen -t rsa

Next, from 192.168.0.12 connect to 192.168.0.11 using sheena as user and create .ssh directory
under /home/sheena:

$ ssh sheena@192.168.0.11
$ mkdir -p .ssh

We are almost there. From 192.168.0.12 we will now upload the newly generated public key
(id_rsa.pub) to server 192.168.0.11 under sheena‘s .ssh directory as a file named authorized_keys.

# cat .ssh/id_rsa.pub | ssh sheena@192.168.0.11 'cat >> .ssh/authorized_keys'

For security, don’t forget to set the following permissions on authorized_keys:

$ chmod 700 .ssh; chmod 640 .ssh/authorized_keys
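
The whole sequence (key generation, authorized_keys creation, permissions) can be rehearsed locally in a scratch directory before touching the real server; everything below is throwaway and purely illustrative:

```shell
tmp=$(mktemp -d)

# Generate a key pair non-interactively (empty passphrase, for the demo only)
ssh-keygen -t rsa -N "" -f "$tmp/id_rsa" -q

# Simulate the remote side: create .ssh, append the public key, fix permissions
mkdir -p "$tmp/.ssh"
cat "$tmp/id_rsa.pub" >> "$tmp/.ssh/authorized_keys"
chmod 700 "$tmp/.ssh"
chmod 640 "$tmp/.ssh/authorized_keys"

ls -l "$tmp/.ssh"
rm -r "$tmp"
```
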



From now onwards you can log into 192.168.0.11 (server) as sheena from tecmint@192.168.0.12
(client) without password:

$ ssh sheena@192.168.0.11

Summary
You may want to complement what we have covered in this chapter with what we’ve already
learned in other chapters. If you know your systems well, you will be able to easily detect malicious
or suspicious activity when the numbers show unusual activity without an apparent reason. You will
also be able to plan for network resources if you’re expecting a sudden increase in their use.



Chapter 6: Monitor Linux Process Resource
Usages
Every Linux system administrator needs to know how to verify the integrity and availability of
hardware, resources, and key processes. In addition, setting resource limits on a per-user basis must
also be a part of his / her skill set.

In this chapter we will explore a few ways to ensure that the system, both hardware and software, is
behaving correctly to avoid potential issues that may cause unexpected production downtime and
monetary loss. Keep in mind that the files in /var/log are your best friends for this.

Reporting Processors Statistics


With mpstat you can view the activities for each processor individually, or for the system as a
whole, both as a one-time snapshot or dynamically. To use this tool, you will need to install sysstat:

# yum update && yum install sysstat # CentOS


# aptitude update && aptitude install sysstat # Ubuntu

Once you have installed this tool, you can use it to generate reports of processor statistics.

To display 3 global reports of CPU utilization (-u) for all CPUs (as indicated by -P ALL) at a 2-
second interval, do:

# mpstat -P ALL -u 2 3

To view the same statistics for a specific CPU (CPU 0 in the following example), use:

# mpstat -P 0 -u 2 3

The output of the above commands shows these columns:

• CPU: Processor number as an integer, or the word all as an average for all processors.

• %usr: Percentage of CPU utilization while running user level applications.

• %nice: Same as %usr, but with nice priority.

• %sys: Percentage of CPU utilization that occurred while executing kernel applications. This
does not include time spent dealing with interrupts or handling hardware.

• %iowait: Percentage of time when the given CPU (or all) was idle while the system had an
outstanding disk I/O request. A more detailed explanation (with examples) can be found at
http://veithen.github.io/2013/11/18/iowait-linux.html.



• %irq: Percentage of time spent servicing hardware interrupts.

• %soft: Same as %irq, but with software interrupts.

• %steal: Percentage of time spent in involuntary wait (steal or stolen time) while the
hypervisor was servicing another guest competing for the CPU(s). This value should be kept
as small as possible. A high value in this field means the virtual machine is stalling, or soon
will be.

• %guest: Percentage of time spent running a virtual processor.

• %idle: Percentage of time when the CPU(s) were not executing any tasks. If you observe a low
value in this column, that is an indication of the system being placed under a heavy load. In
that case, you will need to take a closer look at the process list, as we will discuss in a
minute, to determine what is causing it.

To place the processor under a somewhat high load, run the following commands and then
execute mpstat (as indicated) in a separate terminal:

# dd if=/dev/zero of=test.iso bs=1G count=1


# mpstat -u -P 0 2 3
# ping -f localhost # Interrupt with Ctrl + C after mpstat below completes
# mpstat -u -P 0 2 3

Finally, compare to the output of mpstat under “normal” circumstances:

As you can see in the image above, CPU 0 was under a heavy load during the first two examples, as
indicated by the %idle column.



In the next section we will discuss how to identify these resource-hungry processes, how to obtain
more information about them, and how to take appropriate action.

Reporting Linux Processes


To list processes sorting them by CPU usage, we will use the well-known ps command with the -eo
(to select all processes with user-defined format) and --sort (to specify a custom sorting order)
options, like so:

# ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu

The above command will only show the PID, PPID, the command associated with the process, and
the percentage of CPU and RAM usage sorted by the percentage of CPU usage in descending order.

When executed during the creation of the .iso file, here’s the first few lines of the output:

Once we have identified a process of interest (such as the one with PID=2822), we can navigate to
/proc/PID (/proc/2822 in this case) and do a directory listing.

This directory is where several files and sub-directories with detailed information about this process
are kept while it is running.

For example:

1. /proc/2822/io contains IO statistics for the process (number of characters and bytes read and
written, among others, during IO operations).

2. /proc/2822/attr/current shows the current SELinux security attributes of the process.

3. /proc/2822/cgroup describes the control groups (cgroups for short) to which the process
belongs if the CONFIG_CGROUPS kernel configuration option is enabled, which you can
verify with:



# cat /boot/config-$(uname -r) | grep -i cgroups

If the option is enabled, you should see the following output:

CONFIG_CGROUPS=y

Using cgroups you can manage the amount of allowed resource usage on a per-process basis as
explained in Chapters 1 through 4 of the Red Hat Enterprise Linux 7 Resource Management guide,
and in the Control Groups section of the Ubuntu 14.04 Server documentation.
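
You can also inspect the cgroup membership of any process by reading its cgroup file under /proc; here the current process is used as a convenient example:

```shell
# Each line shows a hierarchy ID, its controllers, and the cgroup path
cat /proc/self/cgroup
```
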

4. /proc/2822/fd is a directory that contains one symbolic link for each file descriptor the
process has opened. The following image shows this information for the process that was
started in tty1 (the first terminal) to create the .iso image:

The above image shows that stdin (file descriptor 0), stdout (file descriptor 1), and stderr (file
descriptor 2) are mapped to /dev/zero, /root/test.iso, and /dev/tty1, respectively.
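
The same can be tried on any live process; /proc/self points to the process reading it, so the following lists the file descriptors of the ls process itself:

```shell
# One symlink per open file descriptor (0, 1, 2, plus the fd directory itself)
ls -l /proc/self/fd
```
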

More information about /proc can be found in “The /proc filesystem” document kept and
maintained by Kernel.org, and in the Linux Programmer's Manual.

Setting Resource Limits on a Per-User Basis


If you are not careful and allow any user to run an unlimited number of processes, you may
eventually experience an unexpected system shutdown or get locked out as the system enters an
unusable state. To prevent this from happening, you should place a limit on the number of processes
users can start.

To do this, edit /etc/security/limits.conf and add the following line at the bottom of the file to set the
limit:



* hard nproc 10

The first field can be used to indicate either a user, a group, or all of them (*), whereas the second
field enforces a hard limit on the number of processes (nproc), 10 in this case. To apply the changes,
logging out and back in is enough.

Now, let’s see what happens if a certain user other than root (either a legitimate one or not) attempts
to start a shell fork bomb. Had we not implemented limits, this would initially launch two instances
of a function and then duplicate each of them in a never-ending loop, eventually bringing your
system to a crawl.

However, with the above restriction in place, the fork bomb does not succeed but the user will still
get locked out until the system administrator kills the process associated with it:

TIP: Other restrictions made possible by ulimit are documented in the limits.conf file.
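
You can check the limit currently in effect for your own session with the shell built-in ulimit; the -u switch reports the maximum number of user processes:

```shell
# Prints the per-user process limit for the current shell (a number, or "unlimited")
ulimit -u
```
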

Other Linux Process Management Tools


In addition to the tools discussed previously, a system administrator may also need to:

1. Modify the execution priority (use of system resources) of a process using renice. This
means that the kernel will allocate more or less system resources to the process based on the
assigned priority (a number commonly known as “niceness” in a range from -20 to 19). The
lower the value, the greater the execution priority. Regular users (other than root) can only
modify the niceness of processes they own to a higher value (meaning a lower execution
priority), whereas root can modify this value for any process, and may increase or decrease
it.

The basic syntax of renice is as follows:

renice [-n] <new priority> <UID, GID, PGID, or empty> identifier

If the argument after the new priority value is not present (empty), it is set to PID by default. In that
case, the niceness of process with PID=identifier is set to <new priority>.
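
A disposable job can be used to see renice in action; the sleep command and the niceness value of 10 below are illustrative only:

```shell
sleep 60 &                    # a throwaway background job
pid=$!
renice -n 10 -p "$pid"        # raise its niceness to 10 (lower execution priority)
ps -o pid,ni,comm -p "$pid"   # confirm the NI column now shows 10
kill "$pid"                   # clean up
```
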



2. Interrupt the normal execution of a process when needed. This is commonly known as
“killing” the process. Under the hood, this means sending the process a signal to finish its
execution properly and release any used resources in an orderly manner.

To kill a process, use the kill command as follows:

# kill PID

Alternatively, you can use pkill to terminate all processes of a given owner (-u), or a group owner (-
G), or even those processes which have a PPID in common (-P). These options may be followed by
the numeric representation or the actual name as identifier:

# pkill [options] identifier

For example, the following command will kill all processes owned by the group with GID=1000:

# pkill -G 1000

while this one will kill all processes whose PPID is 4993:

# pkill -P 4993

Before running pkill, it is a good idea to test the results with pgrep first, perhaps using the -l
option as well to list the processes’ names.

pgrep takes the same options as pkill but only returns the PIDs of the matching processes, without
taking any further action, showing exactly what would be killed if pkill were used.
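For example, here is how you might dry-run a match with pgrep before letting pkill act (again, sleep serves as a harmless test target):

```shell
# Start two disposable processes owned by the current user
sleep 300 &
sleep 300 &

# Dry run: list the PIDs and names that a matching pkill would target (-x = exact name match)
pgrep -l -x sleep

# Same match, this time actually terminating the processes
pkill -x sleep
```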




Linux Cron Management
Linux and other Unix-like operating systems include a tool called cron that allows you to schedule
tasks (i.e. commands or shell scripts) to run on a periodic basis. Every minute, cron checks the
/var/spool/cron directory for files named after accounts in /etc/passwd.

When executing commands, any output is mailed to the owner of the crontab (or to the user
specified in the MAILTO environment variable in the /etc/crontab file, if it exists).

Crontab files (which are created by typing crontab -e and pressing Enter) have the following
format, with the fields separated by whitespace:

minute (0-59) hour (0-23) day of month (1-31) month (1-12) day of week (0-7) command

Thus, if we want to update the local file database (which is used by locate to find files by name or
pattern) every second day of the month at 2:15 am, we need to add the following crontab entry:

15 02 2 * * /bin/updatedb

The above crontab entry reads, “Run /bin/updatedb on the second day of the month, every month of
the year, regardless of the day of the week, at 2:15 am”. As I’m sure you already guessed, the star
symbol is used as a wildcard character.

After adding a cron job, you can see that a file named root was added inside /var/spool/cron, as we
mentioned earlier. That file lists all the tasks that the crond daemon should run:



The current user’s crontab (root in this case) can be displayed either with cat /var/spool/cron/root
or with the crontab -l command.

If you need to run a task on a more fine-grained basis (for example, twice a day or three times each
month), cron can also help you to do that.

For example, to run /my/script on the 1st and 15th of each month and send any output to /dev/null,
you can add two crontab entries as follows:

01 00 1 * * /my/script > /dev/null 2>&1


01 00 15 * * /my/script > /dev/null 2>&1

But for the task to be easier to maintain, you can combine both entries into one:

01 00 1,15 * * /my/script > /dev/null 2>&1

Following the previous example, we can run /my/other/script at 1:30 am on the first day of the
month every three months:

30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1

When you must repeat a certain task every “x” minutes, hours, days, or months, you can divide the
corresponding time field by the desired frequency. The following crontab entry has the exact same
meaning as the previous one:

30 01 1 */3 * /my/other/script > /dev/null 2>&1

Or perhaps you need to run a certain job at a fixed frequency, or right after the system boots. You
can use one of the following strings instead of the five time fields to indicate the exact time
when you want your job to run:

@reboot Run when the system boots.


@yearly Run once a year, same as 00 00 1 1 *.
@monthly Run once a month, same as 00 00 1 * *.
@weekly Run once a week, same as 00 00 * * 0.
@daily Run once a day, same as 00 00 * * *.
@hourly Run once an hour, same as 00 * * * *.
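As a sketch (assuming the standard crontab utility is available), an entry can also be installed non-interactively by piping it to crontab -; note that this replaces the user's existing table:

```shell
# Install a single crontab entry for the current user (overwrites any existing table!)
printf '%s\n' '30 01 1 */3 * /my/other/script > /dev/null 2>&1' | crontab -

# Verify what was installed
crontab -l

# Remove the table again
crontab -r
```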



To have results of a scheduled job mailed to you, use the MAILTO cron environment variable at the
top of your crontab file. For example,

MAILTO=gacanepa@tecmint.com
30 01 1 */3 * /my/other/script > /dev/null 2>&1

will send the output of /my/other/script, if any, to gacanepa@tecmint.com. Of course, this requires
that an MTA be installed and configured on the same machine.

Finally, it is important to note that system-wide jobs are usually placed in /etc/crontab. You can
check /var/log/syslog (grep for cron) for more information.

Summary
In this chapter we have explored a few ways to monitor resource usage to verify the integrity and
availability of critical hardware and software components in a Linux system. We have also learned
how to take appropriate action (either by adjusting the execution priority of a given process or by
terminating it) under unusual circumstances.



Chapter 7: Update the Kernel and Ensure the
System is Bootable
Although some people use the word Linux to represent the operating system as a whole, it is
important to note that, strictly speaking, Linux is only the kernel. On the other hand, a distribution
is a fully-functional system built on top of the kernel with a wide variety of application tools and
libraries.
During normal operations, the kernel is responsible for performing two important tasks:

1. Acting as an interface between the hardware and the software running on the system.
2. Managing system resources as efficiently as possible.

To do this, the kernel communicates with the hardware through the drivers that are built into it or
those that can be later installed as a module.

For example, when an application running on your machine wants to connect to a wireless network,
it submits that request to the kernel, which in turn uses the right driver to connect to the network.

With new devices and technology coming out periodically, it is important to keep our kernel up to
date if we want to make the most out of them. Additionally, updating our kernel will help us to
leverage new kernel functions and to protect ourselves from vulnerabilities that have been
discovered in previous versions.

Ready to update your kernel on CentOS 7 and Ubuntu? If so, keep reading!

Checking Installed Kernel Version


When we install a distribution it includes a certain version of the Linux kernel. To show the current
version installed on our system we can do:

# uname -sr

If we now go to https://www.kernel.org/, we will see that the latest kernel version is 4.20 at the time
of this writing (other versions are available from the same site).
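To compare against the version published there, it can help to strip the distribution-specific suffix from the release string:

```shell
# Kernel name and release, e.g. "Linux 4.20.0-042000-generic"
uname -sr

# Just the numeric version, dropping any distribution-specific suffix after the first dash
uname -r | cut -d- -f1
```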
Note that long-term support (LTS) kernel versions, such as 4.19, are now maintained for 6 years;
previously, LTS kernels were supported for only 2 years.
One important thing to consider is the life cycle of a kernel version – if the version you are
currently using is approaching its end of life, no more bug fixes will be provided after that date. For
more info, refer to the kernel Releases page.



Upgrading Linux Kernel Version
Most modern distributions provide a way to upgrade the kernel using a package management
system such as yum or apt and an officially-supported repository.

However, this will only upgrade the kernel to the most recent version available from the
distribution’s repositories, which is not necessarily the latest one available at https://www.kernel.org/.

Upgrading Kernel in CentOS

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org


# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
# yum --enablerepo=elrepo-kernel install kernel-ml

Upgrading Kernel in Ubuntu


On 64-Bit System

$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000_4.20.0-042000.201812232030_all.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000-generic_4.20.0-042000.201812232030_amd64.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-image-
unsigned-4.20.0-042000-generic_4.20.0-042000.201812232030_amd64.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-modules-
4.20.0-042000-generic_4.20.0-042000.201812232030_amd64.deb
$ sudo dpkg -i *.deb

On 32-Bit System

$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000_4.20.0-042000.201812232030_all.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000-generic_4.20.0-042000.201812232030_i386.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-image-
4.20.0-042000-generic_4.20.0-042000.201812232030_i386.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-modules-
4.20.0-042000-generic_4.20.0-042000.201812232030_i386.deb
$ sudo dpkg -i *.deb

Finally, reboot your machine to apply the latest kernel, and then select the latest kernel from the
GRUB menu.



Log in as root, and run the following command to check the kernel version:

# uname -sr

Set Default Kernel Version


To make the newly-installed version the default boot option, you will have to modify the GRUB
configuration as follows:
Open and edit the file /etc/default/grub and set GRUB_DEFAULT=0. This means that the first
kernel in the GRUB initial screen will be used as default.
Next, run the following command to regenerate the GRUB configuration:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot and verify that the latest kernel is now being used by default.

Summary
In this chapter we have explained how to easily upgrade the Linux kernel on your system. There is
yet another method which we haven’t covered as it involves compiling the kernel from source,
which would deserve an entire book and is not recommended on production systems.

Although it represents one of the best learning experiences and allows for a fine-grained
configuration of the kernel, you may render your system unusable and may have to reinstall it from
scratch.

If you are still interested in building the kernel as a learning experience, you will find instructions
on how to do it at the Kernel Newbies page.



Chapter 8: How to Use udev for Device Detection
and Management
Udev (userspace /dev) is a Linux subsystem for dynamic device detection and management, available
since kernel version 2.6. It is a replacement for devfs and hotplug.

It dynamically creates or removes device nodes (an interface to a device driver that appears in a file
system as if it were an ordinary file, stored under the /dev directory) at boot time or if you add a
device to or remove a device from the system. It then propagates information about a device or
changes to its state to user space.

It’s function is to 1) supply the system applications with device events, 2) manage permissions of
device nodes, and 3) may create useful symlinks in the /dev directory for accessing devices, or even
renames network interfaces.

One of the pros of udev is that it can use persistent device names to guarantee consistent naming of
devices across reboots, despite their order of discovery. This feature is useful because the kernel
simply assigns unpredictable device names based on the order of discovery.

In this chapter, we will learn how to use Udev for device detection and management on Linux
systems. Note that most if not all mainstream modern Linux distributions come with Udev as part of
the default installation.

Learn Basics of Udev in Linux


The udev daemon, systemd-udevd (or systemd-udevd.service) communicates with the kernel and
receives device uevents directly from it each time you add or remove a device from the system, or a
device changes its state.
Udev is based on rules; its rules are flexible and very powerful. Every received device event is
matched against the set of rules read from files located in /lib/udev/rules.d and /run/udev/rules.d.
You can write custom rules files in the /etc/udev/rules.d/ directory (files should end with the .rules
extension) to process a device. Note that rules files in this directory have the highest priority.
To create a device node file, udev needs to identify a device using certain attributes such as
the label, serial number, major and minor numbers, bus device number, and so on.
This information is exported by the sysfs file system.

Whenever you connect a device to the system, the kernel detects and initializes it, and a directory
with the device name is created under /sys/ directory which stores the device attributes.
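You can browse those attributes directly with standard tools; the loopback network interface, present on any Linux system, makes a convenient example:

```shell
# List the sysfs attributes exported for the loopback interface
ls /sys/class/net/lo

# Read one of them: the device's hardware (MAC) address, all zeros for loopback
cat /sys/class/net/lo/address
```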

The main configuration file for udev is /etc/udev/udev.conf, and to control the runtime behavior of
the udev daemon, you can use the udevadm utility.



To display received kernel events (uevents) and udev events (which udev sends out after rule
processing), run udevadm with the monitor command. Then connect a device to your system and
watch, from the terminal, how the device event is handled.
The following screenshot shows an excerpt of an ADD event after connecting a USB flash disk to
the test system:

$ udevadm monitor

To find the name assigned to your USB disk, use the lsblk utility which reads the sysfs filesystem
and udev db to gather information about processed devices.

$ lsblk



From the output of the previous command, the USB disk is named sdb1 (absolute path: /dev/sdb1).
To query the device attributes from the udev database, use the info command.

$ udevadm info /dev/sdb1

How to Work with Udev Rules in Linux


In this section, we will briefly discuss how to write udev rules. A rule consists of a comma-
separated list of one or more key-value pairs. Rules allow you to rename a device node from the
default name, modify the permissions and ownership of a device node, or trigger the execution of a
program or script when a device node is created or deleted, among other things.
We will write a simple rule to launch a script when a USB device is added and when it is removed
from the running system.
Let’s start by creating the two scripts:

$ sudo vim /bin/device_added.sh

Add the following lines in the device_added.sh script.

#!/bin/bash
echo "USB device added at $(date)" >>/tmp/scripts.log



Open the second script.

$ sudo vim /bin/device_removed.sh

Then add the following lines to device_removed.sh script.

#!/bin/bash
echo "USB device removed at $(date)" >>/tmp/scripts.log

Save and close both files, then make the scripts executable.

$ sudo chmod +x /bin/device_added.sh


$ sudo chmod +x /bin/device_removed.sh

Next, let’s create a rule to trigger execution of the above scripts, called /etc/udev/rules.d/80-
test.rules.

$ vim /etc/udev/rules.d/80-test.rules

Add these two following rules in it.

SUBSYSTEM=="usb", ACTION=="add", ENV{DEVTYPE}=="usb_device", RUN+="/bin/device_added.sh"


SUBSYSTEM=="usb", ACTION=="remove", ENV{DEVTYPE}=="usb_device", RUN+="/bin/device_removed.sh"

where:
• "==": is an operator to compare for equality.
• "+=": is an operator to add the value to a key that holds a list of entries.
• SUBSYSTEM: matches the subsystem of the event device.
• ACTION: matches the name of the event action.
• ENV{DEVTYPE}: matches against a device property value, device type in this case.
• RUN: specifies a program or script to execute as part of the event handling.
Save the file and close it.

Then as root, tell systemd-udevd to reload the rules files (this also reloads other databases such as
the kernel module index), by running.



$ sudo udevadm control --reload

Now connect a USB drive to your machine and check if the device_added.sh script was executed.

First of all the file scripts.log should be created under /tmp.

$ ls -l /tmp/scripts.log

Then the file should have an entry such as “USB device added at date_time”:

$ cat /tmp/scripts.log

For more information on how to write udev rules and manage udev, consult the udev and udevadm
manual entries respectively, by running:

$ man udev
$ man udevadm

Summary
Udev is a remarkable device manager that provides a dynamic way of setting up device nodes in the
/dev directory. It ensures that devices are configured as soon as they are plugged in and discovered.
It propagates information about a processed device or changes to its state, to user space.



Chapter 9: SELinux and AppArmor
To overcome the limitations of standard ugo/rwx permissions and access control lists, and to
strengthen the security mechanisms they provide, the United States National Security Agency (NSA)
devised a flexible Mandatory Access Control (MAC) method known as SELinux (short for Security
Enhanced Linux). Among other things, it restricts the ability of processes to access or perform
operations on system objects (such as files, directories, and network ports) to the least
permission possible, while still allowing for later modifications to this model.

Another popular and widely-used MAC is AppArmor, which in addition to the features provided by
SELinux, includes a learning mode that allows the system to “learn” how a specific application
behaves, and to set limits by configuring profiles for safe application usage.

In CentOS 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by
default (more on this in the next section), as opposed to Ubuntu which uses AppArmor.

In this chapter we will explain the essentials of SELinux and AppArmor and how to use one of
these tools for your benefit depending on your chosen distribution.

Introduction to SELinux and How to Use it on CentOS 7


Security Enhanced Linux can operate in two different ways:

• Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that
control the security engine.

• Permissive: SELinux does not deny access, but denials are logged for actions that would
have been denied if running in enforcing mode.

SELinux can also be disabled. Although it is not an operation mode itself, it is still an option.
However, learning how to use this tool is better than just ignoring it. Keep it in mind!

To display the current mode of SELinux, use getenforce. If you want to toggle the operation
mode, use setenforce 0 (to set it to Permissive) or setenforce 1 (Enforcing).

Since this change will not survive a reboot, you will need to edit the /etc/selinux/config file and set
the SELINUX variable to either enforcing, permissive, or disabled to achieve persistence across
reboots:
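As a sketch of making that change persistent non-interactively (this assumes root privileges and an existing /etc/selinux/config; back the file up first):

```shell
# Keep a backup of the current configuration
sudo cp /etc/selinux/config /etc/selinux/config.bak

# Switch the persistent operation mode to permissive
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Confirm the change
grep '^SELINUX=' /etc/selinux/config
```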



On a side note, if getenforce returns Disabled, you will have to edit /etc/selinux/config with the
desired operation mode and reboot. Otherwise, you will not be able to set (or toggle) the operation
mode with setenforce.

One of the typical uses of setenforce consists of toggling between SELinux modes (from enforcing
to permissive or the other way around) to troubleshoot an application that is misbehaving or not
working as expected. If it works after you set SELinux to Permissive mode, you can be confident
you’re looking at a SELinux permissions issue.

Two classic cases where we will most likely have to deal with SELinux are:

1. Changing the default port a daemon listens on.

2. Setting the DocumentRoot directive for a virtual host outside of /var/www/html.

Let’s look at these two cases using the following examples.


EXAMPLE 1: Changing the default port for the sshd daemon
One of the first things most system administrators do in order to secure their servers is change the
port the SSH daemon listens on, mostly to discourage port scanners and external attackers.

To do this, we use the Port directive in /etc/ssh/sshd_config followed by the new port number as
follows (we will use port 9999 in this case):

Port 9999

After attempting to restart the service and checking its status we will see that it failed to start:

# systemctl restart sshd


# systemctl status sshd

If we look at /var/log/audit/audit.log, we will see that sshd was prevented from starting on port 9999



by SELinux because that is a reserved port for the JBoss Management service (SELinux log
messages include the word "AVC" so that they can be easily distinguished from other messages):

# cat /var/log/audit/audit.log | grep AVC | tail -1

At this point most people would probably disable SELinux, but we won’t. We will see that there’s a
way for SELinux, and sshd listening on a different port, to live in harmony together. Make sure you
have the policycoreutils-python package installed:

# yum install policycoreutils-python

Then run the following command to view the list of ports where SELinux allows sshd to listen. In
the following image we can also see that port 9999 was reserved for another service and thus we
can’t use it to run another service for the time being:

# semanage port -l | grep ssh

Of course, we could choose another port for SSH, but if we are certain that we will not need to use
this specific machine for any JBoss-related services, we can then modify the existing SELinux rule
and assign that port to SSH instead:

# semanage port -m -t ssh_port_t -p tcp 9999

After that, we can use the first semanage command to check if the port was correctly assigned, or
the -lC options (short for list custom):

# semanage port -lC


# semanage port -l | grep ssh



We can now restart SSH and connect to the service using port 9999. Note that this change WILL
survive a reboot.
EXAMPLE 2: Choosing a DocumentRoot outside /var/www/html for a virtual host
If you need to set up a virtual host using a directory other than /var/www/html as DocumentRoot
(say, for example, /websrv/sites/gabriel/public_html):

DocumentRoot “/websrv/sites/gabriel/public_html”

Apache will refuse to serve the content because the index.html has been labeled with the default_t
SELinux type, which Apache can’t access:

# wget http://localhost/index.html
# ls -lZ /websrv/sites/gabriel/public_html/index.html

As with the previous example, you can use the following command to verify that this is indeed a
SELinux-related issue:

# cat /var/log/audit/audit.log | grep AVC | tail -1



To change the label of /websrv/sites/gabriel/public_html recursively to httpd_sys_content_t, do:

# semanage fcontext -a -t httpd_sys_content_t "/websrv/sites/gabriel/public_html(/.*)?"

The above command will grant Apache read-only access to that directory and its contents.

Finally, to apply the policy (and make the label change effective immediately), do:

# restorecon -R -v /websrv/sites/gabriel/public_html

Now you should be able to access the directory:

# wget http://localhost/index.html

For more information on SELinux, refer to the Fedora 25 SELinux and Administrator guide.

Introduction to AppArmor and How to Use it on Ubuntu


The operation of AppArmor is based on profiles defined in plain text files where the allowed
permissions and access control rules are set.

Profiles are then used to place limits on how applications interact with processes and files in the
system. A set of profiles is provided out-of-the-box with the operating system, whereas others can
be put in place either automatically by applications when they are installed or manually by the
system administrator.



Like SELinux, AppArmor runs profiles in two modes. In enforce mode, applications are given the
minimum permissions that are necessary for them to run, whereas in complain mode AppArmor
allows an application to take restricted actions and saves the “complaints” resulting from that
operation to a log (/var/log/kern.log, /var/log/audit/audit.log, and other logs inside
/var/log/apparmor).

These logs will show, through lines containing the word audit, the errors that would occur should
the profile be run in enforce mode. Thus, you can try out an application in complain mode and adjust
its behavior before running it under AppArmor in enforce mode.

The status of AppArmor can be shown using:

$ sudo apparmor_status

The output above indicates that the profiles /sbin/dhclient, /usr/sbin/, and /usr/sbin/tcpdump are in
enforce mode (that is true by default in Ubuntu).

Since not all applications include associated AppArmor profiles, you can install the apparmor-profiles
package, which provides additional profiles that have not been shipped by the packages they provide
confinement for.

By default, they are configured to run in complain mode so that system administrators can test them
and choose which ones are desired. We will make use of apparmor-profiles since writing our own
profiles is out of the scope of the certification.

AppArmor profiles are stored inside /etc/apparmor.d. Let’s look at the contents of that directory
before and after installing apparmor-profiles:

$ ls /etc/apparmor.d



If you execute sudo apparmor_status again, you will see a longer list of profiles in complain mode.
You can now perform the following operations:

To switch a profile currently in enforce mode to complain mode:

$ sudo aa-complain /path/to/file

and the other way around (complain –> enforce):

$ sudo aa-enforce /path/to/file

Wildcards are allowed in the above cases. For example,

$ sudo aa-complain /etc/apparmor.d/*



will place all profiles inside /etc/apparmor.d into complain mode, whereas

$ sudo aa-enforce /etc/apparmor.d/*


will switch all profiles to enforce mode.

To entirely disable a profile, create a symbolic link to it in the /etc/apparmor.d/disable directory:

$ sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/

For more information on AppArmor, please refer to the official wiki and to the documentation
provided by Ubuntu.

Summary
In this chapter we have gone through the basics of SELinux and AppArmor, two well-known
MACs. When to use one or the other? To avoid difficulties, you may want to consider sticking with
the one that comes with your chosen distribution.

In any event, they will help you place restrictions on processes and access to system resources to
increase the security in your servers.



Chapter 10: User Management, Special
Attributes, and PAM
Since Linux is a multi-user operating system (in that it allows multiple users on different computers
or terminals to access a single system), you will need to know how to perform effective user
management: how to add, edit, suspend, or delete user accounts, along with granting them the
necessary permissions to do their assigned tasks.

Adding User Accounts


To add a new user account, you can run either of the following two commands as root:

# adduser [new_account]
# useradd [new_account]

When a new user account is added to the system, the following operations are performed:

1. His/her home directory is created (/home/username by default).

2. The following hidden files are copied into the user’s home directory, and will be used to
provide environment variables for his/her user session.

.bash_logout
.bash_profile
.bashrc

3. A mail spool is created for the user.

4. A group is created and given the same name as the new user account.

Understanding /etc/passwd
The full account information is stored in the /etc/passwd file. This file contains a record per system
user account and has the following format (fields are delimited by a colon):

[username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]

• Fields [username] and [Comment] are self explanatory.



• The x in the second field indicates that the account is protected by a shadowed password
(stored in /etc/shadow), which is needed to log on as [username].
• The [UID] and [GID] fields are integers that represent the User IDentification and the
primary Group IDentification to which [username] belongs, respectively.
• The [Home directory] indicates the absolute path to [username]’s home directory, and
• The [Default shell] is the shell that will be made available to this user when he or she logs in to
the system.
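These colon-delimited fields are easy to inspect with standard text tools; for example, to show the record for root and extract its default shell (field 7):

```shell
# Full /etc/passwd record for root
grep '^root:' /etc/passwd

# Default shell only (colon-delimited field 7)
awk -F: '$1 == "root" { print $7 }' /etc/passwd
```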

Understanding /etc/group
Group information is stored in the /etc/group file.

[Group name]:[Group password]:[GID]:[Group members]

where

• [Group name] is the name of group.

• An x in [Group password] indicates group passwords are not being used.

• [GID]: same as in /etc/passwd.

• [Group members]: a comma separated list of users who are members of [Group name].
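Likewise, you can print each group alongside its member list (field 4):

```shell
# Group name and its (possibly empty) comma-separated member list
awk -F: '{ printf "%-20s %s\n", $1, $4 }' /etc/group | head
```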

Modifying User Accounts


After adding an account, you can edit the following information (to name a few fields) using the
usermod command, whose basic syntax is as follows:

usermod [options] [username]

To set the expiry date for an account, use the --expiredate flag followed by a date in YYYY-MM-
DD format.

# usermod --expiredate 2014-10-30 tecmint




To add the user to supplementary groups, use the combined -aG, or --append --groups options,
followed by a comma separated list of groups.

# usermod --append --groups root,users tecmint

To change the default location of the user’s home directory, use the -d, or --home options, followed
by the absolute path to the new home directory.

# usermod --home /tmp tecmint

To change the shell the user will use by default. Use --shell, followed by the path to the new shell.

# usermod --shell /bin/sh tecmint

To view the groups a user is a member of, do:

# groups tecmint
# id tecmint

Now let’s execute all the above commands in one go.

# usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp


--shell /bin/sh tecmint

In the example above, we will set the expiry date of the tecmint user account to October 30th, 2014.
We will also add the account to the root and users group. Finally, we will set sh as its default shell
and change the location of the home directory to /tmp:

For existing accounts, we can also do the following:



Disabling an account by locking its password: use the -L (uppercase L) or the --lock option to lock a
user’s password.

# usermod --lock tecmint

Unlocking a password: use the -U or the --unlock option to unlock a user’s password that was
previously locked.

# usermod --unlock tecmint

Creating a new group for read and write access to files that need to be accessed by several users.

# groupadd common_group # Add a new group


# chgrp common_group common.txt # Change the group owner of common.txt to common_group
# usermod -aG common_group user1 # Add user1 to common_group
# usermod -aG common_group user2 # Add user2 to common_group
# usermod -aG common_group user3 # Add user3 to common_group

Deleting User Accounts


You can delete a group with the following command:

# groupdel [group_name]

If there are files owned by group_name, they will not be deleted, but the group owner will be set to
the GID of the group that was deleted.

You can delete an account (along with its home directory, if it’s owned by the user, and all the files
residing therein, and also the mail spool) using the userdel command with the --remove option:

# userdel --remove [username]

Group Management
Every time a new user account is added to the system, a group with the same name is created with
the username as its only member. Other users can be added to the group later.

One of the purposes of groups is to implement a simple access control to files and other system
resources by setting the right permissions on those resources.



For example, suppose you have the following users

• user1 (primary group: user1)

• user2 (primary group: user2)

• user3 (primary group: user3)

All of them need read and write access to a file called common.txt located somewhere on your local
system, or maybe on a network share that user1 has created. You may be tempted to do something
like:

# chmod 660 common.txt #or


# chmod u=rw,g=rw,o= common.txt
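To convince yourself that the octal and symbolic forms are equivalent, you can apply the symbolic form to a scratch file and read back the octal mode (a quick sketch; the temporary path is arbitrary):

```shell
# Apply the symbolic form to a scratch file, then read back the octal mode.
touch /tmp/common_demo.txt
chmod u=rw,g=rw,o= /tmp/common_demo.txt
stat -c '%a' /tmp/common_demo.txt    # reports 660, same as 'chmod 660'
```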

However, this will only provide read and write access to the owner of the file and to those users
who are members of the file's group owner (user1 in this case).

Again, you may be tempted to add user2 and user3 to group user1, but that will also give them
access to the rest of the files owned by user user1 and group user1.

This is where supplementary groups come in handy: create a dedicated group (like common_group shown earlier), change the group owner of the file to it, and add the required users to that group.

Special File Permissions


Besides the basic read, write, and execute permissions, there are other, less frequently used (but no
less important) permission settings, sometimes referred to as “special permissions”.

Like the basic permissions discussed earlier, they are set using either octal notation or a letter
(symbolic notation) that indicates the type of permission.

SETUID
When the setuid permission is applied to an executable file, a user running the program inherits the
effective privileges of the program's owner. Since this approach can raise legitimate security
concerns, the number of files with the setuid permission must be kept to a minimum.

You will likely find programs with this permission set when a system user needs to access a file
owned by root. Summing up, it isn’t just that the user can execute the binary file, but also that he
can do so with root’s privileges.

For example, let’s check the permissions of /bin/passwd. This binary is used to change the password
of an account, and modifies the /etc/shadow file.

The superuser can change anyone’s password, but all other users should only be able to change their
own.

Thus, any user should have permission to run /bin/passwd, but only root can specify an account
other than their own; all other users can change only their own password.
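You can also experiment with the setuid bit yourself; setting it on a file you own does not require root. The following sketch uses a scratch script (hypothetical path) to show how the bit appears in a long listing, just as it does on /bin/passwd:

```shell
# Create a scratch executable and set the setuid bit (4) plus 755 permissions.
printf '#!/bin/sh\necho hello\n' > /tmp/suid_demo.sh
chmod 4755 /tmp/suid_demo.sh
stat -c '%A %a' /tmp/suid_demo.sh    # -rwsr-xr-x 4755: the 's' marks setuid
```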

SETGID
When the setgid bit is set on an executable file, the real user's effective GID becomes that of the
file's group owner. Thus, any user running the program can access files under the privileges granted
to that group.

In addition, when the setgid bit is set on a directory, newly created files inherit the same group as
the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory.

You will most likely use this approach whenever members of a certain group need access to all the
files in a directory, regardless of the file owner's primary group.

# chmod g+s [filename]

To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions.

# chmod 2755 [directory]
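A short sketch (using an arbitrary scratch directory) shows the effect of the octal form; note the 's' in the group triad of the resulting mode:

```shell
# Set the setgid bit (2) on a scratch directory with 775 base permissions.
mkdir -p /tmp/shared_demo
chmod 2775 /tmp/shared_demo
stat -c '%A %a' /tmp/shared_demo     # drwxrwsr-x 2775
touch /tmp/shared_demo/report.txt    # new files inherit the directory's group
```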

STICKY BIT
When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect of
preventing users from deleting or even renaming the files it contains unless the user owns the
directory, the file, or is root.

# chmod o+t [directory]

To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic
permissions.

# chmod 1755 [directory]

Without the sticky bit, anyone able to write to the directory can delete or rename files. For that
reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.
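This is easy to verify: /tmp itself normally carries the sticky bit, and you can reproduce the same mode on a scratch directory (arbitrary path) as follows:

```shell
# World-writable scratch directory protected by the sticky bit (1).
mkdir -p /tmp/sticky_demo
chmod 1777 /tmp/sticky_demo
stat -c '%A %a' /tmp/sticky_demo     # drwxrwxrwt 1777: trailing 't' is the sticky bit
```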

Special File Attributes
There are other attributes that enable further limits on the operations that are allowed on files. For
example, prevent the file from being renamed, moved, deleted, or even modified. They are set with
the chattr command and can be viewed using the lsattr tool, as follows:

# chattr +i file1
# chattr +a file2

After executing those two commands, file1 will be immutable (it cannot be moved, renamed,
modified, or deleted), whereas file2 will enter append-only mode (it can only be opened in append
mode for writing).

Accessing the root Account Using sudo
One of the ways users can gain access to the root account is by typing:

$ su

and then entering root’s password.

If authentication succeeds, you will be logged in as root, keeping the same working directory you
were in before.

If you want to be placed in root’s home directory instead, run:

$ su -

and then enter root’s password.

The above procedure requires that a normal user knows root’s password, which poses a serious
security risk.

For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute
commands as a different user (usually the superuser) in a very controlled and limited way.

Thus, restrictions can be set on a user to enable him to run one or more specific privileged
commands and no others.

To authenticate using sudo, users enter their own password. After typing the command, they are
prompted for their password (not the superuser's), and if the authentication succeeds (and the
user has been granted privileges to run the command), the specified command is carried out.

To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is recommended
that this file be edited using the visudo command rather than by opening it directly with a text editor.

$ visudo

This opens the /etc/sudoers file using the system's default editor (vi on most systems) and validates the file's syntax when you save it.

These are the most relevant lines:

Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
root ALL=(ALL) ALL
tecmint ALL=/bin/yum update
gacanepa ALL=NOPASSWD:/bin/updatedb
%admin ALL=(ALL) ALL

Let’s take a closer look at them:

Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"

This line specifies the PATH that will be used for commands run through sudo; it prevents
user-controlled directories from being searched, which could otherwise be exploited to harm the system.

The next lines are used to specify permissions:

root ALL=(ALL) ALL

• The first ALL keyword indicates that this rule applies to all hosts.

• The second ALL indicates that the user in the first column can run commands with the
privileges of any user.

• The third ALL means any command can be run.

tecmint ALL=/bin/yum update

If no user is specified in parentheses after the = sign, sudo assumes the root user. In this case, user
tecmint will be able to run yum update as root.

gacanepa ALL=NOPASSWD:/bin/updatedb

The NOPASSWD directive allows user gacanepa to run /bin/updatedb without needing to enter his
password.

Finally,

%admin ALL=(ALL) ALL

The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of the
line is identical to that of a regular user. This means that members of the group “admin” can run all
commands as any user on all hosts.

To see what privileges are granted to you by sudo, use the “-l” option to list them:

$ sudo -l

PAM (Pluggable Authentication Modules)


Pluggable Authentication Modules (PAM) offer the flexibility of setting a specific authentication
scheme on a per-application and/or per-service basis using modules.

This tool, present on all modern Linux distributions, overcame the problem often faced by
developers in the early days of Linux, when each program that required authentication had to be
compiled specially to know how to get the necessary information.

For example, with PAM, it doesn't matter whether your password is stored in /etc/shadow or on a
separate server inside your network.

When the login program needs to authenticate a user, for instance, PAM dynamically provides the
library that contains the functions for the right authentication scheme.

Thus, changing the authentication scheme for the login application (or any other program using
PAM) is easy since it only involves editing a configuration file (most likely, a file named after the
application, located inside /etc/pam.d, and less likely in /etc/pam.conf).

Files inside /etc/pam.d indicate which applications use PAM natively. In addition, we can tell
whether a certain application uses PAM by checking whether the PAM library (libpam) has been
linked to it:

# ldd $(which login) | grep libpam # login uses PAM


# ldd $(which top) | grep libpam # top does not use PAM

The output above shows that libpam has been linked with the login application. This makes sense,
since login is involved in system user authentication, whereas top is not.
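The check can be wrapped in a small helper function so any program can be tested the same way (a sketch; is_pam_aware is a hypothetical name, and the result depends on which binaries your distribution ships):

```shell
# Report whether a given program is linked against the PAM library.
is_pam_aware() {
    bin=$(command -v "$1") || { echo "$1: not found"; return; }
    if ldd "$bin" 2>/dev/null | grep -q libpam; then
        echo "$1 uses PAM"
    else
        echo "$1 does not appear to use PAM"
    fi
}

is_pam_aware su
is_pam_aware top
```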

Let’s examine the PAM configuration file for passwd, the well-known utility to change users’
passwords. It is located at /etc/pam.d/passwd:

# cat /etc/pam.d/passwd

The first column indicates the type of authentication to be used with the module-path (third
column). When a hyphen appears before the type, PAM will not record to the system log if the
module cannot be loaded because it could not be found in the system.

The following authentication types are available:

• account: this module type checks whether the account itself is valid, e.g. whether the user is
permitted to access the requested service and whether the password has expired.
• auth: this module type verifies that the user is who he / she claims to be (typically by asking
for credentials) and grants any needed privileges.
• password: this module type allows the user or service to update their authentication token
(password).
• session: this module type indicates what should be done before and/or after the
authentication succeeds.

The second column (called control) indicates what should happen if the authentication with this
module fails:

• requisite: if the authentication via this module fails, overall authentication will be denied
immediately.
• required: similar to requisite, except that all other listed modules for this service will be
called before authentication is denied.
• sufficient: if this module succeeds (and no previously listed required module has failed),
PAM grants authentication immediately without trying the remaining modules; a failure of
this module is ignored.
• optional: the success or failure of this module matters only if it is the only module of its
type defined for this service.
• include means that the lines of the given type should be read from another file.
• substack is similar to include, but failures or successes within the included lines do not
cause the exit of the complete stack, only of the substack.
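Putting the type and control columns together, a hypothetical service file (say /etc/pam.d/myapp, an illustrative name only) could combine these keywords like this:

```
# /etc/pam.d/myapp -- illustrative example only
auth      sufficient  pam_rootok.so     # root is authenticated immediately
auth      required    pam_unix.so       # everyone else must pass UNIX authentication
account   required    pam_nologin.so    # deny non-root users if /etc/nologin exists
session   optional    pam_mkhomedir.so  # a failure here does not block the login
```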

The fourth column, if it exists, shows the arguments to be passed to the module.

The first three lines in /etc/pam.d/passwd (shown above) load the system-auth stack to check that
the user has supplied valid credentials (account).

If so, it allows the user to change the authentication token (password) by giving permission to use
passwd (auth).

For example, if you append:

remember=2

to the following line in /etc/pam.d/system-auth:

password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok

so that it becomes:

password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=2

the last two hashed passwords of each user will be saved in /etc/security/opasswd so that they
cannot be reused.

For more information, refer to the Linux-PAM System Administrator’s Guide and to man 5
pam.conf.

Summary
Effective user and file management skills are essential for any system administrator. In this
chapter we have covered the basics, and hope you can use them as a good starting point to build
upon.

Chapter 11: Install OpenLDAP Server for
Centralized Authentication
Lightweight Directory Access Protocol (LDAP for short) is an industry standard, lightweight, widely
used set of protocols for accessing directory services. A directory service is a shared information
infrastructure for accessing, managing, organizing, and updating everyday items and network
resources, such as users, groups, devices, email addresses, telephone numbers, volumes and many
other objects.

The LDAP information model is based on entries. An entry in an LDAP directory represents a single
unit of information and is uniquely identified by what is called a Distinguished Name (DN). Each of
the entry’s attributes has a type and one or more values.

An attribute is a piece of information associated with an entry. The types are typically mnemonic
strings, such as “cn” for common name, or “mail” for email address. Each attribute is assigned one
or more values consisting of a space-separated list.

Information in the LDAP directory is arranged in a hierarchical tree of such entries.

In this chapter, we will show how to install and configure OpenLDAP server for centralized
authentication in Ubuntu 16.04/18.04 and CentOS 7.

Installing LDAP Server
First start by installing OpenLDAP, an open source implementation of LDAP, along with some
traditional LDAP management utilities, using the following commands.

# yum install openldap openldap-servers #CentOS 7


$ sudo apt install slapd ldap-utils #Ubuntu 16.04/18.04

On Ubuntu, during the package installation, you will be prompted to enter the password for the
admin entry in your LDAP directory, set a secure password and confirm it.

When the installation is complete, you can start the service as explained next.
On CentOS 7, run the following commands to start the OpenLDAP server daemon, enable it to auto-
start at boot time, and check if it's up and running (on Ubuntu the service should be started
automatically under systemd; you can simply check its status):

$ sudo systemctl start slapd


$ sudo systemctl enable slapd
$ sudo systemctl status slapd

Next, allow requests to the LDAP server daemon through the firewall as shown.

# firewall-cmd --add-service=ldap #CentOS 7
$ sudo ufw allow ldap #Ubuntu 16.04/18.04

Configuring LDAP Server


Note: It is not recommended to edit the LDAP configuration manually; instead, add the
configuration to a file and use the ldapadd or ldapmodify command to load it into the LDAP
directory, as shown below.

Now create an OpenLDAP administrative user and assign a password for that user. The command
below creates a hashed value for the given password; take note of it, as you will use it in the
LDAP configuration file.

$ slappasswd

Then create an LDIF file (ldaprootpasswd.ldif) which is used to add an entry to the LDAP directory.

$ sudo vim ldaprootpasswd.ldif

Add the following contents in it:

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD_CREATED

Explaining the attribute-value pairs above:

• olcDatabase: indicates a specific database instance name; it can typically be found
inside /etc/openldap/slapd.d/cn=config.
• cn=config: indicates global configuration options.
• PASSWORD_CREATED: the hashed string obtained while creating the administrative user.

Next, add the corresponding LDAP entry, specifying the URI of the LDAP server and the
file above.

$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f ldaprootpasswd.ldif

Configuring LDAP Database


Now copy the sample database configuration file for slapd into the /var/lib/ldap directory, and set
the correct permissions on the file.

$ sudo cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG


$ sudo chown -R ldap:ldap /var/lib/ldap/DB_CONFIG
$ sudo systemctl restart slapd

Next, import some basic LDAP schemas from the /etc/openldap/schema directory as follows.

$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif


$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

Now add your domain to the LDAP database: create a file called ldapdomain.ldif for your
domain.

$ sudo vim ldapdomain.ldif

Add the following content in it (replace example with your domain and PASSWORD with the
hashed value obtained before):

dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
read by dn.base="cn=Manager,dc=example,dc=com" read by * none

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=example,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=example,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=example,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=example,dc=com" write by * read

Then add the above configuration to the LDAP database with the following command.

$ sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f ldapdomain.ldif

In this step, we need to add some entries to our LDAP directory. Create another file
called baseldapdomain.ldif with the following content.

dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: example com
dc: example

dn: cn=Manager,dc=example,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=People,dc=example,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=example,dc=com
objectClass: organizationalUnit
ou: Group

Save the file and then add the entries to the LDAP directory.

$ sudo ldapadd -x -D cn=Manager,dc=example,dc=com -W -f baseldapdomain.ldif

The next step is to create an LDAP user, for example tecmint, and set a password for this user as
follows.

$ sudo useradd tecmint


$ sudo passwd tecmint

Then create the definitions for an LDAP group in a file called ldapgroup.ldif with the following
content.

dn: cn=Manager,ou=Group,dc=example,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 1005

In the above configuration, gidNumber is the GID in /etc/group for tecmint. Then add the group to
the OpenLDAP directory:

$ sudo ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f ldapgroup.ldif

Next, create another LDIF file called ldapuser.ldif and add the definitions for user tecmint.

dn: uid=tecmint,ou=People,dc=example,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: tecmint
uid: tecmint
uidNumber: 1005
gidNumber: 1005
homeDirectory: /home/tecmint
userPassword: {SSHA}PASSWORD_HERE
loginShell: /bin/bash
gecos: tecmint
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0

Then load the configuration into the LDAP directory.

$ sudo ldapadd -x -D cn=Manager,dc=example,dc=com -W -f ldapuser.ldif

Once you have set up a central server for authentication, the final part is to enable the client to
authenticate using LDAP, as explained in the next chapter.
For more information, see the appropriate documentation from the OpenLDAP Software document
catalog; Ubuntu users can also refer to the OpenLDAP server guide.

Summary
OpenLDAP is an open source implementation of LDAP for Linux. In this chapter, we have shown
how to install and configure an OpenLDAP server for centralized authentication on Ubuntu
16.04/18.04 and CentOS 7.

Chapter 12: Configure OpenLDAP Client to
Connect External Authentication
LDAP (short for Lightweight Directory Access Protocol) is an industry standard, widely used set of
protocols for accessing directory services.
A directory service in simple terms is a centralized, network-based database optimized for read
access. It stores and provides access to information that must either be shared between applications
or is highly distributed.

Directory services play an important role in developing intranet and Internet applications by helping
you share information about users, systems, networks, applications, and services throughout the
network.

A typical use case for LDAP is to offer a centralized storage of usernames and passwords. This
allows various applications (or services) to connect to the LDAP server to validate users.

After setting up a working LDAP server, you will need to install libraries on the client for
connecting to it.

In this chapter, we will show how to configure an LDAP client to connect to an external
authentication source.

Installing LDAP Client in Ubuntu


On the client systems, you will need to install a few packages for the authentication
mechanism to function correctly with an LDAP server.
First, install the necessary packages by running the following command.

$ sudo apt update && sudo apt install libnss-ldap libpam-ldap ldap-utils nscd

During the installation, you will be prompted for details of your LDAP server (provide the values
according to your environment).

Note that the auto-installed ldap-auth-config package performs most of the configuration
based on the input you enter.

Next, enter the name of the LDAP search base; you can use the components of your domain name
for this purpose (for example, dc=example,dc=com).

Also choose the LDAP version to use and click Ok.

Next, disable login requirement to the LDAP database using the next option.

Also define LDAP account for root and click Ok.

Next, enter the password to use when ldap-auth-config tries to log in to the LDAP directory using
the LDAP account for root.

The results of the dialog will be stored in the file /etc/ldap.conf. If you want to make any alterations,
open and edit this file using your favorite command line editor.

Next, configure the LDAP profile for NSS by running:

$ sudo auth-client-config -t nss -p lac_ldap

Then configure the system to use LDAP for authentication by updating the PAM configuration.
From the menu, choose LDAP and any other authentication mechanisms you need.

$ sudo pam-auth-update

In case you want the home directory of the user to be created automatically, then you need to
perform one more configuration in the common-session PAM file.

$ sudo vim /etc/pam.d/common-session

Add this line in it.

session required pam_mkhomedir.so skel=/etc/skel umask=077

Save the changes and close the file.

Then restart the nscd (Name Service Cache Daemon) service with the following commands.

$ sudo systemctl restart nscd


$ sudo systemctl enable nscd

Note: If you are using replication, LDAP clients will need to refer to multiple servers specified
in /etc/ldap.conf. You can specify all the servers in this form:

uri ldap://ldap1.example.com ldap://ldap2.example.com

This way, if the first server (ldap1.example.com) becomes unresponsive, the request will time out
and the client will attempt to reach the next server listed (ldap2.example.com).
To check the LDAP entries for a particular user from the server, run the getent command, for
example.
To check the LDAP entries for a particular user from the server, run the getent command, for
example.

$ getent passwd tecmint
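Based on the ldapuser.ldif entry created in the previous chapter (uidNumber and gidNumber 1005, home directory /home/tecmint), a successful lookup would print a line roughly like the following (illustrative; the exact fields depend on your directory entries):

```
tecmint:x:1005:1005:tecmint:/home/tecmint:/bin/bash
```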

If the above command displays the user's details (even though the user does not exist in the local
/etc/passwd file), your client machine is now configured to authenticate against the LDAP server,
and you should be able to log in using LDAP-based credentials.

Configure LDAP Client in CentOS 7


To install the necessary packages, run the following command. Note that in this section, if you are
operating the system as a non-root administrative user, use the sudo command to run all commands.

# yum update && yum install openldap openldap-clients nss-pam-ldapd

Next, enable the client system to authenticate using LDAP. You can use the authconfig utility, which
is an interface for configuring system authentication resources.
Run the following command, replacing ldap.example.com with your LDAP server
and dc=example,dc=com with your LDAP base DN.

# authconfig --enableldap --enableldapauth --ldapserver=ldap.example.com \
--ldapbasedn="dc=example,dc=com" --enablemkhomedir --update

In the above command, the --enablemkhomedir option creates a local user home directory at the
first connection if none exists.

Next, check the LDAP entries for a particular user from the server, for example user tecmint.

# getent passwd tecmint

The above command should display the details of the specified user (fetched through NSS/LDAP
rather than the local /etc/passwd file), which implies that the client machine is now configured to
authenticate with the LDAP server.

Important: If SELinux is enabled on your system, you need to add a rule to allow creating home
directories automatically by mkhomedir.

For more information, consult the appropriate documentation from the OpenLDAP Software
document catalog.

Summary
LDAP is a widely used protocol for querying and modifying a directory service. In this chapter, we
have shown how to configure an LDAP client to connect to an external authentication source on
Ubuntu and CentOS client machines.

Chapter 13: How to Configure and Use PAM in
Linux
Linux-PAM (short for Pluggable Authentication Modules which evolved from the Unix-PAM
architecture) is a powerful suite of shared libraries used to dynamically authenticate a user to
applications (or services) in a Linux system.

It integrates multiple low-level authentication modules into a high-level API that provides dynamic
authentication support for applications. This allows developers to write applications that require
authentication, independently of the underlying authentication system.

Many modern Linux distributions support Linux-PAM (hereinafter referred to as “PAM”) by
default. In this chapter, we will explain how to configure advanced PAM in Ubuntu and CentOS
systems.

Before we proceed any further, note that:

• As a system administrator, the most important thing is to master how PAM configuration
file(s) define the connection between applications (services) and the pluggable
authentication modules (PAMs) that perform the actual authentication tasks. You don’t
necessarily need to understand the internal working of PAM.
• PAM has the potential to seriously alter the security of your Linux system. Erroneous
configuration can disable access to your system partially or completely. For instance, an
accidental deletion of configuration file(s) under /etc/pam.d/ and/or of /etc/pam.conf can
lock you out of your own system!

How to Check if a Program is PAM-aware


To employ PAM, an application/program needs to be “PAM aware“; it needs to have been written
and compiled specifically to use PAM. To find out if a program is “PAM-aware” or not, check if it
has been compiled with the PAM library using the ldd command.
For example sshd:

$ sudo ldd /usr/sbin/sshd | grep libpam.so


libpam.so.0 => /lib/x86_64-linux-gnu/libpam.so.0 (0x00007effddbe2000)

How to Configure PAM in Linux


The main configuration file for PAM is /etc/pam.conf, and the /etc/pam.d/ directory contains the
PAM configuration files for each PAM-aware application/service. PAM will ignore /etc/pam.conf
if the /etc/pam.d/ directory exists.

The syntax for the main configuration file is as follows. The file is made up of a list of rules written
on a single line (you can extend rules using the “\” escape character) and comments are preceded
with “#” marks and extend to the next end of line.

The format of each rule is a space-separated collection of tokens (the first three are case-insensitive).
We will explain these tokens in subsequent sections.

service type control-flag module module-arguments

Where:

• service: actual application name.
• type: module type/context/interface.
• control-flag: indicates the behavior of the PAM-API should the module fail to succeed in its
authentication task.
• module: the absolute filename or relative pathname of the PAM module.
• module-arguments: space-separated list of tokens for controlling module behavior.

The syntax of each file in /etc/pam.d/ is similar to that of the main file and is made up of lines of the
following form:

type control-flag module module-arguments

This is an example of a rule definition (without module-arguments) found in the /etc/pam.d/sshd file,
which disallows non-root logins when /etc/nologin exists:

account required pam_nologin.so

Understanding PAM Management Groups and Control-flags


PAM authentication tasks are separated into four independent management groups. These groups
manage different aspects of a typical user’s request for a restricted service.
A module is associated with one of these management group types:

• account: provides services for account verification: has the user’s password expired? Is this
user permitted access to the requested service?
• authentication: authenticates a user and sets up user credentials.
• password: responsible for updating user passwords; works together with
authentication modules.
• session: manages actions performed at the beginning and end of a session.

PAM loadable object files (the modules) are located in /lib/security/ or /lib64/security/,
depending on the architecture.

The supported control-flags are:

• requisite: failure instantly returns control to the application indicating the nature of the first
module failure.
• required: all these modules are required to succeed for libpam to return success to the
application.
• sufficient: given that all preceding modules have succeeded, the success of this module leads
to an immediate and successful return to the application (failure of this module is ignored).
• optional: the success or failure of this module is generally not recorded.

In addition to the keywords above, there are two other valid control flags:

• include: include all lines of the given type from the configuration file specified as an argument
to this control.
• substack: like include, but failures or successes within the included lines do not cause the
exit of the complete stack, only of the substack.

How to Restrict root Access to SSH Service Via PAM


As an example, we will show how to use PAM to disable root user access to a system via the SSH
and login programs, by restricting access to the login and sshd services.
We can use the /lib/security/pam_listfile.so module which offers great flexibility in limiting the
privileges of specific accounts. Open and edit the file for the target service in
the /etc/pam.d/ directory as shown.

$ sudo vim /etc/pam.d/sshd

OR

$ sudo vim /etc/pam.d/login

Add this rule in both files.

auth required pam_listfile.so \


onerr=succeed item=user sense=deny file=/etc/ssh/deniedusers

Explaining the tokens in the above rule:

• auth: is the module type (or context).

• required: is a control-flag that means the module, if used, must pass, or the overall result
will be failure, regardless of the status of other modules.
• pam_listfile.so: is a module which provides a way to deny or allow services based on an
arbitrary file.
• onerr=succeed: module argument.
• item=user: module argument which specifies what is listed in the file and should be checked
for.
• sense=deny: module argument which specifies action to take if found in file, if the item is
NOT found in the file, then the opposite action is requested.
• file=/etc/ssh/deniedusers: module argument which specifies file containing one item per
line.

Next, we need to create the file /etc/ssh/deniedusers and add the name root in it:

$ sudo vim /etc/ssh/deniedusers

Save the changes and close the file, then set the required permissions on it:

$ sudo chmod 600 /etc/ssh/deniedusers
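If you prefer not to use an editor, the same file can be created non-interactively. The sketch below uses a temporary path so it can be run safely anywhere; on a real system you would target /etc/ssh/deniedusers (with sudo):

```shell
# Create the deny list with one user name per line and lock down its permissions.
deny_file=$(mktemp)            # stand-in for /etc/ssh/deniedusers
printf 'root\n' > "$deny_file"
chmod 600 "$deny_file"

# Verify contents and mode (prints "root", then "600").
cat "$deny_file"
stat -c '%a' "$deny_file"
```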

From now on, the above rule will tell PAM to consult the /etc/ssh/deniedusers file and deny access
to the SSH and login services for any listed user.

How to Configure Advanced PAM in Linux


To write more complex PAM rules, you can use valid control-flags in the following form:

type [value1=action1 value2=action2 …] module module-arguments

Where valueN corresponds to the return code from the function invoked in the module for which
the line is defined. You can find supported values from the on-line PAM Administrator’s Guide. A
special value is default, which implies all valueN’s not mentioned explicitly.
The actionN can take one of the following forms:

• ignore: if this action is used with a stack of modules, the module’s return status will not
contribute to the return code the application obtains.

• bad: indicates that the return code should be thought of as indicative of the module failing. If
this module is the first in the stack to fail, its status value will be used for that of the whole
stack.
• die: equivalent to bad, with the side effect of terminating the module stack and PAM
immediately returning to the application.
• ok: this instructs PAM that the system administrator thinks this return code should contribute
directly to the return code of the full stack of modules.
• done: equivalent to ok, with the side effect of terminating the module stack and PAM
immediately returning to the application.
• N (an unsigned integer): equivalent to ok, with the side effect of jumping over the next N
modules in the stack.
• reset: this action clears all memory of the state of the module stack and restarts with the next
stacked module.

Each of the four keywords (required, requisite, sufficient and optional) has an equivalent
expression in terms of the [...] syntax, which allows you to write more complicated rules. The
equivalents are:

• required: [success=ok new_authtok_reqd=ok ignore=ignore default=bad]


• requisite: [success=ok new_authtok_reqd=ok ignore=ignore default=die]
• sufficient: [success=done new_authtok_reqd=done default=ignore]
• optional: [success=ok new_authtok_reqd=ok default=ignore]

The following is an example from a modern CentOS 7 system. Let’s consider these rules from
the /etc/pam.d/postlogin PAM file:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
session [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quiet
session [default=1] pam_lastlog.so nowtmp showfailed
session optional pam_lastlog.so silent noupdate showfailed
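To unpack the jump logic in the first two rules above (the annotations are ours):

```
session [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quiet
        # success=1: on success, skip the next 1 module (the nowtmp line below);
        # default=ignore: any other result does not affect the stack's return code.
session [default=1] pam_lastlog.so nowtmp showfailed
        # default=1: whatever this module returns, skip the next 1 module.
session optional pam_lastlog.so silent noupdate showfailed
```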

Here is another example configuration from the /etc/pam.d/smartcard-auth PAM file:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth [success=done ignore=ignore default=die] pam_pkcs11.so nodebug wait_for_card
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 1000 quiet
account required pam_permit.so
password required pam_pkcs11.so
session optional pam_keyinit.so revoke
session required pam_limits.so
session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so

For more information, see the pam.d man page:

$ man pam.d

Lastly, a comprehensive description of the Configuration file syntax and all PAM modules can be
found in the documentation for Linux-PAM.

Summary
PAM is a powerful high-level API that allows programs that rely on authentication to authenticate
users to applications in a Linux system.

In this chapter, we’ve explained how to configure advanced features of PAM in Ubuntu and
CentOS.

Chapter 14: How to Create SSH Tunneling or Port
Forwarding in Linux
SSH tunneling (also referred to as SSH port forwarding) is simply routing local network traffic
through SSH to remote hosts. This implies that all your connections are secured using encryption.
It provides an easy way of setting up a basic VPN (Virtual Private Network), useful for connecting
to private networks over unsecure public networks like the Internet.
It may also be used to expose local servers behind NATs and firewalls to the Internet over secure
tunnels, as implemented in ngrok.
SSH sessions permit tunneling network connections by default and there are three types of SSH port
forwarding: local, remote and dynamic port forwarding.
In this chapter, we will demonstrate how to quickly and easily set up SSH tunneling using the
different types of port forwarding in Linux.

Testing Environment:
• Local Host: 192.168.43.31
• Remote Host: CentOS 7 VPS with hostname server1.example.com.

Usually, you can securely connect to a remote server using SSH as follows. In this example, I have
configured passwordless SSH login between my local and remote hosts, so I am not asked for user
admin’s password.

$ ssh admin@server1.example.com

Local SSH Port Forwarding


This type of port forwarding lets you connect from your local computer to a remote server.
Assume you are behind a restrictive firewall, or blocked by an outgoing firewall from accessing
an application running on port 3000 on your remote server.
You can forward a local port (e.g. 8080), which you can then use to access the application locally,
as follows. The -L flag defines the local port that is forwarded to the remote host and remote port.

$ ssh admin@server1.example.com -L 8080:server1.example.com:3000

Adding the -N flag means do not execute a remote command; you will not get a shell in this case.

$ ssh -N admin@server1.example.com -L 8080:server1.example.com:3000

The -f switch instructs ssh to run in the background.

$ ssh -f -N admin@server1.example.com -L 8080:server1.example.com:3000

Now, on your local machine, open a browser. Instead of accessing the remote application using the
address server1.example.com:3000, you can simply use localhost:8080 or 192.168.43.31:8080, as
shown in the screenshot below.
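If you use a tunnel often, retyping the flags gets tedious. The same local forward can be declared in ~/.ssh/config (the host alias appserver below is our invention), after which a plain `ssh -f -N appserver` sets it up:

```
Host appserver
    HostName server1.example.com
    User admin
    LocalForward 8080 server1.example.com:3000
```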

Remote SSH Port Forwarding
Remote port forwarding allows you to connect from your remote machine to the local computer. By
default, SSH does not permit remote port forwarding.
You can enable this using the GatewayPorts directive in your SSHD main configuration file /etc/ssh/
sshd_config on the remote host.
Open the file for editing using your favorite command line editor.

$ sudo vim /etc/ssh/sshd_config

Look for the required directive, uncomment it and set its value to yes, as shown below.

GatewayPorts yes

Save the changes and exit. Next, you need to restart sshd to apply the recent change you made.

$ sudo systemctl restart sshd
OR
$ sudo service sshd restart

Next run the following command to forward port 5000 on the remote machine to port 3000 on the
local machine.

$ ssh -f -N admin@server1.example.com -R 5000:localhost:3000

Once you understand this method of tunneling, you can easily and securely expose a local
development server, especially behind NATs and firewalls to the Internet over secure tunnels.
Tunnels such as Ngrok, pagekite, localtunnel and many others work in a similar way.

Dynamic SSH Port Forwarding


This is the third type of port forwarding. Unlike local and remote port forwarding, which allow
communication with a single port, it makes possible a full range of TCP communications across a
range of ports.
Dynamic port forwarding sets up your machine as a SOCKS proxy server which listens on
port 1080, by default.
For starters, SOCKS is an Internet protocol that defines how a client can connect to a server via a
proxy server (SSH in this case).

You can enable dynamic port forwarding using the -D option. The following command will start a
SOCKS proxy on port 1080 allowing you to connect to the remote host.

$ ssh -f -N -D 1080 admin@server1.example.com

From now on, you can make applications on your machine use this SSH proxy server by editing
their settings and configuring them to use it, to connect to your remote server. Note that
the SOCKS proxy will stop working after you close your SSH session.
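As with the other modes, the dynamic forward can be kept in ~/.ssh/config instead of the command line (the socksproxy alias is ours); `ssh -f -N socksproxy` then starts the SOCKS proxy on port 1080:

```
Host socksproxy
    HostName server1.example.com
    User admin
    DynamicForward 1080
```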

Summary
In this chapter, we explained the various types of port forwarding from one machine to another, for
tunneling traffic through the secure SSH connection.

Attention: SSH port forwarding has some considerable disadvantages and can be abused: it can be
used to bypass network monitoring and traffic filtering programs (or firewalls), and attackers can
use it for malicious activities.

Chapter 15: How to Install and Configure
Firewalld
Firewalld (firewall daemon) is an alternative to the iptables service, for dynamically managing a
system’s firewall with support for network (or firewall) zones and provides a D-Bus interface for
managing configurations.

It’s easy to use and configure, and it’s now the default firewall management tool on RHEL/CentOS,
Fedora and several other Linux distributions.

In this chapter, we will discuss how to configure system firewall with firewalld and implement
basic packet filtering in CentOS and Ubuntu.

The Basics About Firewalld


Firewalld comprises three layers:
• core layer: responsible for handling the configuration and the back ends (listed below).
• D-Bus interface: the primary means of changing and creating the firewall configuration.
• backends: for interacting with netfilter (the native kernel module used for firewalling). They
include iptables, ip6tables, ebtables, ipset and nft (libnftables), plus NetworkManager and
kernel modules.
It manages firewall rules by implementing network/firewall zones that define the trust level of
network connections or interfaces.
Other supported firewall features include services, direct configuration (used to directly pass raw
iptables syntax), IPSets as well as ICMP types.
Two kinds of configuration environments are supported by firewalld:
• runtime configuration which is only effective until the machine has been rebooted or the
firewalld service has been restarted
• permanent configuration which is saved and works persistently.
The firewall-cmd command line tool is used to manage runtime and permanent configuration.
Alternatively, you may use the firewall-config graphical user interface (GUI) configuration tool to
interact with the daemon.
In addition, firewalld offers a well defined interface for other local services or applications to
request changes to the firewall rules directly, if they are running with root privileges.

The global configuration file for firewalld is located at /etc/firewalld/firewalld.conf and firewall
features are configured in XML format.

Understanding Important Firewalld Features
The central feature of firewalld is network/firewall zones. Every other feature is bound to a zone.
A firewall zone describes the trust level for a connection, interface or source address binding.

The default configuration comes with a number of predefined zones sorted according to the default
trust level of the zones from untrusted to trusted: drop, block, public, external, dmz, work, home,
internal and trusted. They are defined in files stored under the /usr/lib/firewalld/zones directory.

You can configure or add your custom zones using the CLI client or simply create or copy a zone
file in /etc/firewalld/zones from existing files and edit it.

Another important concept under firewalld is services. A service is defined using ports and
protocols; these definitions represent a given network service such as a web server or remote access
service. Services are defined in files stored under the /usr/lib/firewalld/services/ or
/etc/firewalld/services/ directory.
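As an illustration of the XML format, a service definition (for example /usr/lib/firewalld/services/ssh.xml) looks roughly like this (contents abridged):

```
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>SSH</short>
  <description>Secure Shell (SSH) is a protocol for logging into and executing
  commands on remote machines.</description>
  <port protocol="tcp" port="22"/>
</service>
```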

If you know basic iptables/ip6tables/ebtables concepts, you can also use the direct interface (or
configuration) to gain direct access to the firewall. But, for those without any iptables knowledge,
you can employ the rich language for creating more complex firewall rules for IPv4 and IPv6.

Installing Firewalld
On CentOS 7, the firewalld package comes pre-installed and you can verify this using the following
command.

$ rpm -qa firewalld

On Ubuntu 16.04 and 18.04, you can install it using the default package manager as shown.

$ sudo apt install firewalld

Managing Firewalld
Firewalld is a regular systemd service that can be managed via the systemctl command.

$ sudo systemctl start firewalld


$ sudo systemctl enable firewalld
$ sudo systemctl status firewalld

After starting the firewalld service, you can also check whether the daemon is running or not using
the firewall-cmd tool (in case it’s not active, this command will output “not running”).

$ sudo firewall-cmd --state

After saving any changes permanently, you can reload firewalld. This will reload firewall
rules and keep state information; the current permanent configuration will become the new runtime
configuration.

$ sudo firewall-cmd --reload

Working with Firewalld Zones


To get a list of all available firewall zones and services, run these commands.

$ sudo firewall-cmd --get-zones


$ sudo firewall-cmd --get-services

The default zone is the zone that is used for every firewall feature that is not explicitly bound to
another zone. You can get the default zone set for network connections and interfaces by running:

$ sudo firewall-cmd --get-default-zone

To set the default zone, for example to external, use the following command.

Note that adding the option --permanent sets the configuration permanently (or enables querying of
information from the permanent configuration environment).

$ sudo firewall-cmd --set-default-zone=external
OR
$ sudo firewall-cmd --set-default-zone=external --permanent
$ sudo firewall-cmd --reload

Next, let’s look at how to add an interface to a zone. This example shows how to add your wireless
network adapter (wlp1s0) to zone home, which is used in home areas.

$ sudo firewall-cmd --zone=home --add-interface=wlp1s0

An interface can only be added to a single zone. To move it to another zone, use the
--change-interface switch as shown, or remove it from the previous zone using the
--remove-interface switch, then add it to the new zone.
Assuming you want to connect to a public WI-FI network, you should move your wireless interface
back to the public zone, like this:

$ sudo firewall-cmd --zone=public --add-interface=wlp1s0
OR
$ sudo firewall-cmd --zone=public --change-interface=wlp1s0

You can use many zones at the same time. To get a list of all active zones with the enabled features
such as interfaces, services, ports, protocols, run:

$ sudo firewall-cmd --get-active-zones

In relation to the previous point, if you want to find more information about a particular zone, i.e.
everything added or enabled in it, use one of these commands:

$ sudo firewall-cmd --zone=home --list-all
OR
$ sudo firewall-cmd --info-zone=public

Another useful option is --get-target, which shows you the target of a permanent zone. A target is
one of: default, ACCEPT, DROP, REJECT. You can check the target of various zones:

$ sudo firewall-cmd --permanent --zone=public --get-target


$ sudo firewall-cmd --permanent --zone=block --get-target
$ sudo firewall-cmd --permanent --zone=dmz --get-target
$ sudo firewall-cmd --permanent --zone=external --get-target
$ sudo firewall-cmd --permanent --zone=drop --get-target

Enable or Disable Ports in Firewalld


To open a port (or port/protocol combination) in the firewall, simply add it in a zone with the
--add-port option. If you don’t explicitly specify the zone, it will be enabled in the default zone.
The following example shows how to add ports 80 and 443 to allow in-bound web traffic via the
HTTP and HTTPS protocols, respectively:

$ sudo firewall-cmd --zone=public --permanent --add-port=80/tcp --add-port=443/tcp

Next, reload firewalld and check the enabled features in the public zone once more; you should be
able to see the newly added ports.

$ sudo firewall-cmd --reload


$ sudo firewall-cmd --info-zone=public

Blocking or closing a port in the firewall is equally easy: simply remove it from a zone with the
--remove-port option. For example, to close ports 80 and 443 in the public zone:

$ sudo firewall-cmd --zone=public --permanent --remove-port=80/tcp --remove-port=443/tcp

Instead of using a port or port/protocol combination, you can use the name of the service to which
a port is assigned, as explained in the next section.

Enable or Disable Services in Firewalld


To open a service in the firewall, enable it using the --add-service option. If the zone is omitted, the
default zone will be used.
The following command will permanently enable the http service in the public zone.

$ sudo firewall-cmd --zone=public --permanent --add-service=http


$ sudo firewall-cmd --reload

The --remove-service option can be used to disable a service.

$ sudo firewall-cmd --zone=public --permanent --remove-service=http


$ sudo firewall-cmd --reload

Enable or Disable IP Masquerading Using Firewalld


IP Masquerading (also known as IPMASQ or MASQ) is a NAT (Network Address Translation)
mechanism in Linux networking which allows hosts in a network with private IP addresses to
communicate with the Internet using your Linux server’s (the IPMASQ gateway) assigned public IP
address.
It is a one-to-many mapping. Traffic from your invisible hosts will appear to other computers on
the Internet as if it were coming from your Linux server.

You can enable IP masquerading in a desired zone, for instance the public zone. But before doing
that, first check whether masquerading is active or not (a “no” means it’s disabled and a “yes”
means otherwise).

$ sudo firewall-cmd --zone=public --query-masquerade


$ sudo firewall-cmd --zone=public --add-masquerade

A typical use case for masquerading is to perform port forwarding. Assuming you want to SSH
from a remote machine to a host in your internal network with the IP 10.20.1.3, on which the sshd
daemon is listening on port 5000.

You can forward all connections to port 22 on your Linux server to the intended port on your target
host by issuing:

$ sudo firewall-cmd --zone=public --add-forward-port=port=22:proto=tcp:toport=5000:toaddr=10.20.1.3

To disable masquerading in a zone, use the --remove-masquerade switch.

$ sudo firewall-cmd --zone=public --remove-masquerade

Enable or Disable ICMP Requests in Firewalld


ICMP (Internet Control Message Protocol) messages are either information requests, replies to
information requests, or error messages.
You can enable or disable ICMP messages in the firewall, but before that, first list all supported
ICMP types.

$ sudo firewall-cmd --get-icmptypes

Then add or remove a block for the ICMP type you want.

$ sudo firewall-cmd --zone=home --add-icmp-block=echo-reply
OR
$ sudo firewall-cmd --zone=home --remove-icmp-block=echo-reply

You can view all ICMP types added in a zone using the --list-icmp-blocks switch.

$ sudo firewall-cmd --zone=home --list-icmp-blocks

Pass Raw iptables Rules in Firewalld


The firewall-cmd tool also provides direct options (--direct) for you to get more direct access to the
firewall. This is useful for those with basic knowledge of iptables.
Important: You should only use the direct options as a last resort when it’s not possible to use the
regular firewall-cmd options explained above.

Here is an example of how to pass a raw iptables rule, using the --add-rule switch. You can easily
remove the rule by replacing --add-rule with --remove-rule:

$ sudo firewall-cmd --direct --add-rule ipv4 filter IN_public_allow 0 \
  -m tcp -p tcp --dport 80 -j ACCEPT

If you aren’t familiar with iptables syntax, you can opt for firewalld’s “rich language” for creating
more complex firewall rules in an easy to understand manner as explained next.

Using Rich Language in Firewalld


The rich language (also known as rich rules) is used to add more complex firewall rules
for IPv4 and IPv6 without knowledge of iptables syntax.
It extends the zone features (service, port, icmp-block, masquerade and forward-port) that we have
covered. It supports source and destination addresses, logging, actions and limits for logs and
actions.

The --add-rich-rule option is used to add rich rules. This example shows how to allow
new IPv4 and IPv6 connections for the service http and log 1 per minute using audit:

$ sudo firewall-cmd --add-rich-rule='rule service name="http" audit limit value="1/m" accept'

To remove the added rule, replace the --add-rich-rule option with --remove-rich-rule .

$ sudo firewall-cmd --remove-rich-rule='rule service name="http" audit limit value="1/m" accept'

This feature also allows for blocking or allowing traffic from a specific IP address. The following
example shows how to reject connections from the IP 192.168.0.254.

$ sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.0.254" reject'

Enable or Disable Panic Mode in Firewalld


Panic mode is a special mode under firewalld where, once it is activated, all in-bound and
out-bound packets are dropped and active connections will expire.
You can enable this mode in emergency situations where a threat to your network environment
exists.

To query panic mode, use the --query-panic option.

$ sudo firewall-cmd --query-panic

To enable panic mode, use the --panic-on option. You can test whether it is working using the ping
command as shown. Because the packets are dropped, the name www.google.com cannot be
resolved, hence the error displayed.

$ sudo firewall-cmd --panic-on


$ ping -c 2 www.google.com

To disable panic mode, use the --panic-off option.

$ sudo firewall-cmd --panic-off

Lockdown Firewalld
Remember, we mentioned under the basics about firewalld that local applications or services are
able to alter the firewall configuration if they are running with root privileges. You can control
which applications are able to request firewall changes, by specifying them in a lockdown whitelist.

This feature is turned off by default; you can enable or disable it with the --lockdown-on or
--lockdown-off switch respectively.

$ sudo firewall-cmd --lockdown-on
OR
$ sudo firewall-cmd --lockdown-off

Note that it is recommended to enable or disable this feature by editing the main config file, because
firewall-cmd itself may not be on the lockdown whitelist when you enable lockdown.

$ sudo vim /etc/firewalld/firewalld.conf

Find the parameter Lockdown and change its value from no (means off) to yes (means on).

Lockdown=yes

To make this setting take effect, reload firewalld.

$ sudo firewall-cmd --reload

Summary
Firewalld is an easy to use replacement for the iptables service, which uses iptables as a backend.
In this chapter, we have shown how to install firewalld package, explained firewalld’s important
features and discussed how to configure them in the runtime and permanent configuration
environments.

Chapter 16: How to Setup Apache with Name-
Based Virtual Hosting with SSL Certificate
In this chapter we will show you how to configure Apache to serve web content, and how to set up
name-based virtual hosts and SSL, including a self-signed certificate.

Note that this chapter is not supposed to be a comprehensive guide on Apache, but rather a starting
point for self-study about this topic for the LFCE exam.

Installing Apache Web Server


The Apache web server is a robust and reliable FOSS implementation of an HTTP server. As of the
end of 2018, Apache powers 385 million sites, giving it a 39.45% share of the market. You can use
Apache to serve a standalone website or multiple virtual hosts in one machine.

# yum update && yum install httpd [On CentOS]


# aptitude update && aptitude install apache2 [On Ubuntu]

By now, you should have the Apache web server installed and running. You can verify this with the
following command:

# ps -ef | grep -Ei '(apache|httpd)' | grep -v grep

Note that the above command checks for the presence of either apache or httpd (the most common
names for the web daemon) among the list of running processes. If Apache is running, you will get
output like the following:

The ultimate method of testing the Apache installation and checking whether it’s running is
launching a web browser and pointing to the IP of the server.

We should be presented with the following screen or at least a message confirming that Apache is
working:

Configuring Apache
The main configuration file for Apache can be in different directories depending on your
distribution:

/etc/apache2/apache2.conf # Ubuntu
/etc/httpd/conf/httpd.conf # CentOS

Fortunately for us, the configuration directives are extremely well documented in the Apache
project web site. We will refer to some of them throughout this chapter.

Serving Pages in a Standalone Web Server


The most basic usage of Apache is to serve web pages in a standalone server where no virtual hosts
have been configured.

The DocumentRoot directive specifies the directory out of which Apache will serve web pages and
other documents.

Note that by default, all requests are taken from this directory, but you can also use symbolic links
and / or aliases to point to other locations as well.

Unless matched by the Alias directive (which allows documents to be stored in the local filesystem
instead of under the directory specified by DocumentRoot), the server appends the path from the
requested URL to the document root to make the path to the document.

For example, given the following DocumentRoot:
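On both CentOS and Ubuntu the stock value, which the example below assumes, is:

```
DocumentRoot "/var/www/html"
```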

When the web browser points to [Server IP or hostname]/lfce/tecmint.html, the server will open
/var/www/html/lfce/tecmint.html (assuming that such a file exists) and save the event to its access
log with a 200 (OK) response. Otherwise, the failed event will still be logged to the access log, but
with a 404 (Not Found) response.

The access log is typically found inside /var/log/httpd (CentOS) or /var/log/apache2 (Ubuntu) under
a descriptive name, such as access.log or access_log.

The failed events will be recorded in the error log:

The format of the access log can be customized according to your needs using the LogFormat
directive in the main configuration file, whereas you cannot do the same with the error log.

The default format of the access log is as follows:

LogFormat "%h %l %u %t \"%r\" %>s %b" [nickname]

where each of the letters preceded by a percent sign instructs the server to log a certain piece of
information:

String  Description
%h      Remote hostname or IP address
%l      Remote log name
%u      Remote user if the request is authenticated
%t      Date and time when the request was received
%r      First line of request to the server
%>s     Final status of the request
%b      Size of the response [bytes]

and nickname is an optional alias that can be used to customize other logs without having to enter
the whole configuration string again.
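To make the format concrete, here is a sample access-log line in the default format above, and a quick way to pull individual fields out of it with awk (the sample line itself is fabricated for illustration):

```shell
# In the default format, %h is the 1st whitespace-separated field and %>s the 9th.
line='192.168.0.10 - tecmint [20/Jan/2019:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 1043'
echo "$line" | awk '{print "client:", $1, "status:", $9}'
# Prints: client: 192.168.0.10 status: 200
```

The same idea scales to whole log files, e.g. `awk '{print $9}' access_log | sort | uniq -c` for a quick status-code summary.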

You may refer to the LogFormat directive [Custom log formats section] in the Apache docs for
further options.

Both log files (access and error) represent a great resource to quickly analyze at a glance what’s
happening on the Apache server. They are the first tool a system administrator uses to troubleshoot
issues.

Finally, another important directive is Listen, which tells the server to accept incoming requests on
the specified port or address/port combination:

• If only a port number is defined, Apache will listen on the given port on all network
interfaces (the wildcard sign * is used to indicate ‘all network interfaces’).
• If both an IP address and a port are specified, then Apache will listen on the combination of the
given port and network interface.

Please note (as you will see in the examples below) that multiple Listen directives can be used at the
same time to specify multiple addresses and ports to listen to. This option instructs the server to
respond to requests from any of the listed addresses and ports.
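A hypothetical configuration combining both forms (the address 192.168.0.10 is an assumption for illustration) might read:

```
Listen 80                  # all interfaces, port 80
Listen 192.168.0.10:8080   # one specific interface, port 8080
```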

Restrict Access to a Web Page with Apache


If you want to ensure only allowed users can access a certain web page or directory, Apache
provides a few authentication methods that you can use. We will explain how to use basic
authentication (username and password) to accomplish that goal.

To begin, we will use htpasswd to create username/password pairs in /etc/apache2/.creds as
follows (you can choose another file, as long as the account running Apache has read access). As an
example, let’s add user tecmint to this list:

htpasswd -Bc /etc/apache2/.creds tecmint

where -c is used to create the file (use it only if the file does not already exist) and -B to encrypt the
password with bcrypt. Note that it is not required that this user exist in /etc/passwd. Don’t forget to
take note of the password since you will need it to access the protected resource later.

Next, let’s assign the proper permissions and ownership (replace www-data with apache if you’re
using CentOS instead of Ubuntu):

# chmod 640 /etc/apache2/.creds


# chgrp www-data /etc/apache2/.creds

Now add the following lines in the Apache configuration file to password-protect
/var/www/html/secret:

<Directory "/var/www/html/secret">
    AuthType Basic
    AuthName "This is a restricted directory"
    AuthBasicProvider file
    AuthUserFile "/etc/apache2/.creds"
    Require valid-user
</Directory>

Finally, save changes and restart Apache. The next time you point your browser to the above
directory you will be asked to enter your credentials (tecmint and the chosen password). If the
authentication succeeds, you will be able to access the directory’s contents.

To restrict access to a specific web page only, use the Files directive instead of Directory.

What we have just discussed also applies to virtual hosts, our next topic.

Setting Up Name-Based Virtual Hosts


The concept of a virtual host refers to an individual site (or domain) that is stored in and served from the same physical machine. Multiple sites / domains can be served off a single "real" server as virtual hosts.

This process is transparent to the end user, to whom it appears that the different sites are being
served by distinct web servers.

Name-based virtual hosting allows the server to rely on the client to report the hostname as part of the HTTP headers. Thus, using this technique, many different hosts can share the same IP address.

Each virtual host is configured in a directory within DocumentRoot. For our case, we will use the
following dummy domains for the testing setup, each located in the corresponding directory:

• ilovelinux.com - /var/www/html/ilovelinux.com/public_html

• linuxrocks.org - /var/www/html/linuxrocks.org/public_html

For pages to be displayed correctly, we will chmod each virtual host directory to 755:

# chmod -R 755 /var/www/html/ilovelinux.com/public_html
# chmod -R 755 /var/www/html/linuxrocks.org/public_html
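Note that these public_html directories must exist before the permissions can be set. The sketch below walks through the mkdir/chmod sequence in a scratch directory (/tmp/demo-vhosts is a made-up stand-in for /var/www/html, which would require root):

```shell
# Demo prefix standing in for /var/www/html (the real path needs root).
WEBROOT=/tmp/demo-vhosts

# Create one document root per virtual host, then open up permissions
# so the web server account can read the pages.
mkdir -p "$WEBROOT/ilovelinux.com/public_html" \
         "$WEBROOT/linuxrocks.org/public_html"
chmod -R 755 "$WEBROOT/ilovelinux.com/public_html" \
             "$WEBROOT/linuxrocks.org/public_html"

# List the resulting directories and their permissions.
ls -ld "$WEBROOT"/*/public_html
```

For the real setup, simply substitute /var/www/html for the demo prefix and run the commands as root.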

Next, create a sample index.html file inside each public_html directory:

<html>
<head>
<title>www.ilovelinux.com</title>
</head>
<body>
<h1>This is the main page of www.ilovelinux.com</h1>
</body>
</html>



Finally, add the following section at the bottom of /etc/httpd/conf/httpd.conf (CentOS) or /etc/apache2/httpd.conf (Ubuntu), or just modify it if it's already there:

<VirtualHost *:80>
    ServerAdmin admin@ilovelinux.com
    DocumentRoot /var/www/html/ilovelinux.com/public_html
    ServerName www.ilovelinux.com
    ServerAlias www.ilovelinux.com ilovelinux.com
    ErrorLog /var/www/html/ilovelinux.com/error.log
    LogFormat "%v %l %u %t \"%r\" %>s %b" myvhost
    CustomLog /var/www/html/ilovelinux.com/access.log myvhost
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin admin@linuxrocks.org
    DocumentRoot /var/www/html/linuxrocks.org/public_html
    ServerName www.linuxrocks.org
    ServerAlias www.linuxrocks.org linuxrocks.org
    ErrorLog /var/www/html/linuxrocks.org/error.log
    LogFormat "%v %l %u %t \"%r\" %>s %b" myvhost
    CustomLog /var/www/html/linuxrocks.org/access.log myvhost
</VirtualHost>

Please note that you can also add each virtual host definition in separate files inside the
/etc/httpd/conf.d directory. If you choose to do so, each configuration file must be named as
follows:

/etc/httpd/conf.d/ilovelinux.com.conf
/etc/httpd/conf.d/linuxrocks.org.conf

In other words, you need to add .conf to the site or domain name.

In Ubuntu, each individual configuration file is named /etc/apache2/sites-available/[site name].conf.


Each site is then enabled or disabled with the a2ensite or a2dissite commands, respectively,
as follows:

# a2ensite ilovelinux.com.conf
# a2dissite ilovelinux.com.conf
# a2ensite linuxrocks.org.conf
# a2dissite linuxrocks.org.conf



The a2ensite and a2dissite commands create links to the virtual host configuration file and
place (or remove) them in the /etc/apache2/sites-enabled directory.

To be able to browse to both sites from another Linux box, you will need to add the following lines
in the /etc/hosts file of the client machine to redirect requests to those domains to a specific IP
address:

[IP address of your web server] www.ilovelinux.com
[IP address of your web server] www.linuxrocks.org
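For instance, if the web server's address were 192.168.0.100 (a made-up value for illustration), the client's /etc/hosts entries would look like this:

```
192.168.0.100    www.ilovelinux.com ilovelinux.com
192.168.0.100    www.linuxrocks.org linuxrocks.org
```

Listing the bare domain alongside the www name lets the client resolve both forms used in the ServerAlias directives.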

As a security measure, SELinux will not allow Apache to write logs to a directory other than the
default /var/log/httpd. You can either disable SELinux, or set the right security context:

# chcon system_u:object_r:httpd_log_t:s0 /var/www/html/xxxxxx/error.log

where xxxxxx is the directory inside /var/www/html where you have defined your Virtual Hosts.

After restarting Apache, browsing to the above addresses should display the sample main page of each site.

Installing and Configuring SSL with Apache


Finally, we will create and install a self-signed certificate to use with Apache. This kind of setup is
acceptable in small environments, such as a private LAN.

However, if your server will expose content to the outside world over the Internet, you will want to
install a certificate signed by a 3rd party to corroborate its authenticity.

Either way, a certificate will allow you to encrypt the information that is transmitted to, from, or
within your site.



In CentOS, you need to install the mod_ssl package first:

# yum update && yum install mod_ssl # CentOS


whereas in Ubuntu you’ll have to enable the ssl module for Apache:

# a2enmod ssl

The following steps are explained using a CentOS test server, but your setup should be almost
identical in the other distributions (if you run into any kind of issues, don’t hesitate to leave your
questions using the comments form).

Step 1 [Optional]: Create a directory to store your certificates.

# mkdir /etc/httpd/ssl-certs

Step 2: Generate your self-signed certificate and the key that will protect it:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/httpd/ssl-certs/apache.key -out /etc/httpd/ssl-certs/apache.crt

A brief explanation of the options listed above:

• -x509 indicates we are creating an X.509 certificate.

• -nodes (NO DES) means “don’t encrypt the key”.

• -days 365 is the number of days the certificate will be valid for.

• -newkey rsa:2048 creates a 2048-bit RSA key.

• -keyout /etc/httpd/ssl-certs/apache.key is the absolute path of the RSA key.

• -out /etc/httpd/ssl-certs/apache.crt is the absolute path of the certificate.
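To sanity-check the result, you can inspect the generated certificate with openssl x509. The sketch below repeats the command in a scratch directory (/tmp/demo-ssl) and adds a -subj argument so it runs non-interactively; the CN value is just an example, not part of the original setup:

```shell
# Scratch directory standing in for /etc/httpd/ssl-certs (which needs root).
CERT_DIR=/tmp/demo-ssl
mkdir -p "$CERT_DIR"

# Same invocation as above, plus -subj to skip the interactive DN prompts.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=www.ilovelinux.com" \
    -keyout "$CERT_DIR/apache.key" -out "$CERT_DIR/apache.crt" 2>/dev/null

# Inspect the subject and validity window of the certificate just created.
openssl x509 -noout -subject -dates -in "$CERT_DIR/apache.crt"
```

The -dates output shows the notBefore/notAfter window, which should span the 365 days requested above.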



Step 3: Open your chosen virtual host configuration file (or its corresponding section in
/etc/httpd/conf/httpd.conf as explained earlier) and add the following lines to a virtual host
declaration listening on port 443:

SSLEngine on
SSLCertificateFile /etc/httpd/ssl-certs/apache.crt
SSLCertificateKeyFile /etc/httpd/ssl-certs/apache.key

The example above corresponds to a virtual host section in /etc/httpd/conf/httpd.conf.

Then restart Apache:

# service apache2 restart # sysvinit and upstart-based systems


# systemctl restart httpd.service # systemd-based systems



And point your browser to https://www.ilovelinux.com. Since the certificate is self-signed, the browser will display a security warning.

Go ahead and click on "I understand the risks" and "Add exception". Finally, check "Permanently store this exception" and click "Confirm Security Exception", and you will be redirected to your home page using https.

Summary
In this chapter we have shown how to configure Apache and name-based virtual hosting with SSL
to secure data transmission. If for some reason you ran into any issues, feel free to let us know. We
will be more than glad to help you perform a successful set up.

You may refer to the Let’s Encrypt section to further setup free SSL/TLS certificates needed for
your server to run securely, making a smooth browsing experience for your users, without any
errors.



Chapter 17: How to Setup Nginx with Name-
Based Virtual Hosting with SSL Certificate
Nginx (short for Engine-x) is a free, open source, powerful, high-performance and scalable HTTP and reverse proxy server, as well as a mail and generic TCP/UDP proxy server. It is easy to use and configure, with a simple configuration language. Nginx is now the preferred web server software for powering heavily loaded sites, due to its scalability and performance.

In this chapter we will discuss how to use Nginx as an HTTP server, configure it to serve web content, set up name-based virtual hosts, and create and install SSL certificates for secure data transmission, including a self-signed certificate, on Ubuntu and CentOS.

Installing Nginx Web Server


First start by installing the Nginx package from the official repositories using your package
manager as shown.

$ sudo apt install nginx [On Ubuntu]


$ sudo yum install epel-release [On CentOS]
$ sudo yum install nginx

After the Nginx package is installed, start the service, enable it to auto-start at boot time and view its status, using the following commands. Note that on Ubuntu, the service should be started and enabled automatically as part of the package installation.

$ sudo systemctl start nginx


$ sudo systemctl enable nginx
$ sudo systemctl status nginx

If your system has a firewall enabled, you need to open ports 80 and 443 to allow HTTP and HTTPS traffic through it, respectively, by running:

------------ On CentOS ------------
$ sudo firewall-cmd --permanent --add-port=80/tcp
$ sudo firewall-cmd --permanent --add-port=443/tcp
$ sudo firewall-cmd --reload
------------ On Ubuntu ------------
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw reload
The ideal method for testing the Nginx installation and checking whether it’s running and able to
serve web pages is by opening a web browser and pointing to the IP of the server.

Configuring Nginx Web Server


Nginx’s configuration files are located in the directory /etc/nginx and the global configuration file
is located at /etc/nginx/nginx.conf on both CentOS and Ubuntu.

Nginx is made up of modules that are controlled by various configuration options, known as directives. A directive can either be simple (a name and values terminated with a semicolon) or a block (extra instructions enclosed in braces {}). A block directive that contains other directives is called a context.
All the directives are comprehensively explained in the Nginx documentation in the project website.
You can refer to it for more information.

Serving Pages in a Standalone Web Server


At a foundational level, Nginx can be used to serve static content such as HTML and media files, in
standalone mode, where only the default server block is used (analogous to Apache where no virtual
hosts have been configured).
We will start by briefly explaining the configuration structure in the main configuration file.



$ sudo vim /etc/nginx/nginx.conf

If you look into this Nginx configuration file, the configuration structure should appear as follows. This is referred to as the main context, which contains many other simple and block directives. All web traffic is handled in the http context.

user nginx;
worker_processes 1;
...

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
...

events {
    ...
}

http {
    server {
        ...
    }
    ...
}

The following is a sample Nginx main configuration (/etc/nginx/nginx.conf) file, where the http
block above contains an include directive which tells Nginx where to find website configuration
files (virtual host configurations).

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Note that on Ubuntu, you will also find an additional include directive (include /etc/nginx/sites-
enabled/*;), where the directory /etc/nginx/sites-enabled/ stores symlinks to the websites
configuration files created in /etc/nginx/sites-available/, to enable the sites. And deleting a symlink
disables that particular site.
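The enable/disable mechanism is just symlink management, which can be sketched in a scratch directory (the /tmp/demo-nginx paths below stand in for /etc/nginx and are not part of the real layout):

```shell
# Scratch stand-in for /etc/nginx with the two Ubuntu-style directories.
NGINX_ETC=/tmp/demo-nginx
mkdir -p "$NGINX_ETC/sites-available" "$NGINX_ETC/sites-enabled"

# A minimal site definition lives in sites-available.
printf 'server { listen 80; }\n' > "$NGINX_ETC/sites-available/example.conf"

# Enabling the site = creating a relative symlink in sites-enabled.
ln -sf ../sites-available/example.conf "$NGINX_ETC/sites-enabled/example.conf"
ls -l "$NGINX_ETC/sites-enabled/"

# Disabling it again would simply remove the symlink:
# rm "$NGINX_ETC/sites-enabled/example.conf"
```

On a real system you would follow the link creation or removal with a configuration test (nginx -t) and a service reload.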
Based on your installation source, you’ll find the default website configuration file
at /etc/nginx/conf.d/default.conf (if you installed from official NGINX repository and EPEL)
or /etc/nginx/sites-enabled/default (if you installed from Ubuntu repositories).

This is our sample default nginx server block located at /etc/nginx/conf.d/default.conf on the test
system.

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /var/www/html/;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

A brief explanation of the directives in the above configuration:

• listen: specifies the port the server listens on.


• server_name: defines the server name which can be exact names, wildcard names, or regular
expressions.
• root: specifies the directory out of which Nginx will serve web pages and other documents.
• index: specifies the type(s) of index file(s) to be served.
• location: used to process requests for specific files and folders.

From a web browser, when you point to the server using the hostname localhost or its IP address, it
processes the request and serves the file /var/www/html/index.html, and immediately saves the
event to its access log (/var/log/nginx/access.log) with a 200 (OK) response. In case of an error
(failed event), it records the message in the error log (/var/log/nginx/error.log).



Restrict Access to a Web Page with Nginx
In order to restrict access to your website/application or some parts of it, you can setup basic HTTP
authentication. This can be used essentially to restrict access to the whole HTTP server, individual
server blocks or location blocks.
Start by creating a file that will store your access credentials (username/password) by using the htpasswd utility.

$ sudo yum install httpd-tools #RHEL/CentOS
$ sudo apt install apache2-utils #Debian/Ubuntu

As an example, let's add the user admin to this list (you can add as many users as needed), where the -c option is used to create the password file, and -B to encrypt the password. Once you hit [Enter], you will be asked to enter the user's password:

$ sudo htpasswd -Bc /etc/nginx/conf.d/.htpasswd admin

Then, let’s assign the proper permissions and ownership to the password file (replace the user and
group nginx with www-data on Ubuntu).

$ sudo chmod 640 /etc/nginx/conf.d/.htpasswd
$ sudo chown nginx:nginx /etc/nginx/conf.d/.htpasswd

As we mentioned earlier on, you can restrict access to your webserver, a single website (using its
server block) or specific directory or file. Two useful directives can be used to achieve this:



• auth_basic – turns on validation of user name and password using the “HTTP Basic
Authentication” protocol.
• auth_basic_user_file – specifies the credentials file.

As an example, we will show how to password-protect the directory /var/www/html/protected.

server {
    listen 80 default_server;
    server_name localhost;
    root /var/www/html/;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /protected/ {
        auth_basic "Restricted Access!";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    }
}

Now, save changes and restart Nginx service.

$ sudo systemctl restart nginx

The next time you point your browser to the above directory (http://localhost/protected) you will be
asked to enter your login credentials (username admin and the chosen password).

A successful login allows you to access the directory's contents; otherwise you will get a "401 Authorization Required" error.

Setting Up Name-Based Virtual Hosts


The server context allows multiple domains/sites to be stored in and served from the same physical
machine or virtual private server (VPS). Multiple server blocks (representing virtual hosts) can be
declared within the http context for each site/domain. Nginx decides which server processes a
request based on the request header it receives.
We will demonstrate this concept using the following dummy domains, each located in the specified
directory:

• wearetecmint.com – /var/www/html/wearetecmint.com/public_html
• welovelinux.com – /var/www/html/welovelinux.com/public_html



Next, assign the appropriate permissions on the directory for each site.

$ sudo chmod -R 755 /var/www/html/wearetecmint.com/public_html
$ sudo chmod -R 755 /var/www/html/welovelinux.com/public_html

Now, create a sample index.html file inside each public_html directory.

<html>
<head>
<title>www.wearetecmint.com</title>
</head>
<body>
<h1>This is the main page of www.wearetecmint.com</h1>
</body>
</html>

Next, create the server block configuration files for each site inside the /etc/nginx/conf.d directory.

$ sudo vi /etc/nginx/conf.d/wearetecmint.com.conf
$ sudo vi /etc/nginx/conf.d/welovelinux.com.conf

Add the following server block declaration in the wearetecmint.com.conf file.

server {
    listen 80;
    server_name wearetecmint.com;
    root /var/www/html/wearetecmint.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Next, add the following server block declaration in the welovelinux.com.conf file.

server {
    listen 80;
    server_name welovelinux.com;
    root /var/www/html/welovelinux.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}



To apply the recent changes, restart the Nginx web server.

$ sudo systemctl restart nginx

Pointing your web browser to the above addresses should display the main pages of the dummy domains.

http://wearetecmint.com
http://welovelinux.com

Important: If you have SELinux enabled, its default configuration does not allow Nginx to access
files outside of well-known authorized locations (such as /etc/nginx for
configurations, /var/log/nginx for logs, /var/www/html for web files etc..).
You can handle this by either disabling SELinux, or setting the correct security context. For more
information, refer to this guide: using Nginx and Nginx Plus with SELinux on the Nginx Plus
website.

Installing and Configuring SSL with Nginx


SSL certificates help to enable secure http (HTTPS) on your site, which is essential to establishing a
trusted/secure connection between the end users and your server by encrypting the information that
is transmitted to, from, or within your site.
We will cover how to create and install a self-signed certificate, and generate a certificate signing
request (CSR) to acquire an SSL certificate from a certificate authority (CA), to use with Nginx.
Self-signed certificates are free to create and are practically good to go for testing purposes and for
internal LAN-only services. For public-facing servers, it is highly recommended to use a certificate
issued by a CA (for example Let’s Encrypt) to uphold its authenticity.



To create a self-signed certificate, first create a directory where your certificates will be stored.

$ sudo mkdir /etc/nginx/ssl-certs/

Then generate your self-signed certificate and the key using the openssl command line tool.

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/nginx/ssl-certs/nginx.key -out /etc/nginx/ssl-certs/nginx.crt

Let’s briefly describe the options used in the above command:


• -x509 – indicates we are creating an X.509 certificate.
• -nodes (NO DES) – means “don’t encrypt the key”.
• -days 365 – specifies the number of days the certificate will be valid for.
• -newkey rsa:2048 – specifies that the key generated using RSA algorithm should be 2048-
bit.
• -keyout /etc/nginx/ssl-certs/nginx.key – specifies the full path of the RSA key.
• -out /etc/nginx/ssl-certs/nginx.crt – specifies the full path of the certificate.

Next, open your virtual host configuration file and add the following lines to a server block
declaration listening on port 443. We will test with the virtual host
file /etc/nginx/conf.d/wearetecmint.com.conf.

$ sudo vi /etc/nginx/conf.d/wearetecmint.com.conf

Then add the ssl directives to the server block; it should look similar to the one below.

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/ssl-certs/nginx.crt;
    ssl_trusted_certificate /etc/nginx/ssl-certs/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl-certs/nginx.key;

    server_name wearetecmint.com;
    root /var/www/html/wearetecmint.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Note that the ssl parameter on the listen directives replaces the older ssl on; directive, which would force SSL on the plain port 80 listeners as well and is deprecated in current Nginx versions.

Now restart Nginx and point your browser to the https://www.wearetecmint.com address.

$ sudo systemctl restart nginx



If you would like to purchase an SSL certificate from a CA, you need to generate a certificate
signing request (CSR) as shown.

$ sudo openssl req -newkey rsa:2048 -nodes \
-keyout /etc/nginx/ssl-certs/example.com.key -out /etc/nginx/ssl-certs/example.com.csr

You can also create a CSR from an existing private key.

$ sudo openssl req -key /etc/nginx/ssl-certs/example.com.key \
-new -out /etc/nginx/ssl-certs/example.com.csr

Then, you need to send the CSR that is generated to a CA to request the issuance of a CA-signed
SSL certificate. Once you receive your certificate from the CA, you can configure it as shown
above.
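Before sending a CSR off, you can verify its embedded self-signature with openssl req -verify. The sketch below repeats the generation step in a scratch directory with a placeholder CN so it runs non-interactively (paths and names are examples, not part of this setup):

```shell
# Scratch stand-in for /etc/nginx/ssl-certs (the real path needs root).
CSR_DIR=/tmp/demo-csr
mkdir -p "$CSR_DIR"

# Generate a new key and CSR; -subj avoids the interactive DN prompts.
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=example.com" \
    -keyout "$CSR_DIR/example.com.key" -out "$CSR_DIR/example.com.csr" 2>/dev/null

# -verify checks the self-signature embedded in the CSR before submission.
openssl req -noout -verify -in "$CSR_DIR/example.com.csr"
```

A passing check prints a "verify OK" style message; the CSR is then ready to be submitted to the CA.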

Summary
In this chapter, we have explained how to install and configure Nginx, and covered how to set up name-based virtual hosting with SSL to secure data transmissions between the web server and a client.



Chapter 18: Setting Up Time Synchronization
Server NTP
The Network Time Protocol (NTP) is a protocol used to automatically synchronize a computer's system clock over a network. It also allows the machine to keep its system clock set to Coordinated Universal Time (UTC) rather than local time.

The most common method to sync system time over a network in Linux desktops or servers is by
executing the ntpdate command which can set your system time from an NTP time server.

In this case, the ntpd daemon must be stopped on the machine where the ntpdate command is
issued.

Install and Configure NTP


In most Linux systems, the ntpdate command is not installed by default. To install it, execute one
of the below commands:

# yum install ntp [On CentOS]


# apt install ntp [On Ubuntu]

An example of the ntpdate command is shown below:

ntpdate 1.ro.pool.ntp.org

To just query the server without setting the clock, and to use an unprivileged port for the outgoing packets so as to bypass firewalls, issue ntpdate with the flags below:

ntpdate -qu 1.ro.pool.ntp.org

Always try to query and sync the time with the closest NTP servers available for your zone. The list of NTP server pools can be found at http://www.pool.ntp.org/en/.



In newer Linux distributions that ship with systemd, you can also sync time via the systemd-timesyncd service, which is configured in the /etc/systemd/timesyncd.conf file.

Just open the file for editing, and add or uncomment the following lines under the [Time] section, as illustrated in the excerpt below:

[Time]
NTP=0.ro.pool.ntp.org 1.ro.pool.ntp.org
FallbackNTP=ntp.ubuntu.com 0.arch.pool.ntp.org

To apply the changes, enable NTP synchronization with timedatectl and check the status:

$ sudo timedatectl set-ntp true


$ timedatectl status

Summary
By now you should have the NTP service described in this chapter installed, and possibly running with the default configuration.



Chapter 19: Setting Up Centralized Log Server with
Rsyslog
Logs are a critical component of any software or operating system. Logs usually record users' actions, system events, network activity and much more, depending on what they are intended for. One of the most widely used logging systems on Linux is rsyslog.

Rsyslog is a powerful, secure and high-performance log processing system which accepts data from different types of sources (systems/applications) and outputs it into multiple formats.

It has evolved from a regular syslog daemon to a fully-featured, enterprise level logging system. It
is designed in a client/server model, therefore it can be configured as a client and/or as a central
logging server for other servers, network devices, and remote applications.

Testing Environment
For the purpose of this guide, we will use the following hosts:

• Server: 192.168.241.140
• Client: 172.31.21.58

Installing and Configuring Rsyslog Server


Most Linux distributions come with the rsyslog package preinstalled. In case it’s not installed, you
can install it using your Linux package manager tool as shown.

$ sudo yum update && sudo yum install rsyslog #CentOS 7
$ sudo apt update && sudo apt install rsyslog #Ubuntu 16.04, 18.04

Once rsyslog is installed, start the service, enable it to auto-start at boot, and check its status with the systemctl command.

$ sudo systemctl start rsyslog


$ sudo systemctl enable rsyslog
$ sudo systemctl status rsyslog

The main rsyslog configuration file is located at /etc/rsyslog.conf, which loads modules, defines the
global directives, contains rules for processing log messages and it also includes all config files
in /etc/rsyslog.d/ for various applications/services.

$ sudo vim /etc/rsyslog.conf



By default, rsyslog uses the imjournal and imuxsock modules for importing structured log messages from the systemd journal and for accepting syslog messages from applications running on the local system via Unix sockets, respectively.

To configure rsyslog as a network/central logging server, you need to set the protocol
(either UDP or TCP or both) it will use for remote syslog reception as well as the port it listens on.

If you want to use a UDP connection, which is faster but unreliable, search for and uncomment the lines below (replace 514 with the port you want it to listen on; this should match the port that the clients send messages to, as we will see when configuring the rsyslog client).

$ModLoad imudp
$UDPServerRun 514

To use a TCP connection (which is slower but more reliable), search for and uncomment the lines below.

$ModLoad imtcp
$InputTCPServerRun 514

In this case, we want to use both UDP and TCP connections at the same time.

Next, you need to define the ruleset for processing remote logs in the following format.



facility.severity_level destination (where to store log)

Where:

• facility: the type of process/application generating the message; facilities include auth, cron, daemon, kernel, local0..local7. Using * means all facilities.

• severity_level: the type of log message: emerg-0, alert-1, crit-2, err-3, warn-4, notice-5, info-6, debug-7. Using * means all severity levels, and none implies no severity level.

• destination: either a local file or a remote rsyslog server (defined in the form IP:port).
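To make the format concrete, here are a few illustrative rules (the file paths are examples only; the remote destination reuses this guide's server address):

```
auth,authpriv.*     /var/log/auth.log        # all auth messages, any severity
*.err               /var/log/errors.log      # severity err and worse, all facilities
cron.*              @@192.168.241.140:514    # forward cron messages to a remote server via TCP
```

Multiple facilities can be combined with commas, and the @@ prefix on a destination denotes TCP delivery (a single @ would mean UDP).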

We will use the following ruleset for collecting logs from remote hosts, using
the RemoteLogs template.

Note that these rules must come before any rules for processing local messages.

$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& ~



Looking at the above ruleset, the first rule is the template definition: $template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log".

The directive $template tells rsyslog daemon to gather and write all of the received remote
messages to distinct logs under /var/log, based on the hostname (client machine name) and remote
client facility (program/application) that generated the messages as defined by the settings present
in the template RemoteLogs.

The second line “*.* ?RemoteLogs” means record messages from all facilities at all severity levels
using the RemoteLogs template configuration.

The final line "& ~" instructs rsyslog to stop processing a message once it has been written to a file. If you don't include "& ~", messages will also be written to the local log files.

There are many other templates that you can use, for more information, see the rsyslog
configuration man page (man rsyslog.conf) or refer to the Rsyslog online documentation.
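As a side note, the lines above use rsyslog's legacy directive syntax; a rough equivalent in the newer RainerScript syntax (an untested sketch, shown only for orientation) would be:

```
template(name="RemoteLogs" type="string"
         string="/var/log/%HOSTNAME%/%PROGRAMNAME%.log")
*.* action(type="omfile" dynaFile="RemoteLogs")
& stop
```

Here & stop plays the role of the legacy "& ~", and dynaFile selects the per-host, per-program output file from the template.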

That’s it with configuring the rsyslog server. Save and close the configuration file. To apply the
recent changes, restart rsyslog daemon with the following command.

$ sudo systemctl restart rsyslog

Now verify the rsyslog network sockets. Use the ss command (or netstat with the same flags) and pipe the output to grep to filter out rsyslogd connections.

$ sudo ss -tulnp | grep "rsyslog"

Next, on CentOS 7, if you have SELinux enabled, run the following commands to allow rsyslog
traffic based on the network socket type.

$ sudo semanage port -a -t syslogd_port_t -p udp 514
$ sudo semanage port -a -t syslogd_port_t -p tcp 514

If the system has a firewall enabled, you need to open port 514 to allow both UDP and TCP connections to the rsyslog server, by running:



----------- On CentOS -------------
$ sudo firewall-cmd --permanent --add-port=514/udp
$ sudo firewall-cmd --permanent --add-port=514/tcp
$ sudo firewall-cmd --reload
------------- On Ubuntu -------------
$ sudo ufw allow 514/udp
$ sudo ufw allow 514/tcp
$ sudo ufw reload

Installing and Configuring Rsyslog Client


Now on the client system, check if the rsyslog service is running or not with the following
command.

$ sudo systemctl status rsyslog

Most Linux distributions come with the rsyslog package preinstalled. In case it’s not installed, you
can install it using your Linux package manager tool as shown.

$ sudo yum update && sudo yum install rsyslog #CentOS 7
$ sudo apt update && sudo apt install rsyslog #Ubuntu 16.04, 18.04

Once rsyslog is installed, start the service, enable it to auto-start at boot, and check its status with the systemctl command.

$ sudo systemctl start rsyslog


$ sudo systemctl enable rsyslog
$ sudo systemctl status rsyslog

Once the rsyslog service is up and running, open the main configuration file where you will perform
changes to the default configuration.

$ sudo vim /etc/rsyslog.conf

To force the rsyslog daemon to act as a log client and forward all locally generated log messages to the remote rsyslog server, add the following forwarding rule at the end of the file.

*.* @@192.168.241.140:514
The above rule will send messages from all facilities and at all severity levels (the @@ prefix means TCP delivery; a single @ would use UDP). To send messages from a specific facility only, for example auth, use the following rule.

auth.* @@192.168.241.140:514

Save the changes and close the configuration file. To apply the above settings, restart the rsyslog
daemon.

$ sudo systemctl restart rsyslog

Monitor Remote Logging on the Rsyslog Server


The final step is to verify if the rsyslog is actually receiving and logging messages from the client,
under /var/log, in the form hostname/programname.log.

Run the ls command to get a long listing of the parent log directory and check whether there is a directory called ip-172-31-21-58 (or whatever your client machine's hostname is).

$ ls -l /var/log/



If the directory exists, check the log files inside it, by running.

$ sudo ls -l /var/log/ip-172-31-21-58/

Summary
Rsyslog is a high-performance log processing system, designed in a client/server architecture. We
hope you are able to install and configure Rsyslog as a central/network logging server and as a
client as demonstrated in this chapter.
You may also want to refer to relevant rsyslog manual pages for more help. Feel free to give us any
feedback or ask questions.



Chapter 20: Setting Up DHCP Server and Client
DHCP (short for Dynamic Host Configuration Protocol) is a client/server protocol that enables a
server to automatically assign an IP address and other related configuration parameters (such as the
subnet mask and default gateway) to a client on a network.

DHCP is important because it saves a system or network administrator from manually configuring
IP addresses for new computers added to the network, or for computers that are moved from one
subnet to another.

The IP address assigned by a DHCP server to a DHCP client is on a “lease”; the lease time normally
varies depending on how long a client computer is likely to require the connection, or on the DHCP
configuration.

In this chapter, we will explain how to configure a DHCP server on the CentOS and Ubuntu Linux
distributions to assign IP addresses automatically to client machines.

Installing DHCP Server


The DHCP server package is available in the official repositories of mainstream Linux distributions,
and installation is quite easy; simply run the appropriate command below.

# yum install dhcp #CentOS


$ sudo apt install isc-dhcp-server #Ubuntu

Once the installation is complete, configure the interface on which you want the DHCP daemon to
serve requests in the configuration file /etc/default/isc-dhcp-server or /etc/sysconfig/dhcpd.

# vim /etc/sysconfig/dhcpd #CentOS


$ sudo vim /etc/default/isc-dhcp-server #Ubuntu

For example, if you want the DHCPD daemon to listen on eth0, set it using the following directive.

DHCPDARGS="eth0"

Configuring DHCP Server


The main DHCP configuration file is located at /etc/dhcp/dhcpd.conf, which should define what to
do, how to do it, and all the network parameters to provide to the clients.

This file basically consists of a list of statements grouped into two broad categories:

• Global parameters: specify whether to carry out a task, how to carry it out, or what
network configuration parameters to provide to the DHCP client.
• Declarations: define the network topology, describe the clients, offer addresses to them,
or apply a group of parameters to a group of declarations.

------------ On CentOS ------------


# cp /usr/share/doc/dhcp-4.2.5/dhcpd.conf.example /etc/dhcp/dhcpd.conf
# vi /etc/dhcp/dhcpd.conf

------------ On Ubuntu ------------


$ sudo vim /etc/dhcp/dhcpd.conf

Start by defining the global parameters which are common to all supported networks, at the top of
the file. They will apply to all the declarations:

option domain-name "tecmint.lan";
option domain-name-servers ns1.tecmint.lan, ns2.tecmint.lan;
default-lease-time 3600;
max-lease-time 7200;
authoritative;

Next, you need to define a sub-network for an internal subnet, i.e. 192.168.1.0/24, as shown.

subnet 192.168.1.0 netmask 255.255.255.0 {
option routers 192.168.1.1;
option subnet-mask 255.255.255.0;
option domain-search "tecmint.lan";
option domain-name-servers 192.168.1.1;
range 192.168.1.10 192.168.1.100;
range 192.168.1.110 192.168.1.200;
}
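Each range must fall inside the declared subnet, otherwise dhcpd will refuse the configuration with a "bad range" error. As a purely illustrative sanity check (the in_subnet helper is hypothetical and hard-coded for this particular /24; a real tool such as ipcalc is preferable):

```shell
# Illustrative only: check whether an IPv4 address lies in 192.168.1.0/24
in_subnet() {
  case "$1" in
    192.168.1.*) echo "yes" ;;
    *)           echo "no"  ;;
  esac
}

in_subnet 192.168.1.50    # inside the declared subnet
in_subnet 192.168.10.50   # outside it
```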

Note that hosts which require special configuration options can be listed in host statements (see
the dhcpd.conf man page).
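For instance, a host statement can pin a fixed address to a specific machine; the hostname and MAC address below are hypothetical examples:

```
# Always hand this client the same address (values are examples only)
host server1 {
    hardware ethernet 00:16:3e:12:34:56;
    fixed-address 192.168.1.5;
}
```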
Now that you have configured your DHCP server daemon, start the service for the time being,
enable it to start automatically at the next system boot, and check whether it's up and running using
the following commands.

------------ On CentOS ------------
# systemctl start dhcpd
# systemctl enable dhcpd
# systemctl status dhcpd

------------ On Ubuntu ------------


$ sudo systemctl start isc-dhcp-server
$ sudo systemctl enable isc-dhcp-server
$ sudo systemctl status isc-dhcp-server

Next, permit requests to the DHCP daemon, which listens on port 67/UDP, through the firewall by running.

------------ On CentOS ------------


# firewall-cmd --zone=public --permanent --add-service=dhcp
# firewall-cmd --reload

------------ On Ubuntu ------------


$ sudo ufw allow 67/udp
$ sudo ufw reload

Configuring DHCP Clients


Finally, you need to test whether the DHCP server is working properly. Log on to a few client machines on the
network and configure them to automatically receive IP addresses from the server.

Modify the appropriate configuration file for the interface on which the clients will auto-receive IP
addresses.

DHCP Client Setup on CentOS

# vim /etc/sysconfig/network-scripts/ifcfg-eth0

Add the options below:

DEVICE=eth0
BOOTPROTO=dhcp
TYPE=Ethernet
ONBOOT=yes

# systemctl restart network

DHCP Client Setup on Ubuntu

$ sudo vi /etc/network/interfaces

Add these lines in it:

auto eth0
iface eth0 inet dhcp

$ sudo systemctl restart networking

On Ubuntu 18.04, networking is controlled by the Netplan program. You need to edit the
appropriate file under the directory /etc/netplan/, for example.

$ sudo vim /etc/netplan/01-netcfg.yaml

Then enable dhcp4 under the appropriate interface (for example ens0 under ethernets), and comment
out any static IP related configuration:

network:
  version: 2
  renderer: networkd
  ethernets:
    ens0:
      dhcp4: yes
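If the file previously contained a static configuration, the commented-out section might look like this (the addresses shown are hypothetical example values):

```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens0:
      dhcp4: yes
      # Former static settings, now disabled (example values):
      # addresses: [192.168.1.20/24]
      # gateway4: 192.168.1.1
      # nameservers:
      #   addresses: [192.168.1.1]
```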

Save the changes and run the following command to apply them.

$ sudo netplan apply

For more information, see the dhcpd and dhcpd.conf man pages.

$ man dhcpd
$ man dhcpd.conf

Summary
In this chapter, we have explained how to configure a DHCP server in CentOS and Ubuntu Linux
distributions.

Chapter 21: Setting Up Mail Server
Despite the many online communication methods available today, email remains a
practical way to deliver messages from one end of the world to another, or to a person sitting in the
office next to ours.
In this chapter we will explain how to configure your mail server and how to perform the following
tasks:

• Configure email aliases

• Configure an IMAP and IMAPS service

• Configure an SMTP service

• Restrict access to an SMTP server

Note that our setup will only cover a mail server for a local area network where the machines
belong to the same domain. Sending email messages to other domains requires a more complex
setup, including domain name resolution capabilities, which is out of the scope of the certifications.

Installing Mail Server


Postfix is a Mail Transport Agent (MTA), the application responsible for routing and delivering
email messages from a source to a destination mail server, whereas Dovecot is a widely used IMAP
and POP3 email server that fetches messages from the MTA and delivers them to the right user
mailbox.
Dovecot plugins for several relational database management systems are also available.

# yum update && yum install postfix dovecot [On CentOS]

# aptitude update && aptitude install postfix dovecot-imapd dovecot-pop3d [On Ubuntu]

Once installed, let’s start with a few definitions.

The Process of Sending and Receiving Email Messages
The following image illustrates the process of email transport, starting with the sender until the
message reaches the recipient's inbox:

To make this possible, several things happen behind the scenes. For an email message to be delivered
from a client application (such as Thunderbird, Outlook, or webmail services such as Gmail or
Yahoo! Mail) to the sender's mail server, from there to the destination server, and finally to its
intended recipient, an SMTP (Simple Mail Transfer Protocol) service must be in place on each server.

In order for these components to be able to “talk” to each other, they must
“speak” the same “language” (or protocol), namely SMTP as defined in the RFC
2821. Most likely, you will have to refer to that RFC while setting up your mail
server environment.

Other protocols that we need to consider are IMAP (Internet Message Access Protocol), which
allows managing email messages directly on the server without downloading them to the client’s
hard drive, and POP3 (Post Office Protocol), which allows downloading the messages and folders to
the user’s computer.

Our Testing Environment:


Our testing environment is as follows:

• Mail server: Ubuntu 16.04 [IP 192.168.0.15]

• Mail client: Ubuntu 18.04 [IP 192.168.0.103]

• Local domain: example.com.ar

• Aliases: sysadmin@example.com.ar is aliased to gacanepa@example.com.ar and
jdoe@example.com.ar

• On our client, we have set up elementary DNS resolution by adding the following line to the
/etc/hosts file:

192.168.0.15 example.com.ar mailserver

Adding Email Aliases


By default, a message sent to a specific user is delivered to that user only. However, if you
want to deliver it to a group of users as well, or to a different user, you can create a mail alias
or use one of the existing ones in /etc/postfix/aliases, following this syntax:

user1: user1, user2

Thus, emails sent to user1 will also be delivered to user2. Note that if you omit the word user1 after
the colon, as in:

user1: user2

the messages sent to user1 will only be sent to user2, and not to user1.

In the above example, user1 and user2 should already exist on the system.

In our specific case, we will use the following alias as explained before (add the following line in
/etc/postfix/aliases):

sysadmin: gacanepa, jdoe

and run:

# postalias /etc/postfix/aliases

to create or refresh the aliases lookup table. Thus, messages sent to sysadmin@example.com.ar will
be delivered to the inbox of the users listed above.

Configuring Postfix Mail Server - SMTP
The main configuration file for Postfix is /etc/postfix/main.cf. You only need to set up a few
parameters before being able to use the mail service.

However, you should become acquainted with the full configuration parameters (which can be
listed with man 5 postconf) to set up a secure and fully customized mail server.

Note that this chapter is only supposed to get you started in that process and does not represent a
comprehensive guide on email services with Linux.

1) myorigin specifies the domain that appears in messages sent from the server. You may see the
/etc/mailname file used with this parameter. Feel free to edit it if needed.

myorigin = /etc/mailname

If the value above is used, mails will be sent as user@debian.gabrielcanepa.com.ar, where user is
the user sending the message.
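Where present (e.g. on Debian-based systems), /etc/mailname is a plain text file containing a single line with the host's mail domain; for the example above its contents would be:

```
debian.gabrielcanepa.com.ar
```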

2) mydestination lists the domains for which this machine will deliver email messages locally, instead of
forwarding them to another machine (acting as a relay system). The default settings will suffice in our
case.

The /etc/postfix/transport file defines the relationship between domains and the next server to which
mail messages should be forwarded. In our case, since we will be delivering messages to our local
area network only (thus bypassing any external DNS resolution), the following configuration will
suffice:

example.com.ar local:
.example.com.ar local:

Next, we need to convert this plain text file to the .db format, which creates the lookup table that
Postfix will use to know what to do with incoming and outgoing mail:

# postmap /etc/postfix/transport

You will need to remember to recreate this table if you add more entries to the corresponding text
file.

3) mynetworks defines the authorized networks Postfix will forward messages from. The default
value, subnet, tells Postfix to forward mail from SMTP clients in the same IP subnetworks as the
local machine only.

mynetworks = subnet

4) The relay_domains variable specifies the destinations to which emails should be sent. We will
leave the default value untouched, which points to mydestination. Remember that we are setting up
a mail server for our LAN.

relay_domains = $mydestination

Note that you can use $mydestination instead of listing the actual contents.

5) The inet_interfaces variable defines which network interfaces the mail service should listen on.
The default, all, tells Postfix to use all network interfaces.

inet_interfaces = all

Finally, 6) mailbox_size_limit and message_size_limit will be used to set the size of each user’s
mailbox and the maximum allowed size of individual messages, respectively, in bytes.

mailbox_size_limit = 51200000
message_size_limit = 5120000

Restricting Access to SMTP Server


The Postfix SMTP server can apply certain restrictions to each client connection request. Not all
clients should be allowed to identify themselves to the mail server using the SMTP HELO command,
and certainly not all of them should be granted access to send or receive messages.
To implement these restrictions, we will use the following directives in the main.cf file. Though
they are self-explanatory, comments have been added for clarification purposes:

# Require that a remote SMTP client introduces itself with the HELO or
# EHLO command before sending the MAIL command or other commands that
# require EHLO negotiation
smtpd_helo_required = yes

# Permit the request when the client IP address matches any network or
# network address listed in $mynetworks;
# reject the request when the client HELO or EHLO command has a bad
# hostname syntax
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname

# Reject the request when Postfix does not represent the final destination
# for the sender address
smtpd_sender_restrictions = permit_mynetworks, reject_unknown_sender_domain

# Reject the request unless 1) Postfix is acting as mail forwarder or
# 2) is the final destination
smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination

The Postfix configuration parameters page may come in handy in order to further explore the
available options.

Configuring Dovecot
Right after installation, Dovecot supports the POP3 and IMAP protocols out of the box, along with
their secure versions, POP3S and IMAPS, respectively.

Add the following lines in /etc/dovecot/conf.d/10-mail.conf:

# %u represents the user account that logs in
# Mailboxes are in mbox format
mail_location = mbox:~/mail:INBOX=/var/mail/%u
# Directory owned by the mail group and set to group-writable
# (mode=0770, group=mail). You may need to change this setting if Postfix
# is running as a different user / group on your system
mail_privileged_group = mail

If you check your home directory, you will notice that a mail subdirectory has been created there.

Also, please note that the /var/mail/%u file is where the user’s mails are stored on most systems.

Add the following directive to /etc/dovecot/dovecot.conf (note that imap and pop3 imply imaps and
pop3s as well):

protocols = imap pop3

And make sure /etc/dovecot/conf.d/10-ssl.conf includes the following lines, otherwise add them (the leading < tells Dovecot to read the value from the given file):

ssl_cert = </etc/dovecot/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.pem

Now let’s restart Dovecot and verify that it listens on the ports related to imap, imaps, pop3, and
pop3s:

# netstat -npltu | grep dovecot

Configuring Mail Client for Sending and Receiving Emails
On our client computer, we will open Thunderbird and click on File → New → Existing mail
account. We will be prompted to enter the name of the account and the associated email address,
along with its password. When we click Continue, Thunderbird will then try to connect to the mail
server to verify settings:

Repeat the process above for the next account (gacanepa@example.com.ar) and the following two
inboxes should appear in Thunderbird’s left pane:

On our server, we will write an email message to sysadmin, which is aliased to jdoe and gacanepa:

The mail log (/var/log/mail.log) seems to indicate that the email that was sent to sysadmin was
relayed to jdoe@example.com.ar and gacanepa@example.com.ar, as can be seen in the following
image:

We can verify if the mail was delivered to our client, where the IMAP accounts were configured in
Thunderbird:

Finally, let’s try to send a message from jdoe@example.com.ar to gacanepa@example.com.ar:

In the exam you will be asked to work exclusively with command-line utilities. This means you will
not be able to install a desktop client application such as Thunderbird but will be required to use
mail instead. We have used Thunderbird in this chapter for illustrative purposes only.

Summary
In this chapter we have explained how to set up an IMAP mail server for your local area network
and how to restrict access to the SMTP server.

If you happen to run into an issue while implementing a similar setup in your testing environment,
you will want to check the online documentation of Postfix and Dovecot (especially the pages about
the main configuration files, /etc/postfix/main.cf and /etc/dovecot/dovecot.conf, respectively).

Chapter 22: Setting Up Squid HTTP Proxy Server
Web proxies have been around for quite some time now and have been used by millions of users
around the globe.

They have a wide range of purposes, the most popular being online anonymity, but there are other ways
you can take advantage of web proxies.

Here are some ideas:

• Online anonymity
• Improve online security
• Improve loading times
• Block malicious traffic
• Log your online activity
• To circumvent regional restrictions
• In some cases, reduce bandwidth usage

How Proxy Server Works


A proxy server is a computer that acts as an intermediary between a client and other servers
from which the client may request resources. A simple example: when a client makes an online
request (for example, wants to open a web page), it first connects to the proxy server.

The proxy server then checks its local disk cache; if the data is found there, it returns the data
to the client. If it is not cached, the proxy makes the request on the client’s behalf using its own IP
address (different from the client’s) and then returns the data to the client. The proxy server will also
try to cache the new data and use it for future requests made to the same server.

What is Squid Proxy


Squid is a web proxy used by a wide range of organizations. It is often used as a caching proxy,
improving response times and reducing bandwidth usage.

For the purpose of this chapter, we will be installing Squid on a CentOS 7 VPS and using it as an HTTP
proxy server.

Installing Squid Server


Before we start, you should know that Squid does not have any strict minimum requirements, but the
amount of RAM used may vary depending on the number of clients browsing the Internet through the proxy
server.

Squid is included in the base repository and thus the installation is simple and straightforward.
Before installing it, however, make sure your packages are up to date by running.

# yum -y update [On CentOS]


# apt -y update [On Ubuntu]

Once your packages are up to date, you can proceed to install Squid, then start it and enable it
on system startup using the following commands.

# yum -y install squid [On CentOS]


# apt -y install squid [On Ubuntu]
# systemctl start squid
# systemctl enable squid

At this point your Squid web proxy should already be running and you can verify the status of the
service with.

# systemctl status squid

Here are some important file locations you should be aware of:
• Squid configuration file: /etc/squid/squid.conf
• Squid Access log: /var/log/squid/access.log
• Squid Cache log: /var/log/squid/cache.log
A minimal squid.conf configuration file (with all comment lines removed) is only a few lines long, consisting of the default acl definitions, http_access rules, the http_port directive, and basic cache settings.

Configuring Squid as an HTTP Proxy
Here, we will show you how to configure squid as an HTTP proxy using only the client IP address
for authentication.

Add Squid ACLs


If you wish to allow an IP address to access the web through your new proxy server, you will need to
add a new acl (access control list) line in the configuration file.

# vim /etc/squid/squid.conf

The line you should add is:

acl localnet src XX.XX.XX.XX

Where XX.XX.XX.XX is the actual client IP address you wish to add. The line should be added at
the beginning of the file where the ACLs are defined. It is good practice to add a comment next to
the ACL describing who uses this IP address.

It is important to note that if Squid is located outside your local network, you should add the public
IP address of the client.

You will need to restart Squid so the new changes can take effect.

# systemctl restart squid

Open Ports in Squid Proxy


By default, only certain ports are allowed in the Squid configuration. If you wish to allow more,
define them in the configuration file as shown.

acl Safe_ports port XXX

Where XXX is the port number that you wish to allow. Again, it is good practice to add a
comment next to the acl describing what the port is going to be used for.
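For example, to allow a hypothetical application listening on port 8443:

```
acl Safe_ports port 8443        # hypothetical example: alternative HTTPS port
```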

For the changes to take effect, you will need to restart squid once more.

# systemctl restart squid

Squid Proxy Client Authentication


To allow users to authenticate before using the proxy, you need to enable basic HTTP authentication
in the configuration file, but before that you need to install the Apache utilities package using the
following command.

# yum -y install httpd-tools [On CentOS]


# apt -y install apache2-utils [On Ubuntu]

Now create a file called “passwd” that will later store the usernames for authentication. On CentOS,
Squid runs as the user squid (on Ubuntu, as proxy), so the file should be owned by that user.

# touch /etc/squid/passwd
# chown squid: /etc/squid/passwd

Now we will create a new user called “proxyclient” and set up its password.

# htpasswd /etc/squid/passwd proxyclient

Now, to configure the authentication, open the configuration file.

# vim /etc/squid/squid.conf
After the ports ACLs add the following lines:

auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd


auth_param basic children 5
auth_param basic realm Squid Basic Authentication
auth_param basic credentialsttl 2 hours
acl auth_users proxy_auth REQUIRED
http_access allow auth_users

Save the file and restart squid so that the new changes can take effect:

# systemctl restart squid

Block Websites on Squid Proxy


Finally, we will create one last ACL that will help us block unwanted websites. First, create the file
that will store the blacklisted sites.

# touch /etc/squid/blacklisted_sites.acl

You can add some domains you wish to block. For example:

.badsite1.com
.badsite2.com

The preceding dot tells Squid to block all references to those sites,
including www.badsite1.com, subsite.badsite1.com, etc.
Now open Squid’s configuration file /etc/squid/squid.conf and after the ports ACLs add the
following two lines:

acl bad_urls dstdomain "/etc/squid/blacklisted_sites.acl"


http_access deny bad_urls

Now save the file and restart Squid:

# systemctl restart squid

Configure Client to Use Squid Proxy


Now, to test whether your proxy server is working, open Firefox and go to Edit –>
Preferences –> Advanced –> Network –> Settings, select “Manual proxy configuration”, and
enter your proxy server's IP address and port to be used for all connections, as follows.

To make sure that you are surfing the web using your proxy server, you may
visit http://www.ipaddresslocation.org/; in the top right corner you should see your server's IP
address instead of your client's.

Verifying Client Accessing Internet


You can now verify that your local network client is accessing the Internet through your proxy as
follows:

1) In your client, open up a terminal and type

ip address show eth0 | grep -Ei '(inet.*eth0)'

That command will display the current IP address of your client (192.168.0.104 in the following
image).

2) In your client, use a web browser to open any given web site (www.tecmint.com in this case).

3) In the server, run

tail -f /var/log/squid/access.log

and you’ll get a live view of requests being served through Squid:

Restricting Access by Client
Now suppose you want to explicitly deny access to that particular client IP address, while yet
maintaining access for the rest of the local network.

1) Define a new ACL directive as follows (I’ve named it ubuntuOS but you can name it whatever
you want)

acl ubuntuOS src 192.168.0.104

2) Add the ACL directive to the localnet access list that is already in place, prefixing it with an
exclamation sign. This means: “Allow Internet access to clients matching the localnet ACL directive,
except the one matching the ubuntuOS directive”:

http_access allow localnet !ubuntuOS

3) Restart Squid to apply the changes. If we then try to browse to any site from that client, we
will find that access is denied:

Fine Tuning Squid Proxy


To restrict access to Squid by domain we will use the dstdomain keyword in an ACL directive, as
follows:

acl forbidden dstdomain "/etc/squid/forbidden_domains"

where forbidden_domains is a plain text file that contains the domains that we desire to deny access
to:

Finally, we must grant access to Squid for requests not matching the directive above:

http_access allow localnet !forbidden

Alternatively, we may want to allow access to those sites only during a certain time of the day (10:00
to 11:00 am) and only on Monday (M), Wednesday (W), and Friday (F).

acl someDays time MWF 10:00-11:00

http_access allow forbidden someDays

http_access deny forbidden

Otherwise, access to those domains will be blocked.

Restricting Access by User Authentication


Squid supports several authentication mechanisms (Basic, NTLM, Digest, SPNEGO, and OAuth) and
helpers (SQL database, LDAP, NIS, NCSA, to name a few). In this chapter we will use Basic
authentication with NCSA.

Add the following lines to your /etc/squid/squid.conf file (on CentOS 7, the NCSA helper will be
found at /usr/lib64/squid/basic_ncsa_auth).

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd

auth_param basic credentialsttl 30 minutes

auth_param basic casesensitive on

auth_param basic realm Squid proxy-caching web server for Tecmint's LFCE
series

acl ncsa proxy_auth REQUIRED

http_access allow ncsa

A few clarifications:

• We need to tell Squid which authentication helper program to use with the auth_param
directive by specifying the name of the program (most likely, /usr/lib/squid/ncsa_auth), plus
any command line options (/etc/squid/passwd in this case) if necessary.

• The /etc/squid/passwd file is created through htpasswd, a tool to manage basic
authentication through files. It will allow us to add a list of usernames (and their
corresponding passwords) that will be allowed to use Squid.

• credentialsttl 30 minutes will require entering your username and password every 30
minutes (you can specify this time interval with hours as well).

• casesensitive on indicates that usernames and passwords are case sensitive.

• realm represents the text of the authentication dialog that will be used to authenticate to
squid.

• Finally, access is granted only when proxy authentication (proxy_auth REQUIRED)
succeeds.

Run the following command to create the file and to add credentials for user gacanepa (omit the
-c flag if the file already exists):

htpasswd -c /etc/squid/passwd gacanepa

Open a web browser in the client machine and try to browse to any given site:

If authentication succeeds, access is granted to the requested resource. Otherwise, access will be
denied.

Using Cache to Speed Up Data Transfer
One of Squid’s distinguishing features is the possibility of caching resources requested from the
web to disk to speed up future requests of those objects either by the same client or others.

Add the following directives in your squid.conf file:

maximum_object_size 100 MB

cache_dir ufs /var/cache/squid 1000 16 256

refresh_pattern -i \.(mp4|iso)$ 2880 0% 2880

Where:

• ufs is the Squid storage format

• /var/cache/squid is the top-level directory where cache files will be stored. This directory must
exist and be writable by Squid (Squid will NOT create it for you).

• 1000 is the amount of disk space (in MB) to use under this directory.

• 16 is the number of 1st-level subdirectories, whereas 256 is the number of 2nd-level
subdirectories within /var/cache/squid.

• The maximum_object_size directive specifies the maximum size of allowed objects in the
cache.

• refresh_pattern tells Squid how to deal with specific file types (.mp4 and .iso in this case)
and for how long it should store the requested objects in the cache (2880 minutes = 2 days). The
first and second 2880 values are the lower and upper limits, respectively, on how long objects
without an explicit expiry time will be considered recent, and thus served from the
cache, whereas 0% is the percentage of an object's age (time since last modification) during which
an object without an explicit expiry time is considered recent.

Case study: downloading a .mp4 file from 2 different clients and testing the cache

First client (IP 192.168.0.104) downloads a 71 MB .mp4 file in 2 minutes and 52 seconds:

Second client (IP 192.168.0.17) downloads the same file in 1.4 seconds!

That is because the file was served from the Squid cache (indicated by TCP_HIT/200) in the second
case, as opposed to the first instance, when it was downloaded directly from the Internet
(represented by TCP_MISS/200).

The HIT and MISS keywords, along with the 200 HTTP response code, indicate that the file was
served successfully both times: from the cache in the second case and from the Internet in the first.
When a request cannot be served from the cache for some reason, Squid attempts to serve it from the Internet.
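A quick way to summarize cache behavior is to count result codes in the access log. The awk one-liner below is illustrative; the sample entries are made up to mimic the log format (on a real server, run it against /var/log/squid/access.log instead):

```shell
# Create a tiny sample in the Squid access.log layout; the 4th field holds
# the result code, e.g. TCP_MISS/200 or TCP_HIT/200
cat > /tmp/sample_access.log <<'EOF'
1456943424.000 172000 192.168.0.104 TCP_MISS/200 74448896 GET http://example.com/video.mp4 - HIER_DIRECT/93.184.216.34 video/mp4
1456943425.100 1400 192.168.0.17 TCP_HIT/200 74448896 GET http://example.com/video.mp4 - NONE/- video/mp4
EOF

# Count requests per result code
awk '{split($4, a, "/"); count[a[1]]++} END {for (k in count) print k, count[k]}' /tmp/sample_access.log
```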

Configuring Squid Proxy for CLI Browsers


There are two approaches to configure a text-based browser (such as elinks or w3m) to use a Squid
proxy server installed on localhost and listening on port 8000. Both methods rely on setting the
HTTP_PROXY environment variable, which is then picked up by the browser automatically:

System-wide (set HTTP_PROXY in /etc/profile):


echo "HTTP_PROXY='http://localhost:8000'" >> /etc/profile
echo "export HTTP_PROXY" >> /etc/profile

Per user (jdoe in the following example):


echo "HTTP_PROXY='http://localhost:8000'" >> /home/jdoe/.bash_profile
echo "export HTTP_PROXY" >> /home/jdoe/.bash_profile
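For a quick one-off test, you can also set the variable for the current shell session only. Note that many tools additionally (or only) check the lowercase http_proxy variable, so setting both is a safe bet:

```shell
# Set the proxy for this shell session only (URL taken from the example above)
export HTTP_PROXY="http://localhost:8000"
export http_proxy="$HTTP_PROXY"   # some tools only honor the lowercase form

# Confirm the value is set
echo "$HTTP_PROXY"   # prints http://localhost:8000
```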

To verify, you can check the Squid logs (typically /var/log/squid/access.log, or the path given by the
access_log directive in /etc/squid/squid.conf).

Then test:

w3m gacanepa.github.io

and you should see the proxy events recorded in the logs:

1456943424.000    701 ::1 TCP_MISS/200 5468 GET http://gacanepa.github.io/ - HIER_DIRECT/23.235.46.133 text/html
1456943424.194    187 ::1 TCP_MISS/200 4953 GET http://gacanepa.github.io/public/css/hyde.css - HIER_DIRECT/23.235.46.133 text/css
1456943424.326    320 ::1 TCP_MISS/200 1505 GET http://fonts.googleapis.com/css? - HIER_DIRECT/64.233.186.95 text/css
1456943424.385    187 ::1 TCP_MISS/200 3670 GET http://gacanepa.github.io/public/css/syntax.css - HIER_DIRECT/23.235.46.133 text/css
1456943424.695    365 ::1 TCP_MISS/200 7052 GET http://gacanepa.github.io/public/css/poole.css - HIER_DIRECT/23.235.46.133 text/css

Note: This assumes that traffic through port 8000 is allowed in your firewall.

Otherwise, you can allow access as follows:

In Firewalld:

firewall-cmd --add-port=8000/tcp
firewall-cmd --add-port=8000/tcp --permanent

In Iptables:

iptables --append INPUT --protocol tcp --destination-port 8000 --jump ACCEPT


iptables-save > /etc/sysconfig/iptables

Summary
In this chapter we have discussed how to set up a Squid web caching proxy. You can use the proxy
server to filter contents using some chosen criteria, and also to reduce latency (since identical
incoming requests are served from the cache, which is closer to the client than the web server that is
actually serving the content, resulting in faster data transfers) and network traffic as well (reducing
the amount of used bandwidth, which saves you money if you’re paying for traffic).

You may want to refer to the Squid web site for further documentation (make sure to also check the
wiki).

Chapter 23: Setting Up SquidGuard for Squid
Proxy
In this chapter we will explain how to use squidGuard, a filter, redirector and access controller
plugin for squid. Let’s start our discussion by highlighting what squidGuard can and cannot do:

squidGuard can be used to:

• limit the allowed web access for some users to a list of accepted/well known web servers
and/or URLs only, while denying access to other blacklisted web servers and/or URLs.

• block access to sites (by IP address or domain name) matching a list of regular expressions
or words for some users.

• require the use of domain names/prohibit the use of IP address in URLs.

• redirect blocked URLs to error or info pages.

• use distinct access rules based on time of day, day of the week, date etc.

• implement different rules for distinct user groups.

However, neither squidGuard nor Squid can be used to:

• analyze text inside documents and act on the result.

• detect or block embedded scripting languages like JavaScript, Python, or VBScript inside
HTML code.

Blacklists – The Basics


Blacklists are an essential part of squidGuard. Basically, they are plain text files that will allow you
to implement content filters based on specific keywords. There are both freely available and
commercial blacklists, and you can find the download links in the project’s website.

In this chapter I will show you how to integrate the blacklists provided by Shalla Secure Services
(http://www.shallalist.de/) to your squidGuard installation.

These blacklists are free for personal / non-commercial use and are updated on a daily basis. As of
this writing, they include over 1,700,000 entries.

For our convenience, let’s create a directory to download the blacklist package:

# mkdir /opt/3rdparty
# cd /opt/3rdparty
# wget http://www.shallalist.de/Downloads/shallalist.tar.gz

The latest download link is always available as highlighted below:

After untarring the newly downloaded file, we will browse to the blacklist (BL) folder:

# tar xzf shallalist.tar.gz


# cd BL
# ls

You can think of the directories shown in the output of ls as blacklist categories, and their
corresponding (optional) subdirectories as subcategories, descending all the way down to specific
URLs and domains, which are listed in the files urls and domains, respectively.
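The layout can be sketched as follows; the category and its entries below are placeholder examples, not actual Shalla data:

```shell
# Mock-up of the blacklist layout: each category directory contains a
# "domains" file and a "urls" file (placeholder entries shown)
mkdir -p BL/chat
printf 'spin.de\nexample-chat.com\n' > BL/chat/domains
printf 'example.com/webchat\n' > BL/chat/urls
# Count how many domains the category would block
wc -l < BL/chat/domains
```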

Refer to the below image for further details:

Installing Blacklists
Installation of the whole blacklist package, or of individual categories, is performed by copying the
BL directory, or one of its subdirectories, respectively, to the /var/squidGuard/db directory. Of
course you could have downloaded the blacklist tarball to this directory in the first place, but the
approach explained earlier gives you more control over what categories should be blocked (or not)
at a specific time.

Next, I will show you how to install the anonvpn, hacking, and chat blacklists and how to configure
squidGuard to use them.

Please note that this chapter was written using CentOS 7. If you are using another distribution, the
squidGuard database should be located in a similar directory under /var.

Step 1: Copy recursively the anonvpn, hacking, and chat directories from /opt/3rdparty/BL to
/var/squidGuard/db

# cp -a /opt/3rdparty/BL/anonvpn /var/squidGuard/db
# cp -a /opt/3rdparty/BL/hacking /var/squidGuard/db
# cp -a /opt/3rdparty/BL/chat /var/squidGuard/db

Step 2: Use the domains and urls files to create squidGuard’s database files. Please note that the
following command will work for creating .db files for all the installed blacklists - even when a
certain category has 2 or more subcategories.

# squidGuard -d -C all

Step 3: Change the ownership of the /var/squidGuard/db/ directory and its contents to the proxy
user so that Squid can read the database files

# chown -R proxy:proxy /var/squidGuard/db/

Step 4: Configure Squid to use squidGuard

We will use Squid’s url_rewrite_program directive in /etc/squid/squid.conf to tell Squid to use
squidGuard as a URL rewriter / redirector. Add the following line to squid.conf, making sure that
/usr/bin/squidGuard is the right absolute path in your case:

# which squidGuard
# echo "url_rewrite_program $(which squidGuard)" >> /etc/squid/squid.conf
# tail -n 1 /etc/squid/squid.conf

Step 5: Add the necessary directives to squidGuard’s configuration file (located at
/etc/squidguard/squidGuard.conf)

Please refer to the screenshot after the following code for further clarification

src localnet {
        ip      192.168.0.0/24
}

dest anonvpn {
        domainlist      anonvpn/domains
        urllist         anonvpn/urls
}

dest hacking {
        domainlist      hacking/domains
        urllist         hacking/urls
}

dest chat {
        domainlist      chat/domains
        urllist         chat/urls
}

acl {
        localnet {
                pass     !anonvpn !hacking !chat !in-addr all
                redirect http://www.lds.org
        }

        default {
                pass     local none
        }
}

Step 6: Restart Squid and test

service squid restart # sysvinit / Upstart-based systems

systemctl restart squid.service # systemd-based systems

Open a web browser in a client within local network and browse to a site found in any of the
blacklist files (domains or urls - we will use http://spin.de/chat in the following example) and you
will be redirected to another URL, www.lds.org in this case.

You can verify that the request was made to the proxy server but was denied (301 HTTP response -
Moved Permanently) and was redirected to www.lds.org instead:

Removing Restrictions
If for some reason you need to enable a category that has been blocked in the past, remove the
corresponding directory from /var/squidGuard/db and comment (or delete) the related acl in the
squidguard.conf file.

For example, if you want to enable the domains and urls blacklisted by the anonvpn category, you
would need to perform the following steps:

rm -rf /var/squidGuard/db/anonvpn

And edit the squidguard.conf file as follows:

Please note that the parts highlighted in yellow under BEFORE have been deleted in AFTER. Then
rebuild the database files and tell Squid to re-read its configuration:

# squidGuard -d -C all
# squid -k reconfigure

Whitelisting Specific Domains and URLs
On occasions you may want to allow certain URLs or domains, but not an entire blacklisted
directory. In that case, you should create a directory named myWhiteLists (or whatever name you
choose) and insert the desired URLs and domains under /var/squidGuard/db/myWhiteLists in files
named urls and domains, respectively.
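The steps above can be sketched as follows. A relative base directory is used here so the example is self-contained; on the real system the base directory would be /var/squidGuard/db as described above:

```shell
# Sketch: create a whitelist category with its domains and urls files
# (placeholder entries; use a relative path for illustration)
db=./squidGuard-db
mkdir -p "$db/myWhiteLists"
printf 'goodsite.example.com\n' > "$db/myWhiteLists/domains"
printf 'goodsite.example.com/allowed/page\n' > "$db/myWhiteLists/urls"
ls "$db/myWhiteLists"
```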

Then, initialize the new content rules as before,

squidGuard -C all

and modify the squidguard.conf as follows:

As before, the parts highlighted in yellow indicate the changes that need to be added. Note that the
myWhiteLists string needs to be first in the row that starts with pass.

Finally, remember to restart Squid in order to apply changes.

Summary
After following the steps outlined in this tutorial you should have a powerful content filter and URL
redirector working hand in hand with your Squid proxy. If you experience any issues during your
installation / configuration process or have any questions or comments, you may want to refer to
squidGuard’s web documentation.

Chapter 24: Implement and Configure a PXE Boot
Server on CentOS 7
PXE Server – Preboot eXecution Environment – instructs a client computer to boot, run, or install an
operating system directly from a network interface, eliminating the need to burn a CD/DVD or use
a physical medium. It can also ease the job of installing Linux distributions on multiple machines
on your network infrastructure at the same time.

This chapter explains how you can install and configure a PXE Server on CentOS 7 x86-64 with
mirrored local installation repositories (sources provided by the CentOS 7 DVD ISO image), with the
help of DNSMASQ, which provides the DNS and DHCP services; the Syslinux package, which
provides the bootloaders for network booting; TFTP-Server, which makes the bootable images
available for download over the network using the Trivial File Transfer Protocol (TFTP); and
VSFTPD, which will host the locally mounted, mirrored DVD image and act as an official CentOS 7
mirror installation repository from which the installer will extract its required packages.

Install and Configure DNSMASQ Server


It almost goes without saying that one of your network interfaces (in case your server has more
than one NIC) must be configured with a static IP address from the same IP range as the network
segment that will provide PXE services.
So, after you have configured your static IP address, updated your system, and performed other
initial settings, use the following command to install the DNSMASQ daemon.

# yum install dnsmasq

DNSMASQ’s main default configuration file, located in the /etc directory, is self-explanatory but
tends to be quite difficult to edit, due to its highly commented explanations.
First make sure you back up this file in case you need to review it later and, then, create a new blank
configuration file using your favorite text editor by issuing the following commands.

# mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
# nano /etc/dnsmasq.conf

Now, copy and paste the following configurations on dnsmasq.conf file and assure that you change
the below explained statements to match your network settings accordingly.

interface=eno16777736,lo
#bind-interfaces
domain=centos7.lan
# DHCP range-leases
dhcp-range=eno16777736,192.168.1.3,192.168.1.253,255.255.255.0,1h
# PXE
dhcp-boot=pxelinux.0,pxeserver,192.168.1.20
# Gateway
dhcp-option=3,192.168.1.1
# DNS
dhcp-option=6,192.168.1.1,8.8.8.8
server=8.8.4.4
# Broadcast Address
dhcp-option=28,10.0.0.255
# NTP Server
dhcp-option=42,0.0.0.0

pxe-prompt="Press F8 for menu.", 60


pxe-service=x86PC, "Install CentOS 7 from network server 192.168.1.20", pxelinux
enable-tftp
tftp-root=/var/lib/tftpboot

The statements that you need to change are as follows:

• interface – Interfaces that the server should listen and provide services.

• bind-interfaces – Uncomment to bind only on this interface.

• domain – Replace it with your domain name.

• dhcp-range – Replace it with IP range defined by your network mask on this segment.

• dhcp-boot – Replace the IP statement with your interface IP Address.

• dhcp-option=3,192.168.1.1 – Replace the IP Address with your network segment Gateway.

• dhcp-option=6,192.168.1.1 – Replace the IP Address with your DNS Server IP – several DNS
IPs can be defined.

• server=8.8.4.4 – Put your DNS forwarders IPs Addresses.

• dhcp-option=28,10.0.0.255 – Replace the IP Address with your network broadcast address –
optional.

• dhcp-option=42,0.0.0.0 – Put your network time servers – optional (the 0.0.0.0 address is for
self-reference).

• pxe-prompt – Leave it as default – it means press the F8 key to enter the menu, with a
60-second wait time.

• pxe-service – Use x86PC for 32-bit/64-bit architectures and enter a menu description
prompt between quotes. Other value types can be: PC98, IA64_EFI, Alpha, Arc_x86,
Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI.

• enable-tftp – Enables the built-in TFTP server.

• tftp-root – Use /var/lib/tftpboot – the location for all netbooting files.

For other advanced options concerning the configuration file, feel free to read the dnsmasq manual.

Installing SysLinux Bootloaders


After you have edited and saved the DNSMASQ main configuration file, go ahead and install the
Syslinux PXE bootloader package by issuing the following command.

# yum install syslinux

The PXE bootloader files reside in the /usr/share/syslinux absolute system path, so you can check by
listing this path’s content. This step is optional, but you might need to be aware of this path because
on the next step we will copy all of its content to the TFTP Server path.

# ls /usr/share/syslinux

Installing TFTP-Server
Now, let’s move to the next step and install TFTP-Server and, then, copy all the bootloader files
provided by the Syslinux package from the above listed location to the /var/lib/tftpboot path by
issuing the following commands.

# yum install tftp-server


# cp -r /usr/share/syslinux/* /var/lib/tftpboot

Setting Up PXE Configuration


Typically the PXE Server reads its configuration from a group of specific files (GUID files – first,
MAC files – next, Default file – last) hosted in a folder called pxelinux.cfg, which must be located
in the directory specified by the tftp-root statement of the DNSMASQ main configuration file.
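As a sketch of the MAC-file naming convention (the MAC address below is a made-up example), PXELINUX derives the per-client file name by prefixing the ARP hardware type 01 and writing the address in lowercase with colons replaced by dashes:

```shell
# Derive the pxelinux.cfg file name PXELINUX looks for, given a client MAC
# (example MAC address; the "01-" prefix is the ARP hardware type)
mac="00:1E:2A:3B:4C:5D"
echo "01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"
# → 01-00-1e-2a-3b-4c-5d
```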

Create the required directory pxelinux.cfg and populate it with a default file by issuing the
following commands.

# mkdir /var/lib/tftpboot/pxelinux.cfg
# touch /var/lib/tftpboot/pxelinux.cfg/default

Now it’s time to edit the PXE Server configuration file with valid Linux distribution installation
options. Also note that all paths used in this file must be relative to the /var/lib/tftpboot directory.
Below is an example configuration file that you can use; just modify the installation
images (kernel and initrd files), protocols (FTP, HTTP, HTTPS, NFS) and IPs to reflect your
network installation source repositories and paths accordingly.

# nano /var/lib/tftpboot/pxelinux.cfg/default

Add the following whole excerpt to the file.

default menu.c32
prompt 0
timeout 300
ONTIMEOUT local

menu title ########## PXE Boot Menu ##########

label 1
menu label ^1) Install CentOS 7 x64 with Local Repo
kernel centos7/vmlinuz
append initrd=centos7/initrd.img method=ftp://192.168.1.20/pub devfs=nomount

label 2
menu label ^2) Install CentOS 7 x64 with http://mirror.centos.org Repo
kernel centos7/vmlinuz
append initrd=centos7/initrd.img method=http://mirror.centos.org/centos/7/os/x86_64/
devfs=nomount ip=dhcp

label 3
menu label ^3) Install CentOS 7 x64 with Local Repo using VNC
kernel centos7/vmlinuz
append initrd=centos7/initrd.img method=ftp://192.168.1.20/pub devfs=nomount
inst.vnc inst.vncpassword=password

label 4
menu label ^4) Boot from local drive

As you can see, the CentOS 7 boot images (kernel and initrd) reside in a directory named centos7
relative to /var/lib/tftpboot (on an absolute system path this would mean /var/lib/tftpboot/centos7),
and the installer repositories can be reached by using the FTP protocol on the 192.168.1.20/pub
network location – in this case the repos are hosted locally because the IP address is the same as the
PXE server’s address.
Also, menu label 3 specifies that the client installation should be done from a remote location via
VNC (here, replace the VNC password with a strong password) in case you install on a headless
client, and menu label 2 specifies as installation source a CentOS 7 official Internet mirror (this case
requires an Internet connection available on the client through DHCP and NAT).

Important: As you see in the above configuration, we’ve used CentOS 7 for demonstration purposes,
but you can also define RHEL 7 images. All the following instructions and configurations are based
on CentOS 7 only, so be careful when choosing your distribution.

Adding CentOS 7 Boot Images to PXE


For this step the CentOS kernel and initrd files are required. To get those files you need the CentOS 7
DVD ISO image. So, go ahead and download the CentOS DVD image, put it in your DVD drive and
mount the image to the /mnt system path by issuing the below command.
The reason for using the DVD and not a Minimal CD image is the fact that later this DVD content
will be used to create the local installer repositories for the FTP sources.

# mount -o loop /dev/cdrom /mnt


# ls /mnt

If your machine has no DVD drive you can also download CentOS 7 DVD ISO locally
using wget or curl utilities from a CentOS mirror and mount it.

# wget http://mirrors.xservers.ro/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso
# mount -o loop /path/to/centos-dvd.iso /mnt

After the DVD content is made available, create the centos7 directory and copy CentOS 7 bootable
kernel and initrd images from the DVD mounted location to centos7 folder structure.

# mkdir /var/lib/tftpboot/centos7
# cp /mnt/images/pxeboot/vmlinuz /var/lib/tftpboot/centos7
# cp /mnt/images/pxeboot/initrd.img /var/lib/tftpboot/centos7

The reason for using this approach is that later you can create new separate directories under the
/var/lib/tftpboot path and add other Linux distributions to the PXE menu without messing up the
entire directory structure.

Creating CentOS 7 Local Mirror Installation Source


Although you can set up installation source mirrors via a variety of protocols such as HTTP,
HTTPS or NFS, for this guide I have chosen the FTP protocol, because it is very reliable and easy to
set up with the help of the vsftpd server.

Next, install the vsftpd daemon, copy all the DVD mounted content to the vsftpd default server path
(/var/ftp/pub) – this can take a while depending on your system resources – and append readable
permissions to this path by issuing the following commands.

# yum install vsftpd


# cp -r /mnt/* /var/ftp/pub/
# chmod -R 755 /var/ftp/pub

Now that the PXE server configuration is finally finished, start the DNSMASQ and VSFTPD servers,
verify their status, and enable them system-wide so they automatically start after every system
reboot, by running the below commands.

# systemctl start dnsmasq


# systemctl status dnsmasq
# systemctl start vsftpd
# systemctl status vsftpd
# systemctl enable dnsmasq
# systemctl enable vsftpd

To get a list of all ports that need to be open on your firewall in order for client machines to reach
and boot from the PXE server, run the netstat command and add CentOS 7 Firewalld rules
accordingly for the dnsmasq and vsftpd listening ports.

# netstat -tulpn
# firewall-cmd --add-service=ftp --permanent ## Port 21
# firewall-cmd --add-service=dns --permanent ## Port 53
# firewall-cmd --add-service=dhcp --permanent ## Port 67
# firewall-cmd --add-port=69/udp --permanent ## Port for TFTP
# firewall-cmd --add-port=4011/udp --permanent ## Port for ProxyDHCP
# firewall-cmd --reload ## Apply rules

Testing FTP Installation Source
To test the FTP installation source network path, open a browser locally (lynx should do it) or on a
different computer, and type the IP address of your PXE server with the FTP protocol followed by
the /pub network location in the URL field; the result should be as presented in the below
screenshot.

ftp://192.168.1.20/pub

To debug the PXE server for possible misconfigurations, or to view other information and
diagnostics live, run the following command.

# tail -f /var/log/messages

Finally, the last required step that you need to do is to unmount CentOS 7 DVD and remove the
physical medium.

# umount /mnt

Configure Clients to Boot from PXE Network


Now your clients can boot and install CentOS 7 on their machines by configuring Network Boot as
the primary boot device in their system’s BIOS, or by hitting a specified key during the BIOS POST
operations (as specified in the motherboard manual) in order to choose network booting.
After the first PXE prompt appears, press the F8 key to enter the PXE presentation and then hit
Enter to proceed to the PXE menu.

Once you have reached the PXE menu, choose your CentOS 7 installation type, hit the Enter key, and
continue with the installation procedure the same way as you would install from a local boot
device.
Please note that using variant 2 from this menu requires an active Internet connection on the
target client. In the below screenshots you can see an example of a client remote installation via
VNC.

Summary
That’s all for setting up a minimal PXE Server on CentOS 7. If you want to know more about PXE
server configuration, such as how to set up automated installations of CentOS 7 using Kickstart files
or how to add other Linux distributions (Ubuntu Server, Debian) to the PXE menu, check out
Tecmint.com.

Chapter 25: Implement and Configure a PXE Boot
Server on Ubuntu
PXE or Preboot eXecution Environment is a server-client mechanism which instructs a client
machine to boot from the network.

In this chapter we’ll show how to install Ubuntu Server via a PXE server with local HTTP sources
mirrored from Ubuntu server ISO image via Apache web server. The PXE server used in this
tutorial is Dnsmasq Server.

Install and Configure DNSMASQ Server


In order to set up the PXE server, first log in with the root account or an account with root
privileges and install the Dnsmasq package on Ubuntu by issuing the following command.

# apt install dnsmasq

Next, back up the dnsmasq main configuration file, then start editing the file with the following
configurations.

# mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
# nano /etc/dnsmasq.conf

Add the following configuration to dnsmasq.conf file.

interface=ens33,lo
bind-interfaces
domain=mypxe.local

dhcp-range=ens33,192.168.1.230,192.168.1.253,255.255.255.0,1h
dhcp-option=3,192.168.1.1
dhcp-option=6,192.168.1.1
dhcp-option=6,8.8.8.8
server=8.8.4.4
dhcp-option=28,10.0.0.255
dhcp-option=42,0.0.0.0

dhcp-boot=pxelinux.0,pxeserver,192.168.1.14

pxe-prompt="Press F8 for menu.", 2


pxe-service=x86PC, "Install Ubuntu 16.04 from network server 192.168.1.14", pxelinux
enable-tftp
tftp-root=/srv/tftp

Also, after you’ve finished editing the dnsmasq configuration file, create the directory for the PXE
netboot files by issuing the below command and restart the dnsmasq daemon to apply changes. Check
the dnsmasq service status to see if it has started.

# mkdir /srv/tftp
# systemctl restart dnsmasq.service
# systemctl status dnsmasq.service

Install TFTP Netboot Files


In the next step, grab the latest version of the Ubuntu Server ISO image for the 64-bit architecture by
issuing the following command.

# wget http://releases.ubuntu.com/16.04/ubuntu-16.04.3-server-amd64.iso

After Ubuntu server ISO has been downloaded, mount the image in /mnt directory and list the
mounted directory content by running the below commands.

# mount -o loop ubuntu-16.04.3-server-amd64.iso /mnt/


# ls /mnt/

Next, copy the netboot files from Ubuntu mounted tree to tftp system path by issuing the below
command. Also, list tftp system path to see the copied files.

# cp -rf /mnt/install/netboot/* /srv/tftp/


# ls /srv/tftp/

Prepare Local Installation Source Files


The local network installation sources for Ubuntu server will be provided via HTTP protocol. First,
install, start and enable Apache web server by issuing the following commands.

# apt install apache2


# systemctl start apache2
# systemctl enable apache2

Then, copy the content of the mounted Ubuntu DVD to an ubuntu/ directory under the Apache web
root by executing the below commands (the preseed and mirror URLs used later in this chapter
expect the sources under /ubuntu). List the directory content to check if the Ubuntu ISO mounted
tree has been completely copied.

# mkdir /var/www/html/ubuntu
# cp -rf /mnt/* /var/www/html/ubuntu/
# ls /var/www/html/ubuntu/

Next, open HTTP port in firewall and navigate to your machine IP address via a browser
(http://192.168.1.14/ubuntu) in order to test if you can reach sources via HTTP protocol.

# ufw allow http

Setup PXE Server Configuration File
In order to be able to pivot the rootfs via PXE and local sources, Ubuntu needs to be instructed via a
preseed file. Create the following local-sources.seed file in your web server document root path
with the following content.

# nano /var/www/html/ubuntu/preseed/local-sources.seed

Add following line to local-sources.seed file.

d-i live-installer/net-image string http://192.168.1.14/ubuntu/install/filesystem.squashfs

Here, make sure you replace the IP address accordingly. It should be the IP address where web
resources are located. In this guide the web sources, the PXE server and TFTP server are hosted on
the same system. In a crowded network you might want to run PXE, TFTP and web services on
separate machines in order to improve PXE network speed.

A PXE server reads and executes configuration files located in the pxelinux.cfg directory under the
TFTP root, in this order: GUID files first, MAC files next, and the default file last.
The pxelinux.cfg directory is already created and populated with the required PXE configuration
files, because we earlier copied the netboot files from the Ubuntu mounted ISO image.
In order to add the above preseed statement file to Ubuntu installation label in PXE configuration
file, open the following file for editing by issuing the below command.

# nano /srv/tftp/ubuntu-installer/amd64/boot-screens/txt.cfg

In Ubuntu PXE txt.cfg configuration file replace the following line as illustrated in the below
excerpt.

default install
label install
menu label ^Install Ubuntu 16.04 with Local Sources
menu default
kernel ubuntu-installer/amd64/linux
append auto=true url=http://192.168.1.14/ubuntu/preseed/local-sources.seed
vga=788 initrd=ubuntu-installer/amd64/initrd.gz --- quiet
label cli
menu label ^Command-line install
kernel ubuntu-installer/amd64/linux
append tasks=standard pkgsel/language-pack-patterns= pkgsel/
install-language-support=false vga=788 initrd=ubuntu-installer/amd64/initrd.gz --- quiet

In case you want to add the preseed url statement to Ubuntu Rescue menu, open the below file and
make sure you update the content as illustrated in the below example.

# nano /srv/tftp/ubuntu-installer/amd64/boot-screens/rqtxt.cfg

Add the following configuration to the rqtxt.cfg file.

label rescue
menu label ^Rescue mode
kernel ubuntu-installer/amd64/linux
append auto=true url=http://192.168.1.14/ubuntu/preseed/local-sources.seed
vga=788 initrd=ubuntu-installer/amd64/initrd.gz rescue/enable=true --- quiet

The important line you should update is url=http://192.168.1.14/ubuntu/preseed/local-sources.seed,
which specifies the URL address where the preseed file is located on your network.
Finally, open Ubuntu pxe menu.cfg file and comment the first three lines in order to expand the
PXE boot screen as illustrated in the below screenshot.

# nano /srv/tftp/ubuntu-installer/amd64/boot-screens/menu.cfg

Comment these three following lines.

#menu hshift 13
#menu width 49
#menu margin 8
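If you prefer to do this non-interactively, a sed one-liner can comment those three lines. A sample file stands in for menu.cfg here; on the real system the target would be /srv/tftp/ubuntu-installer/amd64/boot-screens/menu.cfg:

```shell
# Build a sample menu.cfg (stand-in for the real boot-screens/menu.cfg)
cat > menu.cfg.sample <<'EOF'
menu hshift 13
menu width 49
menu margin 8
menu title Installer boot menu
EOF
# Prefix the three geometry lines with '#' in place, leaving other lines intact
sed -i -E 's/^menu (hshift|width|margin)/#&/' menu.cfg.sample
cat menu.cfg.sample
```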

Now run netstat command with root privileges to identify dnsmasq, tftp and web open ports in
listening state on your server as illustrated in the below excerpt.

# netstat -tulpn

After you’ve identified all required ports, issue the below commands to open the ports in ufw
firewall.

# ufw allow 53/tcp


# ufw allow 53/udp
# ufw allow 67/udp
# ufw allow 69/udp
# ufw allow 4011/udp

Install Ubuntu with Local Sources via PXE


To install Ubuntu server via PXE and use the local network installation sources, reboot your
machine client, instruct the BIOS to boot from network and at the first PXE menu screen choose the
first option as illustrated in the below images.

The installation procedure should be performed as usual. When the installer reaches the Ubuntu
archive mirror country setup, use the up keyboard arrow to move to the first option, which says:
enter information manually.

Press the [Enter] key to update this option, delete the mirror string, add the IP address
(http://192.168.1.14) of the web server hosting the mirror sources, and press Enter to continue, as
illustrated in the below image.

On the next screen, add your mirror archive directory (/ubuntu) as shown below and press the Enter
key to continue with the installation process as usual.

In case you want to see information about what packages are downloaded from your network local
mirror, press [CTRL+ALT+F2] keys in order to change machine virtual console and issue the
following command.

# tail -f /var/log/syslog

After the installation of the Ubuntu server finishes, log in to the newly installed system and run the
following command with root privileges in order to switch the package repositories from the local
network sources to the official Ubuntu mirrors.
The mirrors need to be changed so that the system can be updated from the Internet repositories.

$ sudo sed -i.bak 's/192.168.1.14/archive.ubuntu.com/g' /etc/apt/sources.list
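If you want to see the effect of the substitution before touching the real file, you can try it on a throwaway copy first (the deb line below is a fabricated sample; the real target is /etc/apt/sources.list):

```shell
# Create a one-line sample sources.list pointing at the local mirror
cat > sources.list.sample <<'EOF'
deb http://192.168.1.14/ubuntu xenial main restricted
EOF
# Replace the local mirror IP with the official mirror; -i.bak keeps a backup
sed -i.bak 's/192.168.1.14/archive.ubuntu.com/g' sources.list.sample
cat sources.list.sample
# → deb http://archive.ubuntu.com/ubuntu xenial main restricted
```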

Make sure you replace the IP address with the IP address of your own local web sources.

Summary
That’s all! You can now update your Ubuntu server system and install all required software.
Installing Ubuntu via PXE and a local network source mirror can improve the installation speed and
can save internet bandwidth and costs in case of deploying a large number of servers in a short
period of time at your premises.

Chapter 26: Setting Up a Caching DNS Server
Imagine what it would be like if we had to remember the IP addresses of all the websites that we
use daily. Even if we had a prodigious memory, the process to browse to a website would be
ridiculously slow and time-consuming.

And what about if we needed to visit multiple websites or use several applications that reside in the
same machine or virtual host? That would be one of the worst headaches I can think of - not to
mention the possibility that the IP address associated with a website or application can change
without prior notice. Just the very thought of it would be enough reason to give up using the Internet
after a while.

That’s precisely what a world without Domain Name System (also known as DNS) would be.
Fortunately, this service solves all of the issues mentioned above - even if the relationship between
an IP address and a name changes.

For that reason, in this chapter we will learn how to configure and use a caching DNS server, a
service that will allow us to translate domain names into IP addresses and vice versa.

Introducing Name Resolution


For small networks that are not subject to frequent changes, the /etc/hosts file can be used as a
rudimentary method of domain name to IP address resolution. With a very simple syntax, this file
allows us to associate a name (and / or an alias) with an IP address as follows:

[IP address] [name] [alias(es)]

For example,

192.168.0.1 gateway gateway.mydomain.com


192.168.0.2 web web.mydomain.com

Thus, you can reach a machine by its name, alias, or IP address.
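As a quick illustration of the hosts-file format, the sketch below looks up the IP for a given name or alias in a hosts-style file; it uses a temporary file so as not to touch the real /etc/hosts (the addresses and names are the sample ones from above):

```shell
# Build a sample hosts-style file: IP address, canonical name, alias
hostsfile=$(mktemp)
cat > "$hostsfile" <<'EOF'
192.168.0.1 gateway gateway.mydomain.com
192.168.0.2 web web.mydomain.com
EOF

# Scan fields 2..NF of each line for an exact name match, print the IP
awk -v name=web '{for (i = 2; i <= NF; i++) if ($i == name) print $1}' "$hostsfile"   # 192.168.0.2
```

On a real system, `getent hosts web` performs the same lookup through the resolver.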

For larger networks, or those that are subject to frequent changes, using the /etc/hosts file to resolve
domain names into IP addresses would not be an acceptable solution. That’s where the need for a
dedicated service comes in.

Under the hood, a DNS server queries a large database in the form of a tree, which starts at the root
(“.”) zone. The following image will help us to illustrate:

In the image above, the root (.) zone contains the com, edu, and net domains. Each of these domains
is (or at least can be) managed by a different organization to avoid depending on a single, central
one. This allows requests to be distributed properly in a hierarchical way.

Let’s see what happens under the hood:

1) When a client makes a query to a DNS server for web1.sales.me.com, the server sends the query
to the top (root) DNS server, which points the query to the name server in the .com zone.

This, in turn, sends the query to the next level name server (in the me.com zone), and then to
sales.me.com. This process is repeated until the FQDN (Fully Qualified Domain Name,
web1.sales.me.com in this example) is returned by the name server of the zone where it belongs.

2) In this example, the name server for sales.me.com responds with the address of
web1.sales.me.com and returns the desired domain name-IP association and other information as
well (if configured to do so).

All this information is sent to the original DNS server, which then passes it back to the client that
requested it in the first place. To avoid repeating the same steps for future identical queries, the
results of the query are stored in the DNS server.

These are the reasons why this kind of setup is commonly known as a recursive or caching DNS
server.

Installing and Configuring a DNS Server


In Linux, the most used DNS server implementation is bind (short for Berkeley Internet Name
Daemon), which can be installed as follows:

# yum update && yum install bind bind-utils # CentOS 7


# apt-get update && apt-get install bind9 bind9utils # Ubuntu 16.04

Next, let’s make a copy of the configuration file before making any changes:

# cp /etc/named.conf /etc/named.conf.orig # CentOS


# cp /etc/bind/named.conf /etc/bind/named.conf.orig # Ubuntu

Then let’s open named.conf and head over to the options block. There we need to make sure the
following settings are present to configure a recursive, caching server with IP 192.168.0.18/24 that
can be accessed only by hosts in the same network (as a security measure).

The forwarders settings are used to indicate which name servers should be queried first (in the
following example we use Google’s) for hosts outside our domain:

options {

...

listen-on port 53 { 127.0.0.1; 192.168.0.18; };

allow-query { localhost; 192.168.0.0/24; };

recursion yes;

forwarders {

8.8.8.8;

8.8.4.4;

};

};

Outside the options block we will define our sales.me.com zone (in Ubuntu this is usually done in a
separate file called named.conf.local) that maps a domain with a given IP address and a reverse
zone to map the IP address to the corresponding domain.

However, the actual configuration of each zone will go in separate files as indicated by the file
directive (“master” indicates we will only use one DNS server).

Add the following blocks to named.conf:

zone "sales.me.com." IN {

type master;

file "/var/named/sales.me.com.zone";

};

zone "0.168.192.in-addr.arpa" IN {

type master;

file "/var/named/0.168.192.in-addr.arpa.zone";

};

Note that in-addr.arpa (for IPv4 addresses) and ip6.arpa (for IPv6) are conventions for reverse zone
configurations.
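The reverse zone name is simply the network portion of the address with its octets reversed, followed by the in-addr.arpa suffix. A small sketch, assuming a /24 network like the 192.168.0.0/24 used in this chapter:

```shell
network=192.168.0.0   # the /24 network used in this chapter

# Reverse the first three octets and append the in-addr.arpa suffix
zone=$(echo "$network" | awk -F. '{print $3 "." $2 "." $1 ".in-addr.arpa"}')
echo "$zone"   # 0.168.192.in-addr.arpa

# PTR records inside that zone then use only the last octet of each host
host=192.168.0.28
echo "$host" | cut -d. -f4   # 28
```

This is why the zone for 192.168.0.0/24 is named 0.168.192.in-addr.arpa in named.conf.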

After saving the above changes to named.conf, we can check for errors as follows:

named-checkconf /etc/named.conf

If any errors are found, the above command will output an informative message with the cause and
the line where they are located. Otherwise, it will not return anything.

Configuring DNS Zones


In the files /var/named/sales.me.com.zone and /var/named/0.168.192.in-addr.arpa.zone we will
configure the forward (domain → IP address) and reverse (IP address → domain) zones,
respectively.

Let’s tackle the forward configuration first:

0) At the top of the file you will find a line beginning with TTL (short for Time To Live), which
specifies how long the cached response should “live” before being replaced by the results of a new
query.

In the line immediately below, we will reference our domain and set the email address where
notifications should be sent (note that root.sales.me.com means root@sales.me.com).
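The RNAME field of the SOA record encodes the email address by writing the @ as a dot. A quick sketch of the conversion (note this simple substitution assumes the local part of the address contains no dots itself):

```shell
rname="root.sales.me.com."

# Strip the trailing dot, then turn the first remaining dot into an @
email=$(echo "${rname%.}" | sed 's/\./@/')
echo "$email"   # root@sales.me.com
```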

1) A SOA (Start Of Authority) record indicates that this system is the authoritative nameserver for
machines inside the sales.me.com domain. The following settings are required when there are two
nameservers (one master and one slave) per domain (although such is not our case since it is not
required in the exam, they are presented here for your reference):

• The Serial is used to distinguish one version of the zone definition file from a previous one
(where settings could have changed). If the cached response points to a definition with a
different serial, the query is performed again instead of feeding it back to the client.

• In a setup with a slave (secondary) nameserver, Refresh indicates the amount of time until
the secondary should check for a new serial from the master server. In addition, Retry tells
the server how often the secondary should attempt to contact the primary if no response
from the primary has been received, whereas Expire indicates when the zone definition in
the secondary is no longer valid after the master server could not be reached, and Negative
TTL is the time that a Non-existent domain (NXdomain) should be cached.
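A common convention for the Serial (used in the zone file below, 2016051101) is to build it from the date plus a two-digit revision number, so that newer zone versions always compare greater than older ones. A sketch of generating such a serial:

```shell
# Serial = YYYYMMDD followed by a two-digit revision number for that day
revision=01
serial="$(date +%Y%m%d)${revision}"
echo "$serial"

# Sanity check: a date-based serial is always exactly 10 digits
echo "$serial" | grep -Eq '^[0-9]{10}$' && echo OK
```

Remember to bump the serial every time you edit a zone file, otherwise secondaries will not pick up the change.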

2) An NS record indicates the authoritative DNS server for our domain (referenced by the @
sign at the beginning of the line).

3) An A record (for IPv4 addresses) or an AAAA (for IPv6 addresses) translates names into IP
addresses. In the example below:

• dns: 192.168.0.18 (the DNS server itself)

• web1: 192.168.0.29 (a web server inside the sales.me.com zone)

• mail1: 192.168.0.28 (a mail server inside the sales.me.com zone)

• mail2: 192.168.0.30 (another mail server)

4) An MX record indicates the names of the authorized mail transfer agents (MTAs) for this domain.
The hostname should be prefaced by a number indicating the priority that the current mail server
should have when there are two or more MTAs for the domain (the lower the value, the higher the
priority - in the following example, mail1 is the primary whereas mail2 is the secondary MTA).

5) A CNAME record sets an alias (www.web1) for a host (web1).

Important: The dot (.) at the end of the names is required.

$TTL 604800

@ IN SOA sales.me.com. root.sales.me.com. (

2016051101 ; Serial

10800 ; Refresh

3600 ; Retry

604800 ; Expire

604800) ; Negative TTL

@ IN NS dns.sales.me.com.

dns IN A 192.168.0.18

web1 IN A 192.168.0.29

mail1 IN A 192.168.0.28

mail2 IN A 192.168.0.30

@ IN MX 10 mail1.sales.me.com.

@ IN MX 20 mail2.sales.me.com.

www.web1 IN CNAME web1

Let’s now look at the reverse zone configuration (/var/named/0.168.192.in-addr.arpa.zone). The
SOA record is the same as in the previous file, whereas the last three lines with a PTR (pointer)
record indicate the last octet in the IPv4 address of the mail1, web1, and mail2 hosts (192.168.0.28,
192.168.0.29, and 192.168.0.30, respectively).

$TTL 604800

@ IN SOA sales.me.com. root.sales.me.com. (

2016051101 ; Serial

10800 ; Refresh

3600 ; Retry

604800 ; Expire

604800) ; Minimum TTL

@ IN NS dns.sales.me.com.

28 IN PTR mail1.sales.me.com.

29 IN PTR web1.sales.me.com.

30 IN PTR mail2.sales.me.com.

You can check the zone files for errors with:

named-checkzone sales.me.com /var/named/sales.me.com.zone

named-checkzone 0.168.192.in-addr.arpa /var/named/0.168.192.in-addr.arpa.zone

The following image illustrates the expected output on success:

Otherwise, you will get an error message stating the cause and how to fix it:

Once you have verified the main configuration file and the zone files, restart the named service to
apply changes. In CentOS 7, do:

systemctl restart named

And don’t forget to enable it for future boots as well:

systemctl enable named

In Ubuntu 16.04:

sudo service bind9 restart

Finally, you will have to edit the configuration of the network interface in the clients:

DNS1=192.168.0.18 # In /etc/sysconfig/network-scripts/ifcfg-enp0s3 for CentOS

dns-nameservers 192.168.0.18 # in /etc/network/interfaces for Ubuntu

and restart the network service to apply changes.

Testing the DNS Server


At this point we are ready to query our DNS server for local and outside names and addresses.

The following commands will return the IP address associated with the host web1:

host web1.sales.me.com

host web1

host www.web1

How can we find out who is handling emails for sales.me.com? Just query the MX records for the domain:

host -t mx sales.me.com

Likewise, let’s perform a reverse query. This will help us find out the name behind an IP address:

host 192.168.0.28

host 192.168.0.29

You can try the same operations for outside hosts:

host -t mx linux.com

host 8.8.8.8

To verify that queries are indeed going through our DNS server, let’s enable logging:

rndc querylog

And check the /var/log/messages file (in CentOS):

To disable logging, simply run the same command again:

rndc querylog

In Ubuntu, enabling logging will require adding the following independent block (same level as the
options block) to /etc/bind/named.conf:

logging {

channel query_log {

file "/var/log/bind9/query.log";

severity dynamic;

print-category yes;

print-severity yes;

print-time yes;

};

category queries { query_log; };

};

Note that the log file must exist and be writable by named.

To ensure the proper operation of your DNS server, don’t forget to allow the service in your firewall
(port TCP/UDP 53) as follows:

firewall-cmd --add-port=53/tcp

firewall-cmd --add-port=53/udp

firewall-cmd --add-port=53/tcp --permanent

firewall-cmd --add-port=53/udp --permanent

Summary
In this chapter we have explained how to set up a basic recursive, caching DNS server and how to
configure zones for a domain. The mystery of name to IP resolution (and vice versa) is not such
anymore!

Chapter 27: Logical Volume Management – LVM
One of the most important decisions while installing a Linux system is the amount of storage space
to be allocated for system files, home directories, and others. If you make a mistake at that point,
growing a partition that has run out of space can be burdensome and somewhat risky.

Logical Volume Management (also known as LVM), which has become a default for the
installation of most (if not all) Linux distributions, has numerous advantages over traditional
partitioning management. Perhaps the most distinguishing feature of LVM is that it allows logical
divisions to be resized (reduced or increased) at will without much hassle.

The structure of the LVM consists of:

• One or more entire hard disks or partitions are configured as physical volumes (PVs).

• A volume group (VG) is created using one or more physical volumes. You can think of a
volume group as a single storage unit.

• Multiple logical volumes can then be created in a volume group. Each logical volume is
somewhat equivalent to a traditional partition - with the advantage that it can be resized at
will as we mentioned earlier.

In this chapter we will use three disks of 8 GB each (/dev/sdb, /dev/sdc, and /dev/sdd) to create
three physical volumes. You can either create the PVs directly on top of the device, or partition it
first.

Creating physical volumes, volume groups, and logical volumes


To create physical volumes on top of /dev/sdb, /dev/sdc, and /dev/sdd, do:

pvcreate /dev/sdb /dev/sdc /dev/sdd

You can list the newly created PVs with

pvs

and get detailed information about each PV with

pvdisplay /dev/sdX

(where X is b, c, or d)

If you omit /dev/sdX as parameter, you will get information about all the PVs.

To create a volume group named vg00 using /dev/sdb and /dev/sdc (we will save /dev/sdd for later
to illustrate the possibility of adding other devices to expand storage capacity when needed):

vgcreate vg00 /dev/sdb /dev/sdc

As was the case with physical volumes, you can also view information about this volume group
by issuing

vgdisplay vg00

Since vg00 is formed with two 8 GB disks, it will appear as a single 16 GB drive:

When it comes to creating logical volumes, the distribution of space must take into consideration
both current and future needs. It is considered good practice to name each logical volume according
to its intended use.

For example, let’s create two LVs named vol_projects (10 GB) and vol_backups (remaining space),
which we can use later to store project documentation and system backups, respectively.

The -n option is used to indicate a name for the LV, whereas -L sets a fixed size and -l (lowercase L)
is used to indicate a percentage of the remaining space in the container VG.

lvcreate -n vol_projects -L 10G vg00

lvcreate -n vol_backups -l 100%FREE vg00

As before, you can view the list of LVs and basic information with

lvs

and detailed information with

lvdisplay

To view information about a single LV, use lvdisplay with the VG and LV as parameters, as follows:

lvdisplay vg00/vol_projects

In the image above we can see that the LVs were created as storage devices (refer to the LV
Path line). Before each logical volume can be used, we need to create a filesystem on top of it. We’ll
use ext4 as an example here since it allows us both to increase and reduce the size of each LV (as
opposed to xfs, which only allows increasing the size):

mkfs.ext4 /dev/vg00/vol_projects
mkfs.ext4 /dev/vg00/vol_backups

In the next section we will explain how to resize logical volumes and add extra physical storage
space when the need arises to do so.

Resizing logical volumes and extending volume groups


Now picture the following scenario. You are starting to run out of space in vol_backups, while you
have plenty of space available in vol_projects.

Due to the nature of LVM, we can easily reduce the size of the latter (by, say, 2.5 GB) and allocate
it to the former, while resizing each filesystem at the same time.

Fortunately, this is as easy as doing:

lvreduce -L -2.5G -r /dev/vg00/vol_projects

lvextend -l +100%FREE -r /dev/vg00/vol_backups

It is important to include the minus (-) or plus (+) signs while resizing a logical volume. Otherwise,
you’re setting a fixed size for the LV instead of resizing it.

It can happen that you arrive at a point when resizing logical volumes cannot solve your storage
needs anymore and you need to buy an extra storage device.

Keeping it simple, you will need another disk. We are going to simulate this situation by adding the
remaining PV from our initial setup (/dev/sdd).

To add /dev/sdd to vg00, do

vgextend vg00 /dev/sdd

If you run vgdisplay vg00 before and after the previous command, you will see the increase in the
size of the VG:

Now you can use the newly added space to resize the existing LVs according to your needs, or to
create additional ones as needed.

Mounting logical volumes on boot and on demand


Of course, there would be no point in creating logical volumes if we are not going to use them! To
better identify a logical volume, we will need to find out what its UUID (a non-changing attribute
that uniquely identifies a formatted storage device) is. To do that, use blkid followed by the path to
each device:

blkid /dev/vg00/vol_projects

blkid /dev/vg00/vol_backups

Create mount points for each LV:

mkdir /home/projects

mkdir /home/backups

and insert the corresponding entries in /etc/fstab (make sure to use the UUIDs obtained before):

UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0

UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults 0 0

Then save the changes and mount the LVs:

mount -a
mount | grep home

When it comes to using the LVs, you will need to assign proper ugo+rwx permissions as explained
in Chapter 10 (“User management and file attributes”).

Summary
In this chapter we have introduced Logical Volume Management, a versatile tool to manage storage
devices that provides scalability.

When combined with RAID, you can enjoy not only scalability (provided by LVM) but also
redundancy (offered by RAID).

In this type of setup, you will typically find LVM on top of RAID, that is, configure RAID first and
then configure LVM on top of it.

Chapter 28: Setting Up Network Share (Samba &
NFS) Filesystems
Once a disk has been partitioned, Linux needs some way to access the data on the partitions. Unlike
DOS or Windows (where this is done by assigning a drive letter to each partition), Linux uses a
unified directory tree where each partition is mounted at a mount point in that tree.

A mount point is a directory that is used as a way to access the filesystem on the partition, and
mounting the filesystem is the process of associating a certain filesystem (a partition, for example)
with a specific directory in the directory tree.

In other words, the first step in managing a storage device is attaching the device to the file system
tree. This task can be accomplished on a one-time basis by using tools such as mount (and then
unmounted with umount) or persistently across reboots by editing the /etc/fstab file.

Mounting Filesystems
The mount command (without any options or arguments) shows the currently mounted filesystems:

In addition, mount is used to mount filesystems into the filesystem tree. Its standard syntax is as
follows:

mount -t type device dir -o options

This command instructs the kernel to mount the filesystem found on device (a partition, for
example, that has been formatted with a filesystem type) at the directory dir, using all options. In
this form, mount does not look in /etc/fstab for instructions.

If only a directory or device is specified, for example:

mount /dir -o options

or

mount device -o options

mount tries to find a mount point and, if it can’t find any, then searches for a device (looking in the
/etc/fstab file in both cases), and finally attempts to complete the mount operation (which usually
succeeds, except when either the directory or the device is already in use, or when the user
invoking mount is not root).

You will notice that every line in the output of mount has the following format:

device on directory type (options)

For example,

/dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)

Reads:

/dev/mapper/debian-home is mounted on /home, which has been formatted as ext4, with the
following options: rw,relatime,user_xattr,barrier=1,data=ordered

Mount options
Most frequently used mount options include:

• async: allows asynchronous I/O operations on the file system being mounted.

• auto: marks the file system as enabled to be mounted automatically using mount -a. It is the
opposite of noauto.

• defaults: this option is an alias for async,auto,dev,exec,nouser,rw,suid. Note that multiple
options must be separated by a comma without any spaces. If by accident you type a space
between options, mount will interpret the subsequent text string as another argument.

• loop: Mounts an image (an .iso file, for example) as a loop device. This option can be used
to simulate the presence of the disk’s contents in an optical media reader.

• noexec: prevents the execution of executable files on the particular filesystem. It is the
opposite of exec.

• nouser: prevents any user other than root from mounting and unmounting the filesystem. It is the
opposite of user.

• remount: mounts the filesystem again in case it is already mounted.

• ro: mounts the filesystem as read only.

• rw: mounts the file system with read and write capabilities.

• relatime: updates the access time (atime) of a file only if it is earlier than its modification time (mtime).

• user_xattr: allows users to set and remove extended filesystem attributes.

For example, in order to mount a device with ro and noexec options, you will need to do:

mount -t ext4 /dev/sdg1 /mnt -o ro,noexec

In this case we can see that attempts to write a file to or to run a binary file located inside our
mounting point fail with corresponding error messages:

touch /mnt/myfile

/mnt/bin/echo "Hi there"

To mount a device with default options:

mount -t ext4 /dev/sdg1 /mnt -o defaults

In the following scenario, we will try to write a file to our newly mounted device and run an
executable file located within its filesystem tree using the same commands as in the previous
example:

In this last case, it works perfectly.

Unmounting Devices
Unmounting a device (with the umount command) means finishing writing all the remaining “in
transit” data so that it can be safely removed.

Note that if you try to remove a mounted device without properly unmounting it first, you run the
risk of damaging the device itself or cause data loss.

That being said, in order to unmount a device, you must be “standing outside” its block device
descriptor or mount point.

In other words, your current working directory must be something else other than the mounting
point. Otherwise, you will get a message saying that the device is busy:

An easy way to “leave” the mount point is typing the cd command which, in the absence of arguments,
will take us to our current user’s home directory, as shown above.

Mounting Networked Filesystems


The two most frequently used network file systems are SMB (which stands for “Server Message
Block”) and NFS (“Network File System”). Chances are you will use NFS if you need to set up a
share for Unix-like clients only, and will opt for Samba if you need to share files with Windows-
based clients and perhaps other Unix-like clients as well.

The following steps assume that Samba and NFS shares have already been set up in the server with
IP 192.168.0.10 (please note that setting up a NFS share is one of the competencies required for the
LFCE exam, which we will cover after the present book).

Installing and Mounting Samba Share


To mount a Samba share on Linux, follow these steps:

STEP 1: Install the samba-client, samba-common, and cifs-utils packages

Red Hat-based distributions:

yum update && yum install samba-client samba-common cifs-utils

Debian and derivatives:

aptitude update && aptitude install smbclient samba-common cifs-utils

Then run the following command to look for available samba shares in the server:

smbclient -L 192.168.0.10

and enter the password for the root account in the remote machine:

In the above image we have highlighted the share that is ready for mounting on our local system.
You will need a valid samba username and password on the remote server in order to access it.

STEP 2: When mounting a password-protected network share, it is not a good idea to write your
credentials in the /etc/fstab file. Instead, you can store them in a hidden file somewhere with
permissions set to 600, like so:

mkdir /media/samba

echo "username=samba_username" > /media/samba/.smbcredentials

echo "password=samba_password" >> /media/samba/.smbcredentials

chmod 600 /media/samba/.smbcredentials
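The credentials steps above can be sketched end-to-end in a safe, disposable directory (the username and password are the same placeholders as above; /media/samba is replaced by a temporary directory for the demo):

```shell
# Use a throwaway directory instead of /media/samba for this demonstration
dir=$(mktemp -d)

# Store the credentials, then lock the file down to the owner only
echo "username=samba_username" > "$dir/.smbcredentials"
echo "password=samba_password" >> "$dir/.smbcredentials"
chmod 600 "$dir/.smbcredentials"

# Verify: mode 600 means only the owner can read and write the file
stat -c '%a' "$dir/.smbcredentials"   # 600
```

Keeping the file at mode 600 is what makes this safer than writing the password directly into the world-readable /etc/fstab.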

STEP 3: Then add the following line to /etc/fstab

//192.168.0.10/gacanepa /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0

STEP 4: You can now mount your samba share, either manually (mount //192.168.0.10/gacanepa)
or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.

Installing and Mounting NFS Share
To mount a NFS share, do:

STEP 1: Install the required NFS client packages (nfs-utils on Red Hat-based distributions, nfs-common on Debian and derivatives)

Red Hat-based distributions:

yum update && yum install nfs-utils nfs-utils-lib

Debian and derivatives:

aptitude update && aptitude install nfs-common

STEP 2: Create a mounting point for the NFS share

mkdir /media/nfs

STEP 3: Add the following line to /etc/fstab

192.168.0.10:/NFS-SHARE /media/nfs nfs defaults 0 0

STEP 4: You can now mount your NFS share, either manually (mount 192.168.0.10:/NFS-SHARE)
or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.

Mounting Filesystems Persistently


As shown in the previous two examples, the /etc/fstab file controls how Linux provides access to
disk partitions and removable media devices. It consists of a series of lines that contain six fields
each; the fields are separated by one or more spaces or tabs. A line that begins with a hash mark (#)
is a comment and is ignored.

Each line has the following format:

<file system> <mount point> <type> <options> <dump> <pass>

where:

<filesystem>: The first column specifies the mount device. Most distributions now specify
partitions by their labels or UUIDs. This practice can help reduce problems if partition numbers
change.

<mount point>: The second column specifies the mount point.

<type>: The file system type code is the same as the type code used to mount a filesystem with the
mount command. A file system type code of auto lets the kernel auto-detect the filesystem type,
which can be a convenient option for removable media devices. Note that this option may not be
available for all filesystems out there.

<options>: One (or more) mount option(s).

<dump>: You will most likely leave this set to 0 (otherwise set it to 1) to disable the dump utility
from backing up the filesystem upon boot. (The dump program was once a common backup tool, but
it is much less popular today.)

<pass>: This column specifies whether the integrity of the filesystem should be checked at boot
time with fsck. A 0 means that fsck should not check the filesystem. The higher the number, the
lower the priority. Thus, the root partition will most likely have a value of 1, while all others that
should be checked should have a value of 2.
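As a quick illustration of the six-field layout, the sketch below splits a sample fstab entry (the /home/projects line shown earlier in this chapter) into its named fields with awk:

```shell
# A sample fstab entry: device, mount point, type, options, dump, pass
line="UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0"

# Split the line into its six whitespace-separated fields and label them
echo "$line" | awk '{printf "device=%s\nmountpoint=%s\ntype=%s\noptions=%s\ndump=%s\npass=%s\n", $1, $2, $3, $4, $5, $6}'
```

The same one-liner is handy for sanity-checking an entry you are about to add to the real /etc/fstab.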

Mount Examples
To mount a partition with label TECMINT at boot time with rw and noexec attributes, you should
add the following line in /etc/fstab:

LABEL=TECMINT /mnt ext4 rw,noexec 0 0

If you want the contents of a disk in your DVD drive be available at boot time:

/dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0

where /dev/sr0 is your DVD drive.

Summary
You can rest assured that mounting and unmounting local and network filesystems from the
command line will be part of your day-to-day responsibilities as sysadmin.

You will also need to master /etc/fstab. For more information on this essential system file, you may
want to check the Arch Linux documentation on the subject at
https://wiki.archlinux.org/index.php/fstab.

Chapter 29: Configure and Maintain High
Availability/Clustering
High Availability (HA) refers to the ability of a system to operate continuously without failure
for a long period of time. HA solutions can be implemented using hardware and/or software, and
one of the most common approaches to implementing HA is clustering.
In computing, a cluster is made up of two or more computers (commonly known
as nodes or members) that work together to perform a task. In such a setup, only one node provides
the service with the secondary node(s) taking over if it fails.

Clusters fall into four major types:

• Storage: provide a consistent file system image across servers in a cluster, allowing the
servers to simultaneously read and write to a single shared file system.
• High Availability: eliminate single points of failure by failing over services from one
cluster node to another in case a node becomes inoperative.
• Load Balancing: dispatch network service requests to multiple cluster nodes to balance the
request load among the cluster nodes.
• High Performance: carry out parallel or concurrent processing, thus helping to improve
performance of applications.

Another widely used solution to providing HA is replication (specifically data replication).
Replication is the process by which one or more (secondary) databases can be kept in sync with a
single primary (or master) database.

To set up a cluster, we need at least two servers. For the purpose of this chapter, we will use two
Linux servers:

• Node1: 192.168.10.10
• Node2: 192.168.10.11

In this chapter, we will demonstrate the basics of how to deploy, configure and maintain high
availability/clustering in Ubuntu 16.04/18.04 and CentOS 7. We will demonstrate how to add Nginx
HTTP service to the cluster.

Configuring Local DNS Settings on Each Server


In order for the two servers to communicate with each other, we need to configure the appropriate
local DNS settings in the /etc/hosts file on both servers.

Open and edit the file using your favorite command line editor.

$ sudo vim /etc/hosts

Add the following entries with actual IP addresses of your servers.

192.168.10.10 node1.example.com
192.168.10.11 node2.example.com

Save the changes and close the file.

Installing Nginx Web Server


Now install Nginx web server using the following commands.

$ sudo apt install nginx [On Ubuntu]


$ sudo yum install epel-release && sudo yum install nginx [On CentOS 7]

Once the installation is complete, start the Nginx service, enable it to auto-start at boot time, and
then check that it’s up and running using the systemctl command.
On Ubuntu, the service is started automatically as soon as package pre-configuration is complete, so
you can simply enable it.

$ sudo systemctl enable nginx


$ sudo systemctl start nginx
$ sudo systemctl status nginx

After starting the Nginx service, we need to create custom webpages for identifying and testing
operations on both servers. We will modify the contents of the default Nginx index page as shown.

$ echo "This is the default page for node1.example.com" | sudo tee /usr/share/nginx/html/index.html   #On Node1
$ echo "This is the default page for node2.example.com" | sudo tee /usr/share/nginx/html/index.html   #On Node2
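
Rather than typing a different echo on each node, you could derive the page content from the
machine’s own hostname. A minimal sketch, writing to a temporary file here instead of
/usr/share/nginx/html/index.html so it can be tried anywhere:

```shell
# Build the node-specific test page from the local hostname;
# on a real node, replace PAGE with /usr/share/nginx/html/index.html.
PAGE=$(mktemp)
echo "This is the default page for $(hostname)" > "$PAGE"
cat "$PAGE"
```

The same snippet then works unmodified on both nodes, each producing its own page.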

Installing and Configuring Corosync and Pacemaker
Next, we have to install Pacemaker, Corosync, and Pcs on each node as follows.

$ sudo apt install corosync pacemaker pcs #Ubuntu


$ sudo yum install corosync pacemaker pcs #CentOS

Once the installation is complete, make sure that the pcs daemon is running on both servers.

$ sudo systemctl enable pcsd


$ sudo systemctl start pcsd
$ sudo systemctl status pcsd

Creating the Cluster


During the installation, a system user called “hacluster” is created, so we need to set up the
authentication needed for pcs. Let’s start by creating a new password for the “hacluster” user; we
need to use the same password on all servers:

$ sudo passwd hacluster

Next, on one of the servers (Node1), run the following command to set up the authentication needed
for pcs.

$ sudo pcs cluster auth node1.example.com node2.example.com -u hacluster -p password_here --force

Now create a cluster and populate it with some nodes on the Node1 server (note that the cluster
name cannot exceed 15 characters; in this example, we have used examplecluster).

$ sudo pcs cluster setup --name examplecluster node1.example.com node2.example.com

Now enable the cluster on boot and start the service.

$ sudo pcs cluster enable --all


$ sudo pcs cluster start --all

Now check if the cluster service is up and running using the following command.

$ sudo pcs status
OR
$ sudo crm_mon -1

From the output of the above command, you can see a warning that no STONITH devices are
configured, even though STONITH is still enabled in the cluster. In addition, no cluster
resources/services have been configured yet.

Configuring the Cluster
The first step is to disable STONITH (Shoot The Other Node In The Head), the fencing
implementation in Pacemaker.
This component helps protect your data from being corrupted by concurrent access. For the
purpose of this guide, we will disable it since we have not configured any fencing devices.
To turn off STONITH, run the following command:

$ sudo pcs property set stonith-enabled=false

Next, also ignore the Quorum policy by running the following command:

$ sudo pcs property set no-quorum-policy=ignore

After setting the above options, run the following command to see the property list and ensure that
the two options above (stonith-enabled and no-quorum-policy) are set as intended.

$ sudo pcs property list


Adding a Cluster Service
In this section, we will look at how to add a cluster resource. We will configure a floating IP which
is the IP address that can be instantly moved from one server to another within the same network or
data center.
In short, a floating IP is a common technical term for an IP address that is not bound strictly to one
single interface.
In this case, it will be used to support failover in a high-availability cluster. Keep in mind that
floating IPs aren’t just for failover situations, they have a few other use cases.
We need to configure the cluster in such a way that only the active member of the cluster “owns” or
responds to the floating IP at any given time.
We will add two cluster resources: the floating IP address resource called “floating_ip” and a
resource for the Nginx web server called “http_server”.
First start by adding the floating_ip as follows. In this example, our floating IP address is
192.168.10.20.

$ sudo pcs resource create floating_ip ocf:heartbeat:IPaddr2 ip=192.168.10.20 cidr_netmask=24 op monitor interval=60s

where:
• floating_ip: is the name of the service.

• “ocf:heartbeat:IPaddr2”: tells Pacemaker which resource agent script to use (IPaddr2 in this
case), which namespace it is in (heartbeat), and which standard it conforms to (OCF).
• “op monitor interval=60s”: instructs Pacemaker to check the health of this service every
minute by calling the agent’s monitor action.
Then add the second resource, named http_server. Here, the resource agent of the service is
ocf:heartbeat:nginx.

$ sudo pcs resource create http_server ocf:heartbeat:nginx configfile="/etc/nginx/nginx.conf" op monitor timeout="20s" interval="60s"

Once you have added the cluster services, issue the following command to check the status of
resources.

$ sudo pcs status resources

Looking at the output of the command, the two added resources, “floating_ip” and “http_server”,
are listed. The floating_ip service is off because the primary node is in operation.

If you have a firewall enabled on your system, you need to allow all traffic to Nginx and all high
availability services through the firewall for proper communication between nodes:

-------------- CentOS 7 --------------


$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --permanent --add-service=high-availability
$ sudo firewall-cmd --reload
-------------- Ubuntu --------------
$ sudo ufw allow http
$ sudo ufw allow high-availability
$ sudo ufw reload

Testing High Availability/Clustering


The final and important step is to test that our high availability setup works. Open a web browser
and navigate to the address 192.168.10.20; you should see the default Nginx page from
node2.example.com, as shown in the screenshot.

To simulate a failure, run the following command to stop the cluster on node2.example.com.

$ sudo pcs cluster stop node2.example.com

Then reload the page at 192.168.10.20; you should now see the default Nginx web page from
node1.example.com.

Alternatively, you can simulate an error by telling the service to stop directly, without stopping the
cluster on any node, using the following command on one of the nodes:

$ sudo crm_resource --resource http_server --force-stop

Then run crm_mon in interactive mode (the default); within the monitor interval (60 seconds), you
should be able to see the cluster notice that http_server failed and move it to another node.
For your cluster services to run efficiently, you may need to set some constraints. You can see the
pcs man page (man pcs) for a list of all usage commands.
For more information on Corosync and Pacemaker, check out: https://clusterlabs.org/

Summary
In this chapter, we have shown the basics of how to deploy, configure and maintain high
availability/clustering/replication in Ubuntu 16.04/18.04 and CentOS 7. We demonstrated how to
add Nginx HTTP service to a cluster.

Chapter 30: Install, Create and Manage LXC (Linux
Containers)
LXC, an acronym for Linux Containers, is a lightweight, kernel-based virtualization solution that
runs on top of the operating system, allowing you to run multiple isolated distributions at the same
time.
The difference between LXC and KVM virtualization is that LXC doesn’t emulate hardware;
instead, containers share the host’s kernel, similar to chroot applications.

This makes LXC a very fast virtualization solution compared to other virtualization solutions, such
as KVM, XEN or VMware.

This chapter will explain how you can install, deploy and run LXC containers on a CentOS and
Ubuntu Linux distributions.

Installing LXC Virtualization


LXC virtualization is provided through the EPEL repository on CentOS, while on Ubuntu you can
install it from the default repository by using the following commands.

# yum install epel-release && yum install lxc lxc-templates [On CentOS]
$ sudo apt install lxc lxc-templates [On Ubuntu]

After the LXC service has been installed, verify that the LXC daemon is running.

# systemctl status lxc.service


# systemctl start lxc.service

and check LXC kernel virtualization status by issuing the below command.

# lxc-checkconfig

Create and Manage LXC Containers


To list the LXC container templates already installed on your system, issue the below
command.

# ls -alh /usr/share/lxc/templates/

The process of creating an LXC container is very simple. The command syntax to create a new
container is explained below.

# lxc-create -n container_name -t container_template
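
To illustrate the syntax, here is a hypothetical wrapper (the function name is ours, not part of
LXC) that validates its arguments before calling lxc-create; only the argument check runs if LXC
is not installed:

```shell
# Hypothetical helper around lxc-create; NAME and TEMPLATE are required.
create_container() {
    name=$1 template=$2
    if [ -z "$name" ] || [ -z "$template" ]; then
        echo "usage: create_container NAME TEMPLATE" >&2
        return 1
    fi
    lxc-create -n "$name" -t "$template"
}

# Calling it without arguments triggers the usage message:
create_container 2>/dev/null || echo "arguments required"
```

With both arguments supplied (e.g. `create_container mydeb debian`), it falls through to the real
lxc-create call.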

In the below excerpt we’ll create a new container named mydeb, based on a Debian template that
will be pulled from the LXC repositories.

# lxc-create -n mydeb -t debian

After a series of base dependencies and packages have been downloaded and installed on your
system, the container will be created.
When the process finishes, a message will display your default root account password. For safety,
change this password once you start the container and log in to its console.
Now, you can use lxc-ls to list your containers and lxc-info to obtain information about a
running/stopped container.
To start the newly created container in the background (it will run as a daemon, by specifying the
-d option), issue the following command:

# lxc-start -n mydeb -d

After the container has been started you can list running containers using the lxc-ls --active
command and get detailed information about the running container.

# lxc-ls --active
To log in to the container console, issue the lxc-console command against a running container
name. Log in with the user root and the password generated by default by the LXC supervisor.
Once logged in to the container, you can run several commands to verify the distribution by
displaying the /etc/issue.net file content, change the root password by issuing the passwd command,
or view details about network interfaces with ifconfig.

# lxc-console -n mydeb
# cat /etc/issue.net
# ifconfig
# passwd

To detach from the container console and go back to your host console, leaving the container in
active state, hit Ctrl+a then q on the keyboard.
To stop a running container, issue the following command.

# lxc-stop -n mydeb

To create an LXC container based on an Ubuntu template, enter the /usr/sbin/ directory and
create the following debootstrap symlink.

# cd /usr/sbin
# ln -s debootstrap qemu-debootstrap
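
The symlink step can be rehearsed safely in a scratch directory first; this sketch only demonstrates
the link mechanics (the empty debootstrap file is a stand-in for the real script):

```shell
# Rehearse the link in a throwaway directory before touching /usr/sbin.
cd "$(mktemp -d)"
touch debootstrap            # stand-in for the real /usr/sbin/debootstrap
ln -s debootstrap qemu-debootstrap
readlink qemu-debootstrap    # prints: debootstrap
```

Once you are comfortable with the mechanics, perform the same two commands in /usr/sbin as
shown above.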

Now open and edit the qemu-debootstrap file with the Vi editor and replace the following two
MIRROR lines as follows:

DEF_MIRROR="http://mirrors.kernel.org/ubuntu"
DEF_HTTPS_MIRROR="https://mirrors.kernel.org/ubuntu"

For reference, see the following content and place the above two lines as stated:

MAKE_TARBALL=""
EXTRACTOR_OVERRIDE=""
UNPACK_TARBALL=""
ADDITIONAL=""
EXCLUDE=""
VERBOSE=""
CERTIFICATE=""
CHECKCERTIF=""
PRIVATEKEY=""
DEF_MIRROR="http://mirrors.kernel.org/ubuntu"
DEF_HTTPS_MIRROR="https://mirrors.kernel.org/ubuntu"
Finally, create a new LXC container based on the Ubuntu template by issuing the same lxc-create
command.

Once the process of generating the Ubuntu container finishes, a message will display your
container’s default login credentials, as illustrated in the below screenshot.

# lxc-create -n myubuntu -t ubuntu

To create a specific container based on a local template, use the following syntax:

# lxc-create -n container_name -t container_template -- -r distro_release -a distro_architecture

Here is an excerpt of creating a Debian Wheezy container with an amd64 system architecture.

# lxc-create -n mywheezy -t debian -- -r wheezy -a amd64

Alternatively, specific containers for different distro releases and architectures can also be created
from a generic template, which will be downloaded from the LXC repositories as illustrated in the
below example.

# lxc-create -n mycentos6 -t download -- -d centos -r 6 -a i386

Here is the list of lxc-create command line switches:

• -n = name
• -t = template
• -d = distribution
• -a = arch
• -r = release

Containers can be deleted from your host with the lxc-destroy command issued against a container
name.

# lxc-destroy -n mywheezy

A container can be cloned from an existing container by issuing the lxc-clone command:

# lxc-clone mydeb mydeb-clone

And finally, all created containers reside in the /var/lib/lxc/ directory. If for some reason you need
to manually adjust container settings, you must edit the config file in each container’s directory.

# ls /var/lib/lxc

Summary
The LXC examples above (along with the rest of the examples in this chapter) are a nice starting
point for experimenting with the commands used to create, delete, and manage LXC containers
from the Linux command line.

Chapter 31: Installing and Configuring a Database
Server
A database server is a critical component of the network infrastructure necessary for today’s
applications. Without the ability to store, retrieve, update, and delete data (when needed), the
usefulness and scope of web and desktop apps becomes very limited. In addition, knowing how to
install, manage, and configure a database server (so that it operates as expected) is an essential skill
that every system administrator must have.

In this chapter we will briefly review how to install and secure a MariaDB database server and then
we will explain how to configure it.

Installing and Securing a MariaDB Server


In CentOS 7, MariaDB replaced MySQL, which can still be found in the Ubuntu repositories (along
with MariaDB). For brevity, we will only use MariaDB in this chapter, but please note that besides
having different names and development philosophies, both Relational DataBase Management
Systems (RDBMSs for short) are almost identical.

This means that the client-side commands are the same on both MySQL and MariaDB, and the
configuration files are named identically and located in the same places.

To install MariaDB, do:

# yum update && yum install mariadb mariadb-server # CentOS


$ sudo aptitude update && sudo aptitude install mariadb-client mariadb-server # Ubuntu

Note that, in Ubuntu, you will be asked to enter a password for the RDBMS root user.

Once the above packages have been installed, make sure the database service is running and has
been activated to start on boot (in CentOS you will need to perform this operation manually,
whereas in Ubuntu the installation process will have already taken care of it for you):

# systemctl start mariadb && systemctl enable mariadb # CentOS

Then run the mysql_secure_installation script. This process will allow you to 1) set / reset the
password for the RDBMS root user, 2) remove anonymous logins (thus enabling only users with a
valid account to log in to the RDBMS), 3) disable root access for machines other than localhost, 4)
remove the test database (which anyone can access), and 5) activate the changes associated with 1
through 4.

# mysql_secure_installation

Configuring the Database Server
The default configuration options are read from the following files in the given order:
/etc/mysql/my.cnf, /etc/my.cnf, and ~/.my.cnf.

Most often, only /etc/my.cnf exists. It is on this file that we will set the server-wide settings (which
can be overridden with the same settings in ~/.my.cnf for each user).

The first thing that we need to note about my.cnf is that settings are organized into categories (or
groups) where each category name is enclosed with square brackets.

Server system configurations are given in the [mysqld] section, where typically you will find only
the first two settings in the table below.

The rest are other frequently used options (where indicated, we will change the default value with a
custom one of our choosing):

• datadir is the directory where the data files are stored.
Default value: datadir=/var/lib/mysql

• socket indicates the name and location of the socket file that is used for local client
connections. Keep in mind that a socket file is a resource that is utilized to pass information
between applications.
Default value: socket=/var/lib/mysql/mysql.sock

• bind_address is the address where the database server will listen on for TCP/IP
connections. If you need your server to listen on more than one IP address, leave out this
setting (the default, 0.0.0.0, means it will listen on all IP addresses assigned to this specific
host).
Default value: bind_address=0.0.0.0
We will change this to instruct the service to listen only on its main address (192.168.0.13):
bind-address=192.168.0.13

• port represents the port where the database server will be listening.
Default value: port=3306
We will replace the default value (3306) with 20500 (but we need to make sure nothing else
is using that port):
port=20500
While some people will argue that security through obscurity is not good practice, changing
the default application ports for higher ones is a rudimentary (yet effective) method to
discourage port scans.

• innodb_buffer_pool_size is the buffer pool (in bytes) of memory that is allocated for data
and indexes that are accessed frequently when using InnoDB (which is the default in
MariaDB) or XtraDB as storage engine.
Default value: innodb_buffer_pool_size=134217728
We will replace the default value with 256 MB:
innodb_buffer_pool_size=256M

• skip_name_resolve indicates whether hostnames will be resolved or not on incoming
connections. If set to 1, only IP addresses will be used. Unless you require hostnames to
determine permissions, it is advisable to disable hostname resolution (in order to speed up
connections and queries) by setting its value to 1:
Default value: skip_name_resolve=0
skip_name_resolve=1

• query_cache_size represents the size (in bytes) available to the query cache, where the
results of SELECT queries are stored for future use when an identical query (to the same
database and using the same protocol and same character set) is performed. You should
choose a query cache size that matches your needs based on 1) the number of repetitive
queries, and 2) the approximate number of records those repetitive queries are expected to
return. We will set this value to 100 MB for the time being:
Default value: query_cache_size=0 (which means it is disabled by default)
query_cache_size=100M

• max_connections is the maximum number of simultaneous client connections to the
server. Each connection will use a thread, and thus will consume memory; take this fact into
account while setting max_connections. We will set this value to 30:
Default value: max_connections=151
max_connections=30

• thread_cache_size indicates the number of threads that the server allocates for reuse after a
client disconnects and frees thread(s) previously in use. In this situation, it is cheaper
(performance-wise) to reuse a thread than to instantiate a new one. Again, this depends on
the number of connections you are expecting; we can safely set this value to half the
number of max_connections:
Default value: thread_cache_size=0 (disabled by default)
thread_cache_size=15
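
Pulling the custom values above together, the [mysqld] section of /etc/my.cnf would look roughly
like this (the IP address, port, and sizes are the example choices from this chapter; adapt them to
your own host before restarting the service):

```ini
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
bind-address=192.168.0.13
port=20500
innodb_buffer_pool_size=256M
skip_name_resolve=1
query_cache_size=100M
max_connections=30
thread_cache_size=15
```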

In CentOS, we will need to tell SELinux to allow MariaDB to listen on a non-standard port (20500)
before restarting the service:

# yum install policycoreutils-python


# semanage port -a -t mysqld_port_t -p tcp 20500

Then restart the service.

# systemctl restart mariadb

Checking and Tuning Database Configuration


To assist us in checking and tuning the configuration as per our specific needs, we can install
mysqltuner (a script that will provide suggestions to improve the performance of our database
server and increase its stability):

# wget https://github.com/major/MySQLTuner-perl/tarball/master
# tar xzf master

Then change directory into the folder extracted from the tarball (the exact version may differ in
your case):

# cd major-MySQLTuner-perl-7dabf27

and run it (you will be prompted to enter the credentials of your administrative MariaDB account)

# ./mysqltuner.pl

The output of the script is in itself very interesting, but let’s skip to the bottom where the variables
to adjust are listed with the recommended value:

The query_cache_type setting indicates whether the query cache is disabled (0) or enabled (1). In
this case, mysqltuner is advising us to disable it.

So why are we advised to deactivate it now? The reason is that the query cache is useful mostly in
high-read / low-write scenarios (which is not our case, since we just installed the database server).

WARNING: Before making changes to the configuration of a production server, you are highly
encouraged to consult an expert database administrator to ensure that a recommendation given by
mysqltuner will not negatively impact an existing setting.

Summary
In this chapter we have explained how to configure a MariaDB database server after we have
installed and secured it.

The configuration variables listed in the table above are only a few settings that you may want to
consider while preparing the server for use or when tuning it later. Always refer to the official
MariaDB documentation before making changes.

Chapter 32: Turn a Linux Server into a Router
In this chapter we will discuss the routing of IP traffic statically and dynamically with specific
applications.
First things first, let’s get some definitions straight:

• In simple words, a packet is the basic unit that is used to transmit information within a
network. Networks that use TCP/IP as network protocol follow the same rules for
transmission of data: the actual information is split into packets that are made of both data
and the address where it should be sent to.

• Routing is the process of “guiding” the data from source to destination inside a network.

• Static routing requires a manually-configured set of rules defined in a routing table. These
rules are fixed and are used to define the way a packet must go through as it travels from
one machine to another.

• Dynamic routing, or smart routing (if you wish), means that the system can automatically
alter, as needed, the route that a packet follows. However, in the context of the LFCE exam,
the term dynamic routing refers to the ability to perform routing “on-the-fly” with the ip
command.

IP and Network Device Configuration


The iproute package provides a set of tools to manage networking and traffic control. We will use
these tools throughout this chapter, as they replace legacy tools such as ifconfig and route.

The central utility in the iproute suite is called simply ip. Its basic syntax is as follows:

ip object command

where object can be only one of the following (only the most frequent objects are shown - you can
refer to man ip for a complete list):

• link: network device.

• addr: protocol (IP or IPv6) address on a device.

• route: routing table entry.

• rule: rule in routing policy database.

whereas command represents a specific action that can be performed on object. You can run the
following command to display the complete list of commands that can be applied to a particular
object:

ip object help

For example,

ip link help

The above image shows, for example, that you can change the status of a network interface with the
following command:

ip link set interface {up | down}

Example 1: Disabling and enabling a network interface

In this example we will disable and enable eth1:

ip link show
ip link set eth1 down

If you want to re-enable eth1,

ip link set eth1 up

Instead of displaying all the network interfaces, we can specify one of them:

ip link show eth1

which will return all the information for eth1.

Example 2: Displaying the main routing table

You can view your current main routing table with either of the following 3 commands:

ip route show
route -n
netstat -rn

The first column in the output of the three commands indicates the target network. The output of ip
route show (following the keyword dev) also presents the network devices that serve as physical
gateway to those networks. Although nowadays the ip command is preferred over route, you can
still refer to man ip-route and man route for a detailed explanation of the rest of the columns.
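
As a toy illustration of how a destination is matched against a route’s target network, here is a
crude shell sketch that derives the /24 network of an address (the helper is ours, handles only /24
prefixes, and exists purely to illustrate the idea; the kernel performs a binary longest-prefix
match):

```shell
# Crude /24 network derivation: drop the last octet, append .0.
net24() { echo "${1%.*}.0"; }

net24 10.0.0.18      # prints: 10.0.0.0
net24 192.168.0.17   # prints: 192.168.0.0
```

A packet for 10.0.0.18 therefore matches a 10.0.0.0/24 entry in the routing table and is sent out
through whatever device that entry names.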

Example 3: Using a Linux server to route packets between two private networks

We want to route icmp (ping) packets from dev1 to dev4 and the other way around as well (note that
both client machines are on different networks). The name of each NIC, along with its
corresponding IPv4 address, is given inside square brackets.

Our test environment is as follows:

• Client 1: CentOS 7 [enp0s3: 192.168.0.17/24] - dev1

• Router: Debian Wheezy 7.7 [eth0: 192.168.0.15/24, eth1: 10.0.0.15/24] - dev2

• Client 2: openSUSE 13.2 [enp0s3: 10.0.0.18/24] - dev4

Let’s view the routing table in dev1 (CentOS box):

ip route show

and then modify it in order to use its enp0s3 NIC and the connection to 192.168.0.15 to access hosts
in the 10.0.0.0/24 network:

ip route add 10.0.0.0/24 via 192.168.0.15 dev enp0s3

Which essentially reads, “Add a route to the 10.0.0.0/24 network through the enp0s3 network
interface using 192.168.0.15 as gateway”.

Likewise, in dev4 (openSUSE box) to ping hosts in the 192.168.0.0/24 network

ip route add 192.168.0.0/24 via 10.0.0.15 dev enp0s3

Finally, we need to enable forwarding in our Debian router:

echo 1 > /proc/sys/net/ipv4/ip_forward

Now let’s ping:

and

To make these settings persistent across reboots, edit /etc/sysctl.conf on the router and make sure
the net.ipv4.ip_forward variable is set to 1, as follows:

net.ipv4.ip_forward = 1
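
You can confirm the running value at any time (reading it requires no privileges on most systems;
`sysctl -n net.ipv4.ip_forward` is equivalent):

```shell
# Read the live value: 1 = forwarding enabled, 0 = disabled.
cat /proc/sys/net/ipv4/ip_forward
```

If the file still reads 0 after editing /etc/sysctl.conf, apply the file with `sysctl -p`.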

In addition, configure the NICs on both clients (look for the configuration file within /etc/sysconfig/
network-scripts on CentOS where it’s called ifcfg-enp0s3).

Here’s the configuration file from the openSUSE box:

BOOTPROTO=static
BROADCAST=10.0.0.255
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.15
NAME=enp0s3
NETWORK=10.0.0.0
ONBOOT=yes

Example 4: Using a Linux server to route packets between a private network and the Internet

Another scenario where a Linux machine can be used as router is when you need to share your
Internet connection with a private LAN.

• Router: Debian Wheezy 7.7 [eth0: Public IP, eth1: 10.0.0.15/24] - dev2

• Client: openSUSE 13.2 [enp0s3: 10.0.0.18/24] - dev4

In addition to setting up packet forwarding and the static routing table in the client as in the
previous example, we need to add a few iptables rules in the router:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE


iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED
-j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

The first command adds a rule to the POSTROUTING chain in the nat (Network Address
Translation) table, indicating that the eth0 NIC should be used as the “exit door” for outgoing
packets.

MASQUERADE indicates that this NIC has a dynamic IP and that, before sending the packet to
the “wild wild world” of the Internet, the private source address of the packet has to be changed to
the public IP of the router.

In a LAN with many hosts, the router keeps track of established connections in
/proc/net/ip_conntrack so it knows where to return the response from the Internet.

Only part of the output of

cat /proc/net/ip_conntrack

is shown in the following screenshot,

where the origin (the private IP of the openSUSE box) and destination (Google DNS) of packets
are highlighted. This was the result of running

curl www.tecmint.com

on the openSUSE box.

As I’m sure you can already guess, the router is using Google’s 8.8.8.8 as nameserver, which
explains why the destination of outgoing packets points to that address.

Note that incoming packets from the Internet are only accepted if they are part of an already
established connection (command #2), while outgoing packets are allowed “free exit” (command
#3).

Don’t forget to make your iptables rules persistent by following the steps outlined in Chapter 27
(“The firewall”).

Summary
In this chapter we have explained how to set up static and dynamic routing using a Linux box as
router. Feel free to add as many routers as you wish, and to experiment as much as you want.

Chapter 33: Managing and Configuring Virtual
Machines and Containers
Virtualization and containers are hot topics in today’s IT industry. In this chapter we will list the
necessary tools to manage and configure both.

Managing and configuring virtual machines


For many decades, virtualization has helped IT professionals to reduce operational costs and
increase energy savings.
A virtual machine (or VM for short) is an emulated computer system that runs on top of another
system known as host. VMs have limited access to the host’s hardware resources (CPU, memory,
storage, network interfaces, USB devices, and so forth). The operating system running on the virtual
machine is often referred to as the guest operating system.
CPU extensions

Before we proceed, we need to check if the virtualization extensions are enabled on our CPU(s). To
do that, use the following command, where vmx and svm are the virtualization flags on Intel and
AMD processors, respectively:

grep --color -E 'vmx|svm' /proc/cpuinfo

No output means the extensions are either not available or not enabled in the BIOS. While you may
continue without them, performance will be negatively impacted.
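
A quick way to quantify the result is to count the matching lines; one line is printed per logical
CPU that exposes a flag (grep exits non-zero when the count is 0, hence the || true):

```shell
# Count logical CPUs advertising hardware virtualization support;
# 0 means the extensions are absent or disabled in firmware.
grep -cE 'vmx|svm' /proc/cpuinfo || true
```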
Virtualization tools

To begin, let’s install the necessary tools. In CentOS you will need the following packages:

yum install qemu-kvm libvirt libvirt-client virt-install virt-viewer

whereas in Ubuntu:

sudo apt-get install qemu-kvm qemu virt-manager virt-viewer libvirt-bin libvirt-dev

Next, we will download a CentOS 7 minimal ISO file for later use:

wget http://mirror.clarkson.edu/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso
At this point we are ready to create our first virtual machine with the following specifications:
 RAM: 512 MB (Note that the host must have at least 1024 MB)
 1 virtual CPU
 8 GB disk
 Name: centos7vm
virt-install --name=centos7vm --ram=1024 --vcpus=1 \
  --cdrom=/home/user/CentOS-7-x86_64-Minimal-1804.iso \
  --os-type=linux --os-variant=rhel7 \
  --network type=direct,source=eth0 \
  --disk path=/var/lib/libvirt/images/centos7vm.dsk,size=8
Depending on the computing resources available on the host, the above command may take some
time to bring up the virtualization viewer. This tool will enable you to perform the installation as if
you were doing it on a bare metal machine.
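Long option lists like the one above are easier to keep readable and reusable in a small wrapper script. The sketch below builds the same invocation from variables and only echoes it (a dry run; remove the echo once you have verified the command, and adjust the paths to your own system):

```shell
# Dry-run sketch: assemble the virt-install invocation from variables.
# Values mirror the example above; paths are assumptions, adjust them.
NAME=centos7vm
RAM=1024
ISO=/home/user/CentOS-7-x86_64-Minimal-1804.iso
DISK=/var/lib/libvirt/images/$NAME.dsk

# Echo instead of executing so the command can be reviewed first
echo virt-install --name="$NAME" --ram="$RAM" --vcpus=1 \
  --cdrom="$ISO" --os-type=linux --os-variant=rhel7 \
  --network type=direct,source=eth0 \
  --disk path="$DISK",size=8
```

Parameterizing the name and disk path this way also makes it trivial to create several similar guests by changing a single variable.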
Useful commands
After you have created a virtual machine, here are some commands you can use to manage it:

List all VMs:
virsh list --all
Get info about a VM (centos7vm in this case):
virsh dominfo centos7vm
Edit the settings of centos7vm in your default text editor:
virsh edit centos7vm
Enable or disable autostart to have the virtual machine boot (or not) when the host does:
virsh autostart centos7vm
virsh autostart --disable centos7vm
Stop centos7vm:
virsh shutdown centos7vm
Once it is stopped, you can clone it into a new virtual machine called centos7vm2:
virt-clone --original centos7vm --auto-clone --name centos7vm2
And that’s it. From this point on, you may want to refer to the virt-install, virsh, and virt-clone man
pages for further info.
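When you need to apply one of these operations to several guests at once, a small loop helps. The sketch below only echoes the commands it would run (a dry run; in practice you could replace the hard-coded names with the output of virsh list --name and drop the echo):

```shell
# Dry-run sketch: gracefully shut down a list of VMs. The names are
# hard-coded for illustration; substitute $(virsh list --name) and
# remove 'echo' once you are satisfied with the output.
for vm in centos7vm centos7vm2; do
    echo virsh shutdown "$vm"
done
```

The same pattern works for virsh autostart, virsh start, and the other per-domain subcommands.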

Managing and configuring containers


If you are a Linux system administrator who provides support for developers, chances are you’ve
heard of Docker. If not, this software solution will make your life easier beginning today by helping
you reduce operating costs and accelerate deployments – among other benefits.
But it’s not magic. Docker as a platform leverages containers – packages of an application along with all the tools it needs to run – to eliminate differences between environments. In other words,
containerized software will operate and can be managed consistently regardless of where it is
installed. Additionally, containers are much easier to set up, start, stop, and maintain than good old
virtual machines. If you’re interested in knowing more about the differences between these two
technologies, the official Docker website provides a great explanation.
To illustrate, in this chapter we will explain how to 1) install Docker on CentOS 7 and Ubuntu
16.04, and 2) spin up an Apache 2.4 container from Docker Hub. We will then use it to serve a
simple web page from our home directory – all without the need to install a web server on our host.
Installing Docker
To begin, let’s install Docker using the following command. This will download and run a shell
script that will add the Docker repository to our system and install the package:
curl -fsSL https://get.docker.com | sh
Next, use systemctl to start the main service and check its status:
systemctl start docker
systemctl status docker
At this point we can simply execute
docker
to view the list of available commands, or
docker COMMAND --help
to get help. For example,
docker ps --help
will tell us how to list containers present on our system, whereas
docker run --help
will print all the options that we can use to manipulate a container.

Setting up an Apache container
One of the amazing things about the Docker ecosystem is that there are hundreds of ready-made images that you can easily download and use. In the following example we will instantiate an Apache 2.4
container named tecmint-web, detached from the current terminal. We will use an image called
httpd:2.4 from Docker Hub.
Our plan is to have requests made to our public IP address on port 8080 be redirected to port 80 on
the container. Also, instead of serving content from the container itself, we will serve a simple web
page from /home/user/website. We do this by mapping /home/user/website/ on the host to /usr/local/apache2/htdocs/ inside the container. Note that you will need to use sudo or log in as root to proceed, and do not omit the trailing forward slashes on each directory:
sudo docker run -dit --name tecmint-web -p 8080:80 \
  -v /home/user/website/:/usr/local/apache2/htdocs/ httpd:2.4
At this point our container should be up and running:
sudo docker ps

Now let’s create a simple web page named docker.html inside /home/user/website:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Learn Docker at Tecmint.com</title>
</head>
<body>
<h1>Learn Docker With Us</h1>
</body>
</html>
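The directory and the page can be created in one step with a heredoc. A minimal sketch (a relative ./website path is used here for illustration; substitute /home/user/website on your host):

```shell
# Create the document root and a minimal test page in one go
# (relative path for illustration; use /home/user/website in practice)
mkdir -p ./website
cat > ./website/docker.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head><meta charset="UTF-8"><title>Learn Docker at Tecmint.com</title></head>
<body><h1>Learn Docker With Us</h1></body>
</html>
EOF

# Quick sanity check: the heading made it into the file
grep -c '<h1>' ./website/docker.html    # prints 1
```

Because the directory is bind-mounted into the container, any change to the file on the host is visible to Apache immediately, with no restart required.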

Next, point your browser to AAA.BBB.CCC.DDD:8080/docker.html (where AAA.BBB.CCC.DDD is your host’s public IP address). You should be presented with the page we created previously.

If you wish, you can now stop the container:
sudo docker stop tecmint-web
and remove it:
sudo docker rm tecmint-web
To finish cleaning up, you may want to delete the image that was used in the container (omit this
step if you’re planning on creating other Apache 2.4 containers soon):
sudo docker image remove httpd:2.4
Note that in all the above steps we never had to install the web server on our host.

Summary
In this chapter we explained how to install virtualization tools and Docker, and how to manipulate
virtual machines and containers. Unfortunately, these are just the basics – there are entire courses,
books, and certification exams that cover virtualization and Docker (and containers in general) more
in depth.

Congratulations on making it to the end of this book! Now please consider buying your exam voucher using the links below to earn us a small commission. This will help us keep this book updated.

Become a Linux Certified System Administrator at Training.LinuxFoundation.org!


Become a Linux Certified Engineer at Training.LinuxFoundation.org!
