We hope you will enjoy reading this ebook as much as we enjoyed writing it and formatting it for
distribution in PDF format.
You will probably think of other ideas that can enrich this material. If so, feel free to drop us a note
at admin@tecmint.com or one of our social network profiles:
http://twitter.com/tecmint
https://www.facebook.com/TecMint
https://plus.google.com/+Tecmint
In addition, if you find any typos or errors in this book, please let us know so that we can correct
them and improve the material. Questions and other suggestions are appreciated as well – we look
forward to hearing from you!
Important: All the commands used to perform administrative tasks (adding, updating, or removing
users / groups, changing permissions, managing packages, and so forth) should be preceded by
sudo if you are using Ubuntu.
Last, but not least, please consider buying your exam voucher using the following links to earn us a
small commission. This will help us keep this book updated.
Chapter 4: How to Monitor System Usage, Outages and Troubleshoot Linux Servers ....................42
Storage space utilization................................................................................................................42
Example 1: Reporting disk space usage in bytes and human-readable format...........42
Example 2: Inspecting inode usage by file system in human-readable format.................42
Chapter 16: How to Setup Apache with Name-Based Virtual Hosting with SSL Certificate..........153
Installing Apache Web Server......................................................................................................153
Configuring Apache.....................................................................................................................154
Serving Pages in a Standalone Web Server.................................................................................155
Restrict Access to a Web Page with Apache................................................................................156
Setting Up Name-Based Virtual Hosts........................................................................................157
Installing and Configuring SSL with Apache..............................................................................160
Summary......................................................................................................................................164
Chapter 17: How to Setup Nginx with Name-Based Virtual Hosting with SSL Certificate............165
Installing Nginx Web Server........................................................................................................165
Configuring Nginx Web Server...................................................................................................169
Serving Pages in a Standalone Web Server.................................................................................169
Restrict Access to a Web Page with Nginx................................................................................172
Chapter 24: Implement and Configure a PXE Boot Server on CentOS 7........................................231
Install and Configure DNSMASQ Server...................................................................................232
Installing SysLinux Bootloaders..................................................................................................234
Installing TFTP-Server...............................................................................................................234
Setting Up PXE Configuration....................................................................................................234
Adding CentOS 7 Boot Images to PXE.......................................................................................236
Creating CentOS 7 Local Mirror Installation Source..................................................................236
Testing FTP Installation Source...................................................................................................237
Configure Clients to Boot from PXE Network............................................................................240
Summary......................................................................................................................................243
A version control system (or VCS in short) is a tool that records changes to files on a filesystem.
There are many version control systems out there, but Git is currently the most popular and
frequently used, especially for source code management.
Version control can actually be used for nearly any type of file on a computer, not only source code.
Version control systems/tools offer several features that allow individuals or a group of people to work on the same files concurrently, record changes to those files over time, and recall specific versions later.
A project under a version control system such as Git will have mainly three sections, namely:
• a repository: a database for recording the state of or changes to your project files. It contains
all of the necessary Git metadata and objects for the new project. Note that this is normally
what is copied when you clone a repository from another computer on a network or remote
server.
• a working directory or area: stores a copy of the project files which you can work on (make
additions, deletions and other modification actions).
• a staging area: a file (known as index under Git) within the Git directory, that stores
information about changes, that you are ready to commit (save the state of a file or set of
files) to the repository.
There are two main types of VCSs, with the main difference being the number of repositories:
• Centralized Version Control Systems (CVCSs): here each project team member gets their
own local working directory, however, they commit changes to just a single central
repository.
• Distributed Version Control Systems (DVCSs): under this model, each project team member gets
their own local working directory and Git directory where they can make commits locally, and later share those changes with other repositories.
In addition, a Git repository can be bare (repository that doesn’t have a working directory) or non-
bare (one with a working directory).
Shared (or public or central) repositories should always be bare – all GitHub repositories are bare.
Git is a free and open source, fast, powerful, distributed, easy to use, and popular version control
system that is very efficient with large projects, and has a remarkable branching and merging
system.
It is designed to handle data more like a series of snapshots of a mini filesystem, which is stored in
a Git directory.
The workflow under Git is very simple: you make modifications to files in your working directory,
then selectively add just those files that have changed, to the staging area, to be part of your next
commit.
Once you are ready, you do a commit, which takes the files from staging area and saves that
snapshot permanently to the Git directory.
To install Git in Linux, use the appropriate command for your distribution of choice:
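For example (package names are the usual ones, but releases vary, so adjust for your distribution):

```shell
# On Debian, Ubuntu and derivatives:
sudo apt-get install git
# On CentOS/RHEL:
sudo yum install git
# On Fedora:
sudo dnf install git
```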
After installing Git, it is recommended that you tell Git who you are by providing your full name
and email address, as follows:
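A typical way to do this is with git config (the name and email below are placeholders for your own):

```shell
# Set your identity for all repositories on this machine:
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
# Review the settings:
git config --list
```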
# mkdir -p /projects/scripts/
# groupadd sysadmins
# usermod -aG sysadmins admin
# chown :sysadmins -R /projects/scripts/
# chmod 770 -R /projects/scripts/
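With the directory and permissions in place, a bare repository can be initialized there; a sketch (the bashscripts project name is an assumption based on the listing shown next):

```shell
# Initialize a bare repository (no working directory) to act as
# the central storage facility for the project:
git init --bare /projects/scripts/bashscripts
```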
At this point, you have successfully initialized a bare Git directory which is the central storage
facility for the project.
Try to do a listing of the directory to see all the files and directories in there:
# ls -la /projects/scripts/bashscripts/
You now have a local instance of the project in a non-bare repository (with a working directory).
You can create the initial structure of the project (i.e. add a README.md file, and sub-directories for
different categories of scripts, e.g. recon to store reconnaissance scripts, sysadmin to store sysadmin
scripts, etc.):
$ cd ~/bin/bashscripts/
$ ls -la
$ git status
$ git add -A
$ git commit -a -m "Initial Commit"
Right now, your local Git repository should be up to date with the project's central repository (origin).
You can confirm this by running the status command once more.
$ git status
You can also inform your colleagues to start working on the project by cloning the repository to their
local computers.
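Cloning might look like this (the server name and repository path are placeholders for your own setup):

```shell
# Clone the central repository over SSH into a local working copy:
git clone ssh://admin@example.com/projects/scripts/bashscripts
```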
Alternatively, you can create a new branch and switch to it in one step using the checkout
command with the -b flag.
You can also create a new branch based on another branch, for instance.
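Sketches of both forms (the branch names latest and develop are illustrative):

```shell
# Create a new branch and switch to it in one step:
git checkout -b latest
# Create a new branch based on another branch instead of the current one:
git checkout -b latest develop
```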
To check which branch you are in, use branch command (an asterisk character indicates the active
branch):
$ git branch
After creating and switching to the new branch, make some changes under it and do some commits.
$ vim sysadmin/topprocs.sh
$ git status
$ git add sysadmin/topprocs.sh
$ git commit -a -m 'modified topprocs.sh'
If you no longer need a particular branch, you can delete it using the -d switch.
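For example, assuming a branch named latest:

```shell
# Delete a fully merged local branch:
git branch -d latest
# Use -D instead to force-delete a branch with unmerged changes:
git branch -D latest
```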
To view the history of commits in the repository, use the log command:
$ git log
Another important feature is the show command which displays various types of objects (such as
commits, tags, trees etc..):
$ git show
Summary
Git allows a team of people to work together using the same file(s), while recording changes to the
file(s) over time so that they can recall specific versions later.
This way, you can use Git for managing source code, configuration files or any file stored on a
computer. You may want to refer to the Git Online Documentation for further documentation.
The difference between > (redirection operator) and | (pipeline operator) is that while the first
connects a command with a file, the latter connects the output of a command with another
command.
Since the redirection operator creates or overwrites files silently, we must use it with extreme
caution, and never mistake it with a pipeline. One advantage of pipes on Linux and UNIX systems
is that there is no intermediate file involved with a pipe – the stdout of the first command is not
written to a file and then read by the second command.
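A quick illustration of the difference (greeting.txt is just an example file name):

```shell
# Redirection: write the output of echo to a file,
# which is created or overwritten silently:
echo "hello world" > greeting.txt
# Pipeline: feed the output of echo directly into tr,
# with no intermediate file involved:
echo "hello world" | tr '[:lower:]' '[:upper:]'   # prints HELLO WORLD
```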
For the following practice exercises we will use the poem “A happy child” (anonymous author).
Using sed
The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used
to perform basic text transformations on an input stream (a file or input from a pipeline).
The most basic (and popular) usage of sed is the substitution of characters. We will begin by
changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output
to ahappychild2.txt.
Basic syntax:
Our example:
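The basic syntax is sed 's/term/replacement/flags' file. Our example might look like this (the input file name is an assumption based on the output file mentioned above):

```shell
# Replace every (g) occurrence of lowercase y with uppercase Y,
# redirecting the result to a new file:
sed 's/y/Y/g' ahappychild.txt > ahappychild2.txt
```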
Should you want to search for or replace a special character (such as /, \, or &), you need to escape
it, in the search term or replacement strings, with a backslash.
For example, we will substitute the word and for an ampersand. At the same time, we will replace
the word I with You when the first one is found at the beginning of a line.
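A sketch of such a command (two substitutions separated by a semicolon; the ampersand is escaped because it has a special meaning in sed replacements, and ^ anchors I to the beginning of a line):

```shell
# Replace the word "and" with an ampersand, and a leading "I" with "You":
sed 's/and/\&/g;s/^I/You/' ahappychild.txt
```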
Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we
will display the first 5 lines of /var/log/messages from Jun 8.
Note that by default, sed prints every line. We can override this behaviour with the -n option and
then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern
(Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case).
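Those two commands might look as follows (the exact log path and date spacing vary by system):

```shell
# Print (p) only the lines of the log that start with "Jun  8":
sed -n '/^Jun  8/ p' /var/log/messages
# Then keep only lines 1 through 5 of that output:
sed -n '/^Jun  8/ p' /var/log/messages | sed -n 1,5p
```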
Finally, it can be useful while inspecting scripts or configuration files to inspect the code itself and
leave out comments. The following sed one-liner deletes (d) blank lines or those starting
with # (the | character indicates a boolean OR between the two regular expressions).
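The one-liner might be (apache2.conf is just an example target file; the alternation uses GNU sed's \| syntax):

```shell
# Delete (d) lines that start with # or are blank:
sed '/^#\|^$/d' apache2.conf
```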
Finally, you can combine sort and uniq (as they usually are). Consider the following file with a list
of donors, donation date, and amount.
Suppose we want to know how many unique donors there are. We will use the following command
to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines.
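Assuming the list is saved as donors.txt with colon-delimited fields, the pipeline could be:

```shell
# Extract the donor names (field 1), sort them,
# collapse duplicate lines, and count the result:
cut -d: -f1 donors.txt | sort | uniq | wc -l
```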
Show all the contents of /etc whose name begins with rc followed by any single number.
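One possible answer uses a shell glob, where the character class [0-9] matches exactly one digit:

```shell
# List entries in /etc whose names begin with rc followed by a single digit:
ls -d /etc/rc[0-9]*
```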
tr Command Usage
The tr command can be used to translate (change) or delete characters from stdin, and write the
result to stdout.
Examples:
Change all lowercase to uppercase in sortuniq.txt file.
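Using character classes, the command could be:

```shell
# Translate every lowercase letter to its uppercase counterpart:
cat sortuniq.txt | tr '[:lower:]' '[:upper:]'
```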
# ls -l | tr -s ' '
Summing up, we will create a text stream consisting of the first and third non-blank fields of the
output of the last command. We will use grep as a first filter to check for sessions of user gacanepa,
then squeeze delimiters to only one space (tr -s ‘ ‘).
Next, we’ll extract the first and third fields with cut, and finally sort by the second field (IP
addresses in this case) showing unique.
# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq
The above command shows how multiple commands and pipes can be combined to obtain the desired result.
Summary
Although this example (along with the rest of the examples in the current tutorial) may not seem
very useful at first sight, they are a nice starting point to begin experimenting with commands that
are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your
questions and comments below – they will be much appreciated!
PSSH tool includes parallel versions of OpenSSH and related tools such as:
These tools are good for System Administrators who find themselves working with large collections
of nodes on a network.
On CentOS:
On Ubuntu:
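The installation commands might be (on CentOS, pssh is typically provided via the EPEL repository; package names vary by release):

```shell
# On CentOS (with the EPEL repository enabled):
sudo yum install pssh
# On Ubuntu:
sudo apt-get install pssh
```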
The lines in the host file are in the following form and can also include blank lines and comments.
192.168.0.10:22
192.168.0.11:22
1. To execute echo “Hello TecMint” on the terminals of multiple Linux hosts as the root user, being
prompted for the root user’s password, run the command below.
Important: Remember all the hosts must be included in the host file.
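A sketch of that command, where hosts.txt is the host file shown above, -l sets the remote user, -A prompts for a password, and -i prints each host's output inline:

```shell
pssh -h hosts.txt -l root -A -i 'echo "Hello TecMint"'
```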
2. To find out the disk space usage on multiple Linux servers on your network, you can run a single
command as follows.
3. If you wish to know the uptime of multiple Linux servers at one go, then you can run the
following command.
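Those two checks might look like this, reusing the same host file:

```shell
# Disk space usage on every host:
pssh -h hosts.txt -i 'df -h'
# Uptime of every host:
pssh -h hosts.txt -i 'uptime'
```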
# pssh --help
Summary
Parallel SSH or PSSH is a good tool to use for executing commands in an environment where
a System Administrator has to work with many servers on a network. It will make it easy for
commands to be executed remotely on different hosts on a network.
In this chapter we will present a list of a few tools that are available in most upstream distributions
to check on the system status, analyze outages, and troubleshoot ongoing issues. Specifically, of the
myriad of available data, we will focus on CPU, storage space and memory utilization, basic
process management, and log analysis.
The first one, df (which stands for disk free), is typically used to report overall disk space usage by
file system.
Example 1: Reporting disk space usage in bytes and human-readable format
Without options, df reports disk space usage in 1K blocks. With the -h flag it will display the same
information using MB or GB instead. Note that this report also includes the total size of each file
system, the free and available space, and the mount point of each storage device:
# df
# df -h
That’s certainly nice – but there’s another limitation that can render a file system unusable, and that
is running out of inodes. All files in a file system are mapped to an inode that contains its metadata.
# df -hTi
According to the above image, there are 146 used inodes (1%) in /home, which means that you can
still create 226K files in that file system.
Note that you can run out of storage space long before running out of inodes, and vice versa. For
that reason, you need to monitor not only the storage space utilization but also the number of inodes
used by each file system.
Use the following commands to find empty files or directories (which occupy 0B) that are using
inodes without a reason:
Also, you can add the -delete action at the end of each command if you also want to delete those
empty files and directories:
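Those commands can be built with find; for example (searching under /home here, adjust the path as needed):

```shell
# Find empty regular files:
find /home -type f -empty
# Find empty directories:
find /home -type d -empty
# Append -delete to remove what was found (use with care):
find /home -type f -empty -delete
```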
If the use of a certain file system is above a predefined percentage, you can use du (short for disk
usage) to find out what are the files that are occupying the most space.
The example is given for /var, which, as you can see in the first image above, is at 67% usage.
# du -sch /var/*
Note that you can switch to any of the above sub-directories to find out exactly what’s in them and how
much each item occupies. You can then use that information to either delete some files if they are not
needed, or extend the size of the logical volume if necessary.
To start top, simply type the following command in your command line, and hit Enter.
# top
1. The current time (8:41:32 pm) and uptime (7 hours and 41 minutes). Only one user is logged on
to the system, and the load averages during the last 1, 5, and 15 minutes are 0.00, 0.01, and 0.05,
respectively. A value of 0.00 means that over that interval no processes were waiting for the CPU,
while 0.01 and 0.05 mean it was loaded by 1% and 5% (an average of 0.01 and 0.05 processes
were waiting for the CPU). If the value is less than 1 (0.65, for example), the system was idle for
35% of the time during the last 1, 5, or 15 minutes, depending on where 0.65 appears.
2. Currently there are 121 processes (you can see the complete listing in area 6). Only 1 of them
is running (top in this case, as you can see in the %CPU column) and the remaining 120 are waiting
in the background: they are “sleeping” and will remain in that state until we call them. You can
verify this by opening a mysql prompt and executing a couple of queries; you will notice how the
number of running processes increases.
Alternatively, you can open a web browser and navigate to any given page that is being served by
Apache and you will get the same result. Of course, these examples assume that both services are
installed in your server.
3. us (time running user processes with unmodified priority), sy (time running kernel processes), ni
(time running user processes with modified priority), wa (time waiting for I/O completion), hi (time
spent servicing hardware interrupts), si (time spent servicing software interrupts), st (time stolen
from the current vm by the hypervisor – only in virtualized environments).
To inspect RAM and swap usage you can also use the free command.
# free
Of course you can also use the -m (MB) or -g (GB) switches to display the same information in
human-readable form:
# free -m
Either way, you need to be aware of the fact that the kernel reserves as much memory as possible
and makes it available to processes when they request it. Particularly, the “-/+ buffers/cache” line
shows the actual values after this I/O cache is taken into account.
In other words, the amount of memory used by processes and the amount available to other
processes (in this case, 232 MB used and 270 MB available, respectively). When processes need
this memory, the kernel will automatically decrease the size of the I/O cache.
The process listing above shows the following information: owner of the process, PID, Parent PID
(the parent process), processor utilization, time when command started, tty (the ? indicates it’s a
daemon), the cumulated CPU time, and the command associated with the process.
Example 8: Customizing and sorting the output of ps
However, perhaps you don’t need all that information, and would like to show the owner of the
process, the command that started it, its PID and PPID, and the percentage of memory it’s currently
using - in that order, and sort by memory use in descending order (note that ps by default is sorted
by PID).
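One way to express that request (ps -o selects the columns, --sort orders the listing):

```shell
# Show owner, command, PID, PPID and memory usage,
# sorted by %MEM in descending order:
ps -eo user,comm,pid,ppid,%mem --sort=-%mem
```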
where the minus sign in front of %mem indicates sorting in descending order.
Another reason why you would consider doing this is when you have started a process in the
foreground but want to pause it and resume it in the background.
Example 9: Pausing the execution of a running process and resuming it in the background
When the normal execution of a certain process implies that no output will be sent to the screen
while it’s running, you may want to either start it in the background (appending an ampersand at the
end of the command).
# process_name &
or, once it has started running in the foreground, pause it by pressing Ctrl + Z and send it to the
background. You can later resume it there by sending it the SIGCONT signal (number 18 on x86 Linux):
# Ctrl + Z
# kill -18 PID
# cd /var/log
# cd /var/log/cups
# ls
# tail error_log
The above screenshot provides some helpful information to understand what could be causing your
issue. Note that following these steps or correcting the malfunctioning process may still not
solve the overall problem, but if you get into the habit of checking the logs every
time a problem arises (be it a local or a network one), you’ll definitely be on the right track.
Example 12: Examining the logs for hardware failures
Although hardware failures can be tricky to troubleshoot, you should check the dmesg and
messages logs and grep for related words to a hardware part presumed faulty.
The image below is taken from /var/log/messages after looking for the word error using the
following command:
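The command might be as simple as:

```shell
# Search the system log for lines containing the word "error":
grep error /var/log/messages
```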
Summary
In this chapter we have explored some of the tools that can help you to always be aware of your
system’s overall status. In addition, you need to make sure that your operating system and installed
packages are updated to their latest stable versions. And never, ever, forget to check the logs!
In this chapter we will review some well-known tools to examine the performance and increase the
security of a network, and what to do when things aren’t going as expected. Please note that this list
does not present to be comprehensive, so feel free to comment on this post using the form at the
bottom if you would like to add another useful utility that we could be missing.
For example, you need to disable your FTP server if your network does not require one (there are
more secure methods to share files over a network, by the way).
In addition, you should avoid having a web server and a database server in the same system. If one
component becomes compromised, the rest run the risk of getting compromised as well.
However, in this chapter we will focus on the information related to network security only.
Example 1: Showing ALL TCP ports (sockets) that are open on our server
All services running on their default ports (i.e. http on 80, mysql on 3306) are indicated by their
respective names. Others (obscured here for privacy reasons) are shown in their numeric form.
# ss -t -a
On a side note, you may want to check RFC 793 to refresh your memory about possible TCP states
because you also need to check on the number and the state of open TCP connections in order to
become aware of (D)DoS attacks.
Example 2: Displaying ALL active TCP connections with their timers
# ss -t -o
In the output above, you can see that there are 2 established SSH connections. The second field of
the timer value for the first connection reads 36 minutes: that is the amount of time until the next
keepalive probe will be sent.
Since it’s a connection that is being kept alive, you can safely assume that is an inactive connection
and thus can kill the process after finding out its PID.
As for the second connection, you can see that it’s currently being used (as indicated by on).
Example 3: Filtering connections by socket
Suppose you want to filter TCP connections by socket. From the server’s point of view, you need to
check for connections where the source port is 80.
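Using ss's filter expression syntax, that could be written as:

```shell
# Show numeric TCP sockets whose local (source) port is 80:
ss -tn '( sport = :80 )'
```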
A wise sysadmin needs to check how his or her systems are seen by outsiders, and make sure
nothing is left to chance by auditing them frequently. That is called “defensive port scanning”.
Example 4: Displaying information about open ports
You can use the following command to scan which ports are open on your system or in a remote
host:
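A sketch of that command (the target address is a placeholder; -sS requires root privileges):

```shell
# Aggressive scan (-A: OS/version detection, script scanning, traceroute)
# using a stealthy TCP SYN scan (-sS):
nmap -A -sS 192.168.0.10
```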
The above command will scan the host for OS and version detection, port information, and
traceroute (-A). Finally, -sS sends a TCP SYN scan, preventing nmap from completing the 3-way TCP
handshake and thus typically leaving no logs on the target machine.
Before proceeding with the next example, please keep in mind that port scanning is not an illegal
activity. What IS illegal is using the results for a malicious purpose.
For example, the output of the above command run against the main server of a local university
returns the following (only part of the result is shown for sake of brevity):
This specific port scan operation provides all the information that can also be obtained by other
commands, such as:
Example 5: Displaying information about a specific port in a local or remote system
Example 6: Showing traceroute to, and finding out version of services and OS type, hostname
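Sketches of those two scans (host and port are placeholders):

```shell
# Example 5: query a specific port (here, 22) on a host:
nmap -p 22 192.168.0.10
# Example 6: traceroute plus service versions and OS detection:
nmap -sV -O --traceroute 192.168.0.10
```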
You can check the man page for further details on how to perform other types of port scanning.
Nmap is indeed a very powerful and versatile network mapper utility, and you should be very well
acquainted with it in order to defend the systems you’re responsible for against attacks originated
after a malicious port scan by outsiders.
1. Nmon Utility
nmon is a system tuner and benchmark tool. As such, it can display the CPU, memory, network,
disks, file systems, NFS, top processes, and resources (Linux version & processors). Of course,
we’re mainly interested in the network performance feature.
2. Vnstat Utility
vnstat is a console-based network traffic monitor that keeps a log of hourly (daily or monthly as
well) network traffic for the selected interface(s).
After installing the package, you need to enable the monitoring daemon as follows:
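On a systemd-based distribution that might be:

```shell
# Enable the vnstat daemon at boot and start it now:
sudo systemctl enable vnstat
sudo systemctl start vnstat
```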
Once you have installed and enabled vnstat, you can initialize the database to record traffic for eth0
(or other NIC) as follows:
# vnstat -u -i eth0
As I have just installed vnstat in the machine that I’m using to write this chapter, I still haven’t
gathered enough data to display usage statistics:
The vnstatd daemon will continue running in the background and collecting traffic data. Until it
collects enough data to produce output, you can refer to the project’s web site to see what the traffic
analysis looks like.
Or even between two remote hosts (in this case, copy the file myFile.txt from remote_host1 to
remote_host2):
# scp remote_user1@remote_host1:/absolute/path/to/remote/directory1/myFile.txt remote_user1@remote_host2:/absolute/path/to/remote/directory2/
Don’t forget to use the -P switch if SSH is listening on a port other than the default 22.
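Such an sftp connection is opened as follows:

```shell
# Connect to an SSH server listening on a non-default port XXXX:
sftp -oPort=XXXX username@host
```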
where XXXX represents the port where SSH is listening on host, which can be either a hostname or
its corresponding IP address. You can disregard the -oPort flag if SSH is listening on its default port
(22).
Once the connection is successful, you can issue the following commands to send or receive files:
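For example, typed at the sftp prompt (the directory names are placeholders):

```shell
# Receive a remote directory recursively (-r), preserving permissions (-P):
get -Pr remoteDirectory
# Send a local directory recursively:
put -r localDirectory
```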
In both cases, the -r switch is used to recursively send or receive files. In the first case,
the -P option will also preserve the original file permissions.
To close the connection, simply type “exit” or “bye”. You can read more about sftp here.
In fact, that may be the last thing that you will have to do in front of a physical terminal. For
security reasons, using Telnet for this purpose is not a good idea, as all traffic goes through the wire
in unencrypted, plain text.
To begin, you will have to install the openssh, openssh-clients and openssh-server packages. Note
that it’s a good idea to install the server counterpart as you may want to use the same machine as
both client and server at some point or another.
After installation, there are a couple of basic things that you need to consider if you want to secure
remote access to your SSH server. The following settings should be present in the
/etc/ssh/sshd_config file.
1. Change the port where the sshd daemon will listen on from 22 (the default value) to a high port
(2000 or greater), but first make sure the chosen port is not being used.
For example, let’s suppose you choose port 2500. Use netstat to check whether the chosen port is
being used or not:
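A quick check with netstat might look like this:

```shell
# Look for anything already listening on port 2500
# (-n numeric, -p programs, -l listening, -t TCP, -u UDP):
netstat -npltu | grep ':2500'
```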
If netstat does not return anything, you can safely use port 2500 for sshd, and you should change the
Port setting in the configuration file as follows:
Port 2500
Protocol 2
LoginGraceTime 2m
PermitRootLogin no
AllowUsers gacanepa
PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
At this point you will need to restart the SSH server to apply the above changes.
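Depending on your distribution, one of the following should work (note that on Debian/Ubuntu the unit is named ssh rather than sshd):

```shell
# systemd-based systems:
sudo systemctl restart sshd
# SysV-init systems:
sudo service sshd restart
```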
In this example we will setup SSH password-less automatic login from server 192.168.0.12 as user
tecmint to 192.168.0.11 with user sheena.
First login into server 192.168.0.12 with user tecmint and generate a pair of public keys using
following command.
# ssh-keygen -t rsa
Next, from 192.168.0.12 connect to 192.168.0.11 using sheena as user and create .ssh directory
under /home/sheena:
$ ssh sheena@192.168.0.11
$ mkdir -p .ssh
We are almost there. From 192.168.0.12 we will now upload the newly generated public key
(id_rsa.pub) to server 192.168.0.11 under sheena‘s .ssh directory as a file named authorized_keys.
$ cat ~/.ssh/id_rsa.pub | ssh sheena@192.168.0.11 'cat >> .ssh/authorized_keys'
Summary
You may want to complement what we have covered in this chapter with what we’ve already
learned in other chapters. If you know your systems well, you will be able to easily detect malicious
or suspicious activity when the numbers show unusual activity without an apparent reason. You will
also be able to plan for network resources if you’re expecting a sudden increase in their use.
In this chapter we will explore a few ways to ensure that the system (both hardware and
software) is behaving correctly, to avoid potential issues that may cause unexpected production
downtime and financial loss. Keep in mind that the files in /var/log are your best friends for this.
Once you have installed this tool, you can use it to generate reports of processor statistics.
To display 3 global reports of CPU utilization (-u) for all CPUs (as indicated by -P ALL) at a 2-
second interval, do:
# mpstat -P ALL -u 2 3
To view the same statistics for a specific CPU (CPU 0 in the following example), use:
# mpstat -P 0 -u 2 3
• CPU: Processor number as an integer, or the word all as an average for all processors.
• %sys: Percentage of CPU utilization that occurred while executing kernel processes. This
does not include time spent dealing with interrupts or handling hardware.
• %iowait: Percentage of time when the given CPU (or all) was idle, during which there was an
outstanding disk I/O request on that CPU. A more detailed explanation (with
examples) can be found at http://veithen.github.io/2013/11/18/iowait-linux.html.
• %steal: Percentage of time spent in involuntary wait (steal or stolen time), that is, while the
virtual machine, as a guest, was competing with other guests for the hypervisor’s attention
for the CPU(s). This value should be kept as small as possible: a high value in this field
means the virtual machine is stalling, or soon will be.
• %idle: percentage of time when CPU(s) were not executing any tasks. If you observe a low
value in this column, that is an indication of the system being placed under a heavy load. In
that case, you will need to take a closer look at the process list, as we will discuss in a
minute, to determine what is causing it.
To place the processor under a somewhat high load, run the following commands and then
execute mpstat (as indicated) in a separate terminal:
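One way to generate such a load is a sketch like this; dd writes pseudo-random data to a file, keeping one CPU busy (the same dd also produces the test.iso file referenced later in this chapter):

```shell
# In one terminal, create sustained CPU and I/O activity:
dd if=/dev/urandom of=test.iso bs=1M count=1024
# Meanwhile, in a separate terminal, watch CPU 0:
mpstat -P 0 -u 2 3
```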
As you can see in the image above, CPU 0 was under a heavy load during the first two examples, as
indicated by the %idle column.
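The listing described next can be produced with a ps invocation along these lines:

```shell
# Show PID, PPID, command, %CPU and %RAM,
# sorted by CPU usage in descending order:
ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu
```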
The above command will only show the PID, PPID, the command associated with the process, and
the percentage of CPU and RAM usage sorted by the percentage of CPU usage in descending order.
When executed during the creation of the .iso file, here are the first few lines of the output:
Once we have identified a process of interest (such as the one with PID=2822), we can navigate to
/proc/PID (/proc/2822 in this case) and do a directory listing.
This directory is where several files and sub-directories with detailed information about this process
are kept while it is running.
For example:
1. /proc/2822/io contains IO statistics for the process (number of characters and bytes read and
written, among others, during IO operations).
3. /proc/2822/cgroup describes the control groups (cgroups for short) to which the process
belongs if the CONFIG_CGROUPS kernel configuration option is enabled, which you can
verify with:
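The check itself is not shown here; a common way to do it, assuming your distribution ships the kernel build configuration in /boot:

```shell
# Look for the cgroups option in the running kernel's build configuration
# (on minimal systems without /boot, check /proc/config.gz instead)
grep CONFIG_CGROUPS "/boot/config-$(uname -r)" 2>/dev/null || true
```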
CONFIG_CGROUPS=y
Using cgroups you can manage the amount of allowed resource usage on a per-process basis as
explained in Chapters 1 through 4 of the Red Hat Enterprise Linux 7 Resource Management guide,
and in the Control Groups section of the Ubuntu 14.04 Server documentation.
4. /proc/2822/fd is a directory that contains one symbolic link for each file descriptor the
process has opened. The following image shows this information for the process that was
started in tty1 (the first terminal) to create the .iso image:
The above image shows that stdin (file descriptor 0), stdout (file descriptor 1), and stderr (file
descriptor 2) are mapped to /dev/zero, /root/test.iso, and /dev/tty1, respectively.
More information about /proc can be found in “The /proc filesystem” document kept and
maintained by Kernel.org, and in the Linux Programmer's Manual.
To do this, edit /etc/security/limits.conf and add the following line at the bottom of the file to set the
limit:
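A line matching the description that follows would look like this:

```
*    hard    nproc    10
```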
The first field can be used to indicate either a user, a group, or all of them (*), whereas the second
field enforces a hard limit on the number of processes (nproc) of 10. To apply the changes, logging
out and back in is enough.
Thus, let’s see what happens if a certain user other than root (either a legitimate one or not) attempts
to start a shell fork bomb. If we had not implemented limits, this would initially launch two
instances of a function, and then duplicate each of them in a neverending loop. Thus, it would
eventually bring your system to a crawl.
However, with the above restriction in place, the fork bomb does not succeed but the user will still
get locked out until the system administrator kills the process associated with it:
TIP: Other possible restrictions made possible by ulimit are documented in the limits.conf file.
1. Modify the execution priority (use of system resources) of a process using renice. This
means that the kernel will allocate more or less system resources to the process based on the
assigned priority (a number commonly known as “niceness” in a range from -20 to 19). The
lower the value, the greater the execution priority. Regular users (other than root) can only
modify the niceness of processes they own to a higher value (meaning a lower execution
priority), whereas root can modify this value for any process, and may increase or decrease
it.
If no option is given between the new priority value and the argument that follows it, renice treats
that argument as a PID by default. In that case, the niceness of the process with PID=identifier is
set to <new priority>.
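A minimal sketch, operating on the current shell so it can be tried safely as a regular user:

```shell
# Raise the niceness (i.e. lower the execution priority) of the current shell,
# then verify the new value
renice -n 10 -p $$
ps -o ni= -p $$
```

Regular users can only raise the niceness this way; lowering it back requires root.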
2. Terminate a process entirely with the kill command, passing its PID:
# kill PID
Alternatively, you can use pkill to terminate all processes of a given owner (-u), or a group owner (-
G), or even those processes which have a PPID in common (-P). These options may be followed by
the numeric representation or the actual name as identifier:
For example, the following command will kill all processes owned by the group with GID=1000:
# pkill -G 1000
Similarly, this one will kill all processes whose parent process ID (PPID) is 4993:
# pkill -P 4993
Before running a pkill, it is a good idea to test the results with pgrep first, perhaps using the -l
option as well to list the processes’ names.
It takes the same options but only returns the PIDs of processes (without taking any further action)
that would be killed if pkill is used.
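For example, to preview what a pkill on a given owner would match (root is used here only because its processes always exist; substitute any user of interest):

```shell
# List the PIDs and names of processes owned by root
pgrep -l -u root
```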
When executing commands, any output is mailed to the owner of the crontab (or to the user
specified in the MAILTO environment variable in the /etc/crontab file, if it exists).
Crontab files (which are created by typing crontab -e and pressing Enter) have the following
format:
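The format consists of five time-and-date fields followed by the command to run:

```
*  *  *  *  *  command to execute
|  |  |  |  |
|  |  |  |  +---- day of week (0-7; both 0 and 7 mean Sunday)
|  |  |  +------- month (1-12)
|  |  +---------- day of month (1-31)
|  +------------- hour (0-23)
+---------------- minute (0-59)
```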
Thus, if we want to update the local file database (which is used by locate to find files by name or
pattern) every second day of the month at 2:15 am, we need to add the following crontab entry:
# 15 02 2 * * /bin/updatedb
The above crontab entry reads, “Run /bin/updatedb on the second day of the month, every month of
the year, regardless of the day of the week, at 2:15 am”. As I’m sure you already guessed, the star
symbol is used as a wildcard character.
After adding a cron job, you can see that a file named root was added inside /var/spool/cron, as we
mentioned earlier. That file lists all the tasks that the crond daemon should run:
If you need to run a task on a more fine-grained basis (for example, twice a day or three times each
month), cron can also help you to do that.
For example, to run /my/script on the 1st and 15th of each month and send any output to /dev/null,
you can add two crontab entries as follows:
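The two entries are missing from this copy; a sketch (the 00:01 run time is an assumption for illustration):

```
01 00 1 * * /my/script > /dev/null 2>&1
01 00 15 * * /my/script > /dev/null 2>&1
```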
But for the task to be easier to maintain, you can combine both entries into one:
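A single combined entry might look like this (the 00:01 run time is an assumption for illustration):

```
01 00 1,15 * * /my/script > /dev/null 2>&1
```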
Following the previous example, we can run /my/other/script at 1:30 am on the first day of the
month every three months:
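Spelled out with an explicit month list, the entry would be:

```
30 01 1 1,4,7,10 * /my/other/script
```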
But when you must repeat a certain task every “x” minutes, hours, days, or months, you can divide
the corresponding time field by the desired frequency using the */x notation. The following crontab
entry has the exact same meaning as the previous one:
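The missing entry, using the */3 step notation (it matches the MAILTO example shown later in this section):

```
30 01 1 */3 * /my/other/script
```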
Or perhaps you need to run a certain job at a fixed frequency or right after the system boots. You
can use one of the following strings instead of the five time fields to indicate the exact time when
you want your job to run:
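These are the special strings defined in crontab(5):

```
@reboot     Run once, at startup
@yearly     Run once a year, same as "0 0 1 1 *"  (also: @annually)
@monthly    Run once a month, same as "0 0 1 * *"
@weekly     Run once a week, same as "0 0 * * 0"
@daily      Run once a day, same as "0 0 * * *"  (also: @midnight)
@hourly     Run once an hour, same as "0 * * * *"
```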
MAILTO=gacanepa@tecmint.com
30 01 1 */3 * /my/other/script > /dev/null 2>&1
will send the output of /my/other/script, if any, to gacanepa@tecmint.com. Of course, this requires
that an MTA be installed and configured on the same machine.
Finally, it is important to note that system-wide jobs are usually placed in /etc/crontab. You can
check /var/log/syslog (grep for cron) for more information.
Summary
In this chapter we have explored a few ways to monitor resource usage to verify the integrity and
availability of critical hardware and software components in a Linux system. We have also learned
how to take appropriate action (either by adjusting the execution priority of a given process or by
terminating it) under unusual circumstances.
Essentially, the kernel has two main responsibilities:
1. Acting as an interface between the hardware and the software running on the system.
2. Managing system resources as efficiently as possible.
To do this, the kernel communicates with the hardware through the drivers that are built into it or
those that can be later installed as a module.
For example, when an application running on your machine wants to connect to a wireless network,
it submits that request to the kernel, which in turn uses the right driver to connect to the network.
With new devices and technologies coming out periodically, it is important to keep our kernel up to
date if we want to make the most out of them. Additionally, updating our kernel will help us to
leverage new kernel functions and to protect ourselves from vulnerabilities that have been
discovered in previous versions.
Ready to update your kernel on CentOS 7 and Ubuntu? If so, keep reading!
# uname -sr
If we now go to https://www.kernel.org/, we will see that the latest kernel version is 4.20 at the time
of this writing (other versions are available from the same site).
This new 4.20 kernel is a long-term support release and will be supported for 6 years; earlier,
Linux kernel versions were supported for 2 years only.
One important thing to consider is the life cycle of a kernel version – if the version you are
currently using is approaching its end of life, no more bug fixes will be provided after that date. For
more info, refer to the kernel Releases page.
However, this will only perform the upgrade to the most recent version available from the
distribution’s repositories, not necessarily the latest one published at https://www.kernel.org/.
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000_4.20.0-042000.201812232030_all.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000-generic_4.20.0-042000.201812232030_amd64.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-image-
unsigned-4.20.0-042000-generic_4.20.0-042000.201812232030_amd64.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-modules-
4.20.0-042000-generic_4.20.0-042000.201812232030_amd64.deb
$ sudo dpkg -i *.deb
On 32-Bit System
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000_4.20.0-042000.201812232030_all.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-headers-
4.20.0-042000-generic_4.20.0-042000.201812232030_i386.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-image-
4.20.0-042000-generic_4.20.0-042000.201812232030_i386.deb
$ wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v4.20/linux-modules-
4.20.0-042000-generic_4.20.0-042000.201812232030_i386.deb
$ sudo dpkg -i *.deb
Finally, reboot your machine to apply the latest kernel, and then select the latest kernel from the
GRUB menu.
# uname -sr
# grub2-mkconfig -o /boot/grub2/grub.cfg
Reboot and verify that the latest kernel is now being used by default.
Summary
In this chapter we have explained how to easily upgrade the Linux kernel on your system. There is
yet another method which we haven’t covered as it involves compiling the kernel from source,
which would deserve an entire book and is not recommended on production systems.
Although it represents one of the best learning experiences and allows for a fine-grained
configuration of the kernel, you may render your system unusable and may have to reinstall it from
scratch.
If you are still interested in building the kernel as a learning experience, you will find instructions
on how to do it at the Kernel Newbies page.
It dynamically creates or removes device nodes (an interface to a device driver that appears in a file
system as if it were an ordinary file, stored under the /dev directory) at boot time or if you add a
device to or remove a device from the system. It then propagates information about a device or
changes to its state to user space.
Its function is to 1) supply the system applications with device events, 2) manage the permissions
of device nodes, and 3) create useful symlinks in the /dev directory for accessing devices, and even
rename network interfaces.
One of the pros of udev is that it can use persistent device names to guarantee consistent naming of
devices across reboots, despite their order of discovery. This feature is useful because the kernel
simply assigns unpredictable device names based on the order of discovery.
In this chapter, we will learn how to use Udev for device detection and management on Linux
systems. Note that most if not all mainstream modern Linux distributions come with Udev as part of
the default installation.
Whenever you connect a device to the system, the kernel detects and initializes it, and a directory
with the device name is created under /sys/ directory which stores the device attributes.
The main configuration file for udev is /etc/udev/udev.conf, and to control the runtime behavior of
the udev daemon, you can use the udevadm utility.
$ udevadm monitor
To find the name assigned to your USB disk, use the lsblk utility which reads the sysfs filesystem
and udev db to gather information about processed devices.
$ lsblk
#!/bin/bash
echo "USB device added at $(date)" >>/tmp/scripts.log
#!/bin/bash
echo "USB device removed at $(date)" >>/tmp/scripts.log
Next, let’s create a rule to trigger execution of the above scripts, called /etc/udev/rules.d/80-
test.rules.
$ vim /etc/udev/rules.d/80-test.rules
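The rule file's contents are not shown here; a sketch consistent with the keys explained below (the script paths /bin/device_added.sh and /bin/device_removed.sh are assumptions standing in for the two scripts created above):

```
SUBSYSTEM=="usb", ACTION=="add", ENV{DEVTYPE}=="usb_device", RUN+="/bin/device_added.sh"
SUBSYSTEM=="usb", ACTION=="remove", ENV{DEVTYPE}=="usb_device", RUN+="/bin/device_removed.sh"
```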
where:
• "==": is an operator to compare for equality.
• "+=": is an operator to add the value to a key that holds a list of entries.
• SUBSYSTEM: matches the subsystem of the event device.
• ACTION: matches the name of the event action.
• ENV{DEVTYPE}: matches against a device property value, device type in this case.
• RUN: specifies a program or script to execute as part of the event handling.
Save the file and close it.
Then, as root, tell systemd-udevd to reload the rules files (this also reloads other databases such as
the kernel module index) by running:
$ sudo udevadm control --reload
$ ls -l /tmp/scripts.log
Then the file should have an entry such as “USB device removed at date_time”, as shown in the
screenshot.
$ cat /tmp/scripts.log
For more information on how to write udev rules and manage udev, consult the udev and udevadm
manual entries respectively, by running:
$ man udev
$ man udevadm
Summary
Udev is a remarkable device manager that provides a dynamic way of setting up device nodes in the
/dev directory. It ensures that devices are configured as soon as they are plugged in and discovered.
It propagates information about a processed device or changes to its state, to user space.
Another popular and widely-used MAC is AppArmor, which in addition to the features provided by
SELinux, includes a learning mode that allows the system to “learn” how a specific application
behaves, and to set limits by configuring profiles for safe application usage.
In CentOS 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by
default (more on this in the next section), as opposed to Ubuntu which uses AppArmor.
In this chapter we will explain the essentials of SELinux and AppArmor and how to use one of
these tools for your benefit depending on your chosen distribution.
• Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that
control the security engine.
• Permissive: SELinux does not deny access, but denials are logged for actions that would
have been denied if running in enforcing mode.
SELinux can also be disabled. Although it is not an operation mode itself, it is still an option.
However, learning how to use this tool is better than just ignoring it. Keep it in mind!
To display the current mode of SELinux, use getenforce. If you want to toggle the operation
mode, use setenforce 0 (to set it to Permissive) or setenforce 1 (Enforcing).
Since this change will not survive a reboot, you will need to edit the /etc/selinux/config file and set
the SELINUX variable to either enforcing, permissive, or disabled to achieve persistence across
reboots:
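For example, to make Enforcing mode persistent:

```
# /etc/selinux/config (excerpt)
SELINUX=enforcing
```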
One of the typical uses of setenforce consists of toggling between SELinux modes (from enforcing
to permissive or the other way around) to troubleshoot an application that is misbehaving or not
working as expected. If it works after you set SELinux to Permissive mode, you can be confident
you’re looking at a SELinux permissions issue.
Two classic cases where we will most likely have to deal with SELinux are: 1) changing the default
port where a daemon (such as sshd) listens, and 2) setting the DocumentRoot directive for a virtual
host outside of the default directory.
To do this, we use the Port directive in /etc/ssh/sshd_config followed by the new port number as
follows (we will use port 9999 in this case):
Port 9999
After attempting to restart the service and checking its status we will see that it failed to start:
If we look at /var/log/audit/audit.log, we will see that sshd was prevented from starting on port
9999.
At this point most people would probably disable SELinux but we won’t. We will see that there’s a
way for SELinux, and sshd listening on a different port, to live in harmony together. Make sure you
have the policycoreutils-python package installed, and run the following to view a list of the ports
where SELinux allows sshd to listen on:
# semanage port -l | grep ssh
In the following image we can
also see that port 9999 was reserved for another service and thus we can’t use it to run another
service for the time being:
Of course, we could choose another port for SSH, but if we are certain that we will not need to use
this specific machine for any JBoss-related services, we can then modify the existing SELinux rule
and assign that port to SSH instead:
# semanage port -m -t ssh_port_t -p tcp 9999
After that, we can use the first semanage command to check whether the port was correctly
assigned, or use the -lC option (short for list custom):
# semanage port -lC
DocumentRoot "/websrv/sites/gabriel/public_html"
Apache will refuse to serve the content because the index.html has been labeled with the default_t
SELinux type, which Apache can’t access:
# wget http://localhost/index.html
# ls -lZ /websrv/sites/gabriel/public_html/index.html
As with the previous example, you can temporarily switch SELinux to Permissive mode (setenforce
0) to verify that this is indeed a SELinux-related issue. To fix it, change the SELinux label of the
directory with semanage:
# semanage fcontext -a -t httpd_sys_content_t "/websrv/sites/gabriel/public_html(/.*)?"
The above command will grant Apache read-only access to that directory and its contents.
Finally, to apply the policy (and make the label change effective immediately), do:
# restorecon -R -v /websrv/sites/gabriel/public_html
# wget http://localhost/index.html
For more information on SELinux, refer to the Fedora 25 SELinux User’s and Administrator’s
Guide.
Profiles are then used to place limits on how applications interact with processes and files in the
system. A set of profiles is provided out-of-the-box with the operating system, whereas others can
be put in place either automatically by applications when they are installed or manually by the
system administrator.
These logs will show, in lines containing the word audit, the errors that would occur if the profile
were run in enforce mode. Thus, you can try out an application in complain mode and adjust its
behavior before running it under AppArmor in enforce mode.
$ sudo apparmor_status
The image above indicates that the profiles /sbin/dhclient, /usr/sbin/, and /usr/sbin/tcpdump are in
enforce mode (that is true by default in Ubuntu).
Since not all applications ship with associated AppArmor profiles, you can install the
apparmor-profiles package, which provides additional profiles for programs whose own packages
do not include one.
By default, they are configured to run in complain mode so that system administrators can test them
and choose which ones are desired. We will make use of apparmor-profiles since writing our own
profiles is out of the scope of the certification.
AppArmor profiles are stored inside /etc/apparmor.d. Let’s look at the contents of that directory
before and after installing apparmor-profiles:
$ ls /etc/apparmor.d
For more information on AppArmor, please refer to the official wiki and to the documentation
provided by Ubuntu.
Summary
In this chapter we have gone through the basics of SELinux and AppArmor, two well-known
MACs. When to use one or the other? To avoid difficulties, you may want to consider sticking with
the one that comes with your chosen distribution.
In any event, they will help you place restrictions on processes and access to system resources to
increase the security in your servers.
# adduser [new_account]
# useradd [new_account]
When a new user account is added to the system, the following operations are performed:
2. The following hidden files are copied into the user’s home directory, and will be used to
provide environment variables for his/her user session.
.bash_logout
.bash_profile
.bashrc
4. A group is created and given the same name as the new user account.
Understanding /etc/passwd
The full account information is stored in the /etc/passwd file. This file contains a record per system
user account and has the following format (fields are delimited by a colon):
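The record layout is the standard seven-field format:

```
[username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]
```

The x in the second field indicates that the encrypted password is actually stored in /etc/shadow.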
Understanding /etc/group
Group information is stored in the /etc/group file.
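Each record follows this four-field format:

```
[Group name]:[Group password]:[GID]:[Group members]
```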
where
• [Group members]: a comma separated list of users who are members of [Group name].
To set the expiry date for an account, use the --expiredate flag followed by a date in YYYY-MM-
DD format.
To change the default location of the user’s home directory, use the -d, or --home options, followed
by the absolute path to the new home directory.
To change the shell the user will use by default, use --shell, followed by the path to the new shell.
# groups tecmint
# id tecmint
In the following example, we will set the expiry date of the tecmint user account to October 30th,
2014. We will also add the account to the root and users groups. Finally, we will set sh as its
default shell and change the location of the home directory to /tmp:
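The usermod invocation is not shown in this copy; a sketch combining the flags described above (run as root; the tecmint account is created here only to make the example self-contained):

```shell
# Create the example account if it does not exist yet
id tecmint >/dev/null 2>&1 || useradd tecmint
# Set expiry date, supplementary groups, home directory and default shell
usermod --expiredate 2014-10-30 \
        --append --groups root,users \
        --home /tmp --shell /bin/sh tecmint
getent passwd tecmint
```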
Unlocking a password: use the -u or the --unlock option to unlock a user’s password that was
previously locked.
Creating a new group for read and write access to files that need to be accessed by several users.
# groupdel [group_name]
If there are files owned by group_name, they will not be deleted; they will simply remain
associated with the GID of the deleted group.
You can delete an account (along with its home directory, if it’s owned by the user, and all the files
residing therein, and also the mail spool) using the userdel command with the --remove option:
Group Management
Every time a new user account is added to the system, a group with the same name is created with
the username as its only member. Other users can be added to the group later.
One of the purposes of groups is to implement a simple access control to files and other system
resources by setting the right permissions on those resources.
All of them need read and write access to a file called common.txt located somewhere on your local
system, or maybe on a network share that user1 has created. You may be tempted to do something
like:
# chmod 660 common.txt
However, this will only provide read and write access to the owner of the file and to those users
who are members of the group owner of the file (user1 in this case).
Again, you may be tempted to add user2 and user3 to group user1, but that will also give them
access to the rest of the files owned by user user1 and group user1.
This is where groups come in handy, and here’s what you should do in a case like this.
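A sketch of the approach (run as root; the group name common_group is an assumption, and the example users are created here only so the snippet is runnable):

```shell
# Create a dedicated group and add the three users to it
groupadd common_group 2>/dev/null || true
for u in user1 user2 user3; do
    id "$u" >/dev/null 2>&1 || useradd "$u"
    usermod -aG common_group "$u"     # add each user to the shared group
done
touch common.txt                      # stands in for the existing shared file
chown :common_group common.txt        # make common_group the group owner
chmod 660 common.txt                  # owner and group: read/write; others: none
ls -l common.txt
```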
Like the basic permissions discussed earlier, they are set using an octal value or through a letter
(symbolic notation) that indicates the type of permission.
SETUID
When the setuid permission is applied to an executable file, a user running the program inherits the
effective privileges of the program's owner. Since this approach can raise reasonable security
concerns, the number of files with the setuid permission must be kept to a minimum.
You will likely find programs with this permission set when a system user needs to access a file
owned by root. Summing up, it isn’t just that the user can execute the binary file, but also that he
can do so with root’s privileges.
For example, let’s check the permissions of /bin/passwd. This binary is used to change the password
of an account, and modifies the /etc/shadow file.
The superuser can change anyone’s password, but all other users should only be able to change their
own.
SETGID
When the setgid bit is set on an executable file, the effective GID of the user running it becomes
that of the file's group owner. Thus, any user running the file does so with the privileges granted to
its group owner.
In addition, when the setgid bit is set on a directory, newly created files inherit the same group as
the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory.
You will most likely use this approach whenever members of a certain group need access to all the
files in a directory, regardless of the file owner's primary group.
To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions.
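For example, on a hypothetical shared directory whose basic permissions are 770:

```shell
mkdir -p shared_dir
chmod 2770 shared_dir   # octal: prepend 2 to the basic permissions 770
ls -ld shared_dir       # an "s" appears in the group's execute slot
```

The symbolic equivalent is chmod g+s shared_dir.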
To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic
permissions.
Without the sticky bit, anyone able to write to the directory can delete or rename files. For that
reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.
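For example, a /tmp-style world-writable directory is created like this (a sketch on a scratch directory):

```shell
mkdir -p scratch_dir
chmod 1777 scratch_dir  # octal: prepend 1 to the basic permissions 777
ls -ld scratch_dir      # a "t" appears in the others' execute slot
```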
# chattr +i file1
# chattr +a file2
After executing those two commands, file1 will be immutable (which means it cannot be moved,
renamed, modified or deleted) whereas file2 will enter append-only mode (it can only be opened in
append mode for writing).
$ su
If authentication succeeds, you will be logged in as root, keeping the same working directory you
were in before. To load root’s own environment instead, use:
$ su -
The above procedure requires that a normal user knows root’s password, which poses a serious
security risk.
For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute
commands as a different user (usually the superuser) in a very controlled and limited way.
Thus, restrictions can be set on a user to enable him to run one or more specific privileged
commands and no others.
To authenticate with sudo, the user enters his/her own password, not the superuser's. After typing
the command, we will be prompted for our password, and if the authentication succeeds (and if the
user has been granted privileges to run the command), the specified command is carried out.
$ visudo
Defaults secure_path="/usr/sbin:/usr/bin:/sbin"
root ALL=(ALL) ALL
tecmint ALL=/bin/yum update
gacanepa ALL=NOPASSWD:/bin/updatedb
%admin ALL=(ALL) ALL
Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
This line lets you specify the directories that will be searched for executables run via sudo, and
prevents the use of user-specific directories, which could harm the system.
• The first ALL keyword indicates that this rule applies to all hosts.
• The second ALL indicates that the user in the first column can run commands with the
privileges of any user.
If no user is specified after the = sign, sudo assumes the root user. In this case, user tecmint will be
able to run yum update as root.
gacanepa ALL=NOPASSWD:/bin/updatedb
Finally, the % sign indicates that the last line applies to a group called “admin”. The meaning of
the rest of the line is identical to that of a regular user entry: members of the group “admin” can
run all commands as any user on all hosts.
To see what privileges are granted to you by sudo, use the “-l” option to list them:
$ sudo -l
This tool – present on all modern Linux distributions - overcame the problem often faced by
developers in the early days of Linux, when each program that required authentication had to be
compiled specially to know how to get the necessary information.
For example, when the login program needs to authenticate a user, PAM provides dynamically the
library that contains the functions for the right authentication scheme.
Thus, changing the authentication scheme for the login application (or any other program using
PAM) is easy since it only involves editing a configuration file (most likely, a file named after the
application, located inside /etc/pam.d, and less likely in /etc/pam.conf).
Files inside /etc/pam.d indicate which applications are using PAM natively. In addition, we can tell
whether a certain application uses PAM by checking whether the PAM library (libpam) has been
linked to it:
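For example (a quick check; login links against PAM, while a utility like top does not):

```shell
# Show whether the PAM library appears among login's shared-library dependencies
ldd $(which login) | grep libpam
```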
In the above image we can see that the libpam has been linked with the login application. This
makes sense since this application is involved in the operation of system user authentication,
whereas top does not.
Let’s examine the PAM configuration file for passwd – yes, the well-known utility to change user’s
passwords. It is located at /etc/pam.d/passwd:
# cat /etc/pam.d/passwd
The first column indicates the type of authentication to be used with the module-path (third
column). When a hyphen appears before the type, PAM will not record to the system log if the
module cannot be loaded because it could not be found in the system.
The second column (called control) indicates what should happen if the authentication with this
module fails:
• requisite: if the authentication via this module fails, overall authentication will be denied
immediately.
• required is similar to requisite, although all other listed modules for this service will be
called before denying authentication.
• sufficient: if the authentication via this module succeeds, PAM immediately returns success
and skips the remaining modules, provided no previous module marked as required has
failed; a failure of a sufficient module is not decisive by itself.
• optional: if the authentication via this module fails or succeeds, nothing happens unless this
is the only module of its type defined for this service.
• include means that the lines of the given type should be read from another file.
• substack is similar to include, but authentication failures or successes do not cause the exit
of the complete module stack, only of the substack.
The fourth column, if it exists, shows the arguments to be passed to the module.
The first three lines in /etc/pam.d/passwd (shown above), load the system-auth module to check that
the user has supplied valid credentials (account).
If so, it allows him / her to change the authentication token (password) by giving permission to use
passwd (auth).
For example, to prevent users from reusing their last two passwords, you can append
remember=2
to the line that loads the pam_unix module for the password group in /etc/pam.d/system-auth.
For more information refer to the Linux-PAM System Administrator’s guide and in man 5
pam.conf.
Summary
Effective user and file management skills are essential tools for any system administrator. In this
chapter we have covered the basics and hope you can use it as a good starting point to build upon.
The LDAP information model is based on entries. An entry in an LDAP directory represents a
single unit of information and is uniquely identified by what is called a Distinguished Name (DN).
Each of the entry’s attributes has a type and one or more values.
An attribute is a piece of information associated with an entry. The types are typically mnemonic
strings, such as “cn” for common name, or “mail” for email address. Each attribute is assigned one
or more values consisting of a space-separated list.
In this chapter, we will show how to install and configure OpenLDAP server for centralized
authentication in Ubuntu 16.04/18.04 and CentOS 7.
On Ubuntu, during the package installation you will be prompted to enter the password for the
admin entry in your LDAP directory; set a secure password and confirm it.
When the installation is complete, you can start the service as explained next.
On CentOS 7, run the following commands to start the openldap server daemon, enable it to auto-
start at boot time and check if its up and running (on Ubuntu the service should be auto-started
under systemd, you can simply check its status):
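The commands referenced above are not shown in this copy; on a systemd-based CentOS 7 system they would be:

```
# systemctl start slapd
# systemctl enable slapd
# systemctl status slapd
```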
Next, allow requests to the LDAP server daemon through the firewall as shown.
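The firewall commands are missing here; with firewalld (the CentOS 7 default) they would be:

```
# firewall-cmd --add-service=ldap --permanent
# firewall-cmd --reload
```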
$ slappasswd
Then create an LDIF file (ldaprootpasswd.ldif) which is used to add an entry to the LDAP directory.
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD_CREATED
• olcDatabase: indicates a specific database instance name and can be typically found
inside /etc/openldap/slapd.d/cn=config.
• cn=config: indicates global config options.
• PASSWORD: is the hashed string obtained while creating the administrative user.
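The command applying ldaprootpasswd.ldif is not shown in the text; with a standard OpenLDAP setup it would be (a sketch):

```
# ldapadd -Y EXTERNAL -H ldapi:/// -f ldaprootpasswd.ldif
```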
Next, import some basic LDAP schemas from the /etc/openldap/schema directory as follows.
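The import commands are not shown here; the schemas usually loaded at this stage are cosine, nis, and inetorgperson (a sketch, assuming those three):

```
# ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
# ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
# ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
```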
Now add your domain in the LDAP database and create a file called ldapdomain.ldif for your
domain.
Add the following content in it (replace example with your domain and PASSWORD with the
hashed value obtained before):
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=example,dc=com
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=example,dc=com
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=example,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=example,dc=com" write by * read
Then add the above configuration to the LDAP database with the following command.
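The command itself is missing from this copy; applying ldapdomain.ldif would look like this (a sketch):

```
# ldapmodify -Y EXTERNAL -H ldapi:/// -f ldapdomain.ldif
```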
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: example com
dc: example
dn: cn=Manager,dc=example,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager
dn: ou=People,dc=example,dc=com
objectClass: organizationalUnit
ou: People
dn: ou=Group,dc=example,dc=com
objectClass: organizationalUnit
ou: Group
Save the file and then add the entries to the LDAP directory.
The next step is to create an LDAP user, for example tecmint, and set a password for this user as
follows.
Then create the definitions for an LDAP group in a file called ldapgroup.ldif with the following
content.
dn: cn=Manager,ou=Group,dc=example,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 1005
Next, create another LDIF file called ldapuser.ldif and add the definitions for user tecmint.
dn: uid=tecmint,ou=People,dc=example,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: tecmint
uid: tecmint
uidNumber: 1005
gidNumber: 1005
homeDirectory: /home/tecmint
userPassword: {SSHA}PASSWORD_HERE
loginShell: /bin/bash
gecos: tecmint
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0
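A hedged sketch of loading the two files above and setting the user's password (you will be prompted for the Manager password in each case):

```shell
# Load the group and user definitions created above
ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f ldapgroup.ldif
ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f ldapuser.ldif

# Interactively set a new password for the tecmint user (-S prompts for it)
ldappasswd -x -D "cn=Manager,dc=example,dc=com" -W -S "uid=tecmint,ou=People,dc=example,dc=com"
```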
Once you have set up a central server for authentication, the final part is to enable the client to
authenticate using LDAP, as explained in the next chapter.
For more information, see the appropriate documentation from OpenLDAP Software document
catalog and Ubuntu users can refer to the OpenLDAP server guide.
Summary
OpenLDAP is an open-source implementation of LDAP for Linux. In this chapter, we have shown
how to install and configure an OpenLDAP server for centralized authentication on Ubuntu
16.04/18.04 and CentOS 7.
Directory services play an important role in developing intranet and Internet applications by helping
you share information about users, systems, networks, applications, and services throughout the
network.
A typical use case for LDAP is to offer a centralized storage of usernames and passwords. This
allows various applications (or services) to connect to the LDAP server to validate users.
After setting up a working LDAP server, you will need to install libraries on the client for
connecting to it.
In this chapter, we will show how to configure an LDAP client to connect to an external
authentication source.
$ sudo apt update && sudo apt install libnss-ldap libpam-ldap ldap-utils nscd
During the installation, you will be prompted for details of your LDAP server (provide the values
according to your environment).
Next, enter the name of the LDAP search base; you can use the components of your domain name
for this purpose, as shown in the screenshot.
Next, disable the login requirement for the LDAP database using the next option.
Next, enter the password to use when ldap-auth-config tries to log in to the LDAP directory using
the LDAP account for root.
The results of the dialog will be stored in the file /etc/ldap.conf. If you want to make any alterations,
open and edit this file using your favorite command line editor.
Then configure the system to use LDAP for authentication by updating PAM configurations. From
the menu, choose LDAP and any other authentication mechanisms you need. You should now be
able to log in using LDAP-based credentials.
In case you want the home directory of the user to be created automatically, then you need to
perform one more configuration in the common-session PAM file.
Note: If you are using replication, LDAP clients will need to refer to multiple servers specified
in /etc/ldap.conf. You can specify all the servers in this form:
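A sketch of the form referred to, using the hostnames mentioned below (several space-separated URIs on a single uri line in /etc/ldap.conf):

```
uri ldap://ldap1.example.com ldap://ldap2.example.com
```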
This means that the request will time out if the Provider (ldap1.example.com) becomes
unresponsive, and the Consumer (ldap2.example.com) will then be contacted to process it.
To check the LDAP entries for a particular user from the server, run the getent command, for
example.
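For instance, a small wrapper around getent (the user name tecmint is the example user created earlier; getent consults every source listed in /etc/nsswitch.conf, including LDAP):

```shell
# Resolve a user through NSS (local files, LDAP, ...) and report the result
lookup_user() {
  if getent passwd "$1"; then
    echo "User $1 resolved"
  else
    echo "User $1 not found"
  fi
}

lookup_user tecmint
```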
If the above command displays details of the specified user in /etc/passwd format, your client
machine is now configured to authenticate with the LDAP server, and you should be able to log in
using LDAP-based credentials.
Next, enable the client system to authenticate using LDAP. You can use the authconfig utility, which
is an interface for configuring system authentication resources.
Run the following command and replace example.com with your domain
and dc=example,dc=com with your LDAP domain controller.
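A sketch of that invocation (the server and base DN values are the examples used throughout this chapter; run as root):

```shell
authconfig --enableldap --enableldapauth \
  --ldapserver=ldap://example.com \
  --ldapbasedn="dc=example,dc=com" \
  --enablemkhomedir --update
```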
In the above command, the --enablemkhomedir option creates a local user home directory at the
first connection if none exists.
The above command should display details of the specified user from the /etc/passwd file, which
implies that the client machine is now configured to authenticate with the LDAP server.
Important: If SELinux is enabled on your system, you need to add a rule to allow creating home
directories automatically by mkhomedir.
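One common approach (a sketch; requires the audit2allow toolchain from policycoreutils) is to build a local policy module from the AVC denials that mkhomedir generates:

```shell
# Collect recent AVC denials and turn them into a local policy module
ausearch -m avc -ts recent | audit2allow -M mkhomedir_local
# Install the generated module
semodule -i mkhomedir_local.pp
```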
For more information, consult the appropriate documentation from OpenLDAP Software document
catalog.
Summary
LDAP is a widely used protocol for querying and modifying a directory service. In this chapter, we
have shown how to configure an LDAP client to connect to an external authentication source on
Ubuntu and CentOS client machines.
PAM integrates multiple low-level authentication modules into a high-level API that provides
dynamic authentication support for applications. This allows developers to write applications that
require authentication independently of the underlying authentication system.
• As a system administrator, the most important thing is to master how PAM configuration
file(s) define the connection between applications (services) and the pluggable
authentication modules (PAMs) that perform the actual authentication tasks. You don’t
necessarily need to understand the internal working of PAM.
• PAM has the potential to seriously alter the security of your Linux system. Erroneous
configuration can disable access to your system partially or completely. For instance, an
accidental deletion of configuration files under /etc/pam.d/* and/or /etc/pam.conf can
lock you out of your own system!
The format of each rule is a space-separated collection of tokens (the first three are case-
insensitive). We will explain these tokens in subsequent sections.
Where:
The syntax of each file in /etc/pam.d/ is similar to that of the main file and is made up of lines of the
following form:
This is an example of a rule definition (without module-arguments) found in the /etc/pam.d/sshd file,
which disallows non-root logins when /etc/nologin exists:
• account: provides services for account verification: has the user’s password expired? Is this
user permitted access to the requested service?
• authentication: authenticates a user and sets up user credentials.
• password: responsible for updating user passwords; works together with
authentication modules.
• session: manages actions performed at the beginning and at the end of a session.
• requisite: failure instantly returns control to the application indicating the nature of the first
module failure.
• required: all these modules are required to succeed for libpam to return success to the
application.
• sufficient: given that all preceding modules have succeeded, the success of this module leads
to an immediate and successful return to the application (failure of this module is ignored).
• optional: the success or failure of this module is generally not recorded.
In addition to the above keywords, there are two other valid control flags:
• include: include all lines of given type from the configuration file specified as an argument
to this control.
• substack: like include, but the included lines are evaluated as a sub-stack: done and die
actions within the sub-stack skip only the rest of the sub-stack, not the complete module stack.
Next, we need to create the file /etc/ssh/deniedusers and add the name root in it:
Save the changes and close the file, then set the required permissions on it:
From now on, the above rule will tell PAM to consult the /etc/ssh/deniedusers file and deny access
to the SSH and login services for any listed user.
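That rule is typically a pam_listfile line like the following (a sketch), placed near the top of /etc/pam.d/sshd and /etc/pam.d/login:

```
auth    required    pam_listfile.so onerr=succeed item=user sense=deny file=/etc/ssh/deniedusers
```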
Where valueN corresponds to the return code from the function invoked in the module for which
the line is defined. You can find the supported values in the online PAM Administrator’s Guide. A
special value is default, which covers all valueN’s not mentioned explicitly.
The actionN can take one of the following forms:
• ignore: if this action is used with a stack of modules, the module’s return status will not
contribute to the return code the application obtains.
Each of the four keywords (required, requisite, sufficient, and optional) has an equivalent
expression in terms of the [...] syntax, which allows you to write more complicated rules:
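For reference, the equivalences documented in the pam.d(5) man page are:

```
required   [success=ok new_authtok_reqd=ok ignore=ignore default=bad]
requisite  [success=ok new_authtok_reqd=ok ignore=ignore default=die]
sufficient [success=done new_authtok_reqd=done default=ignore]
optional   [success=ok new_authtok_reqd=ok default=ignore]
```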
The following is an example from a modern CentOS 7 system. Let’s consider these rules from
the /etc/pam.d/postlogin PAM file:
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
session [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quiet
session [default=1] pam_lastlog.so nowtmp showfailed
session optional pam_lastlog.so silent noupdate showfailed
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth [success=done ignore=ignore default=die] pam_pkcs11.so nodebug wait_for_card
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 1000 quiet
account required pam_permit.so
password required pam_pkcs11.so
session optional pam_keyinit.so revoke
session required pam_limits.so
session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
For more information, see the pam.d man page:
$ man pam.d
Lastly, a comprehensive description of the Configuration file syntax and all PAM modules can be
found in the documentation for Linux-PAM.
Summary
PAM is a powerful high-level API that allows programs that rely on authentication to authenticate
users on a Linux system.
In this chapter, we’ve explained how to configure advanced features of PAM in Ubuntu and
CentOS.
Testing Environment:
• Local Host: 192.168.43.31
• Remote Host: CentOS 7 VPS with hostname server1.example.com.
Usually, you can securely connect to a remote server using SSH as follows. In this example, I have
configured passwordless SSH login between my local and remote hosts, so it has not asked for user
admin’s password.
$ ssh admin@server1.example.com
You can forward a local port to a remote host and port as follows. The -L flag defines the local port
to forward together with the remote host and port it is forwarded to.
Adding the -N flag means no remote command is executed; you will not get a shell in this case.
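A sketch of local port forwarding (the port numbers 8080 and 3000 are examples): connections to local port 8080 are tunneled to port 3000 on the remote side.

```shell
# "localhost" here is resolved on the remote side of the tunnel
ssh -L 8080:localhost:3000 -N admin@server1.example.com
```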
Now, on your local machine, open a browser; instead of accessing the remote application using its
remote address and port, you can use the forwarded local port.
Look for the required directive, uncomment it, and set its value to yes, as shown in the screenshot.
GatewayPorts yes
Save the changes and exit. Next, you need to restart sshd to apply the recent change you made.
Next, run the following command to forward port 5000 on the remote machine to port 3000 on the
local machine.
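A sketch of the remote (reverse) forwarding command, using the port numbers from the text:

```shell
# -R: connections to port 5000 on the remote host reach port 3000 locally
ssh -R 5000:localhost:3000 -N admin@server1.example.com
```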
You can enable dynamic port forwarding using the -D option. The following command will start a
SOCKS proxy on port 1080 allowing you to connect to the remote host.
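A sketch of the dynamic forwarding command:

```shell
# -D: start a SOCKS proxy on local port 1080, tunneled through the SSH session
ssh -D 1080 -N admin@server1.example.com
```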
From now on, you can make applications on your machine use this SSH proxy server by editing
their settings and configuring them to use it, to connect to your remote server. Note that
the SOCKS proxy will stop working after you close your SSH session.
Summary
In this chapter, we explained the various types of port forwarding from one machine to another, for
tunneling traffic through the secure SSH connection.
Attention: SSH port forwarding has some considerable disadvantages and can be abused: it can be
used to bypass network monitoring and traffic-filtering programs (or firewalls), and attackers can
use it for malicious activities.
Firewalld is easy to use and configure, and it’s now the default firewall management tool on
RHEL/CentOS, Fedora and several other Linux distributions.
In this chapter, we will discuss how to configure the system firewall with firewalld and implement
basic packet filtering in CentOS and Ubuntu.
The global configuration file for firewalld is located at /etc/firewalld/firewalld.conf and firewall
features are configured in XML format.
The default configuration comes with a number of predefined zones sorted according to the default
trust level of the zones from untrusted to trusted: drop, block, public, external, dmz, work, home,
internal and trusted. They are defined in files stored under the /usr/lib/firewalld/zones directory.
You can configure or add your custom zones using the CLI client or simply create or copy a zone
file in /etc/firewalld/zones from existing files and edit it.
Another important concept under firewalld is services. A service is defined using ports and
protocols; these definitions represent a given network service such as a web server or remote access
service. Services are defined in files stored under the /usr/lib/firewalld/services/ or
/etc/firewalld/services/ directory.
If you know basic iptables/ip6tables/ebtables concepts, you can also use the direct interface (or
configuration) to gain direct access to the firewall. For those without any iptables knowledge, the
rich language can be employed to create more complex firewall rules for IPv4 and IPv6.
Installing Firewalld
On CentOS 7, the firewalld package comes pre-installed, and you can verify this using the following
command.
On Ubuntu 16.04 and 18.04, you can install it using the default package manager as shown.
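For example (a sketch; the package is named firewalld on both families):

```shell
# CentOS 7: verify the package is installed
rpm -qa firewalld

# Ubuntu 16.04/18.04: install it
sudo apt update && sudo apt install firewalld
```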
Managing Firewalld
Firewalld is a regular systemd service that can be managed via the systemctl command.
If you save any changes permanently, you need to reload firewalld for them to take effect. This
reloads the firewall rules while keeping state information; the current permanent configuration
becomes the new runtime configuration.
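A sketch of the basic management commands:

```shell
sudo systemctl start firewalld
sudo systemctl enable firewalld
sudo systemctl status firewalld

# Reload the permanent configuration into the runtime, keeping state information
sudo firewall-cmd --reload
```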
The default zone is the zone used for every firewall feature that is not explicitly bound to
another zone. You can get the default zone set for network connections and interfaces by running:
To set the default zone, for example to external, use the following command.
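A sketch of both operations:

```shell
# Show the current default zone
sudo firewall-cmd --get-default-zone

# Make "external" the default zone
sudo firewall-cmd --set-default-zone=external
```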
Note that adding the option --permanent sets the configuration permanently (or enables querying of
information from the permanent configuration environment).
Next, let’s look at how to add an interface to a zone. This example shows how to add your wireless
network adapter (wlp1s0) to zone home, which is used in home areas.
An interface can only be added to a single zone. To move it to another zone, use the --change-
interface switch as shown, or remove it from the previous zone using the --remove-interface switch,
then add it to the new zone.
Assuming you want to connect to a public Wi-Fi network, you should move your wireless interface
back to the public zone, like this:
You can use many zones at the same time. To get a list of all active zones with the enabled features
such as interfaces, services, ports, protocols, run:
Another useful option is --get-target , which shows you the target of a permanent zone. A target is
one of: default, ACCEPT, DROP, REJECT. You can check the target of various zones:
Blocking or closing a port in the firewall is equally easy: simply remove it from a zone with the --
remove-port option. For example, to close ports 80 and 443 in the public zone.
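A sketch of the commands (--permanent makes the change persist; reload to apply it):

```shell
sudo firewall-cmd --zone=public --permanent --remove-port=80/tcp --remove-port=443/tcp
sudo firewall-cmd --reload
```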
Instead of using port or port/protocol combination, you can use the service name to which a port is
assigned as explained in the next section.
A typical use case for masquerading is to perform port forwarding. Assuming you want to SSH
from a remote machine to a host in your internal network with the IP 10.20.1.3, on which the sshd
daemon is listening on port 5000.
You can forward all connections to port 22 on your Linux server to the intended port on your target
host by issuing:
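A sketch using --add-forward-port (the IP address and ports are from the text; masquerading must be enabled on the zone for forwarding to another host):

```shell
sudo firewall-cmd --zone=external --add-masquerade
sudo firewall-cmd --zone=external \
  --add-forward-port=port=22:proto=tcp:toport=5000:toaddr=10.20.1.3
```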
Here is an example of how to pass a raw iptables rule, using the --add-rule switch. You can easily
remove such rules by replacing --add-rule with --remove-rule :
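A sketch of a direct rule that accepts TCP traffic to port 80 (the chain and priority are examples):

```shell
sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 \
  -p tcp --dport 80 -j ACCEPT
```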
If you aren’t familiar with iptables syntax, you can opt for firewalld’s “rich language” for creating
more complex firewall rules in an easy to understand manner as explained next.
The --add-rich-rule option is used to add rich rules. This example shows how to allow
new IPv4 and IPv6 connections for the service http and log 1 per minute using audit:
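A sketch of that rule (the syntax follows the firewalld rich-language documentation; omitting the family covers both IPv4 and IPv6):

```shell
sudo firewall-cmd --add-rich-rule='rule service name=http audit limit value="1/m" accept'
```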
To remove the added rule, replace the --add-rich-rule option with --remove-rich-rule .
To enable panic mode, use the --panic-on option. You can test whether it is working using the ping
command as shown. Because the packet is dropped, the name www.google.com cannot be
resolved, hence the error displayed.
Lockdown Firewalld
Remember, we mentioned in the firewalld basics that local applications or services are able to alter
the firewall configuration if they are running with root privileges. You can control which
applications are able to request firewall changes by specifying them in a lockdown whitelist.
This feature is turned off by default; you can enable or disable it with the --lockdown-on or
--lockdown-off options. Note that it is recommended to enable or disable this feature by editing the
main configuration file, because firewall-cmd itself may not be on the lockdown whitelist when you
enable lockdown.
Find the parameter Lockdown and change its value from no (means off) to yes (means on).
Lockdown=yes
Summary
Firewalld is an easy-to-use replacement for the iptables service, which uses iptables as a backend.
In this chapter, we have shown how to install firewalld package, explained firewalld’s important
features and discussed how to configure them in the runtime and permanent configuration
environments.
Note that this chapter is not supposed to be a comprehensive guide on Apache, but rather a starting
point for self-study about this topic for the LFCE exam.
By now, you should have the Apache web server installed and running. You can verify this with the
following command:
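The check referred to can be performed like this (a sketch):

```shell
# Look for a running Apache process (either "apache2" or "httpd")
ps -ef | grep -Ei 'apache|httpd' | grep -v grep
```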
Note that the above command checks for the presence of either apache or httpd (the most common
names for the web daemon) among the list of running processes. If Apache is running, you will get
output like the following:
The ultimate method of testing the Apache installation and checking whether it’s running is
launching a web browser and pointing it to the IP address of the server.
Configuring Apache
The main configuration file for Apache can be found in different locations depending on your
distribution:
/etc/apache2/apache2.conf # Ubuntu
/etc/httpd/conf/httpd.conf # CentOS
Fortunately for us, the configuration directives are extremely well documented in the Apache
project web site. We will refer to some of them throughout this chapter.
The DocumentRoot directive specifies the directory out of which Apache will serve web pages and
other documents.
Note that by default, all requests are taken from this directory, but you can also use symbolic links
and / or aliases to point to other locations as well.
Unless matched by the Alias directive (which allows documents to be stored in the local filesystem
instead of under the directory specified by DocumentRoot), the server appends the path from the
requested URL to the document root to make the path to the document.
The access log is typically found inside /var/log/httpd (CentOS) or /var/log/apache2 (Ubuntu) under
a descriptive name, such as access.log or access_log, and records every request served. If a requested
resource does not exist, the failed event is still logged to the access log, but with a 404 (Not Found)
response.
The format of the access log can be customized according to your needs using the LogFormat
directive in the main configuration file, whereas you cannot do the same with the error log.
where each of the letters preceded by a percent sign instructs the server to log a certain piece of
information:
String   Description
%h       Remote hostname or IP address
%l       Remote log name
%u       Remote user if the request is authenticated
%t       Date and time when the request was received
%r       First line of request to the server
%>s      Final status of the request
%b       Size of the response [bytes]
Here, nickname is an optional alias that can be used to reference the format in other log directives
without having to enter the whole configuration string again.
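For instance, the widely used Common Log Format is defined with the nickname common:

```
LogFormat "%h %l %u %t \"%r\" %>s %b" common
```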
Both log files (access and error) represent a great resource to quickly analyze at a glance what’s
happening on the Apache server. They are the first tool a system administrator uses to troubleshoot
issues.
Finally, another important directive is Listen, which tells the server to accept incoming requests on
the specified port or address/port combination:
• If only a port number is defined, Apache will listen on the given port on all network
interfaces (the wildcard sign * is used to indicate ‘all network interfaces’).
• If both an IP address and a port are specified, then Apache will listen on the combination of
that port and network interface.
Please note (as you will see in the examples below) that multiple Listen directives can be used at the
same time to specify multiple addresses and ports to listen to. This option instructs the server to
respond to requests from any of the listed addresses and ports.
where -c is used to create the file (use it only if the file does not already exist) and -B to encrypt the
password. Note that the user is not required to exist in /etc/passwd. Don’t forget to take note
of the password, since you will need it to access the protected resource later.
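The htpasswd invocation described above is typically of this form (the file path and username are examples):

```shell
# -c creates the file, -B uses bcrypt to encrypt the password
sudo htpasswd -cB /etc/apache2/.htpasswd tecmint
```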
Next, let’s assign the proper permissions and ownership (replace www-data with apache if you’re
using CentOS instead of Ubuntu):
Now add the following lines in the Apache configuration file to password-protect
/var/www/html/secret:
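A sketch of such a block (the AuthUserFile path is an assumption; adjust it to wherever you created the password file):

```
<Directory "/var/www/html/secret">
    AuthType Basic
    AuthName "Restricted Content"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>
```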
What we have just discussed also applies to virtual hosts, our next topic.
This process is transparent to the end user, to whom it appears that the different sites are being
served by distinct web servers.
Name-based virtual hosting allows the server to rely on the client to report the hostname as part of
the HTTP headers. Thus, using this technique, many different hosts can share the same IP address.
Each virtual host is configured in a directory within DocumentRoot. For our case, we will use the
following dummy domains for the testing setup, each located in the corresponding directory:
• ilovelinux.com - /var/www/html/ilovelinux.com/public_html
• linuxrocks.org - /var/www/html/linuxrocks.org/public_html
For pages to be displayed correctly, we will chmod each virtual host directory to 755:
<html>
<head>
<title>www.ilovelinux.com</title>
</head>
<body>
<h1>This is the main page of www.ilovelinux.com</h1>
</body>
</html>
<VirtualHost *:80>
ServerAdmin admin@ilovelinux.com
DocumentRoot /var/www/html/ilovelinux.com/public_html
ServerName www.ilovelinux.com
ServerAlias www.ilovelinux.com ilovelinux.com
ErrorLog /var/www/html/ilovelinux.com/error.log
LogFormat "%v %l %u %t \"%r\" %>s %b" myvhost
CustomLog /var/www/html/ilovelinux.com/access.log myvhost
</VirtualHost>
<VirtualHost *:80>
ServerAdmin admin@linuxrocks.org
DocumentRoot /var/www/html/linuxrocks.org/public_html
ServerName www.linuxrocks.org
ServerAlias www.linuxrocks.org linuxrocks.org
ErrorLog /var/www/html/linuxrocks.org/error.log
LogFormat "%v %l %u %t \"%r\" %>s %b" myvhost
CustomLog /var/www/html/linuxrocks.org/access.log myvhost
</VirtualHost>
Please note that you can also add each virtual host definition in separate files inside the
/etc/httpd/conf.d directory. If you choose to do so, each configuration file must be named as
follows:
/etc/httpd/conf.d/ilovelinux.com.conf
/etc/httpd/conf.d/linuxrocks.org.conf
In other words, you need to add .conf to the site or domain name.
# a2ensite /etc/apache2/sites-available/ilovelinux.com.conf
# a2dissite /etc/apache2/sites-available/ilovelinux.com.conf
# a2ensite /etc/apache2/sites-available/linuxrocks.org.conf
# a2dissite /etc/apache2/sites-available/linuxrocks.org.conf
To be able to browse to both sites from another Linux box, you will need to add the following lines
in the /etc/hosts file of the client machine to redirect requests to those domains to a specific IP
address:
As a security measure, SELinux will not allow Apache to write logs to a directory other than the
default /var/log/httpd. You can either disable SELinux, or set the right security context:
where xxxxxx is the directory inside /var/www/html where you have defined your Virtual Hosts.
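That command could look like this (a sketch; httpd_log_t is the SELinux type Apache is allowed to write logs to, and xxxxxx is your virtual-host directory as above):

```shell
sudo semanage fcontext -a -t httpd_log_t "/var/www/html/xxxxxx(/.*)?"
sudo restorecon -Rv /var/www/html/xxxxxx
```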
After restarting Apache, you should see the following page at the above addresses:
However, if your server will expose content to the outside world over the Internet, you will want to
install a certificate signed by a 3rd party to corroborate its authenticity.
Either way, a certificate will allow you to encrypt the information that is transmitted to, from, or
within your site.
# a2enmod ssl
The following steps are explained using a CentOS test server, but your setup should be almost
identical in the other distributions (if you run into any kind of issues, don’t hesitate to leave your
questions using the comments form).
# mkdir /etc/httpd/ssl-certs
Step 2: Generate your self-signed certificate and the key that will protect it:
• -days 365 is the number of days the certificate will be valid for.
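A sketch of the generation step (the -subj values are placeholders, and the files are written to the current directory; move them into /etc/httpd/ssl-certs afterwards, or pass full paths):

```shell
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout apache.key -out apache.crt \
  -subj "/C=US/ST=State/L=City/O=Example/CN=www.example.com"
```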
SSLEngine on
SSLCertificateFile /etc/httpd/ssl-certs/apache.crt
SSLCertificateKeyFile /etc/httpd/ssl-certs/apache.key
Finally, check “Permanently store this exception” and click “Confirm Security Exception”:
Summary
In this chapter we have shown how to configure Apache and name-based virtual hosting with SSL
to secure data transmission. If for some reason you ran into any issues, feel free to let us know. We
will be more than glad to help you perform a successful set up.
You may refer to the Let’s Encrypt section to further setup free SSL/TLS certificates needed for
your server to run securely, making a smooth browsing experience for your users, without any
errors.
In this chapter, we will discuss how to use Nginx as an HTTP server, configure it to serve web
content, set up name-based virtual hosts, and create and install SSL certificates for secure data
transmission, including a self-signed certificate, on Ubuntu and CentOS.
After the Nginx package is installed, you need to start the service, enable it to auto-start at
boot time, and view its status using the following commands.
Note that on Ubuntu, it should be started and enabled automatically, as the package is
preconfigured that way.
If your system has a firewall enabled, you need to open ports 80 and 443 to
allow HTTP and HTTPS traffic through it, respectively, by running:
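For example, with firewalld (CentOS) or UFW (Ubuntu); a sketch:

```shell
# CentOS / firewalld
sudo firewall-cmd --permanent --add-service=http --add-service=https
sudo firewall-cmd --reload

# Ubuntu / UFW
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```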
Nginx is made up of modules that are controlled by various configuration options, known
as directives. A directive can either be simple (a name and values terminated with a ;)
or a block (extra instructions enclosed in {}). A block directive that contains other
directives is called a context.
All the directives are comprehensively explained in the Nginx documentation in the project website.
You can refer to it for more information.
Directives placed outside of any block belong to the main context, which contains many other
simple and block directives. All web traffic is handled in the http context.
user nginx;
worker_processes 1;
..…
The following is a sample Nginx main configuration (/etc/nginx/nginx.conf) file, where the http
block above contains an include directive which tells Nginx where to find website configuration
files (virtual host configurations).
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Note that on Ubuntu, you will also find an additional include directive (include /etc/nginx/sites-
enabled/*;), where the directory /etc/nginx/sites-enabled/ stores symlinks to the websites
configuration files created in /etc/nginx/sites-available/, to enable the sites. And deleting a symlink
disables that particular site.
Based on your installation source, you’ll find the default website configuration file
at /etc/nginx/conf.d/default.conf (if you installed from official NGINX repository and EPEL)
or /etc/nginx/sites-enabled/default (if you installed from Ubuntu repositories).
This is our sample default nginx server block located at /etc/nginx/conf.d/default.conf on the test
system.
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /var/www/html/;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
From a web browser, when you point to the server using the hostname localhost or its IP address, it
processes the request and serves the file /var/www/html/index.html, and immediately saves the
event to its access log (/var/log/nginx/access.log) with a 200 (OK) response. In case of an error
(failed event), it records the message in the error log (/var/log/nginx/error.log).
As an example, let’s add the user admin to this list (you can add as many users as needed), where the
-c option is used to specify the password file and -B to encrypt the password. Once you
hit [Enter], you will be asked to enter the user’s password:
Then, let’s assign the proper permissions and ownership to the password file (replace the user and
group nginx with www-data on Ubuntu).
As we mentioned earlier on, you can restrict access to your webserver, a single website (using its
server block) or specific directory or file. Two useful directives can be used to achieve this:
server {
listen 80 default_server;
server_name localhost;
root /var/www/html/;
index index.html;
location / {
try_files $uri $uri/ =404;
}
location /protected/ {
auth_basic "Restricted Access!";
auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
}
}
The next time you point your browser to the above directory (http://localhost/protected) you will be
asked to enter your login credentials (username admin and the chosen password).
A successful login allows you to access the directory’s contents; otherwise you will get a “401
Authorization Required” error.
• wearetecmint.com – /var/www/html/wearetecmint.com/
• welovelinux.com – /var/www/html/welovelinux.com/
<html>
<head>
<title>www.wearetecmint.com</title>
</head>
<body>
<h1>This is the main page of www.wearetecmint.com</h1>
</body>
</html>
Next, create the server block configuration files for each site inside the /etc/nginx/conf.d directory.
$ sudo vi /etc/nginx/conf.d/wearetecmint.com.conf
$ sudo vi /etc/nginx/conf.d/welovelinux.com.conf
server {
listen 80;
server_name wearetecmint.com;
root /var/www/html/wearetecmint.com/public_html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
Next, add the following server block declaration in the welovelinux.com.conf file.
server {
listen 80;
server_name welovelinux.com;
root /var/www/html/welovelinux.com/public_html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
Pointing your web browser to the addresses below should show the main pages of the
dummy domains.
http://wearetecmint.com
http://welovelinux.com
Important: If you have SELinux enabled, its default configuration does not allow Nginx to access
files outside of well-known authorized locations (such as /etc/nginx for
configurations, /var/log/nginx for logs, /var/www/html for web files, etc.).
You can handle this by either disabling SELinux, or setting the correct security context. For more
information, refer to this guide: using Nginx and Nginx Plus with SELinux on the Nginx Plus
website.
Then generate your self-signed certificate and the key using the openssl command line tool.
Next, open your virtual host configuration file and add the following lines to a server block
declaration listening on port 443. We will test with the virtual host
file /etc/nginx/conf.d/wearetecmint.com.conf.
$ sudo vi /etc/nginx/conf.d/wearetecmint.com.conf
© 2016-2019 Tecmint.com – Last revised: January 2019 – All rights reserved
Then add the ssl directives to the Nginx configuration file; it should look similar to the following.
server {
listen 80;
listen [::]:80;
listen 443 ssl;
listen [::]:443 ssl;
ssl on;
ssl_certificate /etc/nginx/ssl-certs/nginx.crt;
ssl_trusted_certificate /etc/nginx/ssl-certs/nginx.crt;
ssl_certificate_key /etc/nginx/ssl-certs/nginx.key;
server_name wearetecmint.com;
root /var/www/html/wearetecmint.com/public_html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
Now restart Nginx and point your browser to the https://www.wearetecmint.com address.
Then, you need to send the CSR that is generated to a CA to request the issuance of a CA-signed
SSL certificate. Once you receive your certificate from the CA, you can configure it as shown
above.
Summary
In this chapter, we have explained how to install and configure Nginx, and covered how to set up
name-based virtual hosting with SSL to secure data transmissions between the web server and a client.
The most common method to sync system time over a network in Linux desktops or servers is by
executing the ntpdate command which can set your system time from an NTP time server.
In this case, the ntpd daemon must be stopped on the machine where the ntpdate command is
issued.
ntpdate 1.ro.pool.ntp.org
To just query the server without setting the clock, and to send the packets from an unprivileged
port so as to bypass firewalls, issue ntpdate with the following flags:
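The flags themselves did not survive in this copy; based on the description, ntpdate's -q (query only, do not set the clock) and -u (use an unprivileged source port) options are the ones in question:

```shell
# Query the NTP server from an unprivileged port without setting the clock
ntpdate -q -u 1.ro.pool.ntp.org
```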
Always try to query and sync the time with the closest NTP servers available for your zone. The list
of the NTP server pools can be found at http://www.pool.ntp.org/en/.
Just open the file for editing, and add or uncomment the following lines under the [Time] section, as
illustrated in the excerpt below:
[Time]
NTP=0.ro.pool.ntp.org 1.ro.pool.ntp.org
FallbackNTP=ntp.ubuntu.com 0.arch.pool.ntp.org
Summary
By now you should have the NTP network service described in this chapter installed, and possibly
running with the default configuration.
Rsyslog is a powerful, secure, and high-performance log processing system which accepts data from
different types of sources (systems and applications) and outputs it into multiple formats.
It has evolved from a regular syslog daemon to a fully-featured, enterprise level logging system. It
is designed in a client/server model, therefore it can be configured as a client and/or as a central
logging server for other servers, network devices, and remote applications.
Testing Environment
For the purpose of this guide, we will use the following hosts:
• Server: 192.168.241.140
• Client: 172.31.21.58
Once rsyslog is installed, you need to start the service, enable it to auto-start at boot, and check
its status with the systemctl command.
The main rsyslog configuration file is located at /etc/rsyslog.conf, which loads modules, defines the
global directives, contains rules for processing log messages and it also includes all config files
in /etc/rsyslog.d/ for various applications/services.
To configure rsyslog as a network/central logging server, you need to set the protocol
(either UDP or TCP or both) it will use for remote syslog reception as well as the port it listens on.
If you want to use a UDP connection, which is faster but unreliable, search for and uncomment the
lines below. (Replace 514 with the port you want it to listen on; this should match the port
that the clients send messages to. We will look at this in more detail when configuring a rsyslog client.)
$ModLoad imudp
$UDPServerRun 514
To use a TCP connection (which is slower but more reliable), search for and uncomment the lines below.
$ModLoad imtcp
$InputTCPServerRun 514
In this case, we want to use both UDP and TCP connections at the same time.
Next, you need to define the ruleset for processing remote logs in the following format:
facility.severity_level destination
Where:
• facility: the type of process/application generating the message; facilities include auth, cron,
daemon, kern, mail, syslog, user, and local0 through local7.
• severity_level: the type of log message: emerg-0, alert-1, crit-2, err-3, warn-4, notice-5, info-
6, debug-7. Using * means all severity levels and none implies no severity level.
• destination: either a local file or a remote rsyslog server (defined in the form IP:port).
We will use the following ruleset for collecting logs from remote hosts, using
the RemoteLogs template.
Note that these rules must come before any rules for processing local messages, as shown in the
screenshot.
$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& ~
The directive $template tells rsyslog daemon to gather and write all of the received remote
messages to distinct logs under /var/log, based on the hostname (client machine name) and remote
client facility (program/application) that generated the messages as defined by the settings present
in the template RemoteLogs.
The second line “*.* ?RemoteLogs” means record messages from all facilities at all severity levels
using the RemoteLogs template configuration.
The final line “& ~” instructs rsyslog to stop processing the messages once they are written to a file. If
you don’t include “& ~”, the messages will also be written to the local log files, resulting in duplicates.
There are many other templates that you can use, for more information, see the rsyslog
configuration man page (man rsyslog.conf) or refer to the Rsyslog online documentation.
That’s it with configuring the rsyslog server. Save and close the configuration file. To apply the
recent changes, restart rsyslog daemon with the following command.
Now verify the rsyslog network sockets. Use the ss command (or netstat with the same flags)
and pipe the output to grep to filter out rsyslogd connections.
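As a sketch of this verification step (the exact flag combination is an assumption; any invocation that lists listening UDP/TCP sockets with process names will do):

```shell
# List listening TCP/UDP sockets with owning processes, filtered for rsyslog
sudo ss -tulnp | grep rsyslog
# Equivalent with netstat:
sudo netstat -tulnp | grep rsyslog
```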
Next, on CentOS 7, if you have SELinux enabled, run the following commands to allow rsyslog
traffic based on the network socket type.
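The commands are not shown above; a common way to do this (assuming the default port 514 and the syslogd_port_t SELinux port type) is with semanage:

```shell
# Allow rsyslog to bind port 514 for UDP and TCP
semanage port -a -t syslogd_port_t -p udp 514
semanage port -a -t syslogd_port_t -p tcp 514
# If a port is already defined in the policy, modify it instead of adding:
# semanage port -m -t syslogd_port_t -p tcp 514
```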
If the system has firewall enabled, you need to open port 514 to allow both UDP/TCP connections
to the rsyslog server, by running.
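A plausible firewalld sketch for this step (assuming port 514 as configured above):

```shell
# Open the rsyslog port for both transports and reload the rules
firewall-cmd --permanent --add-port=514/udp
firewall-cmd --permanent --add-port=514/tcp
firewall-cmd --reload
```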
Most Linux distributions come with the rsyslog package preinstalled. In case it’s not installed, you
can install it using your Linux package manager tool as shown.
Once rsyslog is installed, you need to start the service, enable it to auto-start at boot, and check
its status with the systemctl command.
Once the rsyslog service is up and running, open the main configuration file where you will perform
changes to the default configuration.
To force the rsyslog daemon to act as a log client and forward all locally generated log messages to
the remote rsyslog server, add this forwarding rule, at the end of the file as shown in the following
screenshot.
*.* @@192.168.100.10:514
The above rule will send messages from all facilities and at all severity levels. To send messages
from a specific facility for example auth, use the following rule.
auth.* @@192.168.100.10:514
Save the changes and close the configuration file. To apply the above settings, restart the rsyslog
daemon.
Run the ls command to get a long listing of the parent logs directory and check if there is a directory
called ip-172-31-21-58 (or one named after your client machine’s hostname).
$ ls -l /var/log/
$ sudo ls -l /var/log/ip-172-31-21-58/
Summary
Rsyslog is a high-performance log processing system, designed in a client/server architecture. We
hope you are able to install and configure Rsyslog as a central/network logging server and as a
client as demonstrated in this chapter.
You may also want to refer to relevant rsyslog manual pages for more help. Feel free to give us any
feedback or ask questions.
The IP address assigned by a DHCP server to a DHCP client is on a “lease”. The lease time normally
varies depending on how long a client computer is likely to require the connection, and on the DHCP
configuration.
In this chapter, we will explain how to configure a DHCP server in CentOS and Ubuntu Linux
distributions to assign IP address automatically to a client machine.
Once the installation is complete, configure the interface on which you want the DHCP daemon to
serve requests in the configuration file /etc/default/isc-dhcp-server or /etc/sysconfig/dhcpd.
For example, if you want the DHCPD daemon to listen on eth0 , set it using the following directive.
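The directive differs per distribution; as a sketch for the eth0 example used in the text (variable names per the isc-dhcp-server and dhcpd packages):

```shell
# Ubuntu: /etc/default/isc-dhcp-server
INTERFACESv4="eth0"

# CentOS: /etc/sysconfig/dhcpd
DHCPDARGS=eth0
```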
This file basically consists of a list of statements grouped into two broad categories:
• Global parameters: specify how to carry out a task, whether to carry out a task, or what
network configuration parameters to provide to the DHCP client.
• Declarations: define the network topology, describe the clients, offer addresses to the clients,
or apply a group of parameters to a group of declarations.
Start by defining the global parameters which are common to all supported networks, at the top of
the file. They will apply to all the declarations:
Next, you need to define a sub-network for an internal subnet i.e 192.168.1.0/24 as shown.
Note that hosts which require special configuration options can be listed in host statements (see
the dhcpd.conf man page).
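The global parameters and subnet declaration described above can be illustrated with a minimal dhcpd.conf sketch (all addresses and the domain name are assumptions matching the 192.168.1.0/24 example):

```
# /etc/dhcp/dhcpd.conf -- minimal sketch
default-lease-time 600;
max-lease-time 7200;
authoritative;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1, 8.8.8.8;
    option domain-name "example.lan";
}
```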
Now that you have configured your DHCP server daemon, you need to start the service for the
time being, enable it to start automatically on the next system boot, and check whether it is up and
running using the following commands.
Next, permit requests to the DHCP daemon on Firewall, which listens on port 67/UDP, by running.
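With firewalld, this would look something like:

```shell
# Open the DHCP service (port 67/UDP) and reload the rules
firewall-cmd --permanent --add-service=dhcp
firewall-cmd --reload
```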
# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
TYPE=Ethernet
ONBOOT=yes
$ sudo vi /etc/network/interfaces
On Ubuntu 18.04, networking is controlled by the Netplan program. You need to edit the
appropriate file under the directory /etc/netplan/, for example.
Then enable dhcp4 under a specific interface for example under ethernets, ens0, and comment out
static IP related configs:
network:
version: 2
renderer: networkd
ethernets:
ens0:
dhcp4: yes
Save the changes and run the following command to effect the changes.
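The command was not preserved here; for Netplan configurations it is the apply subcommand:

```shell
sudo netplan apply
```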
For more information, see the dhcpd and dhcpd.conf man pages.
$ man dhcpd
$ man dhcpd.conf
Summary
In this chapter, we have explained how to configure a DHCP server in CentOS and Ubuntu Linux
distributions.
Note that our setup will only cover a mail server for a local area network where the machines
belong to the same domain. Sending email messages to other domains requires a more complex
setup, including domain name resolution capabilities, which is out of the scope of the certifications.
To make this possible, several things happen behind the scenes. For an email message to be delivered
from a client application (such as Thunderbird, Outlook, or webmail services such as Gmail or
Yahoo! Mail) to the sender’s mail server, from there to the destination server, and finally to its
intended recipient, an SMTP (Simple Mail Transfer Protocol) service must be in place on each server.
In order for these components to be able to “talk” to each other, they must
“speak” the same “language” (or protocol), namely SMTP as defined in the RFC
2821. Most likely, you will have to refer to that RFC while setting up your mail
server environment.
Other protocols that we need to consider are IMAP (Internet Message Access Protocol), which
allows users to manage email messages directly on the server without downloading them to the client’s
hard drive, and POP3 (Post Office Protocol), which allows users to download the messages and folders to
their own computer.
Thus, emails sent to user1 will also be delivered to user2. Note that if you omit the word user1 after
the colon, as in:
user1: user2
the messages sent to user1 will only be sent to user2, and not to user1.
In the above example, user1 and user2 should already exist on the system.
In our specific case, we will use the following alias as explained before (add the following line in
/etc/aliases):
and run:
# postalias /etc/postfix/aliases
to create or refresh the aliases lookup table. Thus, messages sent to sysadmin@example.com.ar will
be delivered to the inbox of the users listed above.
However, you should become acquainted with the full configuration parameters (which can be
listed with man 5 postconf) to set up a secure and fully customized mail server.
Note that this chapter is only supposed to get you started in that process and does not represent a
comprehensive guide on email services with Linux.
1) myorigin specifies the domain that appears in messages sent from the server. You may see the
/etc/mailname file used with this parameter. Feel free to edit it if needed.
myorigin = /etc/mailname
If the value above is used, mails will be sent as user@debian.gabrielcanepa.com.ar, where user is
the user sending the message.
2) mydestination lists the domains for which this machine will deliver email messages locally, instead
of forwarding them to another machine (acting as a relay system). The default settings will suffice in our
case.
The /etc/postfix/transport file defines the relationship between domains and the next server to which
mail messages should be forwarded. In our case, since we will be delivering messages to our local
area network only (thus bypassing any external DNS resolution), the following configuration will
suffice:
example.com.ar local:
.example.com.ar local:
# postmap /etc/postfix/transport
You will need to remember to recreate this table if you add more entries to the corresponding text
file.
3) mynetworks defines the authorized networks Postfix will forward messages from. The default
value, subnet, tells Postfix to forward mail from SMTP clients in the same IP subnetworks as the
local machine only.
mynetworks = subnet
4) The relay_domains variable specifies the destinations to which emails should be sent. We will
leave the default value untouched, which points to mydestination. Remember that we are setting up
a mail server for our LAN.
relay_domains = $mydestination
Note that you can use $mydestination instead of listing the actual contents.
5) The inet_interfaces variable defines which network interfaces the mail service should listen on.
The default, all, tells Postfix to use all network interfaces.
inet_interfaces = all
6) mailbox_size_limit and message_size_limit will be used to set the size of each user’s mailbox
and the maximum allowed size of individual messages, respectively, in bytes.
mailbox_size_limit = 51200000
message_size_limit = 5120000
# Require that a remote SMTP client introduces itself with the HELO
# or EHLO command before sending the MAIL command or other commands that
# require EHLO negotiation.
smtpd_helo_required = yes
# Permit the request when the client IP address matches any network or
# network address listed in $mynetworks.
# Reject the request when the client HELO and EHLO command has a bad
# hostname syntax.
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname
# Reject the request when Postfix does not represent the final destination
# for the sender address.
smtpd_sender_restrictions = permit_mynetworks, reject_unknown_sender_domain
The Postfix configuration parameters page may come in handy in order to further explore the
available options.
Configuring Dovecot
Right after installing Dovecot, it supports the POP3 and IMAP protocols out of the box, along with
their secure versions, POP3S and IMAPS, respectively.
If you check your home directory, you will notice there is a mail subdirectory with the following
contents:
Also, please note that the /var/mail/%u file is where the user’s mails are stored on most systems.
Add the following directive to /etc/dovecot/dovecot.conf (note that imap and pop3 imply imaps and
pop3s as well):
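The directive itself did not survive in this copy; based on the description, this is almost certainly Dovecot's protocols setting (shown here as an assumption):

```
protocols = imap pop3
```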
And make sure /etc/dovecot/conf.d/10-ssl.conf includes the following lines (otherwise, add them):
ssl_cert = </etc/dovecot/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.pem
Now let’s restart Dovecot and verify that it listens on the ports related to imap, imaps, pop3, and
pop3s:
Repeat the process above for the next account (gacanepa@example.com.ar) and the following two
inboxes should appear in Thunderbird’s left pane:
The mail log (/var/log/mail.log) seems to indicate that the email that was sent to sysadmin was
relayed to jdoe@example.com.ar and gacanepa@example.com.ar, as can be seen in the following
image:
We can verify if the mail was delivered to our client, where the IMAP accounts were configured in
Thunderbird:
Summary
In this chapter we have explained how to set up an IMAP mail server for your local area network
and how to restrict access to the SMTP server.
If you happen to run into an issue while implementing a similar setup in your testing environment,
you will want to check the online documentation of Postfix and Dovecot (especially the pages about
the main configuration files, /etc/postfix/main.cf and /etc/dovecot/dovecot.conf, respectively).
They have a wide range of purposes, the most popular being online anonymity, but there are other
ways you can take advantage of web proxies:
• Online anonymity
• Improve online security
• Improve loading times
• Block malicious traffic
• Log your online activity
• To circumvent regional restrictions
• In some cases can reduce bandwidth usage
The proxy server then checks its local disk cache; if the data is found there, it returns the data to the
client. If it is not cached, the proxy makes the request on the client’s behalf using the proxy’s IP
address (different from the client’s) and then returns the data to the client. The proxy server will then
try to cache the new data and use it for future requests made to the same server.
For the purpose of this article, I will be installing Squid on a CentOS 7 VPS and use it as an HTTP
proxy server.
Once your packages are up to date, you can proceed to install Squid, then start it and enable it on
system startup using the following commands.
At this point your Squid web proxy should already be running and you can verify the status of the
service with.
Here are some important file locations you should be aware of:
• Squid configuration file: /etc/squid/squid.conf
• Squid Access log: /var/log/squid/access.log
• Squid Cache log: /var/log/squid/cache.log
A minimum squid.conf configuration file (without comments in it) looks like this:
# vim /etc/squid/squid.conf
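The configuration listing itself is missing from this copy; a stripped-down squid.conf, loosely based on the defaults shipped with Squid 3.x (treat the addresses and port as placeholders to adapt), looks roughly like:

```
acl localnet src 192.168.0.0/16
acl SSL_ports port 443
acl Safe_ports port 80 21 443
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128
coredump_dir /var/spool/squid
```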
Where XX.XX.XX.XX is the actual client IP address you wish to add. The line should be added in
the beginning of the file where the ACLs are defined. It is a good practice to add a comment next to
ACL which will describe who uses this IP address.
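A sketch of such an ACL entry (the name client1 is arbitrary; the comment documents who uses the address):

```
# John's workstation
acl client1 src XX.XX.XX.XX
http_access allow client1
```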
It is important to note that if Squid is located outside your local network, you should add the public
IP address of the client.
You will need to restart Squid so the new changes can take effect.
For the changes to take effect, you will need to restart squid once more.
Now create a file called “passwd” that will later store the username for authentication. On
CentOS, Squid runs as the user “squid”, so the file should be owned by that user.
# touch /etc/squid/passwd
# chown squid: /etc/squid/passwd
Now we will create a new user called “proxyclient” and setup its password.
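A common way to do this is with htpasswd from the httpd-tools package (the tool choice is an assumption; any basic-auth password file generator works):

```shell
# Install the htpasswd utility and add the proxyclient user (prompts for a password)
yum install httpd-tools
htpasswd /etc/squid/passwd proxyclient
```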
# vim /etc/squid/squid.conf
After the ports ACLs add the following lines:
Save the file and restart squid so that the new changes can take effect:
# touch /etc/squid/blacklisted_sites.acl
You can add some domains you wish to block. For example:
.badsite1.com
.badsite2.com
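The squid.conf lines that reference this file would look something like the following (the ACL name is arbitrary):

```
acl blacklisted_sites dstdomain "/etc/squid/blacklisted_sites.acl"
http_access deny blacklisted_sites
```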
That command will display the current IP address of your client (192.168.0.104 in the following
image).
2) In your client, use a web browser to open any given web site (www.tecmint.com in this case).
tail -f /var/log/squid/access.log
and you’ll get a live view of requests being served through Squid:
1) Define a new ACL directive as follows (I’ve named it ubuntuOS but you can name it whatever
you want)
2) Add the ACL directive to the localnet access list that is already in place, but prefacing it with an
exclamation sign. This means, “Allow Internet access to clients matching the localnet ACL directive
except to the one that matches the ubuntuOS directive”:
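A sketch of both steps (here the client is identified by its source IP, which is an assumption; the original setup may instead match the machine's MAC address):

```
# 1) Define the ACL for the Ubuntu client
acl ubuntuOS src 192.168.0.104
# 2) Allow localnet clients except the machine matching ubuntuOS
http_access allow localnet !ubuntuOS
```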
3) Now we need to restart Squid in order to apply changes. Then if we try to browse to any site we
will find that access is denied now:
where forbidden_domains is a plain text file that contains the domains that we desire to deny access
to:
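The corresponding directives would resemble the following (the file path is an assumption):

```
acl forbidden_domains dstdomain "/etc/squid/forbidden_domains"
http_access deny forbidden_domains
```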
Or maybe we will only want to allow access to those sites during a certain time of the day (10:00
until 11:00 am) only on Monday (M), Wednesday (W), and Friday (F).
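In Squid's time ACL syntax, M, W, and F stand for Monday, Wednesday, and Friday, so such a rule could be sketched as (ACL names are arbitrary):

```
acl mwf_morning time MWF 10:00-11:00
http_access allow localnet mwf_morning
```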
Add the following lines to your /etc/squid/squid.conf file (in CentOS 7, the NCSA plugin is
found at /usr/lib64/squid/basic_ncsa_auth).
auth_param basic realm Squid proxy-caching web server for Tecmint's LFCE
series
A few clarifications:
• credentialsttl 30 minutes will require entering your username and password every 30
minutes (you can specify this time interval with hours as well).
• realm represents the text of the authentication dialog that will be used to authenticate to
squid.
Run the following command to create the file and to add credentials for user gacanepa (omit the -
c flag if the file already exists):
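Based on the description (where -c creates the file), the command is presumably:

```shell
# Create the password file and add credentials for gacanepa (prompts for a password)
htpasswd -c /etc/squid/passwd gacanepa
```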
Open a web browser in the client machine and try to browse to any given site:
If authentication succeeds, access is granted to the requested resource. Otherwise, access will be
denied.
maximum_object_size 100 MB
Where:
• /var/cache/squid is a top-level directory where cache files will be stored. This directory must
exist and be writable by Squid (Squid will NOT create this directory for you).
• The maximum_object_size directive specifies the maximum size of allowed objects in the
cache.
• refresh_pattern tells Squid how to deal with specific file types (.mp4 and .iso in this case)
and for how long it should store the requested objects in cache (2880 minutes = 2 days). The
first and second 2880 are the lower and upper limits, respectively, on how long objects
without an explicit expiry time will be considered recent, and thus will be served by the
cache, whereas 0% is the percentage of the objects’ age (time since last modification) during which
each object without an explicit expiry time will be considered recent.
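Putting the three directives described above together as a sketch (the cache directory size and L1/L2 directory counts are assumptions):

```
cache_dir ufs /var/cache/squid 1000 16 256
maximum_object_size 100 MB
refresh_pattern -i \.(mp4|iso)$ 2880 0% 2880
```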
Case study: downloading a .mp4 file from 2 different clients and testing the cache
First client (IP 192.168.0.104) downloads a 71 MB .mp4 file in 2 minutes and 52 seconds:
That is because the file was served from the Squid cache (indicated by TCP_HIT/200) in the second
case, as opposed to the first instance, when it was downloaded directly from the Internet
(represented by TCP_MISS/200).
The HIT and MISS keywords, along with the 200 http response code, indicate that the file was
served successfully both times, but the cache was HIT and MISSed respectively. When a request
cannot be served by the cache for some reason, then Squid attempts to serve it from the Internet.
To verify, you can check the Squid logs (typically /var/log/squid/access.log, or the file given by the
access_log directive in /etc/squid/squid.conf).
Then test:
w3m gacanepa.github.com
and you should see the proxy events recorded in the logs:
Note: This assumes that traffic through port 8000 is allowed in your firewall.
In Firewalld:
firewall-cmd --add-port=8000/tcp
firewall-cmd --add-port=8000/tcp --permanent
In Iptables:
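A plausible equivalent iptables rule (add a persistence step appropriate to your distribution):

```shell
iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
```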
Summary
In this chapter we have discussed how to set up a Squid web caching proxy. You can use the proxy
server to filter contents using some chosen criteria, and also to reduce latency (since identical
incoming requests are served from the cache, which is closer to the client than the web server that is
actually serving the content, resulting in faster data transfers) and network traffic as well (reducing
the amount of used bandwidth, which saves you money if you’re paying for traffic).
You may want to refer to the Squid web site for further documentation (make sure to also check the
wiki).
• limit the allowed web access for some users to a list of accepted/well known web servers
and/or URLs only, while denying access to other blacklisted web servers and/or URLs.
• block access to sites (by IP address or domain name) matching a list of regular expressions
or words for some users.
• use distinct access rules based on time of day, day of the week, date etc.
• detect or block embedded scripting languages like JavaScript, Python, or VBscript inside
HTML code.
In this chapter I will show you how to integrate the blacklists provided by Shalla Secure Services
(http://www.shallalist.de/) to your squidGuard installation.
These blacklists are free for personal / non-commercial use and are updated on a daily basis. They
include, as of today, over 1,700,000 entries.
For our convenience, let’s create a directory to download the blacklist package:
# mkdir /opt/3rdparty
# cd /opt/3rdparty
# wget http://www.shallalist.de/Downloads/shallalist.tar.gz
After untarring the newly downloaded file, we will browse to the blacklist (BL) folder:
You can think of the directories shown in the output of ls as blacklist categories, and their
corresponding (optional) subdirectories as subcategories, descending all the way down to specific
URLs and domains, which are listed in the files urls and domains, respectively.
Next, I will show you how to install the anonvpn, hacking, and chat blacklists and how to configure
squidGuard to use them.
Please note that this chapter was written using CentOS 7. If you are using another distribution, the
squidGuard database should be located in a similar directory under /var.
Step 1: Copy recursively the anonvpn, hacking, and chat directories from /opt/3rdparty/BL to
/var/squidGuard/db
# cp -a /opt/3rdparty/BL/anonvpn /var/squidGuard/db
# cp -a /opt/3rdparty/BL/hacking /var/squidGuard/db
# cp -a /opt/3rdparty/BL/chat /var/squidGuard/db
Step 2: Use the domains and urls files to create squidguard’s database files. Please note that the
following command will work for creating .db files for all the installed blacklists - even when a
certain category has 2 or more subcategories.
# squidGuard -d -C all
Step 3: Change the ownership of the /var/squidGuard/db/ directory and its contents to the proxy
user so that Squid can read the database files
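On CentOS 7 the proxy user is squid (on Debian/Ubuntu it is proxy), so the command would be along these lines:

```shell
chown -R squid:squid /var/squidGuard/db/
```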
# which squidGuard
# echo "url_rewrite_program $(which squidGuard)" >> /etc/squid/squid.conf
# tail -n 1 /etc/squid/squid.conf
Please refer to the screenshot after the following code for further clarification
src localnet {
	ip 192.168.0.0/24
}
dest anonvpn {
	domainlist anonvpn/domains
	urllist anonvpn/urls
}
dest hacking {
	domainlist hacking/domains
	urllist hacking/urls
}
dest chat {
	domainlist chat/domains
	urllist chat/urls
}
acl {
	localnet {
		pass !anonvpn !hacking !chat all
		redirect http://www.lds.org
	}
	default {
		pass all
	}
}
Open a web browser in a client within local network and browse to a site found in any of the
blacklist files (domains or urls - we will use http://spin.de/chat in the following example) and you
will be redirected to another URL, www.lds.org in this case.
Removing Restrictions
If for some reason you need to enable a category that has been blocked in the past, remove the
corresponding directory from /var/squidGuard/db and comment (or delete) the related acl in the
squidguard.conf file.
For example, if you want to enable the domains and urls blacklisted by the anonvpn category, you
would need to perform the following steps:
rm -rf /var/squidGuard/db/anonvpn
Please note that the parts highlighted in yellow under BEFORE have been deleted under AFTER, by running:
# squidGuard -d -C all
# squid -k reconfigure
# squidGuard -C all
As before, the parts highlighted in yellow indicate the changes that need to be added. Note that the
myWhiteLists string needs to be first in the row that starts with pass.
Summary
After following the steps outlined in this tutorial you should have a powerful content filter and URL
redirector working hand in hand with your Squid proxy. If you experience any issues during your
installation / configuration process or have any questions or comments, you may want to refer to
squidGuard’s web documentation.
This chapter explains how you can install and configure a PXE Server on CentOS 7 x64-bit with
mirrored local installation repositories (sources provided by the CentOS 7 DVD ISO image), with the
help of the DNSMASQ server, which provides DNS and DHCP services; the Syslinux package, which
provides bootloaders for network booting; the TFTP server, which makes bootable images available
to be downloaded over the network using the Trivial File Transfer Protocol (TFTP); and the VSFTPD
server, which will host the locally mounted mirrored DVD image, acting as an official CentOS 7
mirror installation repository from which the installer will extract its required packages.
DNSMASQ’s main default configuration file, located in the /etc directory, is self-explanatory but
tends to be quite difficult to edit, due to its highly commented explanations.
First make sure you backup this file in case you need to review it later and, then, create a new blank
configuration file using your favorite text editor by issuing the following commands.
# mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
# nano /etc/dnsmasq.conf
Now, copy and paste the following configurations on dnsmasq.conf file and assure that you change
the below explained statements to match your network settings accordingly.
• interface – Interfaces that the server should listen and provide services.
• dhcp-range – Replace it with IP range defined by your network mask on this segment.
• dhcp-option=6,192.168.1.1 – Replace the IP address with your DNS server IP – several DNS
IPs can be defined.
• dhcp-option=42,0.0.0.0 – Put your network time servers – optionally (0.0.0.0 Address is for
self-reference).
• pxe-service – Use x86PC for 32-bit/64-bit architectures and enter a menu description
prompt under string quotes. Other value types can be: PC98, IA64_EFI, Alpha, Arc_x86,
Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI.
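The configuration to paste did not survive extraction; a sketch of such a dnsmasq.conf, built from the directives explained above (the interface name, domain, and all addresses are assumptions to be adapted to your network):

```
interface=eth0,lo
bind-interfaces
domain=centos7.lan
# DHCP range, netmask and lease time
dhcp-range=192.168.1.3,192.168.1.253,255.255.255.0,1h
# Gateway and DNS servers
dhcp-option=3,192.168.1.1
dhcp-option=6,192.168.1.1
# NTP server (0.0.0.0 = self-reference)
dhcp-option=42,0.0.0.0
# PXE booting
dhcp-boot=pxelinux.0
pxe-prompt="Press F8 for boot menu", 60
pxe-service=x86PC,"Install CentOS 7 x64 via PXE",pxelinux
enable-tftp
tftp-root=/var/lib/tftpboot
```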
For other advanced options concerning the configuration file, feel free to read the dnsmasq manual.
The PXE bootloader files reside in the /usr/share/syslinux absolute system path; you can check this by
listing the path’s contents. This step is optional, but you should be aware of this path because
in the next step we will copy all of its content to the TFTP server path.
# ls /usr/share/syslinux
Installing TFTP-Server
Now, let’s move to the next step: install the TFTP server and then copy all the bootloader files provided
by the Syslinux package from the location listed above to the /var/lib/tftpboot path by issuing the following
commands.
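The install-and-copy step could be sketched as follows (package name per CentOS 7):

```shell
yum install tftp-server
cp -r /usr/share/syslinux/* /var/lib/tftpboot
```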
# mkdir /var/lib/tftpboot/pxelinux.cfg
# touch /var/lib/tftpboot/pxelinux.cfg/default
Now it’s time to edit PXE Server configuration file with valid Linux distributions installation
options. Also note that all paths used in this file must be relative to the /var/lib/tftpboot directory.
Below you can see an example configuration file that you can use, but modify the installation
images (kernel and initrd files), protocols (FTP, HTTP, HTTPS, NFS) and IPs to reflect your
network installation source repositories and paths accordingly.
# nano /var/lib/tftpboot/pxelinux.cfg/default
default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
label 1
menu label ^1) Install CentOS 7 x64 with Local Repo
kernel centos7/vmlinuz
append initrd=centos7/initrd.img method=ftp://192.168.1.20/pub devfs=nomount
label 2
menu label ^2) Install CentOS 7 x64 with http://mirror.centos.org Repo
kernel centos7/vmlinuz
append initrd=centos7/initrd.img method=http://mirror.centos.org/centos/7/os/x86_64/
devfs=nomount ip=dhcp
label 3
menu label ^3) Install CentOS 7 x64 with Local Repo using VNC
kernel centos7/vmlinuz
append initrd=centos7/initrd.img method=ftp://192.168.1.20/pub devfs=nomount
inst.vnc inst.vncpassword=password
label 4
menu label ^4) Boot from local drive
As you can see CentOS 7 boot images (kernel and initrd) reside in a directory named centos7
relative to /var/lib/tftpboot (on an absolute system path this would mean /var/lib/tftpboot/centos7)
Important: As you can see in the above configuration, we’ve used CentOS 7 for demonstration purposes,
but you can also define RHEL 7 images. The following instructions and configurations are
based on CentOS 7 only, so be careful when choosing your distribution.
If your machine has no DVD drive you can also download CentOS 7 DVD ISO locally
using wget or curl utilities from a CentOS mirror and mount it.
# wget http://mirrors.xservers.ro/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso
# mount -o loop /path/to/centos-dvd.iso /mnt
After the DVD content is made available, create the centos7 directory and copy CentOS 7 bootable
kernel and initrd images from the DVD mounted location to centos7 folder structure.
# mkdir /var/lib/tftpboot/centos7
# cp /mnt/images/pxeboot/vmlinuz /var/lib/tftpboot/centos7
# cp /mnt/images/pxeboot/initrd.img /var/lib/tftpboot/centos7
Further, install the vsftpd daemon, copy all the mounted DVD content to the vsftpd default server path
(/var/ftp/pub) – this can take a while depending on your system resources – and append readable
permissions to this path by issuing the following commands.
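As a sketch of these three steps:

```shell
yum install vsftpd
cp -r /mnt/* /var/ftp/pub/    # copy the mounted DVD contents (can take a while)
chmod -R 755 /var/ftp/pub     # ensure the tree is readable
```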
Now that the PXE server configuration is finally finished, start DNSMASQ and VSFTPD servers,
verify their status and enable it system-wide, to automatically start after every system reboot, by
running the below commands.
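The commands involved might look like this (service unit names as installed on CentOS 7):

```
# systemctl start dnsmasq vsftpd
# systemctl status dnsmasq vsftpd
# systemctl enable dnsmasq vsftpd
```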
To get a list of all ports that need to be open on your firewall so that client machines can reach
and boot from the PXE server, run the netstat command, then add CentOS 7 firewalld rules
accordingly for the dnsmasq and vsftpd listening ports.
# netstat -tulpn
# firewall-cmd --add-service=ftp --permanent ## Port 21
# firewall-cmd --add-service=dns --permanent ## Port 53
# firewall-cmd --add-service=dhcp --permanent ## Port 67
# firewall-cmd --add-port=69/udp --permanent ## Port for TFTP
# firewall-cmd --add-port=4011/udp --permanent ## Port for ProxyDHCP
# firewall-cmd --reload ## Apply rules
You can verify that the installation sources are reachable by visiting ftp://192.168.1.20/pub in a browser.
To debug the PXE server for possible misconfigurations, or to view diagnostics and other
information live, run the following command.
# tailf /var/log/messages
# umount /mnt
In this chapter we’ll show how to install Ubuntu Server via a PXE server with local HTTP sources
mirrored from Ubuntu server ISO image via Apache web server. The PXE server used in this
tutorial is Dnsmasq Server.
Next, backup dnsmasq main configuration file and then start editing the file with the following
configurations.
# mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
# nano /etc/dnsmasq.conf
interface=ens33,lo
bind-interfaces
domain=mypxe.local
dhcp-range=ens33,192.168.1.230,192.168.1.253,255.255.255.0,1h
dhcp-option=3,192.168.1.1
dhcp-option=6,192.168.1.1
dhcp-option=6,8.8.8.8
server=8.8.4.4
dhcp-option=28,10.0.0.255
dhcp-option=42,0.0.0.0
dhcp-boot=pxelinux.0,pxeserver,192.168.1.14
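Note that for dnsmasq to serve the boot files itself, its built-in TFTP server must also be enabled; a minimal sketch, assuming the /srv/tftp directory created below is used as the TFTP root:

```
enable-tftp
tftp-root=/srv/tftp
```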
# mkdir /srv/tftp
# systemctl restart dnsmasq.service
# systemctl status dnsmasq.service
# wget http://releases.ubuntu.com/16.04/ubuntu-16.04.3-server-amd64.iso
After Ubuntu server ISO has been downloaded, mount the image in /mnt directory and list the
mounted directory content by running the below commands.
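Those two steps might be performed as follows (ISO filename as downloaded above):

```
# mount -o loop ubuntu-16.04.3-server-amd64.iso /mnt
# ls /mnt
```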
Then, copy the content of the mounted Ubuntu DVD to Apache web server web root path by
executing the below commands. List the content of Apache web root path to check if Ubuntu ISO
mounted tree has been completely copied.
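A possible sketch, assuming Apache's default web root of /var/www/html:

```
# mkdir /var/www/html/ubuntu
# cp -rf /mnt/* /var/www/html/ubuntu/
# ls /var/www/html/ubuntu/
```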
Next, open HTTP port in firewall and navigate to your machine IP address via a browser
(http://192.168.1.14/ubuntu) in order to test if you can reach sources via HTTP protocol.
# nano /var/www/html/ubuntu/preseed/local-sources.seed
Here, make sure you replace the IP address accordingly. It should be the IP address where web
resources are located. In this guide the web sources, the PXE server and TFTP server are hosted on
the same system. In a crowded network you might want to run PXE, TFTP and web services on
separate machines in order to improve PXE network speed.
# nano /srv/tftp/ubuntu-installer/amd64/boot-screens/txt.cfg
In Ubuntu PXE txt.cfg configuration file replace the following line as illustrated in the below
excerpt.
default install
label install
menu label ^Install Ubuntu 16.04 with Local Sources
menu default
kernel ubuntu-installer/amd64/linux
append auto=true url=http://192.168.1.14/ubuntu/preseed/local-sources.seed
vga=788 initrd=ubuntu-installer/amd64/initrd.gz --- quiet
label cli
menu label ^Command-line install
kernel ubuntu-installer/amd64/linux
append tasks=standard pkgsel/language-pack-patterns= pkgsel/
install-language-support=false vga=788 initrd=ubuntu-installer/amd64/initrd.gz --- quiet
In case you want to add the preseed url statement to Ubuntu Rescue menu, open the below file and
make sure you update the content as illustrated in the below example.
# nano /srv/tftp/ubuntu-installer/amd64/boot-screens/rqtxt.cfg
label rescue
menu label ^Rescue mode
kernel ubuntu-installer/amd64/linux
append auto=true url=http://192.168.1.14/ubuntu/preseed/local-sources.seed
vga=788 initrd=ubuntu-installer/amd64/initrd.gz rescue/enable=true --- quiet
# nano /srv/tftp/ubuntu-installer/amd64/boot-screens/menu.cfg
#menu hshift 13
#menu width 49
#menu margin 8
Now run netstat command with root privileges to identify dnsmasq, tftp and web open ports in
listening state on your server as illustrated in the below excerpt.
# netstat -tulpn
After you’ve identified all required ports, issue the below commands to open the ports in ufw
firewall.
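For example (port numbers taken from the services discussed above: 80/tcp for Apache, 53 and 67 for dnsmasq DNS/DHCP, 69 for TFTP):

```
# ufw allow 80/tcp
# ufw allow 53/udp
# ufw allow 67/udp
# ufw allow 69/udp
```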
On the next screen, add your mirror archive directory [/ubuntu] as shown below, then press the
Enter key to continue with the installation process.
# tail -f /var/log/syslog
After the installation of the Ubuntu server finishes, log in to the newly installed system and run the
following command with root privileges in order to switch the package repositories from the local
network sources back to the official Ubuntu mirrors. The mirrors need to be changed so that the
system can be updated from the internet repositories.
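One way to do this is to rewrite the local source entries in /etc/apt/sources.list with sed; the local URL below is an assumption based on the web sources configured earlier in this chapter, so check your own sources.list first:

```
# sed -i 's|http://192.168.1.14/ubuntu|http://archive.ubuntu.com/ubuntu|g' /etc/apt/sources.list
# apt update
```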
Summary
That’s all! You can now update your Ubuntu server system and install all required software.
Installing Ubuntu via PXE and a local network source mirror can improve the installation speed and
can save internet bandwidth and costs in case of deploying a large number of servers in a short
period of time at your premises.
And what if we needed to visit multiple websites, or use several applications that reside on the
same machine or virtual host? That would be one of the worst headaches I can think of, not to
mention the possibility that the IP address associated with a website or application can change
without prior notice. Just the thought of it would be reason enough to give up using the Internet
after a while.
That’s precisely what a world without Domain Name System (also known as DNS) would be.
Fortunately, this service solves all of the issues mentioned above - even if the relationship between
an IP address and a name changes.
For that reason, in this chapter we will learn how to configure and use a caching DNS server, a
service that will allow us to translate domain names into IP addresses and vice versa.
For larger networks, or those that are subject to frequent changes, using the /etc/hosts file to resolve
domain names into IP addresses would not be an acceptable solution. That’s where the need for a
dedicated service comes in.
Under the hood, a DNS server queries a large database in the form of a tree, which starts at the root
(“.”) zone. The following image will help us to illustrate:
1) When a client makes a query to a DNS server for web1.sales.me.com, the server sends the query
to the top (root) DNS server, which points the query to the name server in the .com zone.
This, in turn, sends the query to the next level name server (in the me.com zone), and then to
sales.me.com. This process is repeated until the FQDN (Fully Qualified Domain Name,
web1.sales.me.com in this example) is returned by the name server of the zone where it belongs.
2) In this example, the name server in sales.me.com. responds with the address for
web1.sales.me.com, returning the desired domain name-IP association and other information as
well (if configured to do so).
All this information is sent to the original DNS server, which then passes it back to the client that
requested it in the first place. To avoid repeating the same steps for future identical queries, the
results of the query are stored in the DNS server.
These are the reasons why this kind of setup is commonly known as a recursive or caching DNS
server.
Next, let’s make a copy of the configuration file before making any changes:
The forwarders settings are used to indicate which name servers should be queried first (in the
following example we use Google’s) for hosts outside our domain:
options {
...
    recursion yes;
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
};
Outside the options block we will define our sales.me.com zone (in Ubuntu this is usually done in a
separate file called named.conf.local) that maps a domain with a given IP address and a reverse
zone to map the IP address to the corresponding domain.
However, the actual configuration of each zone will go in separate files as indicated by the file
directive (“master” indicates we will only use one DNS server).
zone "sales.me.com." IN {
type master;
file "/var/named/sales.me.com.zone";
};
zone "0.168.192.in-addr.arpa" IN {
type master;
file "/var/named/0.168.192.in-addr.arpa.zone";
};
Note that in-addr.arpa (for IPv4 addresses) and ip6.arpa (for IPv6) are conventions for reverse zone
configurations.
After saving the above changes to named.conf, we can check for errors as follows:
named-checkconf /etc/named.conf
If any errors are found, the above command will output an informative message with the cause and
the line where they are located. Otherwise, it will not return anything.
0) At the top of the file you will find a line beginning with TTL (short for Time To Live), which
specifies how long the cached response should “live” before being replaced by the results of a new
query.
In the line immediately below, we will reference our domain and set the email address where
notifications should be sent (note that the root.sales.me.com means root@sales.me.com).
1) A SOA (Start Of Authority) record indicates that this system is the authoritative nameserver for
machines inside the sales.me.com domain. The following settings are required when there are two
nameservers (one master and one slave) per domain (although such is not our case since it is not
required in the exam, they are presented here for your reference):
• The Serial is used to distinguish one version of the zone definition file from a previous one
(where settings could have changed). If the cached response points to a definition with a
different serial, the query is performed again instead of feeding it back to the client.
• In a setup with a slave (secondary) nameserver, Refresh indicates the amount of time until
the secondary should check for a new serial from the master server. In addition, Retry tells
the server how often the secondary should attempt to contact the primary if no response
from the primary has been received, whereas Expire indicates when the zone definition in
the secondary is no longer valid after the master server could not be reached, and Negative
TTL is the time that a Non-existent domain (NXdomain) should be cached.
2) An NS record indicates the authoritative DNS server for our domain (referenced by the @
sign at the beginning of the line).
3) An A record (for IPv4 addresses) or an AAAA (for IPv6 addresses) translates names into IP
addresses. In the example below:
4) An MX record indicates the names of the authorized mail transfer agents (MTAs) for this domain.
The hostname should be prefixed with a number indicating the priority that the mail server should
have when there are two or more MTAs for the domain (the lower the value, the higher the
priority; in the following example, mail1 is the primary and mail2 is the secondary MTA).
$TTL 604800
@     IN  SOA  dns.sales.me.com. root.sales.me.com. (
          2016051101 ; Serial
          10800      ; Refresh
          3600       ; Retry
          604800     ; Expire
          604800 )   ; Negative Cache TTL
@     IN  NS   dns.sales.me.com.
dns   IN  A    192.168.0.18
web1  IN  A    192.168.0.29
mail1 IN  A    192.168.0.28
mail2 IN  A    192.168.0.30
@     IN  MX   10 mail1.sales.me.com.
@     IN  MX   20 mail2.sales.me.com.
$TTL 604800
@   IN  SOA  dns.sales.me.com. root.sales.me.com. (
        2016051101 ; Serial
        10800      ; Refresh
        3600       ; Retry
        604800     ; Expire
        604800 )   ; Negative Cache TTL
@   IN  NS   dns.sales.me.com.
28  IN  PTR  mail1.sales.me.com.
29  IN  PTR  web1.sales.me.com.
30  IN  PTR  mail2.sales.me.com.
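Each zone file can then be validated with named-checkzone, passing the zone name and its file (file names here are assumptions; use the paths from your own named.conf):

```
# named-checkzone sales.me.com /var/named/sales.me.com.zone
# named-checkzone 0.168.192.in-addr.arpa /var/named/0.168.192.in-addr.arpa.zone
```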
Otherwise, you will get an error message stating the cause and how to fix it:
In Ubuntu 16.04:
Finally, you will have to edit the configuration of the network interface in the clients:
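In practice this means pointing each client's resolver at our DNS server; a minimal sketch of a client's /etc/resolv.conf (the nameserver IP is the DNS server configured in this chapter):

```
search sales.me.com
nameserver 192.168.0.18
```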
The following commands will return the IP address associated with the host web1:
host web1.sales.me.com
host web1
host www.web1
host -t mx sales.me.com
Likewise, let’s perform a reverse query. This will help us find out the name behind an IP address:
host 192.168.0.28
host 192.168.0.29
host -t mx linux.com
host 8.8.8.8
To verify that queries are indeed going through our DNS server, let’s enable logging:
rndc querylog
In Ubuntu, enabling logging will require adding the following independent block (same level as the
options block) to /etc/bind/named.conf:
logging {
channel query_log {
file "/var/log/bind9/query.log";
severity dynamic;
print-category yes;
print-severity yes;
print-time yes;
};
};
Note that the log file must exist and be writable by named.
To ensure the proper operation of your DNS server, don’t forget to allow the service in your firewall
(port TCP/UDP 53) as follows:
firewall-cmd --add-port=53/tcp
firewall-cmd --add-port=53/udp
Logical Volume Management (also known as LVM), which has become a default for the
installation of most (if not all) Linux distributions, has numerous advantages over traditional
partition management. Perhaps the most distinguishing feature of LVM is that it allows logical
divisions to be resized (reduced or increased) at will without much hassle.
• One or more entire hard disks or partitions are configured as physical volumes (PVs).
• A volume group (VG) is created using one or more physical volumes. You can think of a
volume group as a single storage unit.
• Multiple logical volumes can then be created in a volume group. Each logical volume is
somewhat equivalent to a traditional partition - with the advantage that it can be resized at
will as we mentioned earlier.
In this chapter we will use three disks of 8 GB each (/dev/sdb, /dev/sdc, and /dev/sdd) to create
three physical volumes. You can either create the PVs directly on top of the device, or partition it
first.
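For example, to initialize the three whole disks as PVs in one shot (a sketch; partition the devices first if you prefer PVs on partitions):

```
# pvcreate /dev/sdb /dev/sdc /dev/sdd
```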
pvs
pvdisplay /dev/sdX
(where X is b, c, or d)
If you omit /dev/sdX as a parameter, you will get information about all the PVs.
To create a volume group named vg00 using /dev/sdb and /dev/sdc (we will save /dev/sdd for later
to illustrate the possibility of adding other devices to expand storage capacity when needed):
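The command involved can be sketched as:

```
# vgcreate vg00 /dev/sdb /dev/sdc
```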
vgdisplay vg00
Since vg00 is formed with two 8 GB disks, it will appear as a single 16 GB drive:
When it comes to creating logical volumes, the distribution of space must take into consideration
both current and future needs. It is considered good practice to name each logical volume according
to its intended use.
For example, let’s create two LVs named vol_projects (10 GB) and vol_backups (remaining space),
which we can use later to store project documentation and system backups, respectively.
The -n option is used to indicate a name for the LV, whereas -L sets a fixed size and -l (lowercase L)
is used to indicate a percentage of the remaining space in the container VG.
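Following the description above, the two LVs could be created as (a sketch; 100%FREE assigns all the space remaining in the VG):

```
# lvcreate -L 10G -n vol_projects vg00
# lvcreate -l 100%FREE -n vol_backups vg00
```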
As before, you can view the list of LVs and basic information with
lvs
lvdisplay
To view information about a single LV, use lvdisplay with the VG and LV as parameters, as follows:
lvdisplay vg00/vol_projects
mkfs.ext4 /dev/vg00/vol_projects
mkfs.ext4 /dev/vg00/vol_backups
In the next section we will explain how to resize logical volumes and add extra physical storage
space when the need arises to do so.
Due to the nature of LVM, we can easily reduce the size of the latter (say 2.5 GB) and allocate it for
the former, while resizing each filesystem at the same time.
It is important to include the minus (-) or plus (+) signs while resizing a logical volume. Otherwise,
you’re setting a fixed size for the LV instead of resizing it.
It can happen that you arrive at a point when resizing logical volumes cannot solve your storage
needs anymore and you need to buy an extra storage device.
Keeping it simple, you will need another disk. We are going to simulate this situation by adding the
remaining PV from our initial setup (/dev/sdd).
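Adding the new PV to the volume group is done with vgextend:

```
# vgextend vg00 /dev/sdd
```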
If you run vgdisplay vg00 before and after the previous command, you will see the increase in the
size of the VG:
Now you can use the newly added space to resize the existing LVs according to your needs, or to
create additional ones as needed.
blkid /dev/vg00/vol_projects
blkid /dev/vg00/vol_backups
mkdir /home/projects
mkdir /home/backups
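The corresponding /etc/fstab entries might look like this (a sketch using the device-mapper paths; you can use the UUIDs reported by blkid above instead):

```
/dev/vg00/vol_projects  /home/projects  ext4  defaults  0  0
/dev/vg00/vol_backups   /home/backups   ext4  defaults  0  0
```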
mount -a
mount | grep home
When it comes to using the LVs, you will need to assign proper ugo+rwx permissions as explained
in Chapter 10 (“User management and file attributes”).
Summary
In this chapter we have introduced Logical Volume Management, a versatile tool to manage storage
devices that provides scalability.
When combined with RAID, you can enjoy not only scalability (provided by LVM) but also
redundancy (offered by RAID).
In this type of setup, you will typically find LVM on top of RAID, that is, configure RAID first and
then configure LVM on top of it.
A mount point is a directory that is used as a way to access the filesystem on the partition, and
mounting the filesystem is the process of associating a certain filesystem (a partition, for example)
with a specific directory in the directory tree.
In other words, the first step in managing a storage device is attaching the device to the file system
tree. This task can be accomplished on a one-time basis by using tools such as mount (and then
unmounted with umount) or persistently across reboots by editing the /etc/fstab file.
Mounting Filesystems
The mount command (without any options or arguments) shows the currently mounted filesystems:
In addition, mount is used to mount filesystems into the filesystem tree. Its standard syntax is as
follows:
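A sketch of that standard form (square brackets denote optional parts):

```
mount [-t type] [-o options] device dir
```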
This command instructs the kernel to mount the filesystem found on device (a partition, for
example, that has been formatted with a filesystem type) at the directory dir, using all options. In
this form, mount does not look in /etc/fstab for instructions.
When invoked with only a directory or only a device, mount tries to find the missing half in the
/etc/fstab file: it first searches for a matching mount point and, if it can't find one, for a matching
device, and finally attempts to complete the mount operation (which usually succeeds, except when
either the directory or the device is already in use, or when the user invoking mount is not root).
You will notice that every line in the output of mount has the format device on directory type fstype
(options). For example:
/dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
This reads: /dev/mapper/debian-home is mounted on /home, which has been formatted as ext4, with
the following options: rw,relatime,user_xattr,barrier=1,data=ordered
Mount options
Most frequently used mount options include:
• async: allows asynchronous I/O operations on the file system being mounted.
• auto: marks the file system as enabled to be mounted automatically using mount -a. It is the
opposite of noauto.
• loop: Mounts an image (an .iso file, for example) as a loop device. This option can be used
to simulate the presence of the disk’s contents in an optical media reader.
• noexec: prevents the execution of executable files on the particular filesystem. It is the
opposite of exec.
• nouser: prevents any users (other than root) to mount and unmount the filesystem. It is the
opposite of user.
• rw: mounts the file system with read and write capabilities.
For example, in order to mount a device with ro and noexec options, you will need to do:
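For example (the device name is an assumption; substitute your own partition):

```
# mount -o ro,noexec /dev/sdb1 /mnt
```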
In this case we can see that attempts to write a file to or to run a binary file located inside our
mounting point fail with corresponding error messages:
touch /mnt/myfile
In the following scenario, we will try to write a file to our newly mounted device and run an
executable file located within its filesystem tree using the same commands as in the previous
example:
Unmounting Devices
Unmounting a device (with the umount command) means finishing writing all the remaining "in
transit" data so that it can be safely removed.
Note that if you try to remove a mounted device without properly unmounting it first, you run the
risk of damaging the device itself or causing data loss.
In other words, your current working directory must be something other than the mount point.
Otherwise, you will get a message saying that the device is busy:
An easy way to "leave" the mount point is by typing the cd command which, in the absence of
arguments, will take us to the current user's home directory, as shown above.
The following steps assume that Samba and NFS shares have already been set up in the server with
IP 192.168.0.10 (please note that setting up a NFS share is one of the competencies required for the
LFCE exam, which we will cover after the present book).
Then run the following command to look for available samba shares in the server:
smbclient -L 192.168.0.10
and enter the password for the root account in the remote machine:
STEP 2: When mounting a password-protected network share, it is not a good idea to write your
credentials in the /etc/fstab file. Instead, you can store them in a hidden file somewhere with
permissions set to 600, like so:
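A sketch of such a credentials file (the path and account name are assumptions), which can later be referenced with the credentials= mount option instead of writing the password in /etc/fstab:

```
# cat /root/.smbcredentials
username=gacanepa
password=YourSambaPassword
# chmod 600 /root/.smbcredentials
```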
mkdir /media/samba
STEP 4: You can now mount your samba share, either manually (mount //192.168.0.10/gacanepa)
or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
mkdir /media/nfs
STEP 4: You can now mount your NFS share, either manually (mount 192.168.0.10:/NFS-SHARE)
or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
Each line in /etc/fstab has the form <device> <mount point> <type> <options> <dump> <pass>, where:
<type>: The file system type code is the same as the type code used to mount a filesystem with the
mount command. A file system type code of auto lets the kernel auto-detect the filesystem type,
which can be a convenient option for removable media devices. Note that this option may not be
available for all filesystems out there.
<dump>: You will most likely leave this to 0 (otherwise set it to 1) to disable the dump utility to
backup the filesystem upon boot (The dump program was once a common backup tool, but it is
much less popular today.)
<pass>: This column specifies whether the integrity of the filesystem should be checked at boot
time with fsck. A 0 means that fsck should not check the filesystem. The higher the number, the
lower the priority. Thus, the root partition will most likely have a value of 1, while all others that
should be checked will have a value of 2.
Mount Examples
To mount a partition with label TECMINT at boot time with rw and noexec attributes, you should
add the following line in /etc/fstab:
If you want the contents of a disk in your DVD drive be available at boot time:
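The two /etc/fstab lines might look as follows (mount points and the optical device name are assumptions; iso9660 is the usual filesystem type for optical media):

```
LABEL=TECMINT  /mnt           ext4     rw,noexec  0  0
/dev/sr0       /media/cdrom   iso9660  ro         0  0
```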
Summary
You can rest assured that mounting and unmounting local and network filesystems from the
command line will be part of your day-to-day responsibilities as sysadmin.
You will also need to master /etc/fstab. For more information on this essential system file, you may
want to check the Arch Linux documentation on the subject at
https://wiki.archlinux.org/index.php/fstab.
• Storage: provide a consistent file system image across servers in a cluster, allowing the
servers to simultaneously read and write to a single shared file system.
• High Availability: eliminate single points of failure by failing over services from one
cluster node to another in case a node becomes inoperative.
• Load Balancing: dispatch network service requests to multiple cluster nodes to balance the
request load among the cluster nodes.
• High Performance: carry out parallel or concurrent processing, thus helping to improve
performance of applications.
To setup a cluster, we need at least two servers. For the purpose of this chapter, we will use two
Linux servers:
• Node1: 192.168.10.10
• Node2: 192.168.10.11
In this chapter, we will demonstrate the basics of how to deploy, configure and maintain high
availability/clustering in Ubuntu 16.04/18.04 and CentOS 7. We will demonstrate how to add Nginx
HTTP service to the cluster.
192.168.10.10 node1.example.com
192.168.10.11 node2.example.com
Once the installation is complete, start the Nginx service for now and enable it to auto-start at boot
time, then check if it’s up and running using the systemctl command.
On Ubuntu, the service is started automatically immediately after package pre-configuration is
complete, so you only need to enable it.
After starting the Nginx service, we need to create custom webpages for identifying and testing
operations on both servers. We will modify the contents of the default Nginx index page as shown.
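For example (web root paths are assumptions; Ubuntu's default Nginx page lives under /var/www/html, CentOS's under /usr/share/nginx/html):

```
# On node1 (Ubuntu path shown; on CentOS use /usr/share/nginx/html/index.html):
echo "This is the default page for node1.example.com" > /var/www/html/index.nginx-debian.html
```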
Once the installation is complete, make sure that pcs daemon is running on both servers.
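A sketch of those checks (the pcs daemon's unit name is pcsd):

```
# systemctl start pcsd
# systemctl enable pcsd
# systemctl status pcsd
```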
Next, on one of the servers (Node1), run the following command to set up the authentication needed
for pcs.
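With pcs 0.9 (as shipped with CentOS 7), the authentication and cluster creation steps look roughly like this (the cluster name is an assumption; the hacluster user's password must be set on both nodes first, and pcs 0.10 renames `cluster auth` to `host auth`):

```
# pcs cluster auth node1.example.com node2.example.com -u hacluster
# pcs cluster setup --name http_cluster node1.example.com node2.example.com
# pcs cluster start --all
```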
Now check if the cluster service is up and running using the following command.
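For example:

```
# pcs status
```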
Configuring Cluster
The first option is to disable STONITH (or Shoot The Other Node In The Head), the fencing
implementation on Pacemaker.
This component helps to protect your data from being corrupted by concurrent access. For the
purpose of this guide, we will disable it since we have not configured any devices.
To turn off STONITH, run the following command:
Next, also ignore the Quorum policy by running the following command:
After setting the above options, run the following command to see the property list and ensure that
the above options, stonith and the quorum policy are disabled.
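A sketch of those three commands:

```
# pcs property set stonith-enabled=false
# pcs property set no-quorum-policy=ignore
# pcs property list
```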
where:
• floating_ip: is the name of the service.
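The resources themselves would be created along these lines (the virtual IP 192.168.10.20 is the address tested later in this chapter; the nginx resource agent and the monitoring intervals are assumptions):

```
# pcs resource create floating_ip ocf:heartbeat:IPaddr2 ip=192.168.10.20 cidr_netmask=24 op monitor interval=60s
# pcs resource create http_server ocf:heartbeat:nginx configfile="/etc/nginx/nginx.conf" op monitor timeout="20s" interval="60s"
```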
Once you have added the cluster services, issue the following command to check the status of
resources.
Looking at the output of the command, the two added resources: “floating_ip” and “http_server”
have been listed. The floating_ip service is off because the primary node is in operation.
If you have firewall enabled on your system, you need to allow all traffic to Nginx and all high
availability services through the firewall for proper communication between nodes:
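On CentOS 7 with firewalld, for example:

```
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
```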
To simulate a failure, run the following command to stop the cluster on node2.example.com.
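A sketch of the command:

```
# pcs cluster stop node2.example.com
```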
Then reload the page at 192.168.10.20, you should now access the default Nginx web page from the
node1.example.com.
Alternatively, you can simulate an error by telling the service to stop directly, without stopping the
cluster on any node, using the following command on one of the nodes:
Then you need to run crm_mon in interactive mode (the default), within the monitor interval of 2
minutes, you should be able to see the cluster notice that http_server failed and move it to another
node.
For your cluster services to run efficiently, you may need to set some constraints. You can see the
pcs man page (man pcs) for a list of all usage commands.
For more information on Corosync and Pacemaker, check out: https://clusterlabs.org/
This makes LXC a very fast virtualization solution compared to alternatives such as KVM, XEN,
or VMware.
This chapter will explain how you can install, deploy, and run LXC containers on the CentOS and
Ubuntu Linux distributions.
# yum install epel-release && yum install lxc lxc-templates [On CentOS]
$ sudo apt install lxc lxc-templates [On Ubuntu]
After LXC has been installed, verify that your kernel provides the features LXC requires.
# lxc-checkconfig
# ls -alh /usr/share/lxc/templates/
The process of creating a LXC container is very simple. The command syntax to create a new
container is explained below.
In the below excerpt we’ll create a new container named mydeb based on a debian template that
will be pulled off from LXC repositories.
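The command described above, applied to the mydeb example (-n names the container, -t selects the template):

```
# lxc-create -n mydeb -t debian
```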
After a series of base dependencies and packages are downloaded and installed on your system, the
container will be created. When the process finishes, a message will display your default root
account password. To be safe, change this password once you start and log in to the container
console.
Now, you can use lxc-ls to list your containers and lxc-info to obtain information about a
running/stopped container.
In order to start the newly created container in the background (it will run as a daemon when the -d
option is specified), issue the following command:
# lxc-start -n mydeb -d
After the container has been started you can list running containers using the lxc-ls --active
command and get detailed information about the running container.
# lxc-ls --active
© 2016-2019 Tecmint.com – Last revised: January 2019 – All rights reserved
In order to login to the container console issue the lxc-console command against a running
container name. Login with the user root and the password generated by default by lxc supervisor.
Once logged in the container you can run several commands in order to verify the distribution by
displaying the /etc/issue.net file content, change the root password by issuing passwd command or
view details about network interfaces with ifconfig.
# lxc-console -n mydeb
# cat /etc/issue.net
# ifconfig
# passwd
To detach from the container console and go back to your host console, leaving the container in
active state, hit Ctrl+a then q on the keyboard.
To stop a running container, issue the following command.
# lxc-stop -n mydeb
In order to create a LXC container based on an Ubuntu template, enter /usr/sbin/ directory and
create the following debootstrap symlink.
# cd /usr/sbin
# ln -s debootstrap qemu-debootstrap
Now open and edit qemu-debootstrap file with Vi editor and replace the following two MIRROR
lines as follows:
DEF_MIRROR="http://mirrors.kernel.org/ubuntu"
DEF_HTTPS_MIRROR="https://mirrors.kernel.org/ubuntu"
For reference, see the following content and place the above two lines as stated:
MAKE_TARBALL=""
EXTRACTOR_OVERRIDE=""
UNPACK_TARBALL=""
ADDITIONAL=""
EXCLUDE=""
VERBOSE=""
CERTIFICATE=""
CHECKCERTIF=""
PRIVATEKEY=""
DEF_MIRROR="http://mirrors.kernel.org/ubuntu"
DEF_HTTPS_MIRROR="https://mirrors.kernel.org/ubuntu"
Finally create a new LXC container based on Ubuntu template issuing the same lxc-create
command.
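For example (the container name is an assumption):

```
# lxc-create -n myubuntu -t ubuntu
```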
Once the process of generating the Ubuntu container finishes, a message will display your
container's default login credentials, as illustrated in the below screenshot.
In order to create a specific container based on local template use the following syntax:
Here is an excerpt of creating a debian wheezy container with an amd64 system architecture.
For instance, specific containers for different distro releases and architectures can be also created
from a generic template which will be downloaded from LXC repositories as illustrated in the
below example.
• -n = name
• -t = template
• -d = distribution
• -a = arch
• -r = release
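Putting the options above together, the wheezy container destroyed later in this section could have been created as follows (a sketch; the download template pulls prebuilt images from the LXC image server):

```
# lxc-create -n mywheez -t download -- -d debian -a amd64 -r wheezy
```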
Containers can be deleted from your host with the lxc-destroy command issued against a container
name.
# lxc-destroy -n mywheez
# ls /var/lib/lxc
Summary
These LXC examples (along with the rest of the examples in the current chapter) are a nice
starting point for experimenting with the commands used to create, delete, and manage LXC
containers from the Linux command line.
In this chapter we will briefly review how to install and secure a MariaDB database server and then
we will explain how to configure it.
This means that the client-side commands are the same on both MySQL and MariaDB, and the
configuration files are named identically and located in the same places.
Note that, in Ubuntu, you will be asked to enter a password for the RDBMS root user.
Once the above packages have been installed, make sure the database service is running and has
been activated to start on boot (in CentOS you will need to perform this operation manually,
whereas in Ubuntu the installation process will have already taken care of it for you):
Then run the mysql_secure_installation script. This process will allow you to 1) set / reset the
password for the RDBMS root user, 2) remove anonymous logins (thus enabling only users with a
valid account to log in to the RDBMS), 3) disable root access for machines other than localhost, 4)
remove the test database (which anyone can access), and 5) activate the changes associated with 1
through 4.
# mysql_secure_installation
Most often, only /etc/my.cnf exists. It is in this file that we will set the server-wide settings (which
can be overridden with the same settings in ~/.my.cnf for each user).
The first thing that we need to note about my.cnf is that settings are organized into categories (or
groups) where each category name is enclosed with square brackets.
Server system configurations are given in the [mysqld] section, where typically you will find only
the first two settings in the table below.
The rest are other frequently used options (where indicated, we will change the default value with a
custom one of our choosing):
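As a sketch, a [mysqld] section incorporating settings of this kind might look as follows (the values shown are illustrative assumptions, apart from the non-standard port 20500 used later in this chapter):

```ini
[mysqld]
# Listen on a non-standard port (example value used later in this chapter)
port = 20500
# Skip reverse DNS lookups on client connections (illustrative assumption)
skip-name-resolve = 1
# Size of the InnoDB buffer pool (illustrative assumption; tune to your RAM)
innodb_buffer_pool_size = 256M
```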
In CentOS, we will need to tell SELinux to allow MariaDB to listen on a non-standard port (20500)
before restarting the service:
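A minimal sketch of that SELinux adjustment, assuming the semanage utility (provided by the policycoreutils-python package) is installed:

```shell
# Tell SELinux that MariaDB/MySQL may listen on TCP port 20500
semanage port -a -t mysqld_port_t -p tcp 20500
# Then restart the service so it picks up the new port
systemctl restart mariadb
```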
# wget https://github.com/major/MySQLTuner-perl/tarball/master
# tar xzf master
Then change directory into the folder extracted from the tarball (the exact version may differ in
your case):
# cd major-MySQLTuner-perl-7dabf27
and run it (you will be prompted to enter the credentials of your administrative MariaDB account):
# ./mysqltuner.pl
The output of the script is in itself very interesting, but let’s skip to the bottom where the variables
to adjust are listed with the recommended value:
The query_cache_type setting indicates whether the query cache is disabled (0) or enabled (1). In
this case, mysqltuner is advising us to disable it.
So why are we advised to deactivate it now? The reason is that the query cache is useful mostly in
high-read / low-write scenarios (which is not our case, since we just installed the database server).
WARNING: Before making changes to the configuration of a production server, you are highly encouraged to consult an expert database administrator to ensure that a recommendation given by mysqltuner will not negatively impact an existing setting.
The configuration variables listed in the table above are only a few settings that you may want to
consider while preparing the server for use or when tuning it later. Always refer to the official
MariaDB documentation before making changes.
• In simple words, a packet is the basic unit used to transmit information within a network. Networks that use TCP/IP as their network protocol follow the same rules for the transmission of data: the actual information is split into packets that are made of both data and the address to which they should be sent.
• Routing is the process of “guiding” the data from source to destination inside a network.
• Static routing requires a manually configured set of rules defined in a routing table. These rules are fixed and define the path a packet must take as it travels from one machine to another.
• Dynamic routing, or smart routing (if you wish), means that the system can automatically alter, as needed, the route that a packet follows. However, in the context of the LFCE exam, the term dynamic routing refers to the ability to perform routing “on-the-fly” with the ip command.
The central utility in the iproute suite is called simply ip. Its basic syntax is as follows:
ip object command
where object can be only one of the following (only the most frequent objects are shown - you can
refer to man ip for a complete list):
whereas command represents a specific action that can be performed on object. You can run the
following command to display the complete list of commands that can be applied to a particular
object:
ip object help
ip link help
For example, you can view the status of all network interfaces with:
ip link show
Instead of displaying all the network interfaces, we can specify one of them:
ip link show eth1
You can also change the status of an interface, bringing it down as follows:
ip link set eth1 down
You can view your current main routing table with any of the following three commands:
ip route show
route -n
netstat -rn
Example 3: Using a Linux server to route packets between two private networks
We want to route icmp (ping) packets from dev2 to dev4 and the other way around as well (note that
both client machines are on different networks). The name of each NIC, along with its
corresponding IPv4 address, is given inside square brackets.
First, view the router's current routing table:
ip route show
and then modify it so that it uses its enp0s3 NIC and the connection to 192.168.0.15 to access hosts in the 10.0.0.0/24 network:
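Based on the topology described above, the command would look like this sketch:

```shell
# Add a route to the 10.0.0.0/24 network through the enp0s3 interface,
# using 192.168.0.15 as the gateway
ip route add 10.0.0.0/24 via 192.168.0.15 dev enp0s3
```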
Which essentially reads, “Add a route to the 10.0.0.0/24 network through the enp0s3 network
interface using 192.168.0.15 as gateway”.
In addition, enable IPv4 packet forwarding on the router by setting the following kernel runtime parameter:
net.ipv4.ip_forward = 1
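To apply that setting immediately and make it persist across reboots, something along these lines can be used:

```shell
# Enable IPv4 forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1
# Persist the setting (on many distributions /etc/sysctl.conf is read at boot)
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
```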
In addition, configure the NICs on both clients (look for the configuration file within /etc/sysconfig/
network-scripts on CentOS where it’s called ifcfg-enp0s3).
BOOTPROTO=static
BROADCAST=10.0.0.255
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.15
NAME=enp0s3
NETWORK=10.0.0.0
ONBOOT=yes
Example 4: Using a Linux server to route packets between a private network and the Internet
Another scenario where a Linux machine can be used as router is when you need to share your
Internet connection with a private LAN.
• Router: Debian Wheezy 7.7 [eth0: Public IP, eth1: 10.0.0.15/24] - dev2
In addition to setting up packet forwarding and the static routing table on the client as in the previous example, we need to add a few iptables rules on the router:
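The three rules discussed below could be sketched as follows (interface names are taken from the topology above; treat this as an illustrative reconstruction, not a verbatim listing):

```shell
# 1) Masquerade outgoing packets so they leave through eth0 with the
#    router's public IP (NAT for the private LAN)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# 2) Accept incoming packets from the Internet only when they belong to
#    an already established connection
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
# 3) Allow packets from the LAN "free exit" to the Internet
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
```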
The first command adds a rule to the POSTROUTING chain in the nat (Network Address Translation) table, indicating that the eth0 NIC should be used as the “exit door” for outgoing packets.
In a LAN with many hosts, the router keeps track of established connections in /proc/net/ip_conntrack so that it knows where to deliver the responses coming back from the Internet.
cat /proc/net/ip_conntrack
In its output you can see the origin (the private IP of the openSUSE box) and destination (Google’s DNS server) of the packets. This was the result of running:
curl www.tecmint.com
As I’m sure you can already guess, the router is using Google’s 8.8.8.8 as nameserver, which
explains why the destination of outgoing packets points to that address.
Note that incoming packets from the Internet are only accepted if they are part of an already established connection (command #2), while outgoing packets are allowed “free exit” (command #3).
Don’t forget to make your iptables rules persistent by following the steps outlined in Chapter 27 (“The firewall”).
Summary
In this chapter we have explained how to set up static and dynamic routing using a Linux box as a router. Feel free to add as many routers as you wish, and to experiment as much as you want.
Now let’s create a simple web page named docker.html inside /home/user/directory:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Learn Docker at Tecmint.com</title>
</head>
<body>
<h1>Learn Docker With Us</h1>
</body>
</html>
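As an illustrative follow-up (the image name, container name, and port mapping below are assumptions, not part of the original text), the page could be served from a container by bind-mounting the directory into the official Apache httpd image:

```shell
# Run the official Apache httpd image, mounting the directory containing
# docker.html into the container's document root, exposed on host port 8080
docker run -d --name tecmint-web -p 8080:80 \
  -v /home/user/directory:/usr/local/apache2/htdocs httpd
# The page should then be reachable at:
curl http://localhost:8080/docker.html
```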
Congratulations for making it to the end of this book! Now please consider buying your exam
voucher using the following links to earn us a small commission. This will help us keep this book
updated.