
Configure NFS Collaborative Share in RHEL 7

A collaborative share is a directory that has been shared across the network, where a specific group of users has permission to access, create, and modify files. Usually, a collaborative directory is specific to a project, and rights are granted to the users working on it.
We have already configured NFS shares and Kerberized NFS shares in our previous posts.
Now, we will create an NFS share for group collaboration.

Configure NFS Server:


To configure the NFS service, we have to install the nfs-utils package. Usually, this package is installed automatically during installation of Red Hat Enterprise Linux (RHEL) 7. However, you can install it anytime from the yum repository.
#yum install -y nfs-utils
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use
subscription-manager to register.
Package 1:nfs-utils-1.3.0-0.el7.x86_64 already installed and latest version
Nothing to do
nfs-utils is already installed on our system.
Create a directory to share with other clients.
#mkdir /nfsshare
#chgrp dba /nfsshare/
#chmod 2770 /nfsshare/
We have created the directory /nfsshare , changed its group to dba , and set 2770 permissions on it. The setgid bit (the leading 2) makes new files inherit the dba group, so the group members can collaborate on files in this shared directory.
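Before exporting the directory, the effect of the setgid bit can be sanity-checked. The sketch below uses a throwaway directory and the generic users group, since not every system has a dba group; on the real server the same check applies to /nfsshare :

```shell
# Demonstrate 2770 permissions on a scratch directory.
mkdir -p /tmp/collab-demo
chgrp users /tmp/collab-demo 2>/dev/null || true  # group name is an example
chmod 2770 /tmp/collab-demo

# The octal mode should now be 2770: rwx for owner and group,
# nothing for others, plus the setgid bit so new files inherit the group.
stat -c '%a' /tmp/collab-demo
```

In long listings the setgid bit shows up as an s in the group triad ( drwxrws--- ), which is exactly what we will see later when listing the mounted share.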
Adjust SELinux type of the /nfsshare directory.
#semanage fcontext -a -t nfs_t "/nfsshare(/.*)?"
#restorecon -Rv /nfsshare/
restorecon reset /nfsshare context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:nfs_t:s0
If the semanage command is not available on your system, install the policycoreutils-python package.
Now export/share this directory to specific clients.
#echo '/nfsshare *.example.com(rw,sync)' >> /etc/exports
#exportfs -r
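Before moving on, it does no harm to confirm what is actually being exported; a quick verification step (not part of the original procedure):

```shell
# List active exports with their effective options; NFS fills in
# defaults such as wdelay and root_squash that we did not specify.
exportfs -v
# Expected to show a line similar to:
#   /nfsshare  *.example.com(rw,sync,wdelay,root_squash,...)
```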
Enable and start the nfs-server service.
#systemctl start nfs-server ; systemctl enable nfs-server
ln -s '/usr/lib/systemd/system/nfs-server.service'
'/etc/systemd/system/nfs.target.wants/nfs-server.service'
Allow nfs and other required services through the firewall.
#firewall-cmd --permanent --add-service={mountd,nfs,rpc-bind}
success
#firewall-cmd --reload
success
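The firewall configuration can be verified in place; a quick check (assuming the default zone is the one in use):

```shell
# The reloaded configuration should now include all three services.
firewall-cmd --list-services
# Expected to include: mountd nfs rpc-bind
```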

Configure NFS Client:


Connect to the client2.example.com and install nfs-utils package.
# yum install -y nfs-utils
# mkdir /mnt/nfsshare
Check the shared directories from ipaserver.example.com .
# showmount -e ipaserver.example.com
Persistently mount this shared directory by adding the following entry to /etc/fstab .
[root@client2 ~]# echo 'ipaserver.example.com:/nfsshare
/mnt/nfsshare nfs defaults,_netdev 0 0' >> /etc/fstab
[root@client2 ~]# mount -a
Check the status of the mounted directory.
[root@client2 mnt]# mount | grep nfsshare
ipaserver.example.com:/nfsshare on /mnt/nfsshare type nfs4
(rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=60
0,retrans=2,sec=sys,clientaddr=192.168.116.202,local_lock=none,addr=192.168.116.200,_netd
ev)
Log in with a user who is a member of the dba group, and create a file in this shared directory to verify the file permissions.
[root@client2 ~]# su - imran
Last login: Wed Aug 1 08:29:23 PDT 2018 on pts/0
[imran@client2 ~]$ cd /mnt/nfsshare/
[imran@client2 nfsshare]$ touch test2
[imran@client2 nfsshare]$ ls -al
total 0
drwxrws---. 2 root dba 30 Aug 1 08:34 .
drwxr-xr-x. 4 root root 31 Jul 31 07:23 ..
-rw-rw-r--. 1 imran dba 0 Aug 1 08:34 test2
[imran@client2 nfsshare]$
We have successfully provided a network share for group collaboration and persistently mounted it on one client.

Configure a Kerberized NFS Server in RHEL 7


Kerberos is a computer network authentication protocol that uses tickets to authenticate computers and lets them communicate over a non-secure network. NFS, in turn, is a distributed file system used to share files among Linux-based computers. We can combine Kerberos with NFS to configure more secure network shares.
In this article, we will configure a Kerberized NFS Server and configure a client to access that share. To configure a Kerberized NFS Server, we must have an Identity Management Server, such as FreeIPA , that provides Kerberos tickets to clients. We have already written about configuring a FreeIPA server in our previous post. Therefore, we are not going to reinvent the wheel here. However, the reader can refer to the following articles to understand Kerberos authentication.

Read Configure Identity Management (IdM) with FreeIPA Server


Also:

Configure a Linux Machine as FreeIPA Client

Configure SSO (Single Sign-on) with Kerberos 5

System Specification:
We are using two Red Hat Enterprise Linux (RHEL) 7 servers. One as the NFS Server as
well as Identity Management Server and the other as the NFS Client.
Identity Management Server ipaserver.example.com

Kerberized NFS Server ipaserver.example.com

Kerberized NFS Client client2.example.com

Note: We are configuring the same FreeIPA server as the Kerberized NFS Server.

Configure Kerberized NFS Server:


Make sure that you have already configured this machine as FreeIPA Client. (refer
to Configure a Linux Machine as FreeIPA Client)
Now, add NFS service to our FreeIPA server to create Kerberized NFS service as
follows.
# kinit admin
Password for admin@EXAMPLE.COM:

# ipa service-add nfs/ipaserver.example.com


-----------------------------------------------------

Added service "nfs/ipaserver.example.com@EXAMPLE.COM"

-----------------------------------------------------

Principal: nfs/ipaserver.example.com@EXAMPLE.COM

Managed by: ipaserver.example.com

# kadmin.local
Authenticating as principal admin/admin@EXAMPLE.COM with password.

kadmin.local: ktadd nfs/ipaserver.example.com


Entry for principal nfs/ipaserver.example.com with kvno 1, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.

Entry for principal nfs/ipaserver.example.com with kvno 1, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.

Entry for principal nfs/ipaserver.example.com with kvno 1, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.

Entry for principal nfs/ipaserver.example.com with kvno 1, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.

kadmin.local: quit

# klist -k
Keytab name: FILE:/etc/krb5.keytab

KVNO Principal

---- --------------------------------------------------------------------------

3 host/ipaserver.example.com@EXAMPLE.COM

3 host/ipaserver.example.com@EXAMPLE.COM

3 host/ipaserver.example.com@EXAMPLE.COM

3 host/ipaserver.example.com@EXAMPLE.COM

1 nfs/ipaserver.example.com@EXAMPLE.COM

1 nfs/ipaserver.example.com@EXAMPLE.COM

1 nfs/ipaserver.example.com@EXAMPLE.COM

1 nfs/ipaserver.example.com@EXAMPLE.COM
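If the keytab output looks off, the service entry can also be inspected from the FreeIPA side; a sketch, assuming the admin ticket obtained with kinit is still valid:

```shell
# Show the NFS service principal as stored in FreeIPA.
ipa service-show nfs/ipaserver.example.com
# 'Keytab: True' in the output confirms that key material has been
# generated for this principal.
```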

To configure the NFS service, we have to install the nfs-utils package. Usually, this package is installed automatically during installation of Red Hat Enterprise Linux (RHEL) 7. However, you can install it anytime using the yum command.

# yum install -y nfs-utils


Loaded plugins: langpacks, product-id, subscription-manager

This system is not registered to Red Hat Subscription Management. You can use
subscription-manager to register.

Package 1:nfs-utils-1.3.0-0.el7.x86_64 already installed and latest version

Nothing to do

nfs-utils is already installed on our system.

Create a directory to share with other clients.


# mkdir /nfsshare
# chgrp nfsnobody /nfsshare/
# chmod g+w /nfsshare/
We have created the directory /nfsshare , changed its group to nfsnobody , and granted write permission to the group, so that users mapped to the anonymous account can create files on this shared directory.
Adjust SELinux type of the /nfsshare directory.

# semanage fcontext -a -t nfs_t "/nfsshare(/.*)?"


# restorecon -Rv /nfsshare/
restorecon reset /nfsshare context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:nfs_t:s0

If the semanage command is not available on your system, install the policycoreutils-python package.

Now export/share this directory to specific clients.


# echo '/nfsshare client2.example.com(rw,sec=krb5p,sync)' >> /etc/exports
# exportfs -r

Enable and start the nfs-server and nfs-secure-server services.


# systemctl start nfs-server ; systemctl enable nfs-server

ln -s '/usr/lib/systemd/system/nfs-server.service'
'/etc/systemd/system/nfs.target.wants/nfs-server.service'
# systemctl start nfs-secure-server; systemctl enable nfs-secure-server

ln -s '/usr/lib/systemd/system/nfs-secure-server.service'
'/etc/systemd/system/nfs.target.wants/nfs-secure-server.service'
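A quick health check of both units (on RHEL 7, nfs-secure-server runs rpc.svcgssd, the server-side GSS daemon that handles the Kerberos security flavors):

```shell
# Both units should report 'active'.
systemctl is-active nfs-server nfs-secure-server

# The GSS daemon should be running for sec=krb5* exports to work.
ps -C rpc.svcgssd -o pid,cmd
```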
Allow nfs and other supplementary services through Linux firewall.
# firewall-cmd --permanent --add-service={mountd,nfs,rpc-bind}
success

# firewall-cmd --reload
success

Configure Kerberized NFS Client:


Make sure that you have already configured this machine as FreeIPA Client. (refer
to Configure a Linux Machine as FreeIPA Client)
Connect to client2.example.com and install the nfs-utils package.

# yum install -y nfs-utils


# mkdir /mnt/nfsshare
Check the shared directories from ipaserver.example.com .
# showmount -e ipaserver.example.com
Export list for ipaserver.example.com:

/nfsshare client2.example.com

Start and enable the nfs-secure service.


# systemctl start nfs-secure ; systemctl enable nfs-secure
ln -s '/usr/lib/systemd/system/nfs-secure.service'
'/etc/systemd/system/nfs.target.wants/nfs-secure.service'

Persistently mount this shared directory by adding the following entry to /etc/fstab .


# echo 'ipaserver.example.com:/nfsshare /mnt/nfsshare nfs
sec=krb5p,_netdev 0 0' >> /etc/fstab
# mount -a
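With sec=krb5p , the mount itself is authenticated with the client's host keytab, but an interactive user still needs their own Kerberos ticket to read or write files. A quick check from a user session (using the IPA user imran seen elsewhere in this series):

```shell
# As the user, obtain a ticket-granting ticket and inspect it.
kinit imran
klist

# Without a valid ticket, access to the krb5p mount fails with
# 'Permission denied' even when the POSIX permissions would allow it.
```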

Check the status of the mounted directory.


# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

ipaserver.example.com:/nfsshare on /mnt/nfsshare type nfs4


(rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=60
0,retrans=2,sec=krb5p,clientaddr=192.168.116.202,local_lock=none,addr=192.168.116.200,_ne
tdev)

Create a file in this shared directory, to verify the file permissions.


# cd /mnt/nfsshare/
# touch test1
# ls -al
total 0

drwxrwxr-x. 2 root nfsnobody 18 Jul 31 07:32 .

drwxr-xr-x. 4 root root 31 Jul 31 07:23 ..

-rw-r--r--. 1 nfsnobody nfsnobody 0 Jul 31 07:32 test1

[root@client2 nfsshare]#

We have successfully configured our Kerberized NFS Server.


Kickstart: Automate PXE Client Installations
Kickstart is an installation method used by Red Hat to perform unattended operating system installation and configuration automatically. With Kickstart, a system administrator can create a single file containing the answers to all the questions that would normally be asked during a typical installation.
In our previous post “Setup a PXE Boot Server in RHEL/CentOS 7”, we
have configured a PXE boot server for network installations of new
systems. However, the installation method is manual. Now, in this
article, we will combine the Kickstart with PXE boot Server to setup
fully automated, unattended and consistent installations for our PXE
clients.
The task can be broken down into the following two simple steps.
1) Create a kickstart file
2) Configure PXE boot server to use Kickstart file
Note: In this article, we are performing everything from the CLI; therefore, it is highly recommended that you keep Linux Pocket Guide: Essential Commands at hand for quick reference.

System Specification:
We use the same Linux server that we configured as a PXE Boot Server in our previous article. The specifications are repeated below for the readers' convenience.
CPU: 2 Core (2.4 GHz)
Memory: 2 GB
Storage: 50 GB
Operating System: RHEL 7.5
Hostname: pxe-server.itlab.com
IP Address: 192.168.116.41/24

Create a Kickstart file:


A Kickstart file is a plain text file and can be created using any available text editor. Furthermore, Linux also provides a very handy GUI tool called Kickstart Configurator. With Kickstart Configurator, we can simply select the desired options, and the Kickstart file will be generated automatically.
Kickstart Configurator is provided by the system-config-kickstart.noarch package and can be run using the command system-config-kickstart (you need an X server to display the software interface). Some screenshots of the Kickstart Configurator are as follows:

Kickstart Configurator is quite a handy tool, and anyone can create a complicated Kickstart file in just a few clicks.
Alternatively, we can use the system-generated Kickstart template that the Anaconda installer creates in the root user's home directory during operating system installation (i.e. /root/anaconda-ks.cfg ). This file contains the actual user inputs/selections made during the installation of the operating system on that machine. Therefore, we can use this Kickstart template after adjusting its contents according to our requirements.
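Before publishing the template, it can be syntax-checked with ksvalidator from the pykickstart package; an optional step, assuming the package is available in your repositories:

```shell
# Install the validator and check the Kickstart file for syntax errors.
yum install -y pykickstart
ksvalidator /root/anaconda-ks.cfg
# ksvalidator prints nothing on success and reports the offending
# line on failure.
```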
Copy the anaconda-ks.cfg to our FTP public directory.
#cp anaconda-ks.cfg /var/ftp/pub/rhel7/rhel7.cfg
#chmod +r /var/ftp/pub/rhel7/rhel7.cfg
Now edit the rhel7.cfg file.

#vi /var/ftp/pub/rhel7/rhel7.cfg
The final contents of the rhel7.cfg are:
#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Install OS instead of upgrade
install
# Keyboard layouts
keyboard 'us'
# Root password
rootpw --iscrypted $1$vyNMLtgd$VmtByshddZSBK..uuFhoH0
# Use network installation
url --url="ftp://192.168.116.41/pub/rhel7"
# System language
lang en_US
# System authorization information
auth --useshadow --passalgo=sha512
# Use graphical install
graphical
firstboot --disable
# SELinux configuration
selinux --enforcing

# Firewall configuration
firewall --enabled --ssh
# Network information
network --bootproto=dhcp --device=eth0
# Reboot after installation
reboot
# System timezone
timezone Asia/Karachi
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
autopart --type=lvm
# Partition clearing information
clearpart --none --initlabel
%addon com_redhat_kdump --disable --reserve-mb='auto'
%end
# Packages to be installed
%packages
@core
%end
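The rootpw --iscrypted value in the file above is a password hash, not a plain-text password. A hash in the same $1$ (MD5-crypt) format can be generated with openssl; the password redhat and the salt below are placeholder values:

```shell
# Generate an MD5-crypt hash in the $1$salt$hash format used above.
hash=$(openssl passwd -1 -salt vyNMLtgd redhat)
echo "$hash"   # begins with $1$vyNMLtgd$
```

Since the file sets auth --passalgo=sha512 , a SHA-512 hash ( $6$ prefix) is preferable; newer OpenSSL releases support openssl passwd -6 , and Python's crypt module can produce one on RHEL 7.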
We have successfully created a Kickstart file for automated installations. To make it usable by our PXE boot server, we have to reference it in the boot menu entries served over tftp .
Configure PXE boot server to use Kickstart file:
Edit the PXE boot menu for BIOS based clients.
#vi /var/lib/tftpboot/pxelinux.cfg/default
and append the kickstart directive therein. Contents of this file after
editing are:
default menu.c32
prompt 0
timeout 30
menu title Ahmer's PXE Menu
label Install RHEL 7.5
kernel /networkboot/rhel7/vmlinuz
append initrd=/networkboot/rhel7/initrd.img
inst.repo=ftp://192.168.116.41/pub/rhel7
ks=ftp://192.168.116.41/pub/rhel7/rhel7.cfg
Similarly, edit the PXE boot menu for UEFI based clients.
#vi /var/lib/tftpboot/grub.cfg
and append the kickstart directive therein. Contents of this file after
editing are:
set timeout=60

menuentry 'Install RHEL 7.5' {


linuxefi /networkboot/rhel7/vmlinuz
inst.repo=ftp://192.168.116.41/pub/rhel7/
inst.ks=ftp://192.168.116.41/pub/rhel7/rhel7.cfg
initrdefi /networkboot/rhel7/initrd.img
}
Test the configuration with both BIOS and UEFI based machines. Now the whole installation is automated, and the operating system will be installed and configured as per our Kickstart file.

Install MariaDB Galera Cluster on CentOS 7


MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It is built on Galera Cluster, one of the world's most advanced free and open source cluster engines. Currently, it only supports the InnoDB storage engine.
MariaDB Galera Cluster is a true multi-master, active-active cluster. Due to its synchronous behaviour, no data is lost when a node crashes, because all nodes always hold the same state.
MariaDB Galera Cluster also provides automatic node provisioning, which means we do not have to manually back up the database and restore it on a new node before adding it to the Galera cluster. This feature also simplifies cloud deployments, because scale-in and scale-out operations become straightforward.
In this article, we will create a two-node MariaDB Galera
Cluster of MariaDB 10.3 Database on CentOS 7. However, the
same steps can be used to configure a MariaDB Galera Cluster
of larger size.

System Specification:
For this article, we are using two CentOS 7 virtual machines as
the Galera Cluster nodes.
Both nodes have identical specifications:
Hostname: mariadb-01.example.com (192.168.116.81/24) and mariadb-02.example.com (192.168.116.82/24)
CPU: 2.4 GHz (2 cores)
Memory: 2 GB
Operating System: CentOS 7.6
MariaDB Version: 10.3.12
We are assuming that the reader has some intermediate knowledge of MariaDB and the Linux platform. Therefore, we highly recommend that readers build some basic understanding of these topics before reading this article. I recommend the following two books:
1 - Getting Started with MariaDB - Second Edition
2 - Mastering CentOS 7 Linux Server

Install MariaDB 10.3 Database Server on CentOS 7:


Connect to mariadb-01.example.com using ssh.
Install MariaDB and MaxScale yum repositories.
# curl -sS
https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | bash
[info] Repository file successfully written to
/etc/yum.repos.d/mariadb.repo.
[info] Adding trusted package signing keys...
[info] Succeessfully added trusted package signing keys.
Build yum cache for all repositories.
# yum makecache fast

# yum install -y mariadb-server galera

Configure MariaDB Galera Cluster on CentOS 7:


Allow MariaDB and Galera service ports in Linux firewall.
# firewall-cmd --permanent --add-service=mysql
success
# firewall-cmd --permanent --add-port={4567,4568,4444}/tcp
success
# firewall-cmd --reload
success
Set SELinux to permissive mode for now, and we will enable
the enforcing mode later, after creating an SELinux policy for
MariaDB Galera cluster.
# setenforce 0
Now edit MariaDB configuration file.
# vi /etc/my.cnf.d/server.cnf
and configure galera section as follows:
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.116.81,192.168.116.82
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
Perform the above steps on each node.
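Alongside the mandatory settings, each node can be given an explicit identity in the same [galera] section; a sketch for mariadb-01 using the addresses from our specification (adjust the values on mariadb-02 accordingly). These two directives are optional but make the cluster logs much easier to read:

```ini
[galera]
# Per-node identity; change on each cluster member.
wsrep_node_address=192.168.116.81
wsrep_node_name=mariadb-01
```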
Start Galera cluster on mariadb-01.example.com .
# galera_new_cluster
Start MariaDB service on all other nodes.
# systemctl start mariadb.service
If the service starts successfully, it shows that we have successfully configured our Galera cluster.
Configure MariaDB database instance on each node.
# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL
MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP
CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):


OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y


New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone


to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y


... Success!

Normally, root should only be allowed to connect from 'localhost'. This


ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y


... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y


... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!


Connect with MariaDB instance on any node and
check wsrep_cluster_size .
# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.3.12-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and


others.

Type 'help;' or '\h' for help. Type '\c' to clear the current
input statement.

MariaDB [(none)]> show global status like 'wsrep_cluster_size';


+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 2 |
+--------------------+-------+
1 row in set (0.083 sec)
wsrep_cluster_size confirms that all of our nodes are now connected in the Galera cluster.
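A couple of other status variables are worth checking while we are here; a quick sketch runnable from any node (it prompts for the root password set during mysql_secure_installation):

```shell
# Primary component membership and readiness to accept queries.
mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_cluster_status','wsrep_ready','wsrep_local_state_comment');"
# Healthy values: wsrep_cluster_status = Primary, wsrep_ready = ON,
# wsrep_local_state_comment = Synced
```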

Create SELinux Policy for MariaDB Galera cluster:


Since we configured SELinux in permissive mode, all policy violations by MariaDB and Galera have been recorded in /var/log/audit/audit.log . We can use this log to create a concrete SELinux policy.
Use the fgrep and audit2allow commands to extract the policy violations from the audit log into a type enforcement (.te) file.
# fgrep "mysqld" /var/log/audit/audit.log | audit2allow
-m MySQL_galera -o MySQL_galera.te

Compile this type enforcement file into an SELinux policy module.


# checkmodule -M -m MySQL_galera.te -o MySQL_galera.mod
checkmodule: loading policy configuration from MySQL_galera.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 19) to
MySQL_galera.mod
Create a package of compiled policy module.
# semodule_package -m MySQL_galera.mod -o MySQL_galera.pp
Import this policy into SELinux.
# semodule -i MySQL_galera.pp
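A quick check that the module actually landed in the policy store:

```shell
# The module should appear in the list of installed SELinux modules.
semodule -l | grep MySQL_galera
```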
Set SELinux to run in enforcing mode.
# setenforce 1
Test that SELinux is working fine by restarting the MariaDB service on each node.
Finally, enable the MariaDB service on all nodes.
# systemctl enable mariadb.service
We have successfully configured a MariaDB Galera Cluster on CentOS 7. Although we configured a two-node cluster, the same steps are good enough for configuring a MariaDB Galera cluster of larger size.
After configuring a MariaDB Galera Cluster, the next thing to look for is a database proxy to perform load balancing and routing for the cluster. Therefore, it is highly recommended that you read my next post “Install MariaDB MaxScale Database Proxy on CentOS 7”.