
OPEN SOURCE FOR YOU
ISSN-2456-4885 | Volume: 08 | Issue: 09 | Pages: 108 | June 2020 | ₹120

Getting Ready For Remote Learning With FOSS
A Comparative Study On Various Open Source Blockchain Platforms
What's New In DevOps
Building The DevOps Pipeline
Gulp: A DevOps Based Tool For Web Applications
Understanding Continuous Integration And Continuous Delivery/Deployment
India Technology Week @ Home

CONTENTS | JUNE 2020 | ISSN-2456-4885

FOR U & ME
15 Getting Ready for Remote Learning with FOSS
20 The Rise of AI and its Impact
26 Breaking Down the Buzz Around Quantum Computing
31 Introduction to Green Computing and its Importance
34 How Technology is Helping Fight Coronavirus
35 Role of Technology in Maintaining Law and Order

FOCUS
37 Six Things to Consider for a DevOps Transformation to the Cloud
40 The Five Best DevOps Tools
43 Understanding Continuous Integration and Continuous Delivery/Deployment
46 Building the DevOps Pipeline with Jenkins
52 Understanding DevOps: A Revolution in Software Development
54 How DevOps Differs from Traditional IT and Why
56 RCloud is DevOps for Data Science
58 How Prometheus Helps to Monitor a Kubernetes Deployment
62 DevOps vs Agile: What You Should Know About Both
65 Gulp: A DevOps Based Tool for Web Applications
67 DevOps is the Future of Software Development

DEVELOPERS
72 A Study of Various Open Source Blockchain Platforms
78 A Few Surprising Programming Language Features
83 Image Feature Processing in Deep Learning using Convolutional Neural Networks: An Overview
89 Using spaCy for Natural Language Processing and Visualisation
92 SPA JS: Building Cross-Platform SPAs with Less Code
94 A Headless CMS: Delivering Pure Content in the Age of Mobile-first Internet

ADMIN
98 The Benefits of Using Terraform as a Tool for Infrastructure-as-Code (IaC)

OPEN GURU
103 Lighttpd: A Lightweight HTTP Server for Embedded Systems

COLUMNS
70 CodeSport

REGULAR FEATURES
07 FossBytes


EDITOR
RAHUL CHOPRA

EDITORIAL, SUBSCRIPTIONS & ADVERTISING

DELHI (HQ)
D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020
Ph: (011) 26810602, 26810603; Fax: 26817563
E-mail: info@efy.in

MISSING ISSUES
E-mail: support@efy.in

BACK ISSUES
Kits 'n' Spares
New Delhi 110020
Ph: (011) 26371661, 26371662
E-mail: info@kitsnspares.com

NEWSSTAND DISTRIBUTION
Ph: 011-40596600
E-mail: efycirc@efy.in

ADVERTISEMENTS

MUMBAI
Ph: (022) 24950047, 24928520
E-mail: efymum@efy.in

BENGALURU
Ph: (080) 25260394, 25260023
E-mail: efyblr@efy.in

PUNE
Ph: 08800295610/ 09870682995
E-mail: efypune@efy.in

GUJARAT
Ph: (079) 61344948
E-mail: efyahd@efy.in

JAPAN
Tandem Inc., Ph: 81-3-3541-4166
E-mail: japan@efy.in

SINGAPORE
Publicitas Singapore Pte Ltd
Ph: +65-6836 2272
E-mail: singapore@efy.in

TAIWAN
J.K. Media, Ph: 886-2-87726780 ext. 10
E-mail: taiwan@efy.in

UNITED STATES
E & Tech Media
Ph: +1 860 536 6677
E-mail: usa@efy.in

Printed, published and owned by Ramesh Chopra. Printed at Tara Art Printers Pvt Ltd, A-46,47, Sec-5, Noida, on 28th of the previous month, and published from D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020. Copyright © 2020. All articles in this issue, except for interviews, verbatim quotes, or unless otherwise explicitly mentioned, will be released under Creative Commons Attribution-NonCommercial 3.0 Unported License a month after the date of publication. Refer to http://creativecommons.org/licenses/by-nc/3.0/ for a copy of the licence. Although every effort is made to ensure accuracy, no responsibility whatsoever is taken for any loss due to publishing errors. Articles that cannot be used are returned to the authors if accompanied by a self-addressed and sufficiently stamped envelope. But no responsibility is taken for any loss or delay in returning the material. Disputes, if any, will be settled in a New Delhi court only.

SUBSCRIPTION RATES
Years   Newsstand price (₹)   You pay (₹)   Overseas
Five    7200                  4320          —
Three   4320                  3030          —
One     1440                  1150          US$ 120

Kindly add ₹50 for outside Delhi cheques.
Please send payments only in favour of EFY Enterprises Pvt Ltd.
Non-receipt of copies may be reported to support@efy.in—do mention your subscription number.


FOSSBYTES

Microsoft offers up to US$ 100,000 to hack its custom Linux OS
Tech giant Microsoft is offering hackers up to US$ 100,000 if they can break the security of the company's custom Linux OS. The company had built a compact and custom version of Linux for its Azure Sphere OS last year, which is designed to run on specialised chips for its Internet of Things (IoT) platform. The OS has been purpose-built for this platform to enable basic services and apps to run isolated in a sandbox for security purposes.
Sylvie Liu, a security programme manager at Microsoft's Security Response Centre, said that the company will award a bounty of up to US$ 100,000 if the Pluton security subsystem or the Secure World sandbox is breached in the Azure Sphere OS. The bug bounty programme is part of a three-month research challenge that runs from June 1 until August 31.
Microsoft said that the challenge is focused on the Azure Sphere OS itself, and not on the underlying cloud portion that's already eligible for Azure bounty programme awards. Azure Sphere was announced at last year's Build developer conference.

GitHub and Major League Hacking announce remote open source internship initiative
GitHub and Major League Hacking (MLH) have set up the MLH Fellowship. This remote internship programme will support the next generation of developers with open source projects. Facebook, Royal Bank of Canada and DEV have already signed on as supporters and sponsors.
Mike Swift, CEO of MLH, said, "Enabling students to spend their summer contributing to the software that runs the world is a unique opportunity for them. They'll work on meaningful projects with their peers, under the guidance of some of the world's most talented engineers. The remote nature of this programme will democratise access to internships for countless students worldwide."
The programme will be for a 12-week period and will run through the summer. It will train up to 1,000 students full time on major open source projects under the mentorship of experienced engineers. The students will work in 'pods' of eight to ten with open source projects and mentors. They will receive a stipend for the programme through GitHub sponsors.

Eclipse Foundation moves to Europe as part of its continued global expansion
The Eclipse Foundation has announced that in order to expand globally, it is establishing itself as a Europe based organisation. It said that through the creation of Eclipse Foundation AISBL, based in Brussels, it will be in a position to use its recent international growth and bring about global industry collaboration on open source projects in strategic technologies. These include the cloud, edge computing, artificial intelligence, connected vehicles, telecommunications, and the Internet of Things. The establishment of the new legal entity is expected to be finalised by July 2020.
The Foundation said that it recognises the important role open source will play in driving the digital and industrial transformations called for by the European Commission in its recent strategies. It added that contributions from a broad cross-section of European companies and governmental organisations to open source projects will be crucial to ensure that these emerging technologies are fit for Europe. These contributions will also need to see whether these technologies have been designed with due consideration to the privacy and security of individuals and organisations, as well as to the environment.
Juergen Mueller, chief technology officer and member of the executive board of SAP SE, said, "The Eclipse Foundation has a long track record of fostering industry collaboration between global organisations and developers who share the goal of creating scalable open source software. As a founding strategic member of the Eclipse Foundation, SAP actively participates in several Eclipse projects and working groups. With the legal move of the Eclipse Foundation to Brussels, we expect more international collaboration across industries in an open environment. We look forward to collaborating with members from around the world to create and innovate together."

GitHub launches new features and updates
GitHub has announced various new features and updates at its online Satellite 2020 event. These include GitHub Codespaces, GitHub Discussions, code scanning, secret scanning and GitHub Private Instances.
GitHub said that GitHub Codespaces is a complete dev environment within GitHub that lets developers contribute immediately. It said that every repository has its own way of configuring a dev environment, which often requires dozens of steps before developers can write any code. Sometimes, the environments of two projects they are working on conflict with one another.
GitHub Codespaces will give developers a fully-featured, cloud-hosted dev environment that spins up in seconds directly within GitHub, company sources claim. Codespaces can be configured by developers to load their code and dependencies, developer tools, extensions, and dotfiles. The company said that switching between environments has also been made simple and can be done at any time. And when developers switch back, the codespace is automatically reopened.
Codespaces in GitHub includes a browser based version of the full VS Code editor. It has support for code completion and navigation, extensions, terminal access, etc. If developers prefer to use their desktop IDE, they will be able to start a codespace in GitHub and connect to it from their desktop.
GitHub said that it has not finalised the pricing for Codespaces, but mentioned that the code-editing functionality in the Codespaces IDE will always be free. GitHub plans to offer simple pay-as-you-go pricing, similar to GitHub Actions, for computationally intensive tasks such as builds. During the beta phase, Codespaces is free.

Mozilla announces first three recipients of COVID-19 Solutions Fund
Mozilla has announced the first three COVID-19 Solutions Fund recipients. A month back, it had announced the creation of a COVID-19 Solutions Fund as part of the Mozilla Open Source Support Program (MOSS). It aimed to provide awards of up to US$ 50,000 each to open source technology projects working towards reducing the spread of the COVID-19 pandemic in some way. The first three recipients include VentMon, created by Public Invention in Austin, Texas; Recidiviz, based in the Bay area; and 3DBrooklyn, for the COVID-19 Supplies NYC project.
VentMon can be used for testing ventilator designs before deployment and for ICU patient monitoring. The makers received a US$ 20,000 award, which will enable them to buy parts for the VentMon to support more than 20 open source engineering teams trying to build ventilators.
Recidiviz is a tech non-profit that has built a modelling tool which helps prison administrators and government officials forecast the impact of COVID-19 on their prisons and jails. The data equips them to assess changes they can make to slow the spread, such as reducing the density in prison populations or granting early release to people who are deemed to pose a low risk to public safety. The MOSS committee approved a US$ 50,000 award for this recipient. Recidiviz's tool was downloaded by 47 states within 48 hours of launch.

ScyllaDB releases Scylla Open Source 4.0
ScyllaDB has announced the release of Scylla Open Source 4.0, a NoSQL database for real-time Big Data workloads. The company said that Scylla has moved beyond feature parity with Apache Cassandra. It is now serving as an open source, drop-in, no-lock-in alternative to Amazon DynamoDB.
Scylla Open Source 4.0 builds on Scylla's close-to-the-hardware design. The company said that it is written from the ground up in C++ and delivers a performance of millions of OPS on a single node. It scales out to hundreds of nodes, and consistently achieves a 99 per cent tail latency of less than one millisecond.
Dor Laor, CEO and co-founder, ScyllaDB, said, "Scylla 4.0 is our largest release ever, with improvements ranging from ease-of-use to new APIs, functionality and performance. We welcome developers to discover the great performance Scylla offers out of the box, at a reasonable price and without lock-in."
The company said that Scylla delivers better, more consistent performance along with a significant reduction in the total cost of ownership (TCO) and greater deployment flexibility. The production-ready features in Scylla Open Source 4.0 offer certain advantages over Cassandra. As per the company, these include scaling up to any number of cores, streaming data to 60TB mega nodes, built-in schedulers, unified cache, and self-tuning operations.

Nvidia plans to acquire Cumulus Networks to boost networking software capabilities
Nvidia has announced its plans to acquire Cumulus Networks. The company said that this will help boost its networking software capabilities and launch a new era of the accelerated, software-defined data centre. Recently, Nvidia had completed its acquisition of Mellanox Technologies.
In a blog post, the company stated, "With Cumulus, Nvidia can innovate and optimise across the entire networking stack, from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers. This open networking platform is extensible, and allows enterprise and cloud-scale data centres full control over their operations."
Cumulus is based in Mountain View, California. It supports more than 100 hardware platforms with Cumulus Linux, its operating system for network switches. Nvidia said that its Nvidia Mellanox Spectrum switches already ship with Cumulus Linux and SONiC. The latter is the open source offering forged in Microsoft's Azure cloud and managed by the Open Compute Project.
The ONIE environment Cumulus created is a software foundation for Mellanox's bare metal switches. Nvidia added in the blog post, "Together, we built DENT, a distributed Linux software framework for retail and other enterprises at the edge of the network. And our Onyx operating system continues to expand, especially in Ethernet Storage Fabrics (ESF)."

Linux Foundation launches ToIP Foundation to boost data privacy
Governments, non-profits, and the private sector, across the finance, health care and enterprise software domains, have joined hands with the Linux Foundation to improve universal security and privacy protocols for consumers and businesses in the digital era. The ToIP Foundation is being developed with global, pan-industry support from organisations with sector-specific expertise.
Businesses today are struggling to protect and manage digital assets and data. This happens especially in an increasingly complex enterprise environment that includes the Internet of Things (IoT), edge computing, artificial intelligence, etc. It results in reducing the already low consumer confidence in how companies use or profit from users' personal data, and is slowing innovation on opportunities like digital identity as well as the adoption of other new services.
The ToIP Foundation will use digital identity models that leverage interoperable digital wallets and credentials. It will also use the new W3C Verifiable Credentials standard to address these challenges and help consumers, businesses and governments to better manage risk, improve digital trust, and protect all forms of identity online.
Jim Zemlin, executive director at the Linux Foundation, said, "The ToIP Foundation promises to provide the digital trust layer that was missing in the original design of the Internet and to trigger a new era of human possibility. The combination of open standards and protocols, pan-industry collaboration, and our neutral governance structure will support this new category of digital identity and verifiable data exchange."

Researchers unveil AI-powered open source platform for COVID-19 vaccine development
A team of machine learning, immunology, and bioinformatics researchers has unveiled Epitopes.world. This is an AI-powered, open source and interactive Web platform to help accelerate vaccine development for COVID-19.
The team is led by Tariq Daouda, PhD, who is currently a post-doctoral researcher at Harvard Medical School. It consists of volunteers who have doctoral degrees in machine learning and immunobiology, bioinformaticians, and Web developers.
The process of developing a vaccine is typically a very lengthy and costly one due to the number of virus-infected cells that need to be analysed. Dr Daouda has developed an AI algorithm that can predict which parts of a virus are more likely to be exposed at the surface of infected cells; these parts are called epitopes. He did this while obtaining his doctorate at the Institute for Research in Immunology and Cancer at the Université de Montréal.
These predictions can be used by researchers to generate a significantly shorter list of potential targets to test in the creation of a vaccine. This reduces a process that typically takes weeks or months to just hours.

UC Berkeley researchers open source Reinforcement Learning with Augmented Data
As per a report by VentureBeat, a group of researchers from the University of California, Berkeley, has open sourced Reinforcement Learning with Augmented Data (RAD). In an accompanying paper, the authors said that this module can improve any existing reinforcement learning algorithm, and that RAD can achieve better compute and data efficiency than Google AI's PlaNet and other cutting-edge algorithms like DeepMind's Dreamer and SLAC from UC Berkeley.
The researchers said that RAD achieved state-of-the-art results on common benchmarks and matched or beat every baseline in terms of performance and data efficiency across 15 DeepMind control environments. It added that it did this in part by applying data augmentations to visual observations. The co-authors of the paper on RAD include Michael 'Misha' Laskin, Kimin Lee, and Berkeley AI Research co-director and Covariant founder Pieter Abbeel. As per the report, RAD was released on the preprint repository arXiv.
The paper said that, for the first time, it can be shown that data augmentations alone can significantly improve the data-efficiency and generalisation of RL methods operating from pixels. This can be done without any changes to the underlying RL algorithm, on the DeepMind Control Suite and the OpenAI ProcGen benchmarks, respectively. It added that by using multiple augmented views of the same data point as input, CNNs are forced to learn consistencies in their internal representations. This results in a visual representation that improves generalisation, data-efficiency, and transfer learning.

Authentic8 launches Open Source Intelligence Academy
Authentic8 has launched the Open Source Intelligence (OSINT) Academy, which will have an integrated suite of resources and tools available exclusively to its customers. It aims to augment their training securely from home and other remote work environments. The resources will be delivered within Silo, Authentic8's Web isolation platform.
The OSINT Academy will include the OSINT Insider talk webcast series, the OSINT Forum for collaboration and peer-to-peer exchange, analysis case studies, demonstration videos, and OSINT Tradecraft Training. The academy will provide access to a range of resources designed to get new analysts up to speed, streamline their intelligence cycle, and equip experienced researchers with new skills to stay current.
The curriculum was developed in partnership with training specialist OSINT Combine to create more than 40 targeted training modules. It includes videos, presentations, study guides and interactive quiz materials designed to build and test analysts' skillsets.

SUSE joins hands with iValue to offer open source solutions for enterprises
Technology aggregator iValue InfoSolutions has announced that it is partnering with SUSE to offer enterprise-grade, open source solutions for Linux, software-defined infrastructure, and application delivery. This will give enterprises greater control, flexibility, and cost efficiencies while undergoing digital transformation.
Harsh Marwah, chief growth officer at iValue InfoSolutions, said that the company is excited to partner with SUSE. He added that iValue's partnership with SUSE will help partners and customers build sustainable, reliable, cost-effective, and secure IT stacks based on open source technologies. iValue and SUSE aim to help customers transform their enterprises by tapping into open source innovations that offer no vendor lock-in, and are highly flexible, scalable and cost-effective.
Rajarshi Bhattacharyya, country manager at SUSE India, said, "iValue's go-to-market strategy complements SUSE's for its open source offerings. SUSE is excited to partner with iValue to enable customers to realise their digital transformation goals. This will be done by simplifying, modernising, and accelerating their traditional and cloud-native applications across any IT landscape in any environment."

Cloud Foundry community and foundation to provide tutorial hub for new users
The Cloud Foundry Foundation has launched a hub for Cloud Foundry-related tutorials. This will streamline the discovery and learning process for developers interested in learning more about the family of open source projects. The tutorials have been created and curated by the community, and will be provided as a free, simple way to learn about Cloud Foundry.
As developers discover more about the technology, they are able to provide comments directly to the community about what they have found valuable and what is missing. This generates a cycle of feedback that enables the foundation to create new materials to fill gaps in topics. This feedback loop mimics the technical contributions made to open source projects. It builds a collaborative ethos for Cloud Foundry and other open source communities.
Steve Greenberg, founder of Resilient Scale, said, "Resilient Scale is happy to lend our services by partnering with Cloud Foundry and the community in order to build this new resource. The tutorial hub will make it easier for people unfamiliar with Cloud Foundry to give it a try and reach out to enterprise developers struggling with inefficient deployment environments."

Facebook AI launches open source chatbot named Blender
Facebook AI has built an open source, open-domain chatbot called Blender, as per a blog post by the company. Engadget reported that Blender has been trained on 9.4 billion parameters, which is more than ten times as many as the previous largest open source chatbots available on the Internet.
Facebook said that Blender is the first chatbot to come with a diverse set of conversational skills like empathy, knowledge and personality, all in one system. The blog stated that in terms of engagement, the bot feels 'more human', according to human evaluators. It has been designed in such a manner that it can assume a persona, discuss nearly any topic, and show empathy in natural, 14-turn conversation flows.
The tech giant also said the bot uses transformer neural networks on large amounts of conversational data. It uses previously available public domain conversations that have 1.5 billion training examples of extracted conversations. Facebook said that it has introduced Blended Skill Talk (BST) for training and evaluating desirable skills linked to personality, knowledge, and the display of empathy.

Venafi to acquire Jetstack
Venafi has announced a definitive agreement to acquire open source machine identity software provider Jetstack. The company has said that this acquisition will transform the way modern applications required by digital transformation are secured.
Jeff Hudson, CEO of Venafi, said, "In the race to virtualise everything, businesses need faster application innovation and better security; both are mandatory. Most people see these requirements as opposing forces, but we don't. We see a massive opportunity for innovation. This acquisition brings together two leaders who are already working jointly to accelerate the development process while simultaneously securing applications against attack, and there's a lot more to do. Our mutual customers are urgently asking for more help to solve this problem because they know that speed wins, as long as you don't crash."
Venafi said that its innovative solutions protect TLS, SSH, and code signing machine identities for the largest, most security-conscious organisations and government agencies in the world. Jetstack sources have said that it supports and advises enterprises using Kubernetes in mission-critical infrastructure. The company also fosters the cert-manager open source community, which has hundreds of code contributors and millions of downloads.
The two companies have been working together over the last two years to accelerate the speed of innovation for next-generation machine identity protection in Kubernetes, multi-cloud, service mesh and microservices ecosystems.

mParticle launches open source developer toolset
Customer data platform mParticle has announced the beta release of a new open source developer toolset. The company has said that this will give engineering teams instant data quality protection and feedback in their integrated development environments (IDEs).
The toolset comes with the Smartype feature, which translates any JavaScript Object Notation (JSON) based data model into strongly-typed code. It also pairs mParticle's data planning application programming interface (API), the command-line interface (CLI), and static code analysis (linting).
Earlier this year, mParticle released the next generation of its Data Master product. This allows engineering, product and marketing teams to collaboratively design and enforce a consistent data model. But once a development team has created a data plan, developers still need an easy and fool-proof way to implement the plan without having to leave their editor to cross-reference a Web UI.
According to company sources, the newly available data planning API will allow engineering teams to use Data Master programmatically through an HTTP API. Developers will be able to store data plans in their source code, and use their own software development life cycle (SDLC) and approval processes to define the data model that best suits their needs.
The company said that all constants called for in the data plan (event names, attribute names and enum values) will be available as a machine-readable JSON schema. Smartype will programmatically perform all CRUD operations on data plans to decrease the time-to-data-quality and the time to implement. Developers can use this feedback to conform to a data plan.
The new CLI will offer an easy way to interface with the mParticle platform through the command line instead of the user interface, mParticle sources claim. Developers can create, maintain, update, or delete data plans, with version control provided by a Git repository, by using the CLI. The CLI also enables developers to use linting tools to statically lint code against a data plan. This helps to ensure adherence to their data plan, and that only high-quality data is logged to their workspace.

AWS and Facebook team up for new open source projects for PyTorch
Amazon Web Services and Facebook have jointly announced some new open source projects for PyTorch, the open source machine learning framework used to train artificial intelligence models.
The new PyTorch projects announced include TorchServe and TorchElastic. The former is a model-serving framework for PyTorch that makes it easier for developers to move new models into production. TorchElastic is a library that developers can use to build fault-tolerant training jobs on Kubernetes clusters.
PyTorch was created by Facebook's AI research group to function as a machine learning library of functions for the Python programming language. It has been designed for use with deep learning.

Altitude Angel launches open source hardware and software platform called Scout
Altitude Angel, a provider of UTM (unmanned traffic management) technology, has released an open source project named Scout. It consists of hardware and firmware that will help drone manufacturers, software developers, and commercial drone pilots to quickly connect to its global UTM, the company reports. Altitude Angel does not yet have plans to manufacture the device, but said that it is open to the possibility as part of its work supporting the emerging drone ecosystem.
Scout is primarily intended for use in commercial and industrial drone applications. It comes with the capability to securely obtain and broadcast a form of 'network remote ID'. This is seen as a necessary step for enabling routine drone use and 'beyond visual line of sight' (BVLOS) flight. The hardware and the firmware can be enhanced and incorporated into a virtually limitless set of scenarios due to their open source nature.
Richard Parker, Altitude Angel CEO and founder, said, "For routine automated commercial and industrial use of drones to take flight, several challenges need to be solved. Not least of these challenges is, how will the drone be connected to the digital air traffic control systems of the future? Some proprietary devices exist which provide only a small part of the solution -- getting the drone's location to a single UTM."
He further said, "Scout represents one of the latest projects to emerge from our R&D lab and we're saying to users 'take this device, experiment with it, improve it and integrate it with your solutions, and share your findings with the community'."

Dell launches set of open source networking solutions using SONiC
Dell Technologies has announced a set of open source networking solutions designed to simplify the management of data centres at scale. The solutions, called Enterprise SONiC Distribution by Dell Technologies, are built on Software for Open Networking in the Cloud (SONiC), an open source project headed by Microsoft.
Tom Burns, senior VP and GM, Dell Technologies integrated products and solutions, said, "Our customers tell us that, while a hybrid cloud approach is critical to their success, they struggle to maintain and scale their networks, and manage them to effectively avoid multiple points of failure. By breaking switch software into multiple, containerised components, we are providing enterprises the means to drastically simplify the management of massive and complex networks and increase availability in a cloud model."
Dell has said that organisations are increasingly relying on modern hybrid cloud models to do business. This often leads to a monolithic and proprietary approach to networking, creating inefficiencies and unneeded complexity. According to Dell, its Enterprise SONiC Distribution removes complexity and creates a flexible network through an approach built on open standards, by integrating SONiC into the DNA of the Dell EMC PowerSwitch Open Networking hardware.
Yousef Khalidi, corporate VP of Microsoft Azure Networking, Microsoft Corp., said, "SONiC is a leading open source network switch OS, empowering customers with modern and efficient cloud networking software. We're pleased to see Dell bringing enterprise support to its customers."

For more news, visit www.opensourceforu.com


Getting Ready for Remote Learning with FOSS

The COVID-19 pandemic has made it mandatory to adapt to information and communication technology tools to enable remote learning. This article explores various free and open source tools that enable academia to cater to the needs of students. Tools for various functions such as learning management systems (LMS), video conferencing, building educational resources and evaluation are explored in this article.

The application of information and communication technology (ICT) has been a topic of discussion in academic circles for the past few decades. However, with the sudden disruption caused by the COVID-19 pandemic, ICT has moved from being a nice-to-have component to one that's mandatory. Earlier, the perception was that ICT could enhance the teaching-learning process. Now it is becoming the platform that enables the teaching-learning process.
With the COVID-19-linked lockdown, academia is now forced to change its standard operating procedures. It has become necessary for teachers, students and parents to adapt themselves to this sudden change with the help of ICT tools, as these facilitate uninterrupted continuation of the teaching-learning process. This article explores various free and open source tools that enable academia to deliver services in the best possible manner.

Why FOSS?
Though there are many proprietary and paid software options to enable the teaching-learning process, they involve recurrent licensing costs that might not be affordable for all in a diverse country like ours. One of the most important factors in selecting an ICT tool is inclusion. We need to make sure every possible learner is included. Free and open source software (FOSS) is certainly better in terms of inclusion because it removes the cost of the software from the scheme of things. Another important advantage is that FOSS can be customised to specific needs and redistributed to the needy without the need for any permissions.

Is it only for video conferencing?
It's no wonder that video conferencing tools have suddenly become household names with the onset of the COVID-19 pandemic. Teachers across the globe have also started using video conferencing tools to communicate with their students. However, video conferencing alone doesn't constitute the complete teaching-learning process, which includes many other activities as well. Each of these activities might require specific tools, and hence it is important to know the right tools for each task. The major tasks involved are listed below:
• Building learning resources
• Communicating with the learners
• Conducting the evaluation process
• Coordinating all the tasks to meet learning objectives

Figure 1: Major tasks in remote learning


Building learning resources
Learning resources form the core of any teaching-learning process. Learning content and how it is delivered will determine the success or failure of the teaching-learning process. Though there are various types of learning resources, for the sake of simplicity, let us classify the resources that a teacher can build for remote learning into the following categories:
• Presentations
• Illustrations and mind maps
• Video lectures
• Podcasts
• Interactive content

Figure 2: Various categories of resources

Let's explore the tools belonging to each of these categories.
Presentations: Presentations are the most used method of delivering content. The popular open source tools to build presentations are listed below:
• Impress (LibreOffice)
• Beamer (LaTeX)
• Reveal.js
If you are already using some sort of proprietary presentation software, then you might find shifting to LibreOffice Impress very simple and effective.
Beamer is LaTeX's tool for building presentations. If you are teaching a subject that involves equations, mathematical notations or algorithms, then Beamer will make your presentations look elegant and professional.
If you want to build browser based presentations, then you should give Reveal.js a try. If you have introductory knowledge of HTML, you can build presentations that run inside the browser using Reveal.js. A section of the code, customised from the official demo, and its output (Figure 3) are shown below:

<div class="slides">
  <section>
    <h2>Open Source For You</h2>
    <h3>Presentation on Remote Learning Tools</h3>
    <p>
      <small>Created by <a href="http://kskuppusamy.in">Dr. K.S. Kuppusamy</a></small>
      <small>Created using <a href="">reveal.js</a></small>
    </p>
  </section>
  <section>
    <h2>Reveal.js</h2>
    <p>Reveal.js makes presentation building simple and effective.</p>
  </section>
</div>

Figure 3: Reveal.js browser based presentation
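To make the snippet above self-contained, here is a minimal sketch of the full page such slides typically sit in. The CDN URLs and theme are assumptions rather than something from this article, though the Reveal.initialize() call is reveal.js's documented entry point:

<!DOCTYPE html>
<html>
<head>
  <!-- Assumed CDN paths; any local copy of reveal.js works the same way -->
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/reveal.js/dist/reveal.css">
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/reveal.js/dist/theme/white.css">
</head>
<body>
  <div class="reveal">
    <div class="slides">
      <section><h2>Open Source For You</h2></section>
      <section><h2>Reveal.js</h2></section>
    </div>
  </div>
  <script src="https://cdn.jsdelivr.net/npm/reveal.js/dist/reveal.js"></script>
  <script>
    // Turn the markup above into a keyboard-navigable slide show
    Reveal.initialize();
  </script>
</body>
</html>

Opening such a file in a browser is enough to get arrow-key navigation between the two sections.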
There are powerful Web based presentation frameworks such as Impress.js. If you want to check out Web based presentations, try exploring https://github.com/impress/impress.js.
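Since the article only points to the Impress.js repository, here is a hypothetical minimal example of its documented pattern: each .step element is placed on an infinite canvas via data attributes, and impress().init() starts the show. The script path and coordinates below are placeholders.

<!DOCTYPE html>
<html>
<body>
  <!-- Each child of #impress is one slide; data-x/data-y position it on the canvas -->
  <div id="impress">
    <div class="step" data-x="0" data-y="0">Remote learning with FOSS</div>
    <div class="step" data-x="1000" data-y="0" data-rotate="90">Tools for every task</div>
  </div>
  <script src="impress.js"></script> <!-- assumed local copy of the library -->
  <script>
    // Start the presentation; impress() returns the API object
    impress().init();
  </script>
</body>
</html>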

Mind maps: Mind maps make explanations more effective. There are many mind-mapping tools available, like FreeMind, Freeplane and XMind (Figure 4). XMind provides many features and makes the process simple.

Figure 4: Mind mapping

Creating a video lecture: Video lectures are an important component of remote teaching. The sequence of steps involved in building a screen-casting based video lecture is as follows:
1. Prepare the presentation slides (using Impress, Beamer or Reveal.js).
2. Use a screen-recording tool to record the slides along with the voice-over (Open Broadcaster Software – OBS).
3. Use audio-editing tools to enhance your audio recording (Audacity).
4. Use video-editing tools to edit your video (OpenShot, Kdenlive, Shotcut).
5. Upload the lecture to the Web or LMS to share it with your students.
Step 1: This process is already explained in the 'Presentations' section of this article.
Step 2: This step involves using screen recording software like Open Broadcaster Software (OBS), which is a powerful tool. Indeed, it's not only for screen recording but also has powerful streaming capabilities that enable users to set up various scenes, sources, etc. You can include inputs from a webcam, screen contents, microphone, etc. In my opinion, everyone who wants to create a video lecture must spend time exploring OBS.


Figure 5: The steps to building a video lecture

Step 3: For a video lecture to be effective, the audio quality is a key factor. As teachers might be recording their lectures at home, there could be some sort of noise. The audio quality can be greatly enhanced with the open source audio processing software, Audacity. The audio from the video file can be extracted by opening it in Audacity. Then the audio can be enhanced. The software has options to remove noise by identifying a noise sample from the audio. Functions such as Equalize and Compress can be used to enhance the audio to sound like a studio recording. Audacity can also be used to record audio alone, or as a separate track, which can be used for a podcast or an audio-only lecture.
Step 4: This involves editing the video, for which the FOSS tools available are really impressive. The three important options are:
• OpenShot (https://www.openshot.org/)
• Kdenlive (https://kdenlive.org/en/)
• ShotCut (https://shotcut.org/)
All these video editors are cross-platform. If you are a beginner, try exploring OpenShot, which is easy to use. Even if you don't have a higher configuration system, you should try OpenShot. Once you are comfortable with it, try exploring ShotCut and Kdenlive, which offer additional features.

Figure 6: Free and open source video editors

Step 5: Once your video is ready, you can either upload it to the Web (you can use a service such as YouTube) and share the link with the learners or, if you don't want to adopt this method, you can directly upload it to your LMS itself.
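If you host the lecture yourself rather than on YouTube, a plain HTML5 video element is often all that is needed. Here is a minimal sketch with placeholder file names; the track element is the standard way to attach WebVTT captions:

<!-- Minimal page for a self-hosted lecture; lecture.mp4 and lecture.vtt are placeholders -->
<video controls width="640">
  <source src="lecture.mp4" type="video/mp4">
  <!-- Captions improve accessibility and help learners on noisy or muted devices -->
  <track kind="captions" src="lecture.vtt" srclang="en" label="English" default>
  Your browser does not support the video element.
</video>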
Building interactive content: The lecture videos described above are unidirectional, i.e., they lack interactivity. Interactive learning resources enable learners to interact with the content. Let's explore two services that enable interactivity:
• TED-ED
• H5P
TED-ED is a Web based service (https://ed.ted.com) to build interactive learning videos. To use TED-ED, just select an existing video and add questions or prompts. This enables reaction responses from the users. This is a great tool that makes the content more interesting.
H5P (https://h5p.org/) stands for HTML5 Package. It is a free and open source content-collaboration framework. In other words, you can use it to build interactive videos, quizzes, find-the-hotspot exercises, etc.

Figure 7: H5P illustrated
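As an illustration of how such content reaches learners, items published on h5p.org can be embedded in any course page with an iframe snippet that the site generates. The sketch below assumes that pattern; the content ID and the resizer script URL are assumptions, not something from the article.

<!-- Embed pattern generated by h5p.org; the content ID here is purely illustrative -->
<iframe src="https://h5p.org/h5p/embed/612"
        width="640" height="480"
        frameborder="0" allowfullscreen="allowfullscreen"
        title="Interactive video"></iframe>
<!-- Optional helper that resizes the iframe to fit its content -->
<script src="https://h5p.org/sites/all/modules/h5p/library/js/h5p-resizer.js"></script>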


Evaluation tools
Evaluation is an integral part of the teaching-learning process. The interactive content explained in the earlier section can also be used for evaluation at the micro level. There are also many open source evaluation tools to conduct holistic evaluation. Three of these are listed below:
• Hot Potatoes (http://hotpot.uvic.ca/)
• TCExam (https://tcexam.org/)
• VirtualX (http://virtualx.sourceforge.net/)
Hot Potatoes is available as freeware. It can be used to build the following types of tests: multiple-choice, short-answer, jumbled-sentences, crosswords, matching/ordering and gap-filling.
The major features of TCExam are that it is open source and platform-independent, has community support, offers accessibility (as per the Web Content Accessibility Guidelines that support persons with disabilities), and has the capability to conduct paper testing with OMR (optical mark recognition) sheets, etc.
The various features of VirtualX to conduct tests include the capability to author and organise questions. It has support for 12 different types of questions, for formulas and equations, as well as for graphical analysis.

Figure 8: Evaluation tools

Free and open source video conferencing
Though there are many useful proprietary tools available to conduct video conferencing, open source software always scores better in terms of privacy. One easy-to-use open source tool for video conferencing is Jitsi. It can be used in two different ways — through a custom installation or by using the Jitsi Meet (https://meet.jit.si/) service, which is a live version. The latter is very simple, and anyone can start a meeting by simply entering a meeting ID. There is no need for any installation if you are using a desktop/laptop. For smartphones, you have to install the Jitsi app. Some of the features of Jitsi are:
• Browser based
• No time limits
• Capability to share screens
• Option to stream to YouTube
• Cloud based recording using Dropbox
• Integrated chat option
Jitsi is certainly worth exploring. A video tutorial is available at https://youtu.be/ymAtXVbotoU.

Figure 9: Jitsi – A free and open source video conferencing tool
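Beyond joining meetings at meet.jit.si, Jitsi also publishes an IFrame API that lets a meeting be embedded into your own page or LMS. A minimal sketch, assuming the public meet.jit.si server and a made-up room name:

<!DOCTYPE html>
<html>
<body>
  <div id="meet"></div>
  <!-- Jitsi Meet IFrame API script served by the public instance -->
  <script src="https://meet.jit.si/external_api.js"></script>
  <script>
    // Creates the conference inside the #meet container;
    // the room name here is a made-up example
    const api = new JitsiMeetExternalAPI("meet.jit.si", {
      roomName: "OSFYRemoteLearningDemo",
      parentNode: document.querySelector("#meet"),
      width: 800,
      height: 600
    });
  </script>
</body>
</html>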

Open resources
It is not always a viable option to build each learning resource from scratch. There are many open educational resources (OERs) available. Some of the important OER repositories are listed below.
• OER Commons (https://www.oercommons.org/): This is a public digital library of open educational resources. It has features to explore, create and collaborate. The Open Author tool makes the creation of resources simple by providing a Web interface. If you have some educational content to share, you should explore Open Author.
• MERLOT (https://www.merlot.org/merlot/): This stands for Multimedia Education Resource for Learning and Online Teaching. It provides access to a collection of curated learning resources built by an international community of educators, researchers and learners.


• NROER (https://nroer.gov.in/welcome): This is the National Repository of Open Educational Resources, an excellent initiative from the government of India. It provides open educational resources mapped to school curricula, an e-library, e-books and e-courses.
In addition to NROER, there are other great initiatives from the Indian government to promote technology aided learning. Some of them are shown in Table 1.

Table 1
Initiative | URL
CEC (Consortium for Educational Communication) (Swayam, Swayam Prabha and NME-ICT are pioneering projects from CEC) | http://cec.nic.in/cec/
ePathshala | http://epathshala.nic.in/
NPTEL | https://nptel.ac.in/
Curricula for ICT in education | https://ictcurriculum.gov.in/

Learning management systems (LMS)
As stated at the beginning of this article, there is a need for coordinating all the tasks associated with remote learning. Learning management systems (LMS) enable you to do this. There are many choices among open source LMSs, as listed below:
• Moodle (https://moodle.org/)
• ILIAS (https://www.ilias.de/en/)
• SAKAI (https://www.sakailms.org/)
• OpenOLAT (https://www.openolat.com/?lang=en)
• ATutor (https://atutor.github.io/)
In addition to these, there are certain popular options available in freemium mode, such as Edmodo, Google Classroom, etc. The choice of LMS depends on various factors such as platform support, learner category, types of features required, SCORM compliance, etc.

Inclusion is the key
We need to do everything possible to include every learner when we are delivering learning content using ICT tools; a brief example follows this list.
• There is a standard set of guidelines for building resources that can be accessed by persons with disabilities. Make sure to follow the Web Content Accessibility Guidelines (WCAG) to make your online resources accessible to students with disabilities.
• If some learners are facing difficulties in accessing video resources due to the lack of stable network access, then we should provide resources in alternate mediums such as text or audio (which might be delivered using less bandwidth).
Of course, there are still many other challenges that would require detailed research before providing solutions.
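As a small, concrete illustration of the two points above (an assumed example, not from the article): meaningful alternative text serves learners using screen readers, and a plain-text alternative serves those on unstable, low-bandwidth networks.

<!-- Declaring the page language helps screen readers pick the right voice -->
<html lang="en">
<body>
  <!-- Alternative text lets a screen reader describe the figure; file name is a placeholder -->
  <img src="mindmap.png" alt="Mind map of the five categories of learning resources">
  <!-- A text version of the lecture doubles as a low-bandwidth alternative -->
  <p><a href="lecture-transcript.txt">Read the lecture as plain text</a></p>
</body>
</html>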
There are plenty of other tools that have not been listed in this article; enumerating all of them is neither possible here nor required. In my opinion, learning to use the tools is important, but what is even more crucial is the ability to adapt to the current scenario. The skillsets required for remote learning are quite different from those needed for live, in-person teaching. For example, an important element of classroom management involves the use of visual and non-visual cues, such as gauging the facial expressions of students, assessing the mood of the class, etc. In remote learning, when we present content as recorded material, teachers cannot make use of these important feedback cues, which makes teaching more challenging. With the ongoing COVID-19 pandemic, no one is sure how long social distancing will need to be enforced or whether it is going to become the new normal.
This is certainly an enormous challenge to academia. So it would be wise to convert this challenge into an opportunity to evolve a new ecosystem for teaching-learning. The most important thing is a change in mindset and the willingness to learn, unlearn and relearn among all the stakeholders, to deliver uninterrupted education to the next generation.

References
These links are in addition to those provided in the body of the article itself.
[1] Beamer, https://ctan.org/pkg/beamer?lang=en
[2] Impress, https://www.libreoffice.org/discover/impress/
[3] Reveal.js, https://revealjs.com/#/
[4] Impress.js, https://github.com/impress/impress.js
[5] FreeMind, http://freemind.sourceforge.net/wiki/index.php/Main_Page
[6] Freeplane, https://www.freeplane.org/wiki/index.php/Home
[7] XMind, https://www.xmind.net/
[8] OBS, https://obsproject.com/
[9] Audacity, https://www.audacityteam.org/

By: Dr K.S. Kuppusamy
The author is assistant professor of computer science, School of Engineering and Technology, Pondicherry Central University. He has 14+ years of teaching and research experience in academia and industry. His research interests include accessible computing, Web information retrieval, mobile computing, etc.


The Rise of AI and its Impact

There is no accepted or standard definition of good artificial intelligence (AI). However, good AI is one that can help users understand various options, explain the tradeoffs among multiple possible choices, and then help make those decisions. Good AI will always honour the final decision made by humans.

(Credit: https://i0.wp.com/www.techregister.co.uk/wp-content)

It is a common phenomenon that if you repeat a word enough times, it loses its meaning. This is already happening with artificial intelligence (AI). Although AI has finally made it to the mainstream rather too quickly, its journey is going to be rockier than it was for other technologies in the past.
AI, as a concept, is not anything new; it has been around for centuries. However, it took off significantly during the 1950s, when Alan Turing explored it further. Progress on it was, however, limited due to the state of computer hardware available at that time.
When computers became more powerful in later years, they were faster, more affordable and had more power in terms of storage as well as computing speed. Since then, research in AI has been growing steadily. There was a time when we had merely 1MB memory systems housed in a big box. Now, we have 128GB memory systems in a credit-card-sized device. Advancement in hardware has significantly enabled technological augmentation by leaps and bounds.
In the past few years, there has been sudden growth in all activities related to AI, mainly underpinned by the realisation of the Internet of Things (IoT) and other complementary technologies such as Big Data, cloud computing, etc. And since last year, we have been seeing several AI implementations.
There is no doubt that AI is still in its childhood, but it has reached a critical mass, where research as well as application can happen simultaneously. We can undoubtedly say that we have changed gears. AI is already making several decisions that affect our life, whether we like it or not, and has covered significant ground in recent years.

AI is not everywhere yet
While it would be natural to think that AI has penetrated almost every single vertical or market, this is far from the truth. At best, there are only a few technology spot-fires in a few select industries where AI is making its mark. Unfortunately, as always, marketing gimmicks are at play to make everyone feel that AI has covered everything, while several sections are still untouched.
Many image-recognition systems are now better at detecting cancer or micro-fractures from a patient's MRI or X-ray reports. Many pattern-recognition systems can correlate several pathological reports and make an almost precise prediction of the health status of the patient. And yet, making medical recommendations without a doctor's explicit approval is not a commonplace practice. And this is good because, when there is human life at stake, systems should not make the final call, ever. Therefore, as far as the medical field is concerned, AI might only reach the status of assisted intelligence, and may not be permitted (and should not be allowed) to become a mainstream phenomenon at all.
While companies are continually taking humans out of the customer service sector and replacing them with chatbots or automated responders that are AI-driven, the human touch is becoming expensive. At an event that saw startups pitching their companies and products, one startup's primary differentiation was that it provided personal support for all queries. Mostly, we are seeing an exciting shift in terms of AI- and non-AI-based offerings.

AI-driven automated responders or chatbots

Self-learning applications are another area where AI is making an entry. Using customised learning, pacing and recommendations, this category is becoming quite popular. However, as that happens, teaching, coaching and mentoring will soon become a high touch service and still be in demand. Therefore it is difficult to say whether AI has truly touched this sector or just morphed it into something else.
Another area that AI has not yet touched, and might not affect, is live entertainment and art. These are such personalised and creative pursuits that without a human in them, they would not have the same meaning. There have been a few experiments with AI creating art, but those art forms have quite a different flavour to them. AI systems can create art based on what they have been trained for. Several of those works are mainly geometrical and systematic shapes or pictures—nothing that a human would necessarily draw, with its slightly acceptable and natural imbalance. Real authorship of a work of art cannot yet be bestowed on an artificial system.
Creativity is some part process and some part randomness, which is the exact opposite of the rule-based method. It is not likely that AI will be able to contribute directly to the creative industry any time soon.


For U & Me Insight

Users and employees have mixed reactions
As far as end-users of AI technology are concerned, there is a high level of fear, uncertainty and doubt (FUD) amongst the majority. The sheer duality of this technology is a significant concern. AI is a powerful tool, and just like any other tool, humans can use it for good or bad things. Moreover, since people are not yet actively talking about how to handle potential misuse of AI, this has remained a growing concern.

Another reason for having a sceptical outlook towards AI is a plausible fear of job losses. If there are massive numbers of people losing jobs without an alternative system in place, it would undoubtedly be dangerous, and this can create chaos.

But then again, if you think about it deeply, you will realise that it is not losing a job that concerns many. What people usually worry about is having nothing better to do when their mainstream work is disrupted.

Unfortunately, the majority of AI implementation projects do not address this issue upfront. Instead, it is done as an afterthought. It is perhaps the most substantial reason for being sceptical of AI.

At a superficial level, many of us appreciate the ease and convenience these AI solutions are providing. However, our comfort erodes as these solutions start to increase their scope and touch critical areas of our life, such as banking, social benefits, security, healthcare, jobs, driving and others.

Bias and racism have been front-runners in the list of reasons for distrust in AI. People also fear that AI may show blatant disregard for human control. This, however, does not have any precedent, but it is practically possible and, hence, is a legitimate concern.

Errors-at-scale is not a widely-known issue, but those who have been victims of this problem in the past see it as one of the significant concerns when using AI in daily life. Imagine when a public AI system cancels the credit cards of thousands of people because of some error. The scale of chaos this may cause is the main reason for this concern.

As a general observation, everyone is comfortable as long as applications are not touching or affecting core life matters. They are comfortable in areas of entertainment and luxury, but not so much when critical aspects of life are in the hands of an AI.

Use of AI in dermatology (Credit: https://www.airforcemedicine.af.mil)

But enterprises have different views
Despite mixed feelings and heightened expectations from AI, the business world still has some ability to see AI in a relatively balanced manner. People from a wide range of industries agree that AI is tricky to deploy, and it could suck up a lot of money and time before becoming useful. It can be costly, and the initial payout can be quite modest (and sometimes lower than that). The overall payback period for any AI project has not been attractive, and in many cases, it is hard to establish it objectively.

Several experts find it unsettling that some vendors are pushing AI systems even before they figure out the purpose, and claim to know what problems these will solve. Some businesses discourage this approach, and are taking a prudent view, but they are in a minority. Most companies blindly believe otherwise.

It is one thing to see breakthroughs in gaming AI, such as in the games of Go and chess, or to have devices that turn on music at a voice command. It is another thing to use AI to make changes in businesses, especially ones that are not fundamentally digital.

When it comes to improving and changing how businesses get done, AI and other tools form only small cogs of a giant wheel. Changes that bring about company-wide repercussions are a different ballgame.

Change management has not been easy to handle in the past, and it is not going to be any different in the future either. Several experts from various business domains need to be involved for any significant change to occur, and they have to be the best if we are looking for effective outcomes. This essentially means pulling the best people from routine business work and letting them focus on AI implementation. This is a challenging proposition for businesses of any size.


What the future holds
AI and other emerging technologies, apart from bringing efficiencies, are also bringing new possibilities. These possibilities are creating new business models and opportunities. This will continue to happen in the future as we progress.

Most daily tasks that depend on best estimates or guesswork would also see a significant shift due to the abundance of data. Due to access to more data, the need for devices that can process this data on the Edge would increase, and this will be a key driver in maintaining this progression.

One of the significant drivers of these technological advances is the democratisation of resources. Whether it is the Internet revolution, the open source hardware and software revolution, or anything else, as AI technology becomes a part of our daily lives, we will see more of this democratisation happening. This will be a crucial factor and will keep boosting progress.

As of now, most AI applications follow a supervised learning approach. In years to come, we will start seeing more and more unsupervised learning that will keep systems updated continuously. However, this will have one significant barrier to cross, which is the trust factor. Unless this trust factor improves, supervision will remain a necessity.

There is no accepted or standard definition of good AI. However, good AI is one that can guide users to understand various options, explain tradeoffs among multiple possible choices and then help make those decisions. Good AI will always honour the final decision made by humans.

On the consumer front, several virtual support tools will increase and become mainstream. It will be almost expected to come across these bots first before talking to any human at all. However, only businesses that demonstrate a customer-centric approach will thrive in these scenarios, while others will struggle to adapt to the right technology. And, most importantly, "What do you want to do when you grow up?" will soon become an obsolete question.

AI will change the job market entirely as there will be growing requirements for soft skills, since most hard skills will be automated. Especially for the Indian economy, since we have mostly relied on hard skills for local as well as global opportunities, this will pose a significant challenge to keep up with declining demand. We will be forced to come up with new business models, not just as businesses but also as an economy.

Maintaining a balanced approach
Regardless of how the recent or long-term future with AI looks, there are a few points that we must understand and accept in their entirety. Most of these points align with the OECD's AI principles that were released in early 2019.

AI systems should benefit humans, the overall ecosystem on the planet, and the planet itself by driving inclusive growth, sustainable development and the well-being of all. These systems must always be designed such that they respect and follow the rule of law and the rights of the ecosystem (humans, animals, etc). They should also respect the general human value system and the diversity it exhibits. More importantly, there must be appropriate safeguards in the system such that humans are always in the loop when necessary, or can intervene if they feel the need, regardless of necessity. After all, a fair and just society should be the goal of any advancement.

Creators of AI systems should always demonstrate transparency and responsible disclosure about the functionality and methodology of the system. People involved in and affected by this need to understand how outcomes are derived and, if required, should be able to challenge them.

Any AI system should not cause harm to users or general living beings, and must always function in a robust, secure and safe way throughout its lifecycle. Creators and managers of these systems have the responsibility to continually assess and manage any risks in this regard.

Most importantly, on the accountability front, anyone creating, developing, deploying, operating or managing AI systems must always be held accountable for the system's functioning and outcomes. Accountability can drive positive behaviours and thereby potentially ensure that all the above general principles have been adhered to.

There is a general feeling that over-regulation limits innovation and advancement. However, there is no point in racing to be the first; instead, let us strive to be better. Being fast and first by compromising on ethics and quality is certainly not an acceptable approach by any means.

It is unlikely that in the next ten years or so we will have robots controlling humans. However, technology consuming us, our time, feelings and mindfulness is very much a reality even today; and it is getting worse day by day.

Just one wrong turn in this fast lane is all it will take to cause a regression for society. The rise of AI should not lead to the fall of humanity. Let us work towards keeping the technology, AI or otherwise, in our control, always!

By: Anand Tamboli
The author is a serial entrepreneur, speaker, award-winning published author and emerging technology thought leader.


Breaking Down the Buzz Around Quantum Computing
Quantum computing helps scientists and researchers in solving problems above a certain
complexity. Quantum computers derive their power by utilising quantum mechanics and
marvels such as superposition and entanglement, which allow them to perform a variety
of computational tasks exponentially faster than classical computers.

(Credit: www.eweek.com)

The invention of the computer has undeniably been one of the biggest technological revolutions in the history of mankind. However, classical computing is not the only way that was formulated in the last century to solve complex problems. While it was in 1927 that physicist Heisenberg introduced the uncertainty principle, it was not until 1970 that the idea of using quantum mechanics as a communication resource emerged and the term quantum information theory came into being. So, the question arises: how is quantum computing different from classical computing, and how did it go from being a purely theoretical subject to being used by companies like Google and IBM?

Quantum computers versus classical computers
We already have computers and even supercomputers for faster processing speeds, so why do we need quantum computers? To understand this we need to understand the difference between the two.

A classical computer's main purpose is to save and manipulate data for working. Its chip uses bits to store this information. These bits are like tiny switches with two states—on and off, represented by one and zero, respectively. From every pixel in an image to the texts exchanged between people, everything is ultimately made up of these bits, a language that the computer understands. But even supercomputers cannot define the uncertain state that exists between on and off, especially at atomic and
molecular levels. This makes them capable of only analysing simple molecules in practical applications related to biology and chemistry.

This is where scientists and researchers needed to find a better way of computing when probability is involved (such as spinning a coin instead of flipping it). Also, for problems above a certain complexity, more computational power is required, which is only possible with quantum computers.

Dr Martin Laforest, senior product manager and quantum technology expert, Isara, explains, "Quantum computers leverage the surprising and often counterintuitive behaviour of atoms and molecules, making them radically different from today's computer. They derive their power by utilising quantum mechanics and marvels such as superposition and entanglement. Their quantum behaviour makes them much more powerful, allowing them to perform a variety of computational tasks exponentially faster than classical computers."

IBM's quantum computer model

Behind the magic: Working of a quantum computer
Quantum computers use quantum bits (qubits) instead of regular bits. By combining qubits, a lot more data can be processed in less time as compared to basic computers. Underlying quantum computing is the principle of quantum mechanics. Fundamental quantum properties like superposition, entanglement and interference are used to manipulate the state of a qubit.

Superposition refers to the overlapping of usually independent states. Real-life examples include the sounds generated while playing an instrument where notes are played simultaneously, or the waves formed on throwing a stone in a lake. Qubits can be in superposition, that is, somewhere between on and off. This means that if there is more than one option, a quantum computer can go through each option simultaneously and choose the ultimate answer, instead of ruling out previous options individually before checking the next option.

Entanglement is a quantum phenomenon where the states of two particles, even if they are physically separated, are tied such that they cannot be described independently of each other. Whatever the result of measuring one of these particles, the outcome of the other one will be mathematically related to it. This correlation is necessary for faster computations through special instructions (algorithms) that can only be written with quantum computers. Just like wave interference, quantum states can also change (cancel or add) depending on whether they are in phase or out of phase.

Making of a quantum computer
According to IBM researcher DiVincenzo's criteria, there are five minimal requirements for creating a quantum computer—a well-defined scalable qubit array; an ability to initialise the state of the qubits to a simple fiducial state; a universal set of quantum gates; long coherence times, much longer than the gate-operation time; and single-qubit measurement.

There are different ways to create a qubit. In one of the most commonly used methods, superconductivity is applied to create and preserve hard-to-maintain quantum states. Quantum computers are isolated from any sort of electrical interference and made to operate in an environment at almost absolute zero temperature to prevent errors while working with qubits. Superconductors also minimise energy loss during transmission.

In order to achieve results close to absolute zero, the necessary cooling power is made available and multiple other steps are followed. This includes attenuation during refrigeration to protect qubits from thermal noise during the process of transmitting signals to the quantum processor. Also, cryoperm shields protect the qubits from electromagnetic radiation.

To create a fault-tolerant quantum system, it is necessary to increase the computational power of a quantum computer. For this, higher numbers of qubits are preferable, as states increase exponentially with each qubit. Researchers have designed algorithms for sequential quantum operations that can run on fault-tolerant quantum computers for extended periods of time. To ensure that the results are accurate and noise-free, low error rates need to be maintained.
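To make the superposition and entanglement ideas above concrete, here is a minimal sketch (an illustration added here, not from the original article) that builds the textbook two-qubit Bell state using the open source Qiskit library, assuming it is installed:

# A minimal sketch of superposition and entanglement using Qiskit
# (illustrative only; assumes 'pip install qiskit').
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard gate: qubit 0 enters superposition
qc.cx(0, 1)                  # CNOT gate: entangles qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # measurement now always yields 00 or 11
print(qc)                    # prints the circuit as ASCII art

Measuring such a circuit always gives correlated outcomes (00 or 11), which is exactly the correlation between entangled particles described above.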


New opportunities
Quantum computers are not just about saving time and money with better speed and higher efficiency in doing tasks that can already be performed. Quantum computers are thought to be useful in places where any uncertain system needs to be simulated.

Quantum chemistry is one of the most promising applications of quantum computing. By determining the lowest energy state among various molecular bond lengths, which represents the equilibrium molecular configuration, it is possible to simulate a molecule. Modelling even simple molecules and predicting particle interactions in chemical reactions can aid in the discovery of new life-saving medicines and other compounds useful for making efficient devices, which is not possible with conventional computing memory and processing power.

Another application is cryptography, as these computers can easily generate hard-to-break encryption keys for better cyber security. Dr Laforest says, "Quantum computing promises many positive disruptions. One such possibility is the use of quantum particles called photons to create secure communications channels for the distribution of quantum keys. This has the potential to revolutionise networking and the protection of future data transmission."

Other possibilities include improved solar panels, financial strategies for the prediction of financial markets, better weather forecasts, and so on.
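As a toy illustration of 'determining the lowest energy state among various molecular bond lengths' (a sketch added here for clarity; the Morse potential and its constants are assumptions, not from the article):

# Finding an equilibrium bond length as the minimum of a toy
# Morse potential-energy curve; all constants are illustrative.
import numpy as np

def morse(r, depth=4.7, width=1.9, r_eq=0.74):   # rough H2-like values
    return depth * (1 - np.exp(-width * (r - r_eq))) ** 2 - depth

r = np.linspace(0.4, 3.0, 1000)        # candidate bond lengths (angstroms)
print(f"Equilibrium bond length ~ {r[np.argmin(morse(r))]:.2f} angstroms")

A real quantum chemistry workload replaces this toy curve with an energy evaluation that is exponentially expensive on classical machines, which is where quantum hardware is expected to help.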
Problems encountered
Holding an object in a superposition state for a long period is difficult. Interaction with the environment is necessary for quantum measurement. But if a qubit comes in contact with such occurrences as changing magnetic and electric fields and radiation from warm objects nearby, or if there is cross talk between qubits, it undergoes decoherence, and changes in the uncertain state cause the information to be lost. Due to these interference problems, in spite of high speeds, a quantum computer is also much more vulnerable to errors than a classical computer would be.

This requires robust quantum processes and designing devices such that the system is only sensitive to the targeted measurement, protecting quantum states from decoherence. But one needs to keep in mind that high sensitivity is crucial for precision.

When it comes to cybersecurity, Dr Laforest says, "It is important to note that quantum is a dual-use technology that also has the capacity to cause security chaos. Imagine a world where our existing encryption is no longer effective. If hackers equipped with quantum computers can break these algorithms, they can determine the private keys used to secure the data and expose it. The combination of quantum computer speed-ups and known algorithms developed by Peter Shor and Lov Grover makes this possible. To ensure we are ready, preparation has to start now as it will take a decade or more to fix many of our complex systems. Experts in the field of quantum computer development agree it is highly probable we are only ten years away from large-scale quantum computing with this capability. Cybersecurity experts are already starting to prepare for potential quantum computer attacks on the encryption algorithms we use to protect data today."

"The good news is that quantum-safe algorithms exist. The most significant challenge we face is the development of tools and methods that will make it easy and seamless to transition to these quantum-safe algorithms. Key to this success will be successfully completing this transition within a decade without any wholesale disruption to our current security systems and infrastructure," Dr Laforest adds.

The dearth of the required skill set, the lack of available resources, and cost are some other impediments for it to be the technology in the making.

Can it be a staple technology?
Companies are competing to build reliable quantum computers in a variety of fields like manufacturing, security and financial services. Numerous startups like Isara, Ionq and 1qbit have come up in this field. They are willing to invest money in the technology at this early stage of development because of its great potential. Cloud based quantum computing technology is increasingly being leveraged to make it more easily available for remote access and user-friendly for enterprises, no matter the size of their work teams. Microsoft in November 2019 announced that it would start providing access to quantum computers in its Azure cloud for select customers.

IBM Quantum designed and built the world's first integrated quantum computing system, 'IBM Q System One', for commercial use in 2019. Another company, D-Wave, recently announced that it is freely opening up its quantum computers to anyone who has ideas for how to use them to find a cure for COVID-19. For assistance in developing solutions, the company, along with its customers like Cineca, Volkswagen, Denso, Tohoku University, Kyocera and Sigma-i, among others, is offering access to their engineering teams.

Quantum computers could change the world but they are still not advanced enough to replace the classical method. They are not so useful when it comes to basic tasks, like storing images, that ordinary computers handle easily. They are undoubtedly powerful, but not so reliable yet. As for now, the most beneficial way will be to give users access to both traditional and quantum computers simultaneously.

By: Ayushee Sharma
The author is a technology journalist at EFY.

The article was originally published in the May 2020 issue of Electronics For You.


Introduction to Green
Computing and its Importance
Foundation of green computing was laid as far back as 1992 with the launch of
the Energy Star program in the USA. The success of Energy Star motivated other
countries to take up the subject for investigation and implementation.

Any technology that aspires to be nature-friendly ought to be green. Recognition of this fact has led to the development of green generators, green automobiles, green energy, green chemistry as well as green computing. Green computing is a leap forward for information technology (IT), and more specifically for information and communication technology (ICT). Green computing has emerged as the next wave of ICT.

Motivation for the subject of green computing arose to protect the environment against hazards generated at three different stages of ICT, namely, information collection (by electronic devices), information processing (through algorithms and storage) and information transportation (through networking and communication). Carbon dioxide accounts for about eighty per cent of global warming. As a rule of thumb, if the world-wide increasing application of ICT is assumed to contribute at least twenty per cent towards carbon dioxide, ICT becomes responsible for sixteen per cent of global warming. This is undoubtedly a cause for concern. As per one research-based estimate, fifty billion devices like computers, mobile phones, sensors, actuators and robots shall connect to the Internet by this year's end, creating even more havoc.

Of course, different strategies would be needed to nudge ICT towards green computing, which is necessary to reduce the pollutants generated in the collection, processing and transportation of information. In today's scenario, the primary challenge in achieving green computing is to realise energy-efficient devices, energy-efficient processing and energy-efficient networking. Invariably, energy efficiency is required to address the reduction in heat dissipation that is basically responsible for the emission of carbon dioxide. In the case of electrical, electronic or computer systems, wasteful heat is generated due to the thermal vibration of particles in the components. Therefore any green initiative should have a direct or indirect motivation to reduce this thermal vibration. Reduced circuitry or a minimal system helps in reducing the number of vibrating particles.
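The sixteen per cent figure above is simple arithmetic on the two rule-of-thumb percentages, as this small illustrative sketch shows:

# Back-of-the-envelope arithmetic behind the figures quoted above.
co2_share_of_warming = 0.80   # CO2 causes about eighty per cent of warming
ict_share_of_co2 = 0.20       # ICT assumed to contribute twenty per cent of CO2
print(f"ICT share of warming: {co2_share_of_warming * ict_share_of_co2:.0%}")
# prints: ICT share of warming: 16%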

Minimal circuit designs, which lead to technologies of very large scale integration (VLSI) or ultra large scale integration (ULSI), are now well-established technical solutions. These solutions meet the objectives of realising low-cost and smaller-size systems. It was never thought these would also indirectly provide a solution for reducing the number of particles in vibration.

In the process of minimisation, two more revolutionary technologies have emerged: molecular scale electronics (MSE) and quantum computing. It was the quest for ever-decreasing size, but more complex electronic components with high-speed ability, which gave rise to MSE.

The concept that molecules may be designed to operate as a self-contained device was put forward by Forrest L. Carter. He proposed some molecular components analogous to conventional electronic switches, gates and connections. Accordingly, the idea of a molecular P-N junction emerged. MSE is a simple interpolation of IC scaling.

Scaling is an attractive technology. Scaling of FET and MOS transistors is more rigorous and well defined than that of bipolar transistors. But there are problems in the scaling of silicon technology. In scaling, while propagation delay should be minimum and packing density should be high, these should not be at the expense of the power dissipated. With these scaling rules in mind, the scaling technology of silicon is reaching a limit.

Dr Barker reported that, "Change, spin, conformation, colour, reactivity and lock-and-key recognition are just a few examples of molecular properties, which might be useful for representing and transforming logical information. To be useful, molecular scale logic will have to function close to the information theoretical limit of one bit on one carrier. Experimental practicalities suggest that it will be too easy to construct regular molecular arrays, preferably by chemical and physical self-organisation. This suggests that the natural logic architectures should be cellular automata: regular arrays of locally connected finite state machines where the state of each molecule might be represented by colour or by conformation. Schemes such as spectral hole burning already exist for storing and retrieving information in molecular arrays using light. The general problem of interfacing to a molecular system remains problematic. Molecular structures may be the first to take practical advantage of novel logic concepts such as emergent computation and 'floating architecture' in which computation is viewed as a self-organising process in a fluid-like medium."

Change is the only thing that is permanent in the universe. In the technology scenario, changes become inevitable means of evolution and revolution. In tune with this, a new generation of IT known as quantum computing (QC) has come up. Mechanical computing, electronic computing, quantum computing, DNA computing, cloud computing, chemical computing and bio computing are a few generation-wise migrations of information technology (IT).

In the conventional computers we work with now, computing and processing of data is based on transistors' on and off states as a binary representation of '1' or '0' or vice versa. In quantum computers, the basic principle is to use quantum properties to represent data. Here, computation and processing of data is done with quantum mechanical phenomena such as superposition, parallelism and entanglement. Therefore, whereas in conventional computers data is represented by binary 'bits,' in quantum computers representation is done with 'qubits' (quantum bits).

Qubits are typically subatomic particles such as electrons and photons. Generating, processing and managing qubits is an engineering challenge. The superiority of quantum computing over classical computing is multi-fold. First, whereas in classical computing logical bits are represented by the on and off states of transistors, in quantum computing qubits are harnessed through the properties of subatomic particles. The size of quantum computers will thus be much smaller than that of present-day computers. Both MSE and QC are thus found to be indirect solutions for green computing.

At the current state of the technology march, green ICT may be better looked at as a challenge to realise eco-friendly and environmentally-responsible solutions, in order not just to reduce heat dissipation but also to maximise energy efficiency, recyclability and bio-degradability. The fact is, the fast-growing production of electrical, electronic and computing equipment has resulted in an enormous increase of e-waste, and especially carbon dioxide, which is responsible for creating havoc in the environment and for increasing pollution. As per a report published by the International Telecommunication Union (ITU), e-waste has increased rapidly and reached a global high. The increasing trend of e-waste all over the world is shown in Figure 1.

Figure 1: Increasing e-waste trend world over (Credit: UNU - The Global E-Waste Monitor 2014)

Many studies have established that computers and IT industries dissipate
more energy than others. The impact of ICT industries on the emission of carbon dioxide is immense. As shown in Figure 2, India is currently the third largest producer of carbon dioxide.

Figure 2: Global carbon dioxide emissions — total CO2 emissions 2017 (gigatons) and 2017 CO2 emissions per person (tons/person) (Credit: Wikipedia.org)

Urgent solutions required at the level of hardware design management include minimal configuration, adaptive configuration, consolidation by virtualisation, algorithmic efficiency, optimal resource utilisation, optimal data centres, optimal link utilisation, limiting power by reducing cable length, minimising protocol overhead, protocols for compressed headers, green networking, and the management of e-waste, air management and cooling management, among others. For ICT scientists and engineers, the challenge will be to design technology and algorithms to minimise particle vibration, travel path and heat loss due to input-output mismatch. Design, operational and transmission related thermal losses are core issues of ICT. This makes the production of green ICT a great challenge, although, as parts of its implementation, energy-smart devices, sleep-mode devices, cluster computing, cloud computing, etc, are already in place.

The foundation of green ICT was laid as far back as 1992 with the launch of the Energy Star program in the USA. The success of Energy Star motivated other countries to take up the subject for investigation and implementation. Leading countries working on green ICT now include Japan, Australia, Canada and the European Union. Formalisation of green ICT is in fact due to standards proposed by the IEEE, which has formalised Green Ethernet and 802.3az-enabled devices for green ICT.

Green ICT is a clean-environment-based technology. However, fruitful realisation of green ICT is equally dependent upon awareness in society. Society needs to practice the common ethics of 'don't keep the computer on when not needed,' 'don't use the Internet as a free tool, but as a valuable tool of necessity only,' 'don't unnecessarily replace device after device just because you can afford to' and so on. Without societal responsibility, technology alone cannot ensure achieving the objectives of green ICT.

By: Prof. Chandan Tilak Bhunia, Abhinandan Bhunia
Prof. Chandan Tilak Bhunia, PhD in computer engineering from Jadavpur University, is fellow of the Computer Society of India, the Institution of Electronics & Telecommunication Engineers, and the Institution of Engineers (India).
Abhinandan Bhunia did a BS in computer engineering from Drexel University, USA, and an MBA from the University of Washington, USA.

The article was originally published in the April 2020 issue of Electronics For You.

How Technology is Helping Fight Coronavirus
Latest technologies like artificial intelligence,
blockchain, chatbots, face recognition, robots, drones
and software solutions are all contributing to the
fight against the fast-spreading coronavirus that has
caused havoc all over the world.

In the battle against the novel coronavirus (Covid-19), emerging technologies have stood out by making an immense contribution in an unexpected, creative and amazingly responsive way. Delivery drones, disinfecting robots, smart helmets, and thermal imaging cameras are all being deployed in the fight against Covid-19. The latest technologies are being used to predict and combat the spread of the infectious disease. These technologies include artificial intelligence (AI), analytics software, chatbots, apps, telemedicine, blockchain, and advanced facial recognition software.

The better we can track the virus, the better we can fight it. Advanced AI has been used to help diagnose the disease and accelerate the development of a vaccine. Google's DeepMind division has used its latest AI algorithms and computing power to understand the proteins that might make up the virus, and has published its findings to speed up the development of treatments. Several drug companies are also using AI-powered drug discovery platforms to search for possible treatments.

AI-based systems are being used to detect coronavirus infection via CT scans with 96 per cent accuracy. Portable lab-on-chip detection kits are helping medical teams on the ground to quickly identify infected individuals for proper medical care. These tools are helping remote areas with limited medical resources to immediately screen out suspected coronavirus-infected patients for further diagnosis and treatment.

Blockchain-powered services are helping hospitals to spend less time on administrative work and allocate staff to the frontlines. Blockchain platforms speed up claims processing and minimise the need for face-to-face contact amidst the coronavirus outbreak.

Chatbots are being used to share information and offer free online health consultation services. These can answer queries related to the virus, such as symptoms, preventive measures and treatment procedures.

Software solutions that are transforming the healthcare industry include hospital management, mobile healthcare, telemedicine, and wearables. Telemedicine enables remote monitoring and care for patients. It can provide necessary services for people infected with Covid-19. Mobile healthcare solutions empower healthcare providers and patients by creating a platform for interaction and medical care services.

Face recognition technology is being used in surveillance systems that can recognise people, even while they are wearing masks, with a relatively high degree of accuracy. The surveillance systems use facial recognition technology and temperature detection software to identify people who might have fever or are not wearing masks. Facial recognition technology has been integrated with thermal imaging to make fever-detection cameras. Contactless temperature detection software, AI-powered non-contact infrared sensor systems, and smart helmets that can measure the temperature of anyone within a five-metre radius can quickly detect a person who is suspected of having a fever. These are being deployed at stations, airports, schools, malls, community centres and other public places that have large gatherings.

Robots are being used to clean or sterilise hospitals, and perform basic diagnostic functions, to minimise the risk of cross-infection. The robots allow physicians to communicate with the patient via a screen, and are equipped with a stethoscope to help doctors take a person's vitals while minimising exposure of the staff. They can deliver food and medicine to reduce the amount of human-to-human contact. Robots use ultraviolet light to autonomously kill bacteria and viruses in quarantine wards without human intervention.

Drones are being used for contactless medicine delivery, and for spraying disinfectant around the country, especially in quarantine zones. They transport medical samples and conduct thermal imaging. Drones are also deployed to check travellers' temperatures and for the disposal of hospitals' medical waste.

By: Deepshikha Shukla
The author is a freelance technology journalist.

The article was originally published in the April 2020 issue of Electronics For You.


Role of Technology
in Maintaining Law and Order
With criminals becoming tech-savvy, police and the courts also need to know the
latest tools and make use of the latest technologies like cyber policing, artificial
intelligence, data analytics, blockchain and cloud computing.

(Credit: www.tagesspiegel.de)

Reducing crime rates and delivering speedy justice to the needy is a challenging task globally. It requires smart policing and effectively dealing with roadblocks in legal proceedings. Although adoption is slower when compared to other sectors like finance and insurance, due to the ethics constraints involved, science and technology are being increasingly leveraged by enterprises to accomplish this mission. This is proved by the success of the Global Legal Hackathon 2019, which was organised to develop technical solutions for the industry, and in which the participation of over 6,000 people across 46 cities in 24 countries was seen.

As criminals become more tech-savvy, police personnel need to know the latest tools and scientific methods to keep up with them. Automating manual activities that are rule-based and repetitive through robotic process automation can save time and manpower. Investigating and understanding the intricacies of even the most pressing cases has become easier with the aid of CCTV footage, which also serves as valuable evidence in court, especially in the absence of witnesses. Digitisation of cases ensures full electronic access to the past history of all records. Analysing the data from first-hand evidence at the site in real time provides a good overview of cases from the beginning.

Telangana Police is among those taking diverse initiatives over the past few years to maintain law and order in the state. For instance, mobile apps like TSCOP, ePetty case, Cop Connect, the Facial Recognition System, and the e-Challan system launched by the department have played a huge role in data sharing and structuring the system.

Continued on page 39...



Focus

Why DevOps is Popular

Six Things to Consider for a DevOps Transformation to the Cloud
The COVID-19 reality has pushed the market towards faster adoption of remote access to IT by developers. Software vendors are therefore in a race to enable and expand cloud DevOps solutions. Increasingly, teams seek to adopt end-to-end DevOps platforms or tool bundles that decrease their reliance on multiple vendors and give them ownership of the tooling infrastructure. But what should an enterprise demand from cloud DevOps tooling, and what key differentiators should be considered?

Gulp: A DevOps Based Tool for Web Applications
This article highlights the features of Gulp, an open source cross-platform streaming task runner that lets software developers automate many development tasks. It covers the installation of Gulp, and touches upon the code required for task and module management.

The Five Best DevOps Tools
DevOps emerged out of the agile software development movement and applies some similar standards to the application life cycle management (ALM) process. DevOps is hard to characterise since it's to a greater degree a development or logic than an unbending arrangement of standards or practices. Open source DevOps tools are used to streamline software improvement and arrangement. Here's a brief description of the five best tools amongst these.

How DevOps Differs from Traditional IT and Why
DevOps is the buzzword in the software development industry. But how much of the hype associated with this new technology is warranted? This article demystifies DevOps and explains to the reader why it scores over traditional IT practices.

DevOps is the Future of Software Development
In the field of IT and software development, DevOps implementation is gaining significant popularity. This article discusses the various stages of the DevOps software development cycle and delineates the appropriate FOSS tools that can be used at each of the stages.

Understanding DevOps: A Revolution in Software Development
DevOps scores over legacy, monolithic and agile software development. This article takes a quick look at the important concepts of DevOps and the phases of its life cycle. It also maps the relevant open source tools, and finally, highlights how it can bring value to the IT industry.
Six Things to Consider for a DevOps Transformation to the Cloud
The COVID-19 reality has pushed the market towards faster adoption of remote
access to IT by developers. Software vendors are therefore in a race to enable and
expand cloud DevOps solutions. Increasingly, teams seek to adopt end-to-end DevOps
platforms or tool bundles that decrease their reliance on multiple vendors and give
them ownership of the tooling infrastructure. But what should an enterprise demand
from cloud DevOps tooling, and what key differentiators should be considered?

Here are six things one should consider when engaging with any individual vendor for a DevOps digital transformation to the cloud.

End-to-end (E2E) solutions
Developers today are looking for an E2E solution and an all-in-one user experience. However, this doesn't mean that they will compromise on the 'best of breed' approach. Therefore, DevOps platform providers should offer a Class A tool stack as part of their platform, in addition to very strong ecosystem integrations and plugins to make developers' lives easier, respecting the freedom of choice they want.

An E2E platform also requires a vendor commitment to allow a true 'one-browser solution' and not simply bundled tools that are integrated. This ensures that the user will have a full experience from a single UI that connects all services.

Universal package management
All of the metadata and dependencies in your myriad technologies must be supported (such as Docker, npm, Maven, PyPi, Golang, NuGet, Conan, etc, but also the 20+ more that you may find in your portfolios). Point solutions for single or limited technology types will only serve to frustrate your development teams and require the adoption of multiple solutions and repositories within your organisation. Large enterprises have not only myriad technologies, but also a long legacy of deployed, mission-critical applications that must be supported at scale with local, remote and virtual repositories.


Figure 1: Comparing cloud based end-to-end DevOps solutions

Both cloud and on-premise solutions need to be integrated 100 per cent
This isn't about whether or not to have a cloud solution. Many companies that offer cloud solutions don't have corresponding on-premise/self-hosted options, or vice-versa. More still have completely separate solutions that provide different features and methods that don't talk to each other, requiring you to learn a new product, user experience and user interface. As you transition to a cloud environment, both cloud and on-premise solutions need to be able to function in the same way 100 per cent of the time to ensure a smooth transition. For instance, as you go through a cloud migration, you will need the same tools and functions in both places in order to keep the business running.

Multi-cloud
While you might think one cloud is enough, you should select a vendor that provides services across and between all major clouds. Keep your options open and your peace of mind intact by avoiding any vendor lock-in and ensuring maximum resilience.

Security
Security is an integrated part of the pipeline that supports all of your package types, and it is now a line-item for many companies.


Cloud DevSecOps tools should make it possible to block artifact downloads (or break builds) that contain vulnerabilities, requiring tight integration all the way into the repository. Security policies should be easy to define and manage across your repositories. And any cloud security solution should allow you to easily identify the impact of a vulnerability across the entire DevOps pipeline.

The world of containers also comes with a challenge. Your DevSecOps tools should be able to 'open' any container, scan several tiers in, and look in all packages for dependencies that include vulnerabilities. A DevOps platform should always strive to be ahead of any hacker and secure all software packages in the pipeline from build-to-production.
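As a minimal illustration of such a gate (a sketch, not the vendor's tooling; it assumes the open source pip-audit scanner is installed), a build script can simply refuse to continue when a dependency scan reports known vulnerabilities:

# A hypothetical 'break the build on vulnerabilities' gate. Any scanner
# that exits non-zero on findings (pip-audit is assumed here) would do.
import subprocess, sys

scan = subprocess.run(["pip-audit"])   # non-zero exit code = findings
if scan.returncode != 0:
    sys.exit("Vulnerable dependencies detected: breaking the build")
print("Dependency scan clean: continuing the pipeline")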
Cloud-ready CI/CD
Traditionally, application development teams were responsible for creating localised CI/CD (continuous integration/continuous delivery) automation. This approach provides short-term gains for the individual teams, but ends up being a constraint in the long run since enterprises get no economies of scale across their CI/CD implementations.

A modern CI/CD provider should support and scale enterprise-wide workflows (aka the 'software supply chain') that span all popular technologies and architectures of today, as well as keep pace with technology evolution. It should provide a way to assemble pipelines from pre-packaged building blocks, rather than developing them from scratch. These pipelines can be templatised and shared as libraries across the organisation, thereby building a knowledge base that is constantly growing and improving. In other words, your CI/CD provider should give you economies of scale over time in the cloud, and help you ship code faster.

By: Jens Eckels
The author is the director, product marketing at JFrog.

Continued from page 35...

Another such attempt comes from IIIT Delhi, where a research centre has been built to assist the capital's police for such purposes as criminal identification, cyber policing, traffic management, and combating crimes by using artificial intelligence (AI), biometrics, image processing, Big Data, social media analysis and network forensics.

Technologies like AI, analytics, blockchain, and cloud computing are making their way into the courtrooms, too. AI-powered tools can be used by lawyers for most daily tasks, from reviewing documents and performing legal research rapidly to predicting various outcomes of a case. Businesses can utilise AI to review contracts for partnerships without any bias, and perform background checks before hiring new employees to avoid getting into legal trouble later.

Several companies in countries like the US, Singapore, the UK, Canada and Australia are using technologies to solve issues in this space. Canada-based Kira Systems uses machine learning (ML) for contract analysis. UK's Tessian employs AI to secure confidential data and emails for law firms.

In India, researchers from IIT Kharagpur have recently developed an AI-powered method to automate the reading of legal case judgements, carry out case law analysis and enhance legal search across several domains. Deep neural models enable understanding of the rhetorical roles of sentences or jargon in a judgement when adequate data is available, and hence aid in organising legal documents.

Blockchain finds application in email encryption, in verifying processes and securing evidence, as in financial transactions, and in many other areas. For example, Legaler's blockchain and developer tools provide the infrastructure to build decentralised applications for legal services. In 2017, the Global Legal Blockchain Consortium was formed to drive the standardisation of blockchain technology in the legal sector. It has already surpassed the 150-member mark, which includes law firms, software companies, and universities, among others.

According to a 2019 report titled LawTech Adoption Research by tech analysts TechMarketView, the number of lawtech companies has grown over the past few years, but the adoption rate is not that high among legal practitioners. One of the major reasons noted behind this is the partnership model, in which spending is done from the partners' profit pool.

To change this scenario, US states like North Carolina and Florida have already mandated technology training for CLE (continuing legal education) credits. Pressure from clients for cheaper offerings is also pushing law firms to move to cloud computing and other tech solutions.

By: Ayushee Sharma
The author is a technology journalist at EFY.

The article was originally published in the April 2020 issue of Electronics For You.


The Five Best DevOps Tools


DevOps emerged out of the agile software development movement and applies
some similar standards to the application life cycle management (ALM) process.
DevOps is hard to characterise since it’s to a greater degree a development or logic
than an unbending arrangement of standards or practices. Open source DevOps
tools are used to streamline software improvement and arrangement. Here’s a brief
description of the five best tools amongst these.

DevOps, which began by uniting engineers and tasks, has now turned out to be a key tool in the most basic parts of the software development life cycle. With the introduction of cloud computing and virtualisation, the requirement for new systems administration processes has increased. The DevOps mantra is, "Automate and monitor the procedure of software creation, extending from integration, testing and releasing, to deploying and overseeing it."

Stages of the DevOps life cycle
The following are the five phases of the DevOps life cycle, and the popular tools used in each phase.
1. Continuous integration – Jenkins
2. Configuration management – Ansible, Chef and Puppet
3. Continuous inspection – Selenium
4. Containerisation – Kubernetes
5. Virtualisation – Parasoft Virtualize
Now, let's list the top five tools among these.

Ansible
This open source tool monitors application deployment, configuration management, orchestration and so on. The Ansible development steps are given in Figure 1.

Key features
• It has an agentless design.
• It is powerful due to the work process arrangement.
• It is straightforward and simple to use.

Figure 1: Ansible development steps (Image source: https://www.ansible.com/integrations/)
Figure 2: Chef Code
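As a quick, hypothetical illustration (not from the article), even the playbook run itself can be automated from a script; the --check flag asks Ansible for a dry run that previews changes without applying them:

# Driving an Ansible playbook from Python; assumes Ansible is
# installed and that a playbook named site.yml exists.
import subprocess

result = subprocess.run(
    ["ansible-playbook", "site.yml", "--check"],  # --check = dry run
    capture_output=True, text=True,
)
print(result.stdout)
print("Playbook OK" if result.returncode == 0 else "Playbook failed")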

Figure 3: Client server architecture of Docker
Figure 4: Working flow of Puppet

Chef
This tool is used for checking software designs. Figure 2 shows how to create, test and deploy Chef code.

Key features
• It guarantees that your configuration strategies will stay adaptable, versionable, testable and intelligible.
• It helps to normalise configurations.
• It automates the entire procedure of guaranteeing that all frameworks are accurately designed.

Docker
Docker uses the idea of containers that virtualise the operating system. It very well may be used to bundle the application (for instance, the WAR document) alongside the conditions to be utilised for sending in various situations. The client server architecture of Docker empowers the customer to associate with the daemon, which plays out tasks like structuring, running and distributing the compartments (Figure 3).

Key features
• Docker's compactness is made possible due to its unique innovations in containerisation, frequently found in independent units.
• It bundles everything that an application requires to run — libraries, framework apparatuses, runtime, and so on.
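As a small sketch of that client-daemon model (an illustration added here; it assumes the Docker SDK for Python is installed and a daemon is running), a client can ask the daemon to run a container in a few lines:

# A minimal sketch using the Docker SDK for Python ('pip install docker').
import docker

client = docker.from_env()   # the client connects to the local daemon
# The daemon pulls the image if needed, then creates and runs the container
output = client.containers.run("alpine", ["echo", "hello from a container"])
print(output.decode())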
Puppet Enterprise
This is an open source configuration management tool that designers and activity groups can use to safely work on programs (infrastructure, applications, etc) at any place. It empowers clients to comprehend and follow up on the progressions that occur in applications, alongside in-depth reports and real-time alerts. Clients can distinguish those changes, and remedy the issues. Please refer to Figure 4.

Key features
• It can work on hybrid infrastructure and applications.
• It has the client-server architecture.
• It bolsters the Windows, Linux and UNIX working frameworks.

Figure 5: Old vs new ways of Kubernetes
Kubernetes
Kubernetes, often abbreviated to K8S, is a container orchestration tool that takes containerisation to the next level. It functions admirably with Docker or any of its options. Kubernetes is still exceptionally new; its first version turned up in 2015. It was established by a few Google engineers who were looking for a solution to oversee containers at scale. With Kubernetes, you can gather your containers into legitimate units.

Key features
Some of the platform features that Kubernetes offers are:
• Container grouping using Pods
• Self-healing
• Auto-scalability
• DNS management
• Load balancing
• Rolling updates or rollbacks
• Resource monitoring and logging
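As a brief, hypothetical sketch (not from the article) of working against such a cluster, the official Kubernetes Python client can list the Pods mentioned above, assuming the kubernetes package is installed and a valid kubeconfig exists:

# Listing Pods with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()    # reads credentials from ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)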
Ansible Chef Docker Kubernetes Puppet
ƒƒ Self-healing
Configuration Configuration It’s a Configuration Configuration ƒƒ Auto-scalability
management management container management tool. management tool.
tool. tool. technology. ƒƒ DNS management
It is written in Ruby and Written in Go Written in Go Ruby-DSL (domain- ƒƒ Load balancing
Python. Erlang. programming programming language. specific language) ƒƒ Rolling updates or rollbacks
language. language.
ƒƒ Resource monitoring and logging
Easy for Complex Easiest to Setting up Kubernetes Difficult for Figure 6 depicts the pros and cons
configuration from the manage, requires a lot of planning beginners
management development understand of the five tools featured here. Figure
perspective and isolate. 7 compares these five top DevOps
It is more It’s similar to It is very Kubernetes is suitable It’s more targeted automation tools.
appropriate Puppet different for developers of modern towards operations
for front-end from the applications that don’t require
developers, others and development The road ahead
where some has several background
programming components
The DevOps universe is brimming
might be with unique and remarkable open
needed.
source tools. Mainstream DevOps
It is similar Builds a It delivers Kubernetes brings a lot It may configure tools can help you overcome any
to Puppet in pipeline of configuration of complexity as it is a more than one
producing files. processes. for one larger project with more process at a time hindrance. You can select the device
process at a moving parts that any and make the that suits your business needs and
time, making other DevOps tool, but dependencies a bit
Docker files that also leads to latency complex. immediately watch the improvements
simpler than in executing commands, unfold in your business activities.
bash script making troubleshooting
for process and monitoring Diverse DevOps tools will not only
configuration. cumbersome. work well independently, but also
Figure 7: Comparing the top five DevOps tools work well in combination.
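Several of these features can be seen in a single manifest. The following Deployment is a minimal sketch; the web-demo name and the nginx image are illustrative assumptions, not taken from a particular setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo              # hypothetical name, for illustration
spec:
  replicas: 3                 # Kubernetes keeps three Pods running (self-healing)
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate       # Pods are replaced gradually on updates
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17     # changing this tag triggers a rolling update

Applying a new image tag with kubectl apply rolls the Pods over in batches, and deleting a Pod simply causes a replacement to be scheduled.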

Figure 6 depicts the pros and cons of the five tools featured here. Figure 7 compares these five top DevOps automation tools.

Figure 7: Comparing the top five DevOps tools
Ansible: Configuration management tool. It is written in Python. Easy for configuration management. It is more appropriate for front-end developers, where some programming might be needed. It is similar to Puppet in producing files.
Chef: Configuration management tool. Written in Ruby and Erlang. Complex from the development perspective. It is similar to Puppet. Builds a pipeline of processes.
Docker: A container technology. Written in the Go programming language. Easiest to manage, understand and isolate. It is very different from the others and has several components. It delivers configuration for one process at a time, making Docker files simpler than bash scripts for process configuration.
Kubernetes: Container orchestration tool. Written in the Go programming language. Setting up Kubernetes requires a lot of planning. It is suitable for developers of modern applications. Kubernetes brings a lot of complexity, as it is a larger project with more moving parts than any other DevOps tool; that also leads to latency in executing commands, making troubleshooting and monitoring cumbersome.
Puppet: Configuration management tool. Uses a Ruby DSL (domain-specific language). Difficult for beginners. It is more targeted towards operations teams that don't require a development background. It may configure more than one process at a time, which can make the dependencies a bit complex.

The road ahead
The DevOps universe is brimming with unique and remarkable open source tools. Mainstream DevOps tools can help you overcome any hindrance. You can select the tool that suits your business needs and immediately watch the improvements unfold in your business activities. Diverse DevOps tools will not only work well independently, but also work well in combination.

By: Dr S. Balakrishnan
The author is a professor at Sri Krishna College of Engineering and Technology, Coimbatore. He has 17 years of experience in teaching, research and administration.


Understanding Continuous
Integration and Continuous
Delivery/Deployment
This article discusses continuous integration (CI) and continuous delivery/deployment (CD),
which are part and parcel of the DevOps software development culture. The goal of all
developers is to produce software that is reliable, reusable, extendable, flexible, correct
and efficient. DevOps ensures this, with CI and CD as integral parts of the process. 

In simple terms, integrate when you commit. Continuous integration implementation doesn't mean fewer bugs by itself. Rather, it highlights the issues or bugs in the early stages, and hence is useful because the earlier in the development cycle you fail, the faster you recover!

The benefits of detecting bugs early
In the case of discovering failures or bugs, it becomes the priority of the respective stakeholders to focus on solving build or continuous integration issues at the earliest and fix the broken build.
Continuous integration (CI) is a popular DevOps practice that requires the development team to commit code into a shared repository (centralised version control or distributed version control) as and when a feature is completed or a bug is fixed. Each commit goes through a build validation process using an automated build process with any automation tool feasible, based on the knowledge or the culture of the organisation.
It is important for the development team to commit frequently whenever a feature is implemented or a bug is fixed. There are still some developers who commit code even if it is not properly tested or is not working fine. There are also instances when code is committed even if the compilation fails; or when libraries are hard coded and the path in the developer's system is different, and hence the compilation fails.

Figure 1: Continuous integration (code commit, static code analysis with quality gate, compilation, unit test execution, code coverage quality gate, build package)

The solution
Please don't commit if you are not ready. This piece of advice holds good for life as well as while developing software! After each commit, build validation has to pass, all quality gates have to be cleared and things must work successfully in the pipeline. Even if the pipeline fails, it must be fixed immediately. This failure helps the Jenkins engineer and developer to grow.
Failure should be the motive that inspires us to continue to fix problems and improve continuously, even if a temporary failure creates a road block. We must not avoid build failures but address them head on. Build success is only one step away when the issues are fixed.
When you decide to go down the path of cultural transformation, think about the satisfaction you'll get when, after addressing multiple failures, your software eventually gets the green status.

Benefits, best practices and challenges of CI
We can understand some of the benefits of continuous integration from Figure 2. Best practices make it easy to implement continuous integration within an organisation, without resistance; please refer to Figure 3.
The major challenge with continuous integration is the mindset! People resist it because all developers are so used to the IDE, its features and functionality that any new tool is not welcomed. It is important to start with a proof-of-concept and a demo of all the automation activities orchestrated in the automation server. This helps to develop a mindset that is ready to accept new practices. Once the resistance is overcome, the development team can use CI practices effectively because it helps to boost productivity.

Figure 2: Benefits of continuous integration

Figure 3: Best practices of continuous integration
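The flow in Figure 1 maps naturally onto pipeline stages. The following declarative Jenkins sketch only illustrates that mapping; the stage bodies are placeholders, not a prescribed setup:

pipeline {
    agent any
    stages {
        stage('Static Code Analysis') {
            steps {
                echo 'Run the SCA tool and enforce the quality gate here'
            }
        }
        stage('Compile and Unit Test') {
            steps {
                echo 'Compile the code and execute the unit tests here'
            }
        }
        stage('Build Package') {
            steps {
                echo 'Create the deployable artifact here'
            }
        }
    }
}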
Continuous delivery
Continuous delivery and continuous deployment are the next logical steps when implementing DevOps practices. Continuous integration creates an artifact that needs to be deployed in a development or QA stage, or a production environment. The next step is automated deployment in different environments. These environments can be on-premise or on the cloud. Resources can be physical or virtual machines, or containers. Services can be Infrastructure as a Service or a Platform as a Service, where the artifact can be deployed.
The continuous delivery phase represents activities to deploy an artifact in non-production environments using scripts or deployment tools. The artifact is always ready to be deployed in production. Usually, continuous delivery is a preferred practice since there's less risk involved with it.


Figure 4: CI/CD

Figure 5: Benefits of continuous delivery

Figure 6: Best practices of continuous delivery

The continuous deployment phase represents activities to deploy the artifact in production environments using scripts or deployment tools. The artifact is directly deployed in production. Usually, continuous delivery is preferred to continuous deployment.

Benefits, best practices and challenges of continuous delivery
Success and failure are both equally important during digital and cultural transformation activities. Let's understand some of the benefits of continuous delivery from Figure 5. The best practices that make it easy to implement continuous delivery are shown in Figure 6.
A major challenge with continuous delivery is gaining control over the environment and people's mindset. The dev environment might be in control of the development team, yet there are specific instances when this is not the case. VDI (virtual desktop infrastructure) is given to developers to create code for an application or different components of an application. Most of the time, environments are controlled by customers and they are not willing to relinquish this due to security reasons; hence, automated deployment to all the environments might not be feasible all the time.
Cloud resources and containers are changing the game. The way cloud services are used for infrastructure as well as for the DevOps setup helps to automate deployment in different environments.
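In pipeline terms, the difference between continuous delivery and continuous deployment often comes down to one approval step before the production stage. The following is a minimal, illustrative sketch; the stage names and the prompt text are assumptions:

pipeline {
    agent any
    stages {
        stage('Deploy to QA') {
            steps {
                echo 'Automated deployment to a non-production environment'
            }
        }
        stage('Approval') {
            steps {
                // Continuous delivery: a human approves the production push.
                // Removing this gate turns the flow into continuous deployment.
                input message: 'Deploy to production?'
            }
        }
        stage('Deploy to Production') {
            steps {
                echo 'Scripted deployment to the production environment'
            }
        }
    }
}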
References
[1] Agile, DevOps and Cloud Computing with Microsoft Azure: https://www.amazon.in/Agile-DevOps-Cloud-Computing-Microsoft/dp/9388511905
[2] Continuous Integration: https://www.martinfowler.com/articles/continuousIntegration.html

By: Mitesh S.
The author has written the book, 'Agile, DevOps and Cloud Computing with Microsoft Azure'. He contributes occasionally to https://clean-clouds.com and https://etutorialsworld.com.


Building the DevOps Pipeline


with Jenkins
Jenkins is an open source tool that provides integration with the tools used in
application life cycle management to automate the entire process, based on feasibility.

The prerequisites for installing Jenkins on a specific system are:
• Jenkins official documentation recommends 256MB of RAM.
• Jenkins official documentation recommends 10GB of drive space; however, I suggest 50GB-80GB of free space on a disk drive.
• Jenkins official documentation recommends Java 8 or 11.
The commands to run Jenkins are as follows. You can start Jenkins at the command line by using the generic Java package (.war):

java -jar jenkins.war

To access the Jenkins dashboard, browse to http://localhost:8080. Change the port on which Jenkins runs with the following command:

java -jar jenkins.war --httpPort=9999

Then access Jenkins' dashboard by browsing to http://localhost:9999.
The first visit will redirect the user to the unlock Jenkins page, which asks for the administrator password; this can be found in the console log when the jenkins command is executed for the first time.
Click on Install Suggested Plugins. Once all the plugins are installed, create your first admin user (Figure 2) and click on Save and Continue. Provide the Jenkins URL or keep the default, depending on your needs. Once the Jenkins setup is complete, click on Start using Jenkins. Verify the Jenkins dashboard. Click on Manage Jenkins.
Go to Manage Jenkins -> Global Tool Configuration.
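As an aside, Jenkins can also be run as a container instead of from the .war file. A minimal sketch using the official jenkins/jenkins:lts image (the volume name jenkins_home is an illustrative choice):

docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  --name jenkins jenkins/jenkins:lts

# The initial administrator password can then be read from the container:
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword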

Figure 1: Customise plugins


Figure 2: Admin user

Figure 3: Jenkins instance configuration

Figure 4: Jenkins dashboard

Figure 5: Global Tool Configuration

Figure 6: Upstream and downstream projects

The Build Pipeline plugin
Using Jenkins and the build pipeline makes us realise the complexities of managing Jenkins over time. It is easier to use the Build Pipeline plugin in Jenkins to create pipelines with upstream and downstream projects (Figure 6). Execute the pipeline as shown in Figure 7.

Scripted pipelines
Jenkins 2.0 and later versions have built-in support for delivery pipelines. Jenkinsfile contains the script to automate application life cycle management activities. There are two types of pipelines in Jenkins, as of today. This means that a Jenkinsfile can contain two different styles/syntaxes and yet achieve the same thing.
Scripted pipelines follow the imperative programming model. They are written in Groovy script in Jenkins. Groovy blocks/constructs help to manage flow as well as error reporting.

node {
    /* Stages and Steps */
}

node {
    stage('SCA') {
        // steps
    }
    stage('CI') {
        // steps
    }
    stage('CD') {
        // steps
    }
}


Figure 7: Build Pipeline plugin

Figure 8: Blue Ocean plugin


Declarative pipelines
As the name suggests, a declarative pipeline follows a declarative programming model. Declarative pipelines are written in a domain-specific language in Jenkins that is clear and easy to understand.

pipeline {
    agent any
    stages {
        stage('SCA') {
            steps {
                //
            }
        }
        stage('CI') {
            steps {
                //
            }
        }
        stage('CD') {
            steps {
                //
            }
        }
    }
}
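A declarative pipeline can also pick up the tools registered under Global Tool Configuration and publish test results. This sketch assumes a Maven installation has been configured there under the name maven-3.6; that name is an assumption for illustration:

pipeline {
    agent any
    tools {
        // Must match a Maven installation name configured under
        // Manage Jenkins -> Global Tool Configuration (assumed here)
        maven 'maven-3.6'
    }
    stages {
        stage('Build and Test') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
    post {
        always {
            // Publish JUnit-style reports produced by the build
            junit 'target/surefire-reports/*.xml'
        }
    }
}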

Blue Ocean
Blue Ocean provides an easy way to create a declarative pipeline using the new user experience available in its dashboard. It is like creating a script by selecting components, steps or tasks.
Open the Blue Ocean dashboard and click on Create Pipeline. Connect with the required repository; then create stages, select the steps and configure them.
Blue Ocean is a new user experience, and it provides an easy way to access unit test results (Figure 10). Click on Pipeline to get the status of the pipeline. Click on the specific stages to access the logs of these stages. Click on Artifacts to access the package file and other artifacts (Figure 12).
The following list of open source tools can be integrated in the pipeline to implement DevOps practices.

Figure 9: Blue Ocean repository

Figure 10: Blue Ocean tests

Figure 11: Blue Ocean automated deployment


Figure 12: Blue Ocean artifacts

Tool Description

Travis CI Hosted continuous integration service that supports integration with BitBucket and GitHub.
https://travis-ci.org/

GoCD GoCD is a build and release tool that helps to perform end-to-end orchestration for application life
cycle management activities.
https://www.gocd.org/

Nagios Nagios is an open source tool that can be used to monitor network and infrastructure.
https://www.nagios.org/

Docker This is a very popular container management tool. Kubernetes supports Docker as a container provider.
https://www.docker.com/

Ansible Ansible is used for automation, such as for configuration management and continuous delivery.
https://www.ansible.com/

Collectl This is used to gather the performance data of systems such as CPU, network, and data.
http://collectl.sourceforge.net/

GitHub This provides a repository for public and private access to maintain version control. It is hosted.
https://github.com/

Kubernetes This is one of the most popular container orchestration tools available in the market.
https://kubernetes.io/

Artifactory This provides community and enterprise versions of artifact management tools.
http://www.jfrog.com/artifactory/

CruiseControl CruiseControl is a Java based, open source continuous integration framework.


http://cruisecontrol.sourceforge.net/

50  |  JUNE 2020  |  OPEN SOURCE FOR YOU  |  www.OpenSourceForU.com


Focus

Selenium Selenium is a popular automated functional testing tool that is used for Web applications. It is open
source.
https://www.selenium.dev/
Appium Appium is a popular automated functional testing tool that is used for mobile applications.
It is open source.
http://appium.io/
SonarQube This is used to analyse the code to track bugs, security vulnerabilities, and code smells. It supports
more than 15 programming languages.
https://www.sonarqube.org/
SaltStack This is a Python based open source tool for configuration management and remote task execution.
https://www.saltstack.com/
Apache JMeter Apache JMeter is an open source tool used for load testing and measuring the performance of applications.
https://jmeter.apache.org/
OWASP ZAP This is used to scan security issues of applications for penetration testing. It is an active open source
Web application security scanner.
https://www.zaproxy.org/
Ant Apache Ant is an XML based build management tool for Java based projects.
https://ant.apache.org/
Gradle This is a popular build management tool for Android based projects. It is also used in Java based applications. It supports domain-specific languages.
https://gradle.org/
Maven Apache Maven is one of the most popular build tools, with multiple goals for an application’s life cycle
phases such as build, test, and deploy. It is mainly used for Java projects.
http://maven.apache.org/
Hygieia This is a one-of-a-kind DevOps dashboard that helps to integrate with tools such as Bamboo,
Jenkins, Jenkins-codequality, Jenkins Cucumber, Sonar, AWS, uDeploy, XLDeploy, Jira, VersionOne,
Gitlab, Rally, Chat Ops, Score, Bitbucket, GitHub, Gitlab, Subversion, GitHub GraphQL, HP Service
Manager (HPSM), AppDynamics, Nexus IQ and Artifactory. It has two types of dashboards — one for
engineers and the other for executives.
https://www.capitalone.com/tech/solutions/hygieia/
CFEngine This is a popular DevOps tool that is used to automate IT infrastructure related operations.
https://cfengine.com/
GitLab This is a Git repository. Now it also provides support for automation pipelines to configure continuous
integration and continuous deployment.
https://about.gitlab.com/
JUnit This is a popular yet simple unit testing framework to write unit tests for the Java programming language.
https://junit.org/
Jasmine Jasmine is a popular behavioural data driven framework to verify JavaScript based applications.
http://jasmine.github.io/

References
[1] Agile, DevOps and Cloud Computing with Microsoft Azure: https://www.amazon.in/Agile-DevOps-Cloud-Computing-Microsoft/dp/9388511905
[2] Continuous Integration: https://www.martinfowler.com/articles/continuousIntegration.html

By: Mitesh S
The author has written the book 'Agile, DevOps and Cloud Computing with Microsoft Azure'. The title of his upcoming book is 'Hands-on Azure DevOps'.


Understanding DevOps:
A Revolution in Software
Development
In the field of IT and software development, DevOps implementation is gaining
significant popularity. This article takes a quick look at the important concepts of
DevOps and the phases of its life cycle. It also maps the relevant open source tools,
and finally, highlights how it can bring value to the IT industry.

The word DevOps is a combination of development and operations. DevOps is not software — not a tool, not a product and not a programming language, but an approach whereby the development and operations teams work together, instead of waiting for each other to finish tasks, to enable continuous delivery of value to end users. This software development methodology improves the collaboration between the development and the operations teams using various automation tools. These tools are implemented during different phases, which are a part of the DevOps life cycle.

DevOps is gaining popularity
DevOps is gaining popularity because it bridges the gap between developers and the supporting operations team. The DevOps approach aims at:
• Delivering high-quality software in a shorter development life cycle.
• Deployment in frequent cycles.
• Reducing the time to move software into production, from conceptualising an idea.
• Accelerating application delivery across enterprise portfolios.
• Enabling rapid building and delivery of products.
• Delivering applications and services at high velocity.

Various stages in the DevOps life cycle
An understanding of DevOps is incomplete without having knowledge of its life cycle. The various phases that comprise the DevOps life cycle are highlighted in Figure 1 and described below.
1. Continuous development: The first phase of the DevOps life cycle involves 'planning' and 'coding' of the software. Planning includes activities like designing the blueprint of the module, and identifying the resources and the algorithm to be used. Once the plan is finalised, the developers code the application and maintain it using popular tools like Git, Gradle and Maven.
2. Continuous testing: To catch any error and ensure the reliability of the software, the developed modules are continuously tested for bugs. In this phase, testing tools like Selenium (Se), TestNG, JUnit, etc, are used.
3. Continuous integration: This phase is the heart of the DevOps life cycle, in which code supporting new functionality in the Git repository is continuously reflected using the Jenkins tool.

Figure 1: Different phases of the DevOps life cycle


Figure 2: Categories of popular DevOps tools

4. Continuous deployment: The deployed applications are consistently improved by collecting end user experiences from the obtained results of the software. Popular tools that are used for the deployment stage are Ansible, Puppet and Docker.
5. Continuous monitoring: This is the stage when the performance of the deployed product is monitored. In this phase, the production server is continuously monitored for all kinds of activities that happen on it, and the log files are generated. Nagios is a popular tool for this phase.
Careful implementation of these five phases of the DevOps life cycle helps an IT organisation to achieve its business objectives. Various tools that contribute to the DevOps life cycle are highlighted in Figure 2.

The pros and cons of adopting DevOps
The pros and cons that should be considered when adopting DevOps are listed here.

The pros
• It facilitates faster code deployment into production.
• Ensures the release of quality products.
• It helps in reducing the chances of software failures.
• Enables continuous delivery of software with stable features.
• Helps in identifying bugs at an early stage of development, due to better communication between the development and operations teams.
• Implementation of DevOps also increases the organisation's opportunities for innovation and research.

The cons
• It takes time for users to adjust to the frequent changes that are made.
• Lack of a separate plan for consistent upgradation may lead to severe shortfalls with respect to security.
• IT companies need to hire experts who have an in-depth understanding of development, testing, monitoring and deployment tools, and that can be a challenge.

With technology evolving very rapidly, IT companies that do not adopt DevOps may find it difficult to compete in the market. The adoption of the DevOps model is inevitable, because it bridges the gap between the developer and operations teams.

References
These links are in addition to those provided in the body of the article itself.
[1] https://devops.com/
[2] https://www.edureka.co/blog/devops-lifecycle/
[3] https://intellipaat.com/blog/tutorial/devops-tutorial/
[4] https://www.simplilearn.com/cloud-computing/devops
[5] A. Bhardwaj and C. Rama Krishna, 'A Container-based Technique to Improve Virtual Machine Migration in Cloud Computing', IETE Journal of Research, pp. 1-16, 2019.

By: Dr Aditya Bhardwaj
The author works as assistant professor at PEC (a deemed university), Chandigarh. He has experience in cloud computing and Big Data technology.


How DevOps Differs from


Traditional IT and Why
DevOps is the buzzword in the software development industry. But how much of the
hype associated with this new technology is warranted? This article demystifies DevOps
and explains to the reader why it scores over traditional IT practices.


DevOps is a set of practices that promotes partnership between the development and operations teams to deploy code to production faster, in an automated and continuous manner. The word 'DevOps' is a combination of the words 'development' and 'operations'. DevOps helps to increase an organisation's speed in providing applications and services. It allows companies to serve their customers better and compete more powerfully in the market. In simple words, DevOps can be interpreted as an arrangement of development and IT operations with better communication and collaboration.

Why it is needed
• Before DevOps, the development and operations teams worked in a completely segregated way.
• Testing and deployment were different activities done separately after design-build. Hence, they required more time.
• Using traditional IT practices, team members spent large amounts of their time in testing, deploying and designing, instead of building the project.
• Manually deploying the code leads to human errors in production.
• Different teams have different timelines. This leads to problems of synchronising timelines, causing more delays in deployment.

Comparing DevOps and traditional IT
When comparing traditional IT ops with DevOps, it's clear how they differ and why the latter is increasingly embraced by organisations worldwide. Given below are some points of comparison.

1. Time
DevOps teams spend 33 per cent more time to refine infrastructure against failure than traditional IT ops teams. In addition, DevOps teams spend about 21 per cent less time putting out fires on a weekly basis and 37 per cent less time handling support cases. DevOps teams also waste less time on administrative support due to a higher level of automation, self-service tools and scripts for support tasks. With all of this additional time, DevOps teams are able to spend 33 per cent more time to ameliorate infrastructure and 15 per cent more time on self-enhancement through further education and training.

2. Data and speed
DevOps teams tend to be small, agile, driven by innovation and focused on addressing tasks in an accelerated manner. According to a Gartner report, they work on the mantra, "Don't fail fast in production; embed monitoring earlier in your DevOps cycle." Agility is one of the top five objectives of DevOps. In the case of traditional IT ops, the data count for the feedback loop is confined to the service or application being worked upon. If there's a downstream effect that is not known or noticed, it can't be addressed. It's up to IT ops to pick up the pieces. That is the reason why DevOps is faster in delivering business applications, and the challenge for IT ops is to keep pace with the speed of business.

3. Recovery and crunch time
The average DevOps team sees only two app failures per month, and the recovery time is less than 30 minutes for over 50 per cent of all respondents. Of the DevOps teams surveyed, 71 per cent can recover from failures in less than 60 minutes, while 40 per cent of traditional IT ops teams need over an hour to recover. A key practice of DevOps is to be prepared for the possibility of failures. Continuous testing, alerts, monitoring and feedback loops are put in place so that DevOps teams can react quickly and effectively. Traditional IT ops teams are almost twice as likely to require more than 60 minutes to recover, while recoveries in less than 30 minutes are 33 per cent more likely for DevOps teams. Automated deployments and an infrastructure that's programmable are the key features for quick recovery.

4. Release of software
When it comes to releasing software, DevOps teams need roughly 36.6 minutes to release an application whereas traditional IT ops teams need about 85.1 minutes. This means that DevOps teams release apps more than twice as fast as traditional IT ops teams.

Why DevOps is better
There are many advantages of using DevOps rather than traditional IT.
• Reduced chances of product failure: Software delivered by DevOps teams is usually more fit-for-purpose and relevant to the market, thanks to the constant feedback loop.
• Improved flexibility and support: Applications developed by DevOps teams are typically more expansive and easy to maintain due to the use of microservices and cloud technologies.
• Faster time to market: Application deployment becomes quick and dependable, thanks to the advanced continuous integration (CI) and automation tools DevOps teams usually count on.
• Better team efficiency: DevOps means joint responsibility, which leads to better team engagement and productivity.
• Clear product vision within the team: Product knowledge is no longer spread across different roles and departments, which means better process transparency and decision making.

The DevOps culture comes with a variety of rewards, some of which include greater efficiency, security and organisational collaboration. The 2017 State of DevOps Report quantifies this increase in efficiency, reporting that high performing organisations employing DevOps practices spend 21 per cent less time on unplanned work and rework, and 44 per cent more time on new work. More generally speaking, however, successfully implementing DevOps practices can have a profound impact on a company through improving efficiency and execution in areas that are both essential and decidedly unglamorous.
Fredrik Håård, an engineer with over 12 years of DevOps experience who has worked as a senior cloud architect at McKinsey and at Wondersign, articulates this point more fully: "Good DevOps engineers must be champions – and take responsibility for all the areas that might not be prioritised by the organisation such as data security, disaster recovery, mitigation, and audits." He adds, "The choices you make in DevOps can have long-lasting effects at a company."
So the conclusion is, DevOps teams get more time and solve problems faster. They spend more time ameliorating things, less time fixing things, recover from failures faster and release applications more than twice as fast as traditional IT ops. Through DevOps, all members of different teams work together because they have the same goal, which is to deliver quality software to the market.

References
[1] https://en.wikipedia.org
[2] https://www.toptal.com

By: Neetesh Mehrotra
The author works at NIIT Technologies as a senior test engineer. His areas of interest are Java development and automation testing.


RCloud is DevOps for Data Science
DevOps is the collaboration between the development and deployment streams of a
software system. It increases productivity by reducing the time between the development
and deployment of software. RCloud is a platform that accelerates data analysis related
insights by reducing the time between coding and deployment.

DevOps for data science is gaining popularity as it involves infrastructure, configuration management, integration, testing and monitoring. Hence, it accelerates data analysis insights. DevOps supports data scientists by creating integrated environments for the most vital tasks such as data exploration and visualisation. Data scientists need varied types of infrastructure to handle any complex project. DevOps does the provisioning and configuration of infrastructure for a variety of environments. Any data science model is iterative in nature as it involves new data that needs to be trained, and based on that, it evolves new models that need to be made available to users. For this, the data scientist applies continuous integration and deployment practices. DevOps bridges the gaps between the training environment and the model deployment environment through continuous integration and continuous deployment pipelines.
RCloud is considered as DevOps for data science as it resolves the data analysis development and deployment issues of collaboration, sharing, scalability and reproducibility. RCloud was created at AT&T Labs by Simon Urbanek, Gordon Woodhull and Carlos Scheidegger, and is open source software. It is a Web based platform for all aspects of data science such as analytics, visualisation and collaboration. It uses the R language for all tasks.

Features of RCloud
• Can be used from anywhere: It is browser based software; so if you have Internet connectivity, you can use it from anywhere.
• Project container facility: An RCloud notebook consists of all the required components and associated data dependencies. It contains dependencies of data analysis that include code, comments, equations, visualisations, etc.
• Association feature: RCloud provides an excellent capability for association. It is browser based software that is installed on a server or any distributed environment such as Hadoop. It provides access to all notebooks so that new users can view, copy, edit and update the analysis and visualisations with new data sets. It is very efficient and easy to use as it only needs a browser.
• Distribution capability: RCloud is based on the URL sharing concept. It delivers value faster. It is agile enough for us to share the URL at any stage of analysis.
• Scalability: RCloud can perform parallel connections to multi-server systems. It provides the data scientist with the flexibility to run Big Data packages without writing complex code.
• User directory based access: It contains a user directory that provides access to the notebooks of every user registered with RCloud.
• Reproducibility: A data analysis on RCloud can be verified and executed by anyone with access to the notebooks, without any concern about environment variables.
• Live code execution: RCloud notebooks are not static Web pages but code that is executed live.
• Unique RCloud Web service interface: RCloud provides a unique Web service interface through which any notebook asset can be integrated with other technologies by simple means.
• Promotes user engagement: RCloud is platform-independent. Access and control remain constant, which increases user confidence and engagement.

Why RCloud?
For effective results in a data science project, certain information must be shared among team mates. They must be agile enough to address new features and functionalities. They must move their results through different levels of the data science project such as data pre-processing, exploratory data analysis, predictive modelling and visualisation. RCloud is the perfect software to address these issues as it contains association, distribution, scaling and reproducibility (ADSR) characteristics. RCloud is very vital for data science for many reasons:
• RCloud is an open source Web based platform for data science. It has excellent capability to help you share your ideas or work with your team mates. Figure 1 describes the RCloud components on GitHub.
• RCloud provides a platform for the data scientist to search relevant things without reinventing the wheel.
• It provides fast interaction with data in the Hadoop Distributed File System (HDFS) or similar kinds of distributed file systems. This feature is very well suited to Big Data analytics.
• RCloud differs from other DevOps tools for data science as it provides browser based access.
• It gives a lot of flexibility to users to create any type of complex widgets, notebooks or dashboards.
• Both registered and non-registered users can view or interact with live notebooks of the RCloud environment.
• Communication in RCloud is done with standard communication protocols such as HTTP.
• RCloud provides data scientists with the capability to run Big Data packages without writing complex code.
• RCloud is well suited for Big Data applications due to its scalability feature.
• It provides great security features so that unauthorised clients cannot make calls to the RCloud runtime environment. Notebooks of RCloud can also be encrypted for advanced security. Authenticated client-server channelling is also a unique feature of RCloud.
• RCloud maintains automatic Git based trails of code modifications.

Figure 1: RCloud on GitHub

DevOps involves infrastructure, configuration management, integration, testing and monitoring. RCloud is considered as DevOps for data science as it resolves the data analysis development to deployment issues of ADSR. In this article, we have described the main features of RCloud in detail, with an emphasis on those that are very essential for data scientists.

References
[1] https://www.forbes.com/sites/janakirammsv/2018/11/04/the-growing-significance-of-devops-for-data-science/#7120d32a7481
[2] http://stats.research.att.com/RCloud/
[3] https://www.kdnuggets.com/2016/11/rcloud-devops-data-science.html

By: Dr Dharmendra Patel and Dr Atul Patel
Both the authors are associated with the Smt. Chandaben Mohanbhai Patel Institute of Computer Applications, Charusat, Gujarat. Their areas of interest are data mining, data science, artificial intelligence, deep learning and image processing.
with the capability to run Big


How Prometheus Helps to


Monitor a Kubernetes
Deployment
The ubiquitous usage of Kubernetes necessitates accurate and timely monitoring
of its clusters. Prometheus and Sensu are the tools to latch on to for this.

Development combined with operations leads to the high impact DevOps practices, and microservices based architecture is ubiquitous in such an environment. Although such architecture is not new — it's been around since the 1980s – the DevOps practice is relatively new. The idea of DevOps began in 2008 with a discussion between Patrick Debois and Andrew Clay Shafer, concerning the concept of agile infrastructure. In June 2009, the epochal presentation of '10+ Deploys a Day: Dev and Ops Cooperation' was made at Flickr by John Allspaw and Paul Hammond. That year can be treated as the year the DevOps movement began. The Linux containerisation technology based Docker came into play in 2013. In 2014, we saw the journey of Kubernetes begin as an orchestration attempt for multiple Docker containers from Google. Subsequently, Kubernetes began to be maintained by the Cloud Native Computing Foundation (CNCF). The consequent proliferation of Kubernetes led to the absolute necessity of monitoring its deployment in DevOps. This resulted in open source tools like Prometheus and Sensu, along with the older Nagios. Let's take a closer look at Prometheus, while getting the basics on Sensu.

Monitoring tools
We typically talk about the following three tools in the context of Kubernetes monitoring.
1. Prometheus: This open source Kubernetes monitoring tool collects the metrics of a Kubernetes deployment from various components, relying on the pull method. The data is stored in an inbuilt time series database.
2. Sensu: This complements Prometheus. It provides flexibility and scalability to measure telemetry and execute service checks. The collected data can be more contextual here, and it is possible to extend it to provide automatic remediation workflows as well.
3. Nagios: This is the old friend we used to monitor deployments. It serves its purpose well, especially in the context of bare metal deployments, to a host of administrators.

Kubernetes monitoring challenges
When we decide to go with Kubernetes, we are ready to embrace the mantra that the only constant in life is change. Naturally, this adds associated challenges with respect to monitoring Kubernetes deployments. Let's discuss a few monitoring challenges to understand them better.
1. The apps are ever evolving; they are always moving.
2. There are many moving pieces to monitor. Kubernetes is not really a monolithic architectural tool, as we all know. Those components also keep on changing.
3. Once upon a time, everything was server based, often bare metal. Then we got cloud based deployment. The natural progression makes multi-cloud a reality, which adds an extra facet to the monitoring challenges.
4. We typically annotate and tag the pods and containers in a Kubernetes deployment to give them nicknames. The monitoring should reflect the same.

Kubernetes data sources
Once we understand the challenges of monitoring in a Kubernetes environment, we need to understand the moving parts of a typical, sizable deployment. These moving parts are the data sources from which Prometheus pulls the monitoring metrics.
1. The hosts (nodes) which are running the Kubelets.
2. The process associated with Kubelets — the Kubelet metrics.
3. The Kubelet's built-in cAdvisor data source.
4. The Kubernetes cluster; this is the whole deployment. From a monitoring perspective, these are the kube-state metrics.
The good part is that Prometheus can monitor all of the above, addressing all the monitoring challenges we described earlier.

The Prometheus architecture
The heart of Prometheus is its server component, which has a collector that pulls the data from different data sources to store it in an inbuilt time series database (TSDB). It also has an API server (HTTP server) to serve the API request/response.
The unique feature of Prometheus is that it is designed for pulling data, to make it scalable. So, in general, one need not install an agent on the various components to monitor them. However, Prometheus also has an optional push mechanism called Pushgateway.
The Alert Manager is the component used to configure the alerts and send notifications to various configured notifiers like email, etc. PromQL is the query engine that supports the query language. Typically, it is used with Grafana, although Prometheus has an intuitive inbuilt Web GUI, so data can be exported as well as visualised. For the sake of brevity, we are only covering the Prometheus Web GUI, not the Grafana based visualisation.

Figure 1: Prometheus architecture (Reference: https://prometheus.io/docs/introduction/overview/#architecture)

Figure 2: The Prometheus target dashboard

Figure 3: The Prometheus graph dashboard

So let us play around with Prometheus. It is now time to get our hands dirty. A typical Prometheus installation involves the following best practices:
1. Install kube-prometheus.
2. Annotate the applications with Prometheus's instrumentation.
3. Label the applications for easy correlation.
4. Configure the Alert Manager for receiving timely and precious alerts.
Let us configure Prometheus in a demo-like minimalistic deployment to get a taste of it. To configure prometheus.yml, use the following code:

global:
  scrape_interval: 30s
  evaluation_interval: 30s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9090', '127.0.0.1:9100']
        labels:
          group: 'prometheus'
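The file can be sanity-checked before the server is started. As a small aside, assuming the promtool utility that ships with Prometheus releases is available on the path:

$ promtool check config prometheus.yml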


To start Prometheus, type:

docker run -d --net=host \
  -v /root/prometheus.yml:/etc/prometheus/prometheus.yml \
  --name prometheus-server \
  prom/prometheus

For starting the node exporter, the code snippet is:

docker run -d \
  -v "/proc:/host/proc" \
  -v "/sys:/host/sys" \
  -v "/:/rootfs" \
  --net="host" \
  --name=prometheus \
  quay.io/prometheus/node-exporter:v0.13.0 \
  -collector.procfs /host/proc \
  -collector.sysfs /host/sys \
  -collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

Note: The paths shown here are purely local to a demo deployment. You may have to adjust the paths as per your own deployment.

1. Raw metrics can be viewed with curl localhost:9100/metrics; in the browser, they can be viewed at the /metrics endpoint.
2. The targets view can be found at the /targets URL. For production, Grafana is the preferred tool.
3. The graph view can be found at the /graph URL. Any metric collected (viewable using the /metrics endpoint) can be viewed in /graph.

Figure 4: Sensu deployment (Reference: https://blog.sensu.io/monitoring-kubernetes-docker-part-3-sensu-prometheus)
Prometheus [5] https://blog.sensu.io/monitoring-kubernetes-docker-part-3-sensu-prometheus
[6] https://www.katacoda.com/courses/observability-analysis/prometheus
Prometheus has lots of advantages
because of its scalability, pluggability
and resilience. It is tightly integrated
with Kubernetes and has a thriving By: Pradip Mukhopadhyay
supportive community. However, The author has 19 years of experience across the stack — from low level systems
it has a few disadvantages also. For programming to high level GUIs. He is a FOSS enthusiast and currently works for
NetApp, Bengaluru.
example, the simplified constrained

www.OpenSourceForU.com  |  OPEN SOURCE FOR YOU  | JUNE 2020  |  61



DevOps vs Agile:
What You Should Know
About Both
DevOps is a practice of bringing the development and operations teams
together, whereas agile is an iterative approach that focuses on collaboration,
customer feedback and small rapid releases. This article highlights the
differences between the two software development technologies.

DevOps is a software development method that focuses on communication, integration and collaboration among IT professionals, to enable rapid deployment of products. On the other hand, the agile methodology involves continuous iteration of development and testing in the SDLC process. In this software development method, the emphasis is on iterative, incremental and evolutionary development.

Key differences
• DevOps is a practice of bringing the development and operations teams together, whereas agile is an iterative approach that focuses on collaboration, customer feedback and small, rapid releases.
• DevOps focuses on constant testing and delivery while the agile process focuses on constant changes.
• DevOps requires a relatively large team while agile requires a small team.
• DevOps leverages both shift-left and shift-right principles; on the other hand, agile leverages the shift-left principle.
this software development method, the and small rapid releases. left principle.

Figure 1: The communication chain between stakeholders in a typical IT process

Figure 2: Agile addresses the communication gaps between the customer and developer


Figure 3: DevOps addresses the communication gaps between the developer and IT operations teams

Differences between agile and DevOps

What is it?
Agile: Agile refers to an iterative approach that focuses on collaboration, customer feedback and small, rapid releases.
DevOps: DevOps is considered a practice of bringing the development and operations teams together.

Purpose
Agile: Agile helps to manage complex projects.
DevOps: DevOps' central concept is to manage end-to-end engineering processes.

Task
Agile: The agile process focuses on constant changes.
DevOps: DevOps focuses on constant testing and delivery.

Implementation
Agile: The agile method can be implemented within a range of tactical frameworks like 'sprint', 'safe' and 'scrum'.
DevOps: The primary goal of DevOps is to focus on collaboration, so it doesn't have any commonly accepted framework.

Team skillset
Agile: Agile development emphasises training all team members to have a wide variety of similar and equal skills.
DevOps: DevOps divides and spreads the skillset between the development and operations teams.

Team size
Agile: A small team is at the core of agile, because the smaller the team, the faster it can move.
DevOps: A relatively larger team size, as it involves all the stakeholders.

Duration
Agile: Agile development is managed in units of 'sprints'; each sprint lasts much less than a month.
DevOps: DevOps strives for deadlines and benchmarks with major releases. The ideal goal is to deliver code to production daily, or every few hours.

Feedback
Agile: Feedback is given by the customer.
DevOps: Feedback comes from the internal team.

Target areas
Agile: Software development.
DevOps: An end-to-end business solution and fast delivery.

Shift-left principles
Agile: Leverages shift-left.
DevOps: Leverages both shift-left and shift-right.

Emphasis
Agile: Agile emphasises the software development methodology for creating software. Once the software has been developed and released, the agile team does not care what happens to it.
DevOps: DevOps is all about taking software that is ready for release and deploying it in a reliable and secure manner.

Cross-functionality
Agile: Any team member should be able to do what's required for the progress of the project. When each team member can perform every job, it increases understanding and bonding between them.
DevOps: In DevOps, the development and operational teams are separate, so communication is quite complex.

Communication
Agile: Scrum is the most common method of implementing agile software development, and daily Scrum meetings are carried out.
DevOps: DevOps communications involve specs and design documents. It's essential for the operational team to fully understand the software release and its hardware/network implications to adequately run the deployment process.

Documentation
Agile: The agile method gives priority to the working system over complete documentation. It is ideal when you're flexible and responsive. However, it can hurt when you're trying to turn things over to another team for deployment.
DevOps: In DevOps, process documentation is foremost because the software is sent to the operational team for deployment. Automation minimises the impact of insufficient documentation. However, in the development of complex software, it's difficult to transfer all the knowledge required.

Automation
Agile: Agile doesn't emphasise automation, though it helps.
DevOps: Automation is the primary goal of DevOps. It works on the principle of maximising efficiency when deploying software.

Goal
Agile: It addresses the gap between customer needs, and the development and testing teams.
DevOps: It addresses the gap between development plus testing, and operations.

Focus
Agile: It focuses on functional and non-functional readiness.
DevOps: It focuses more on operational and business readiness.

Importance
Agile: Developing software is inherent to agile.
DevOps: Developing, testing and implementation are all equally important.

Speed vs risk
Agile: Teams using agile support rapid change and a robust application structure.
DevOps: In the DevOps method, the teams must make sure that the changes that are made to the architecture never pose a risk to the entire project.

Quality
Agile: Agile produces better application suites with the desired requirements. It can easily adapt to changes made in time, during the project's life.
DevOps: DevOps, along with automation and early bug removal, contributes to creating better quality software. Developers need to follow coding and architectural best practices to maintain quality standards.

Tools used
Agile: JIRA, Bugzilla and Kanboard are some popular agile tools.
DevOps: Puppet, Chef, TeamCity, OpenStack and AWS are popular DevOps tools.

Challenges
Agile: The agile method needs teams to be more productive, which is difficult to manage every time.
DevOps: The DevOps process needs the development, testing and production environments to streamline work.

Advantage
Agile: Agile offers a shorter development cycle and improved defect detection.
DevOps: DevOps supports agile's release cycle.

• The target area of agile is software development, whereas the target area of DevOps is to deliver end-to-end business solutions quickly.
• DevOps focuses more on operational and business readiness, whereas agile focuses on functional and non-functional readiness.

By: Harsukhdeep Singh
The author has worked as a QA engineer for the last four years and is an open source enthusiast.


Gulp: A DevOps Based Tool


for Web Applications
This article highlights the features of Gulp, an open source cross-platform
streaming task runner that lets software developers automate many
development tasks. It covers the installation of Gulp, and touches upon the
code required for task and module management.

Software development teams need to work collaboratively on numerous tasks that are time consuming. DevOps is very popular in the corporate world for the automation, collaboration and real-time control that it enables on different types of software projects. Many corporate giants including Amazon, Oracle, Red Hat, SaltStack, Nagios, etc, use DevOps tools for real-world projects.
DevOps refers to the approach used to correlate the software development process with the various IT tasks to provide better software quality and streamlined deliveries. It speeds up the development, testing and deployment of high quality code.
As the name indicates, DevOps is a combination of the development (Dev) and information technology operations (Ops) of an organisation that delivers software reliably, in a shorter time. The main purpose of DevOps is to automate the software development life cycle. The key features associated with DevOps are speed, reliability and better quality. Agile development and enterprise systems management (ESM) make up the foundation of DevOps. Agile is linked to the software development process and ESM covers various processes that involve configuration management, system monitoring, etc. The need for DevOps arises because the operational team and development team work separately. The design, development and testing teams take more time, and more errors occur during deployment.
The architecture of DevOps provides a combined approach for development and operations. This approach involves developing code, testing, planning, monitoring, deployment, operations and the release. This architecture is followed by large applications and those that are hosted on cloud platforms. This is because, in the case of large applications where the development and operational teams do not work in synchronised environments, long delays can occur in the process of design, deployment and testing. DevOps overcomes these delays, maintaining high quality and timely delivery of the product.
The DevOps life cycle consists of seven phases: continuous development, continuous integration, continuous testing, continuous monitoring, continuous feedback, continuous deployment and continuous operations.
• Continuous development: This includes a plan for the software and development processes.
• Continuous integration: This consists of committing changes and integrating code from teams that have worked on the same project and have performed integration testing.
• Continuous testing: During this process, the team identifies bugs in the code with the help of automated testing, which is more reliable and less time consuming than manual testing.
• Continuous monitoring: This monitors all the processes that take place in different phases of the software development life cycle and records information regarding problems.
• Continuous feedback: This improves subsequent versions of the software by removing what is not relevant according to the customer.
• Continuous deployment: This involves code deployment on all the servers.
• Continuous operation: This is the automation of the process and key operations.


DevOps based free and open source tools
DevOps provides a large variety of tools that are useful in the automation of the software development, testing and release processes. Table 1 lists some of these tools.

Table 1: Prominent DevOps tools
Gulp: https://gulpjs.com
Worksoft: https://www.worksoft.com
Kamatera: https://www.kamatera.com
Buddy: https://buddy.works
Nagios: https://www.nagios.org
Chef: https://chef.io
Jenkins: https://www.jenkins.io
Vagrant: https://www.vagrantup.com
Splunk: https://www.splunk.com
Git: https://git-scm.com/
Puppet: https://puppet.com

Gulp: An open source tool for Web applications
Gulp is used by software development teams as the streaming build system with which a number of tasks can be programmed and automated in an effective way. Gulp is an effective task runner that has the strong base of the Node.js platform. It makes use of powerful JavaScript code for the execution of tasks in corporate level Web applications.
In Web development projects, the following are a few of the tasks required to be worked on continuously by the software development teams:
• Formation, generation and transformation of templates
• Compression of multimedia files
• Code linting
• Code validation
• Integration of stylesheets and JavaScript code
• Code generation for cross-browser compatibility
• Deployment of files for staging and uploading to servers
Often, these tasks are repetitive in nature and require a lot of time. Gulp is a powerful and multi-featured tool that can be used to automate and speed up the tasks associated with software project development.

Installing and working with Gulp
To install and work with Gulp, you need to install Node.js. Once Node.js is installed, Gulp can be integrated with the following instruction:

$ npm install --global gulp

Once the installation is finished, the installed version can be verified as follows:

$ gulp -v

Figure 1: Official portal of Gulp Continued to page 69...


DevOps is the Future of Software Development

DevOps scores over legacy, monolithic and agile software development. This article discusses the various stages of the DevOps software development cycle and delineates the appropriate FOSS tools that can be used at each of the stages.

DevOps is a method of software engineering that combines both software development (Dev) and information technology operations (Ops) to create high quality software within a short period of time. DevOps shortens the time taken for software development through continuous delivery and the integration of code. As the name suggests, DevOps is a combination of development and operations.
Traditionally, software development is done by a team of stakeholders comprising business analysts, software engineers, programmers and software testers. The software development is done by the development team through a software life cycle comprising various stages like customer requirement(s), planning, modelling, construction and deployment. However, such a development cycle could take a lot of time and collaborative effort to successfully deliver software to the customer. This is also known as agile software development.
DevOps can help overcome the drawbacks of the agile software development life cycle. In the agile method, every stage of the life cycle needs to be completed prior to the final release, and that could take a lot of time for the software to reach maturity. In the case of DevOps, instead of delivering the whole software, small chunks of code are updated, and the updated software is released to the operations team continuously, which speeds up the software development cycle. The DevOps life cycle can be automated using various development tools and requires less manual activity. The DevOps concept first originated in 2008 during a discussion between Patrick Debois and Andrew Clay Shafer. However, the idea only started to spread in 2009 with the advent of the first DevOps Days event held in Belgium. Now, most tech giants like Facebook, Google, Amazon and Netflix have adopted the DevOps culture.


DevOps life cycle and tools
The DevOps life cycle consists of eight stages: plan, code, build, test, release, deploy, operate and monitor. There are various free and open source software (FOSS) tools that can be used to improve and automate the DevOps life cycle.

Figure 1: DevOps life cycle

The eight stages of DevOps are explained below.
1. Plan: This is the starting stage of DevOps, where the planning for the software development is done by the development team. Everything from the software requirements to the development timeline is planned, and the team works accordingly. FOSS tools like Redmine, Trac and Git are used at this stage.
2. Code: After everything is planned, the coding for the software development is done. Coding can be done from scratch or can be reused, depending on the requirements. Many FOSS tools can be used for coding. Git is one such tool that is used to automate the process.
3. Build: After the coding is done, it is shared with the other software engineers of the development team. The code is either approved or rejected after it is reviewed. If it's approved, then it is merged with the main codebase of the repository. FOSS tools like Gradle can be used to automate this process.
4. Test: Once the new code is merged with the codebase, it is tested in a virtual environment, using a VM or Kubernetes. A series of both manual and automated tests are conducted at this stage. This is the most crucial stage of DevOps and should go through without failure. This stage could also cause bottlenecks and increase the timeline. In order to complete the stage successfully without much delay, various continuous testing tools are used. FOSS tools like CruiseControl and Selenium are used to automate the process.
5. Release: After the code is tested successfully, it is prepared for release. At this stage the development team decides which features of the software product should be enabled or disabled by default, and when it should be released. This is the final stage of the DevOps development cycle, after which the software/code is delivered to the operations team. This stage can also be automated using FOSS tools like Jenkins and Bamboo.
6. Deploy: This stage belongs to the operations team and it starts after the release of the software by the development team. The software is then deployed by the operations team using various tools. The release is configured according to the requirements of the operations team. FOSS tools like Puppet, Ansible and SaltStack are used to automate the deployment process.
7. Operate: After the software has been updated and configured, the operations team starts operating the products and services with the updated software. This is done by using either proprietary or FOSS tools.
8. Monitor: This is the final stage for the IT operations team and also for the DevOps life cycle. Here the customer requirements are gathered and the data is sent to the development team to update the software product/service for the next iteration of DevOps. FOSS tools like Nagios are used by the operations team to automate the monitoring process.

Benefits of DevOps
DevOps has many more benefits than the agile software development process. In the case of the latter, the development process progresses through four main stages: planning, coding, testing and release. The planning for the software development is done based on customer requirements and involves assigning various tasks to the stakeholders, like designing flowcharts, writing documentation, estimating timelines, etc. After the planning is done, the coding for the software development starts and a working prototype is created. The prototype is then tested in the third (testing) stage. The testing stage is the most important part of the agile software development life cycle because it can affect the overall quality of the software. This stage can also cause bottlenecks because the time required for software testing cannot be estimated precisely and could go beyond the project's timeline.
After successfully testing the software, it is then released to the customer in the fourth stage. However, if the customer finds any problems with the software, then the development team takes the customer's feedback for another iteration of the development


cycle. Then the whole development process takes place again, with the four stages, to improve the software, and then it is released again with another version number. Once again the customer's feedback is taken, and if any improvements are required then the development cycle goes into the iteration stage again; the cycle keeps continuing until the software reaches maturity (a stable software). This type of development cycle could take an indefinite amount of time and causes delays in software development. That's why using DevOps can make the development cycle quicker and more effective.
With DevOps, the development team communicates with the operations team and does not deal with the customers. In DevOps, only small chunks of code are continuously coded and tested, which prevents any bottlenecks or delays in delivering the code. DevOps provides continuous delivery and integration of the code. This is why it is better than agile software development.
As DevOps has many benefits over agile software development, many software companies are adopting the DevOps culture. Because it provides continuous delivery and integration, DevOps can help software products and services to reach maturity in a very short period of time. DevOps is mainly suitable for the development of cloud computing products and services, because it needs collaboration between the development team and the IT operations team. As there is a growing demand for cloud computing products and services, DevOps is the right choice for most software companies and has a great future.

By: Debojit Acharjee
The author is a software engineer and writer.

Continued from page 66...

Module management in Web applications using Gulp
The following is the hierarchy used for the files and folders while working with Gulp streaming applications.

src: path of the pre-processed HTML code files
    Scripts: assorted pre-processed scripts
    Styles: multiple stylesheet source files in CSS format

build: this folder is created automatically, with the dynamically generated production files
    Images: compressed image files
    Scripts: script files with minified code
    Styles: CSS files with minified code

gulpfile.js: the configuration file of Gulp, written in JavaScript, in which the tasks are defined.

Task management and automation in Gulp
Gulp integrates the modular approach with task management so that the software development and related phases can be programmed with accuracy. You need to create a Gulp task with the following structure:

gulp.task('name of the task', function() {
    // Operations
});

...where the name of the task refers to the title of the task. All the operations are defined in function(), and gulp.task() registers the function for execution. Here is an example task that copies only the images that have changed, using the gulp-changed plugin:

var gulp = require('gulp');
var changed = require('gulp-changed');

gulp.task('mytask', function() {
    var sourcePath = 'src/myimages/**/*',
        destinationPath = 'build/myimages';
    return gulp.src(sourcePath)
        .pipe(changed(destinationPath))    // pass through only modified files
        .pipe(gulp.dest(destinationPath)); // write them to the build folder
});

The execution or running of the Gulp file is done using the following instruction:

$ gulp mytask

Today, large scale software projects have diversified teams of programmers, testing engineers and other members. Hence, there is a need to automate and control tasks effectively so that companies can get fast delivery of the product, and to provide an integrated development environment with testing using the different tools of DevOps. The main requirement for the DevOps approach towards software development is that it should be cloud-centric, so that simultaneous operations can be performed with the help of DevOps based tools and technologies. It is very difficult for small companies to follow DevOps because they can't afford cloud costs. From the point of view of a career, DevOps is in huge demand in the software development job market, with a very good salary package on offer.

By: Dr Gaurav Kumar
The author is the MD of Magma Research and Consultancy Services, Ambala, and is associated with various universities, institutes and autonomous organisations for the delivery of expert lectures, technical workshops and consultancy. Website: www.gauravkumarindia.com.

CodeSport
Sandya Mannarswamy

In this month's column, we will first briefly discuss the common patterns in coding interview questions, and then feature a few of these questions.

There are typically a number of patterns in coding interview questions. Some of these include dynamic programming, sliding window, monotonic queue/stack, divide and conquer, etc. Problems that appear difficult at first turn out to be quite simple once the underlying pattern is identified. Let us talk about a few examples of these patterns.
Dynamic programming (DP) continues to be quite popular in coding interviews. Dynamic programming questions are difficult unless you have practised earlier on similar types of questions. While the solution, once framed into a DP equation, looks quite simple, there is quite a leap from the problem statement to framing the dynamic programming equation. One way to navigate the gap, typically, is by thinking about the problem first in terms of a top-down recursive solution. Once we can find the recursive equation, we extend the solution by applying memoisation so that we don't have to recompute the same sub-problems over and over again. Once we have the memoised solution, try and see if you can frame it as a DP equation by looking at the sub-problems. Let us look at a sample question.
There is a frog that needs to cross a river. The crossing path is divided into units of one, and there are stones on the path on which the frog can jump to cross the river. The positions of these stones are given as integers in a sorted array. The frog is sitting on the first stone, which is at position 0, and the first jump must be of one unit. At any point during the journey, when the frog jumps, it needs to land on a stone, else it can't cross the river. At any stone, it can only jump (k-1), k or (k+1) units from the current stone, where k is the size of the previous jump. The frog needs to land on the last stone for it to cross the river. You need to write code to determine whether the frog can actually cross the river, given an array representing the positions of the stones on the path.
The brute force solution would be to frame this problem as a recursive solution. Given the position of the current stone, and the size of the last jump, we can recursively keep checking whether we can reach a stone from the current position, and from then onwards to the next stone, and so on, until we end up at the last stone or we run out of stones. Given the size of the last jump, k, the possible sizes of the new jump are (k-1), k and (k+1). We need to check if there exists a stone at the current position plus the new jump size. If so, we can then again recursively do the check from that stone (with that stone as the starting point and the jump size with which that stone was reached). If we can reach the last stone through any of the paths, the crossing was successful. If there is no successful path to the last stone, the frog fails to cross.


The brute force recursive solution has many redundant computations, since we can end up with the same starting stone and jump size from many of our recursive paths. Hence, an obvious improvement is to memoise these computations. We can cache the result of the recursive function for the stone index and jump-size combination, and if this same recursive call is made again, we can return the cached result instead of making the recursive function call.
As we compute the recursive solution, it becomes clear that at each stone, we are trying to find out what the possible jump sizes are from this stone. Once we have the set of jump sizes known for each stone, we can then check if there exists a stone at any of these jump sizes. If there exists a stone at that position, we can then update the new jump sizes for the newly reached stone. If we end up reaching the last stone during our traversal of all stones, then we have figured out that the frog can cross successfully; else the frog cannot cross. We create a map which remembers the set of possible jump sizes associated with each stone's position. Hence, given a stone and its set of jump sizes, the DP solution computes all reachable stones from the current stone, and their possible jump sizes. Given that this is a reachability problem, it is also possible to formulate it in terms of graph reachability. Given the starting stone and the destination stone, the idea is to find out whether a possible path exists between them while obeying the constraints on the jump sizes. I will leave it to our readers to come up with the depth first search/breadth first search method of computing this reachability solution.
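The memoised recursion described above fits in a few lines of Python. The following is only an illustrative sketch (the function and variable names are my own, and the sample inputs are made up for testing); lru_cache implements the caching on the (stone position, jump size) pair discussed earlier:

from functools import lru_cache

def can_cross(stones):
    positions = set(stones)
    last = stones[-1]
    if last == 0:                      # the frog starts on the last stone
        return True

    @lru_cache(maxsize=None)
    def jump(pos, k):
        # pos is the current stone; k is the size of the jump that reached it
        if pos == last:
            return True
        return any(jump(pos + step, step)
                   for step in (k - 1, k, k + 1)
                   if step > 0 and pos + step in positions)

    # The first jump must be of one unit, so a stone at position 1 is required.
    return 1 in positions and jump(1, 1)

print(can_cross([0, 1, 3, 5, 6, 8, 12, 17]))   # True
print(can_cross([0, 1, 2, 3, 4, 8, 9, 11]))    # False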
Another pattern that appears often in coding interviews is the sliding window pattern. Let us look at an example. Given a source string S and a target string t, write a function that can return the minimum length substring of S which contains all the characters in the target. The brute force approach is to enumerate all possible substrings of S, see which of them contain all the characters in target t, and choose the minimum length substring among these. Instead, we can apply a sliding window approach to this problem.
Typically, a sliding window has a left pointer and a right pointer. We first expand the right pointer on the source string till the window includes all the characters in target t. Now we have a valid substring containing all of the target characters, which is represented by the window. We can now move the left pointer forward to shrink the window so that we can reduce the length of the substring. We keep repeating this operation over the complete source string and keep updating the minimum length substring among all the valid substrings we have encountered. I will leave it to our readers to write and test the actual code for this problem.
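As a hint for checking your own attempt, here is one possible Python sketch of the sliding window just described (the naming is mine; Counter records how many of each target character the current window still lacks):

from collections import Counter

def min_window(s, t):
    need = Counter(t)               # counts of characters the window still needs
    missing = len(t)                # total number of needed characters
    best = ""
    left = 0
    for right, ch in enumerate(s, 1):   # right is one past the window's end
        if need[ch] > 0:
            missing -= 1
        need[ch] -= 1
        if missing == 0:                # the window now covers all of t
            while need[s[left]] < 0:    # drop surplus characters from the left
                need[s[left]] += 1
                left += 1
            if not best or right - left < len(best):
                best = s[left:right]
            need[s[left]] += 1          # give up one needed character and move on
            missing += 1
            left += 1
    return best

print(min_window("ADOBECODEBANC", "ABC"))   # prints BANC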
The sliding window pattern is typically applicable to problems over sequential, one-dimensional data structures such as arrays or strings, though it can be applied with some modifications to matrix problems as well. Here are a few sample problems for our readers to practise.
(1) Given an array of integers A of size N, find the contiguous sub-array with the minimum sum. A brute force approach would be to consider all sub-arrays (which is N^2), compute the sum for each of them and choose the minimum. Can you do better than that?
(2) Given a binary array containing 1s and 0s, find the length of the longest consecutive set of 1s in the array if you can replace k 0s with 1s. First assume that k is 1. Then solve this problem for any arbitrary k.
(3) Given K sorted arrays, write a function to form a single sorted array from all these arrays.
(4) You are given an array of N integers. Write a function that computes the maximum absolute difference between the nearest smaller left element and the nearest smaller right element among all the array elements. If there is no left smaller element for a given array element, take the left smaller element as 0 (and do the same for the right smaller element).
(5) You are given an array of N integers, where each array element represents the height of a histogram bar. Each of the bars has unit width. You are asked to find the area of the largest rectangle in the histogram. Remember that for any two histogram bars, the smallest bar lying between them decides the height of the rectangle.
(6) You are given an integer N. You need to find another integer M that has the same set of digits present in N but is greater in value than N. M should be the smallest such integer. If no such integer exists, return -1.
Feel free to reach out to me over LinkedIn/email if you need any help with your coding interview preparations. If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com.
I request our readers to follow the public health guidelines and stay safe. Please don't worry if you are not able to focus fully on learning new technologies, programming, etc. These are difficult times. So first take care of your mental and physical health, help others wherever you can, and stay safe. Wishing all our readers happy coding until next month! Stay healthy and stay safe.

By: Sandya Mannarswamy
The author is an expert in natural language processing and is currently working as an independent researcher. Her interests include natural language processing, machine learning and AI.
encountered. I will leave it to our readers to write and


A Study of Various Open Source Blockchain Platforms

Blockchain platforms are important because they can be used in e-governance, eliminate middle men, and secure the confidence of businesses. Get acquainted with the various blockchain platforms in this article.

The term 'blockchain technology' typically refers to the transparent, trustless, publicly accessible ledger that securely transfers the ownership of units of value using public key encryption and proof of work methods. The technology uses decentralised consensus to maintain the network, which means it is not centrally controlled by a bank, corporation or government. In fact, the larger the network grows and the more decentralised it becomes, the more secure it becomes. Blockchain technology is picking up at a fast pace nowadays. The technology that emerged in 2009 as the underlying platform for Bitcoin exchange has now evolved into a mainstream technology. It finds application in various fields like healthcare and finance. Many companies venturing into this domain are developing blockchain based applications that help in making business operations more transparent and efficient.

What exactly is blockchain?
At the most basic level, blockchain is just a chain of blocks, but not in the traditional sense of those words. When we use the words 'block' and 'chain' in this context, we are actually talking about digital information (the block) stored in a public database (the chain). Blocks on the blockchain are made up of digital pieces of information, and they have three main parts.
Part 1: Blocks store information about the transaction, like the date, time and amount involved in the recent purchase.
Part 2: Blocks store information about the participants in the transactions. For example, a block for the purchase of some items from Amazon would record the details of the purchase. Instead of using the buyer's actual name, the purchase is recorded without any identifying information, using a unique 'digital signature', like a sort of user name.
Part 3: Blocks store information that distinguishes them from other blocks, much like names distinguish people. Each block stores a unique code called a hash that makes every block different from the others. Hashes are cryptographic codes created by special algorithms.
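To make the role of the hash concrete, here is a small Python sketch (my own illustration, using only the standard library) in which each block stores the hash of the previous block, so altering any block silently breaks the chain:

import hashlib
import json

def block_hash(block):
    # Serialise the block deterministically before hashing it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = {"index": 0, "data": "genesis", "prev_hash": "0" * 64}
block1 = {"index": 1, "data": "Alice pays Bob 5", "prev_hash": block_hash(genesis)}
block2 = {"index": 2, "data": "Bob pays Carol 2", "prev_hash": block_hash(block1)}

# Tampering with an earlier block changes its hash, so the link stored in the
# next block no longer matches and the tampering is detected.
genesis["data"] = "tampered"
print(block_hash(genesis) == block1["prev_hash"])   # False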
Blockchain variants
Public: The ledgers of these blockchains are visible to everyone on the Internet. They allow anyone to verify and add a block of transactions to the blockchain. Public networks have incentives for people to join and are


free for use. Anyone can use a public blockchain network.
Private: Private blockchains exist within a single organisation, allowing only specific people of the organisation to verify and add transaction blocks. However, everyone on the Internet is generally allowed to view them.
Consortium: In this blockchain variant, only a group of organisations can verify and add transactions. Here, the ledger can be open or restricted to select groups. A consortium blockchain is used across organisations. It is only controlled by pre-authorised nodes.

Pillars of blockchain platforms
When there are many public and private blockchain platforms available in the marketplace, one has to carefully choose a suitable option based on one's requirements. This has to be evaluated based on the pillars of the blockchain platform, as listed below:
• Decentralised network
• Platform security
• Immutability of the record state

Figure 1: Pillars of blockchain platforms

Decentralised network: One of the key architectural principles of the blockchain platform is its decentralised nature. This means the transaction in the blockchain network is copied across all the nodes of the network and all the nodes are connected. This makes the platform highly reliable, as there is no possibility of tampering with the transaction record: it is copied to all the nodes, and tampering with all the nodes in the network is practically impossible at any given time.
Platform security: Though the blockchain platform is decentralised in nature, with many users being a part of the workflow process and participating in the various stages of executing the transaction, a higher level of security is ensured in any blockchain platform due to its decentralised nature and multi-node record copies. Also, different blockchain platforms ensure higher levels of platform security through permissionless ledgers, the consensus algorithm, the usage of cryptocurrency for transactions and the smart contract facility, to name a few.
Record immutability: In any blockchain platform, the decentralised network and the ledger copied across all the nodes of the network ensure that all the records kept in the ledger for any transaction are secure enough that a change in a record is accepted only if it's accepted by all participants across nodes, thus changing the record unanimously across all the ledger copies in the network. This ensures that the records in the transaction processing stages are immutable in nature. Any correction in the record is amended as a new transaction processing stage in the network, thus ensuring all the record processing transaction stages are kept as distinct entries in the ledger.

Blockchain platforms
Blockchain platforms assist us in creating applications that implement the concepts of blockchain. Not every person or company has the resources or the time to develop their own blockchain from scratch, and hence companies leverage blockchain platforms developed by tech giants for faster and easier application development. Any organisation deciding to implement blockchain may choose from the prominent frameworks available, i.e., Ethereum, Hyperledger Fabric, Quorum, Corda, Ripple, etc. The final decision should be based upon the suitability of each to the organisation. Let's now discuss the most popular open source blockchain platforms.

Ethereum
Ethereum (a public blockchain network) was developed by Vitalik Buterin and is considered an efficiently developed platform that has smart contract features, flexibility and multi-industry adaptability. Ethereum acts as a base component in building and developing most decentralised applications. ERC20 is the most popular token standard among cryptocurrencies. Stability, security, corruption prevention and zero downtime are some of the benefits offered by Ethereum over other applications.
The Ethereum network has two types of accounts, namely:
• External accounts
• Contract accounts
Both are referred to as 'state objects' and comprise the 'state' of the Ethereum network, i.e., every state object has a well-defined state. For


external accounts, the state comprises the account balance, while for contract accounts, the state is defined by the memory storage and the balance.

Figure 2: Ethereum architecture

How Ethereum is different
• Ethereum offers greater transparency as compared to Hyperledger.
• The flexibility of the platform allows any developer to create applications using an inbuilt programming language.
• Having 'Ether' as a built-in cryptocurrency gives Ethereum applications a competitive edge over Hyperledger and Corda in use cases that require cryptocurrency.

Key highlights
• Activity: Highly active on GitHub
• Type of network: Public, smart contract based
• Pricing: For transactions and for computational services
• Supported languages: Python, Go, C++
• GitHub repos: pyethereum (Python); go-ethereum (Go); cpp-ethereum (C++)
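As a small illustration of reading the state of an external account, the sketch below uses the third-party web3.py library (not covered by the article). The node URL is a placeholder, and the method names differ across web3.py versions (getBalance and fromWei in older releases; get_balance and from_wei in newer ones):

from web3 import Web3   # pip install web3

# Placeholder endpoint: any Ethereum JSON-RPC node can be used here.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

account = "0x0000000000000000000000000000000000000000"
balance_wei = w3.eth.get_balance(account)     # the state of an external account
print(Web3.from_wei(balance_wei, "ether"))    # convert wei to Ether for display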
OpenChain
OpenChain is an open source distributed ledger technology. It is suited for organisations wishing to issue and manage digital assets in a robust, secure and scalable manner. Figure 3 explains the architecture blocks of the OpenChain platform, which contains multi-layer components for consensus, ledger and platform services, and supports membership services to handle workflow and permissioned transaction management.
• Anyone can spin up a new OpenChain instance within seconds.
• The administrator of an OpenChain instance defines the rules of the ledger.
• End users can exchange value on the ledger according to those rules.
• Every transaction on the ledger is digitally signed, like with Bitcoin.

Figure 3: OpenChain's platform architecture

How OpenChain is different
• Tokens on OpenChain can be pegged to Bitcoin, making it a side chain.
• It has smart contract modules and a unified API.
• It assigns aliases to users instead of using base-58 addresses.
• Multiple levels of control and a hierarchical account system allow permissions to be set at any level.
• Multiple OpenChain instances can replicate from each other.

Key highlights
• Activity: Medium activity on GitHub
• Type of ledger: Distributed ledger technology, private
• Pricing: Open source
• Supported languages: JavaScript
• GitHub repo: openchain-js

BigChainDB
BigChainDB is built from a federation of enterprise-ready database nodes, such as MongoDB instances, which store immutable information about assets in a synchronised manner. BigChainDB is a MongoDB database that uses Tendermint to obtain its blockchain features. A BigChainDB network may be public, private or permissioned, according to the access


permissions entities have over the system. In a public BigChainDB, any participant is able to access the network or deploy their own MongoDB+Tendermint node and connect it to the database federation, while a permissioned BigChainDB could be managed by a consortium or a governing entity, where every member of the consortium manages their own node in the network and no one can join without permission.
BigChainDB's transaction model is analogous to that of Bitcoin, in the sense that a transaction receives an asset input, which is then transformed to an output that may be used in the future as an input for a new transaction. Asset outputs can only be used once as input for a transaction. There are two types of transactions in BigChainDB, as given below.
Create transactions generate a new asset in the system (as a JSON document in MongoDB) with two types of information in it. These are asset information, which is immutable and can't be modified once the asset is created; and metadata, which can be modified through subsequent transfer transactions.
Transfer transactions allow the transfer of ownership of an asset or the modification of the metadata. The only one entitled to perform this transaction over an asset is its owner. These transactions use as input an unused output of the asset, generating as a result a new output with the corresponding modifications.
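The split between immutable asset data and mutable metadata can be pictured with plain dictionaries. This is only a conceptual sketch of the model described above, not the actual BigChainDB driver API:

# CREATE: the asset payload is fixed forever; metadata rides on the transaction.
create_tx = {
    "operation": "CREATE",
    "asset": {"data": {"serial": "bike-1234"}},    # immutable after creation
    "metadata": {"owner": "alice", "location": "warehouse"},
    "outputs": [{"owner": "alice", "spent": False}],
}

# TRANSFER: spends the unspent output of the asset and attaches new metadata.
transfer_tx = {
    "operation": "TRANSFER",
    "asset": {"id": "id-of-the-create-transaction"},
    "metadata": {"owner": "bob", "location": "in transit"},
    "inputs": [{"fulfills": "output 0 of the CREATE transaction"}],
    "outputs": [{"owner": "bob", "spent": False}],
}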

Key highlights
• Activity: Active on GitHub
• Type of ledger: Multi-ledger integration
• Pricing: Open source
• Supported languages: JavaScript and Python
• GitHub repo: bigchaindb

Figure 4: BigChainDB architecture

HydraChain
HydraChain, an extension of the Ethereum platform, is fully compatible with the Ethereum protocol at an API and contract level. It supports distributed ledgers and private chains, which are mainly set up for the financial industry. The infrastructure of HydraChain allows users to develop smart contracts in Python, which can improve development and debugging efficiency enormously. HydraChain has a well-defined configuration system, as shown in Figure 5, which provides flexible customisation adjustments such as transaction fees, gas limits, genesis allocation or block time.

Figure 5: HydraChain architecture

Features
• Fully compatible with the Ethereum protocol
• Accountable validators and native contracts
• Fully customisable, easy to deploy, and open source with commercial support available


Key highlights
• Activity: Low, but actively updated on GitHub
• Type of ledger: Private
• Pricing: Open source
• Supported language: Python
• GitHub repo: hydrachain (Python)

Corda
Corda is an open source blockchain platform that enables businesses to transact directly and in strict privacy using smart contracts, reducing transaction and record-keeping costs while streamlining business operations. In a world of permissionless blockchain platforms, in which all data is shared with all parties, Corda's strict privacy model allows businesses to transact securely and seamlessly, as shown in Figure 6.

Figure 6: Corda architecture

R3 delivers two interoperable and fully compatible distributions of the platform: Corda, a free download based on the code available on GitHub; and Corda Enterprise, an enterprise blockchain platform that offers features and services fine-tuned for modern-day businesses.
Corda provides three main tools to achieve global distributed consensus:
• Smart contract logic, which specifies constraints that ensure state transitions are valid according to pre-agreed rules, described in the contract code as part of CorDapps.
• Uniqueness and timestamping services, known as notary pools, which order transactions temporally and eliminate conflicts.
• A unique component called the flow framework, which simplifies the process of writing complex multi-step protocols between multiple mutually distrusting parties across the Internet.

Key highlights
• Activity: Actively updated on GitHub
• Type of ledger: Distributed
• Pricing: Open source
• Supported language: Kotlin
• GitHub repo: corda

MultiChain
MultiChain is a platform that helps users establish a private blockchain that can be used by organisations for financial transactions. MultiChain provides a simple API and a command-line interface, which help to set up and preserve the chain.

Figure 7: MultiChain architecture

MultiChain is for creating new blockchains with their own native currencies and/or issued assets. Users cannot transact existing cryptocurrencies on MultiChain unless someone trusted acts as a bridge in the middle, holding some cryptocurrency and issuing tokens on MultiChain to represent it, as shown in Figure 7. MultiChain is an off-the-shelf platform for the creation and deployment of private


blockchains, either within or between organisations. It aims to overcome a key obstacle to the deployment of blockchain technology in the institutional financial sector, by providing the privacy and control required in an easy-to-use package.

Key highlights
• Activity: Medium, but actively updated on GitHub
• Type of ledger: Private, permissioned
• Pricing: Open source
• Supported languages: Python, C#, JavaScript, PHP, Ruby
• GitHub repos: savior (Python); MultichainLib (C#); Multichain-Node (JavaScript); libphp-multichain (PHP); multichain-client (Ruby)

Functionality
The functional architecture of a blockchain platform is based on various consensus algorithms, which are used as the basis of the platform's execution mechanism. These include the following; a toy proof-of-work search is sketched at the end of this section.
Proof of Work (PoW): This is the original type of consensus algorithm, from Satoshi Nakamoto's first research on blockchain. The algorithm is used as a mechanism to confirm transaction completeness; if successful, a new block gets added to the chain. It also helps the network resist Distributed Denial of Service (DDoS) attacks.
Proof of Stake (PoS): Since PoW consumes a lot of system resources such as the GPU, it is an advanced mechanism suitable for sophisticated blockchain networks. PoS solves the challenge of this high resource usage by using an algorithm based on distributed tokens and a static coin supply.
Delegated Proof of Stake (DPoS): This is an extended version of PoS. PoS is like a lottery mechanism based on the coin stake of the users, while DPoS lets all stakeholders influence the network equally. It is based on an algorithmic approach in which any stakeholder can elect a list of nodes to complete the transaction.
There are other consensus mechanisms, like Practical Byzantine Fault Tolerance (PBFT) and the Directed Acyclic Graph (DAG), which are evolving in the Blockchain 3.0 wave (Blockchain 1.0 is the Bitcoin architecture and Blockchain 2.0 is the Ethereum consensus architecture). They are getting implemented in platforms like Hyperledger and Ripple.
When any transaction happens in a blockchain platform, it gets validated by either a permissionless or a permissioned blockchain. In permissionless blockchains, the transaction is validated by the public, who are participants in the network, and hence the transaction processing rate is higher at any given time. In a permissioned blockchain, the transaction is validated by a selected group of participants and approved by the blockchain owner or approver.
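The idea behind PoW can be demonstrated with a toy nonce search in Python (an illustrative sketch of my own; the difficulty of four leading zeros is arbitrary):

import hashlib

def mine(block_data, difficulty=4):
    # Search for a nonce whose hash begins with `difficulty` zeros.
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 42: Alice pays Bob 5")
print(nonce, digest)   # finding the nonce is costly; verifying it is one hash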
The table in Figure 8 compares the various features and architectural points of the blockchain platforms: security, support, community activity, limitations in platform support, the industry focus of the solution, the type of ledger used in the platform, the functionality supported, the consensus mechanism and the cryptocurrency used.

Figure 8: Comparison of open source blockchain platforms
• Security: Ethereum: spying on data is not possible; OpenChain: multiple levels of control; BigChainDB: highly secure, with transaction-level permissions; HydraChain: extension of the Ethereum platform architecture; Corda: higher privacy for data and transactions; MultiChain: moderate security, with privacy features for transactions.
• Support/Governance: Ethereum: Ethereum consortium; OpenChain: hierarchical system to set permissions at any level; BigChainDB: federation consensus model; HydraChain: Ethereum consortium; Corda: Corda consortium; MultiChain: global consortium.
• Community activity: Ethereum: Decentralised Autonomous Organisations (DAO); OpenChain: Slack community development; BigChainDB: community development in Git; HydraChain: open community; Corda: Slack and StackOverflow; MultiChain: Git based community work.
• Limitations: Ethereum: open financial system access; OpenChain: unified API support and extensibility are not available; BigChainDB: data oriented solution with no industry focus; HydraChain: a new block gets created only in the presence of pending transactions; Corda: no global broadcast of transactions across the network; MultiChain: extending privacy features is not supported.
• Industry focus: Ethereum: multi-industry application; OpenChain: insurance and banking; BigChainDB: multi-industry application; HydraChain: financial services; Corda: banking, financial services and insurance; MultiChain: multi-industry application.
• Type of ledger: Ethereum: permissionless; OpenChain: permissionless; BigChainDB: permissioned or private network; HydraChain: permissioned; Corda: permissioned; MultiChain: permissioned or private network.
• Functionality: Ethereum: digital asset management; OpenChain: digital asset management; BigChainDB: big data management; HydraChain: customisable fees, genesis allocation and block time; Corda: pluggable consensus; MultiChain: data ledger functions.
• Smart contract/consensus: Ethereum: Ethash Proof of Work (PoW); OpenChain: partitioned consensus; BigChainDB: federation of voting nodes; HydraChain: Proof of Work (PoW); Corda: supported; MultiChain: Proof of Work (PoW).
• Cryptocurrency used: Ethereum: ETH; OpenChain: no native currency; BigChainDB: no native currency; HydraChain: ETH; Corda: no native currency; MultiChain: no native currency.

References
[1] https://ethereum.org/
[2] https://www.openchainproject.org/
[3] https://www.bigchaindb.com/
[4] https://www.hyperchain.cn/en
[5] https://www.corda.net/
[6] https://www.multichain.com/

By: Anand Nayyar and Dr Magesh Kasthuri
Anand Nayyar works at Duy Tan University in Vietnam. He loves to explore open source technologies, sensor communications, network security and the Internet of Things. Dr Magesh Kasthuri is a Ph.D in artificial intelligence and the genetic algorithm. He currently works as a senior distinguished member of the technical staff at Wipro Technologies.

A Few Surprising Programming Language Features

The first programming language that you learn leaves an indelible mark on you. If you are used to coding in a particular language and then decide to take up a new one, the latter may throw up a few surprises. Some of these can be fun but they could also be frustrating.

I learned C more than 20 years ago and, as a C programmer at heart, I found it difficult to adjust to other languages. Nevertheless, I had to learn or use many others like C++, Java, x86 assembly language, Python, etc, over the years. I am no expert in all these programming languages. If I can order food, ask for water and call a taxi in some (natural) language, I assume proficiency in that language. Often, the same standards apply while claiming proficiency in a programming language. Recently, while learning Haskell, a functional programming language, I came across a feature called lazy evaluation of expressions (to be discussed later in this article), and I was literally surprised. This is something that has happened to me a lot while learning new programming languages. While learning Python, it was a surprise to learn that you don't need to explicitly declare the type of a variable before storing data into it.
This article discusses some of the unique features of different programming languages that might surprise programmers well versed in some other programming language. Let us also try to find out why this occurs. Major surprises tend to occur when a programmer learns a language that has a different paradigm. For example, a Java (an object-oriented programming language) programmer learning Haskell (a functional programming language) might come across a large number of features that are surprising, and a few that might even look counter-intuitive. Similarly, compiled and interpreted languages often differ a lot in their features. An example of this is a C++ (a compiled language) programmer learning Python (an interpreted language). Similar surprises might await a person switching from a general-purpose programming language to a domain-specific one: for example, a C (a general-purpose language) programmer learning JavaScript (a domain-specific language).


The programming experience might also depend on the underlying architecture and operating system. An x86 assembly language programmer (from Intel) learning MIPS assembly language (from MIPS Technologies), or a PowerShell (from Microsoft) user learning Linux shell scripting, is bound to encounter a few surprises. Of course, there could be many other reasons for this, but the ones mentioned above seem to be the most obvious.
For those learning their first programming language, it's most probable that none of the features are surprising. Now let us look at a few features of programming languages that might surprise someone who learns these as their second programming language.

Complicated pointers in C/C++
Most undergraduate programmes in computer science have a course on C programming and, to many, the most difficult section is the one on pointers. To add to the misery, you can declare pointers of arbitrary complexity. As an example, consider the C program named pointer.c given below. This and all the other programs discussed in this article can be downloaded from opensourceforu.com/article_source_code/June20surprisinglanguage.zip.

#include<stdio.h>

int main()
{
    int *(*(*ptr)(int *))[2];
    int *******ptr1;
    int **************************************************ptr3;
    return 0;
}

The above C program compiles without any errors. The pointer variable ptr is a pointer to a function that accepts a pointer to an integer as an argument and returns a pointer to an array of pointers to integers. The pointer variable ptr1 is a pointer to a pointer to a pointer to a pointer to a pointer to a pointer to a pointer to an integer (I hope I have counted correctly!). There are 50 asterisk symbols before the declaration of the pointer variable ptr3. But is there a limit to this? I tried up to 1000 asterisk symbols and it was still compiling fine. I believe there are no upper limits set by the C standard. There may be a limit at which the C compiler might fail to handle this, but I don't know where that limit is. No programmer will ever use these sorts of pointers in a real program. Nevertheless, these features are available for us to use. A similar C++ program will also compile without any errors. Now back to our business. Imagine the horror of a Java, Python or Haskell (all of which are programming languages that do not use pointers) programmer who comes across such monstrosities.

Confusing octal numbers in C/C++/Java
This may not be as big a surprise as the previous one. But once, a long time back, this feature of C/C++/Java gave me quite a headache, and I believe we should discuss it. Consider the C++ program octal.cc given below. Similar C and Java programs (octal.c and Octal.java) are also available for download. What is the output of the program octal.cc?

#include<iostream>
using namespace std;

int main()
{
    int num[ ] = {001, 005, 025, 125, 625};
    for(int i = 0; i < 5; i++)
    {
        cout << num[i] << endl;
    }
    return 0;
}

The array num is intended to contain the first five numbers of the geometric sequence starting at 1 with common ratio 5. Figure 1 shows the output of the program octal.cc. But why is the number 21 printed instead of 25? Well, in C, C++ and Java, a number like 0123 is treated as an octal number and 0x123 is treated as a hexadecimal number. In the program octal.cc, the numbers 001, 005 and 025 are treated as octal numbers for this reason. The numbers 001 and 005 are the same in the decimal and octal number systems, whereas the number 025 in octal is 21 in decimal. Thus, the sequence printed is 1, 5, 21, 125, 625. This feature is convenient on many occasions, but could lead to potential bugs if one is not careful. For example, a Python programmer who is familiar with the notation 0o25 might consider 025 to be the number 25 with a leading zero.

Figure 1: Output of the C++ program octal.cc
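The same pitfall is easy to check from Python 3, which deliberately rejects the bare leading-zero form (a short illustrative snippet):

# Python 3 makes the octal intent explicit.
print(0o25)             # 21: an octal literal with the 0o prefix
print(int("025", 8))    # 21: the string parsed as base 8
print(int("025", 10))   # 25: the same string parsed as base 10
# A bare literal such as 025 is a SyntaxError in Python 3, which avoids
# the silent octal interpretation performed by C, C++ and Java.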


The for loop with an else in Python
As mentioned earlier, a C/C++/Java programmer newly learning Python will be surprised that explicit type declaration is not required in Python. Since this dynamic typing feature of Python is well known to many programmers, I will instead discuss a simple yet surprising feature of Python: the for loop with an else part. Consider the Python script loop.py shown below.

for i in range(5):
    print(i)
else:
    print("Normal Exit from for loop")
for i in range(10):
    print(i)
    if i == 4:
        break
else:
    print("Break from for loop")

Figure 2 shows the output of the Python script loop.py. Notice that the else part of the for loop gets executed only when the loop exits normally, and not when it exits through a break statement (so the second message above is never printed). Though not essential, for-else is a convenient feature that is absent in most programming languages.

Figure 2: Output of the Python script loop.py

Fork bombing with a Linux shell script
What is the smallest program (in terms of source code size) in any language that comes to your mind, which when executed will crash your system? I am sure the winner will be the following shell script.

x( ) { x | x & }; x

It defines a function called x, which calls the function x itself recursively and then pipes the result to another recursive call to x in the background. The fourth x in the script is a call to the function x to begin the bombing. Soon your CPU will be allocating all its time to processing just these calls to the function x. Just 11 non-white space characters in the script and your system is down. So imagine the surprise of those programmers whose favourite programming language takes hundreds of characters just to print the message 'Hello World' on the screen.
Warning! If you execute this code on a Linux terminal and do not close the terminal immediately, your system will hang. In such a situation, you will have to force restart your system.

Case-insensitive function names in PHP
Imagine copying the directories named 'SONGS', 'Songs' and 'songs' from your Linux machine to your friend's Windows machine. The Windows file system in general is not case-sensitive, and you will be forced to rename two of these directories before copying them, because all three directory names (SONGS, Songs and songs) are the same if treated in a case-insensitive manner. Some programming languages behave like this. A classic example is PHP. Consider the PHP script named case.php given below:

<!DOCTYPE html>
<html>
<body>

<?php
function PRINTMSG( ) {
    echo "I AM PHP <br>";
}

printmsg( );
PrintMsg( );
PRINTMSG( );
pRiNtMsG( );
PrInTmSg( );
?>

</body>
</html>

Function names in PHP are case-insensitive, and the lines of code printmsg(), PrintMsg(), PRINTMSG(), pRiNtMsG() and PrInTmSg() are all calling the function PRINTMSG(). Figure 3 shows the output of the PHP script case.php: the line 'I AM PHP' is printed five times. Do notice that PHP variable names are case-sensitive, like in most other programming languages.

Figure 3: Output of the PHP script case.php

Implicit type conversion in JavaScript
Imagine the case of a Java programmer learning JavaScript. Due to the similarity in names, we might think that this will be an easy task. Even though the syntax is somewhat similar, the transition from Java to JavaScript is not that easy. Java is a strongly typed language where type checking is very rigorous, while JavaScript is a weakly typed language with extensive implicit type conversion. This is one feature of JavaScript that should be avoided if possible. To understand the pitfalls in the implicit type conversion of JavaScript, let us go through the JavaScript script named type.js given below. What is the output of the script type.js?

<!DOCTYPE html>
<html>
<body>

<script>
a = '2' + 1
b = '2' - 1
document.write(a + "<br>");
document.write(b + "<br>");
c = '1' + 2 + 3
d = 1 + 2 + '3'
document.write(c + "<br>");
document.write(d);
</script>

</body>
</html>


A seasoned Java programmer will expect a number of errors in the above code. But everything is fine with JavaScript. Figure 4 shows the output of the script type.js:

21
1
123
33

Figure 4: Output of the JavaScript script type.js

Why does the variable a have the value 21 while the variable b has the value 1? This is due to the implicit type conversion in JavaScript. The operator '-' performs just one function, mathematical subtraction. When a string and a number are the operands of the '-' operator, JavaScript converts the string to a number. So in the line of code b = '2' - 1, the string '2' is converted to the number 2, and the number 1 is subtracted from it to obtain 1. Notice that, here, the variable b contains a number, 1. Now let us look at what happens with the operator '+'.
The operator '+' performs two functions: mathematical addition and string concatenation. When a string and a number are the operands of the '+' operator, instead of converting the string to a number, JavaScript converts the number to a string. So in the line of code a = '2' + 1, the number 1 is converted to the string '1', and the strings '2' and '1' are concatenated to give '21'. Notice that here, the variable a contains a string, '21'. With that knowledge, can you explain why the variable c contains the string '123' and the variable d contains the string '33'? I will give you a hint: the associativity of the operator '+' is what matters. The operator '+' is left-associative. Hence, in the line of code c = '1' + 2 + 3, the operation '1' + 2 is carried out first, resulting in the string '12', because '1' is a string. The expression then becomes '12' + 3, resulting in the final string '123' stored in the variable c. But in the line of code d = 1 + 2 + '3', due to the left-associativity of the operator '+', the first operation performed is 1 + 2, resulting in the number 3, because both 1 and 2 are numbers. The expression then becomes 3 + '3', resulting in the final string '33' stored in the variable d. Imagine the horror of a Java programmer going through these results in his favourite programming language!

Lazy evaluation in Haskell
Haskell is a general-purpose, purely functional programming language, which has a feature called lazy evaluation. Because of lazy evaluation, expressions are not evaluated immediately if they are just bound to variables. Instead, the evaluation of an expression is carried out only when its results are needed by other operations. For this reason, lazy evaluation is often called 'call-by-need'. Lazy evaluation enables infinite data structures, such as infinite lists, to be stored in Haskell. Figure 5 shows the processing of one such infinite list in the GHCi interactive environment. The list x represents the infinite list of natural numbers, created by the line of code let x = [1..]. Notice that x is not evaluated at this point. The line of code take 10 x gives the list [1,2,3,4,5,6,7,8,9,10] as output (the first ten elements of the list x). But the third line of code, print x, will lead to the complete evaluation of the list x, resulting in the natural numbers being printed endlessly (from which you have to forcefully exit). The lazy evaluation feature of Haskell is very powerful but, at the same time, it has received some criticism from experts.

Figure 5: Output of the GHCi interactive environment
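Readers coming from Python can approximate this behaviour with generators; in the sketch below, itertools.count plays the role of the infinite list [1..] (an analogy only, not a full equivalent of Haskell's laziness):

from itertools import count, islice

x = count(1)                  # like let x = [1..]; nothing is evaluated yet
print(list(islice(x, 10)))    # like take 10 x: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Looping over x without a bound (for example, list(x)) would never finish,
# just as print x forces the whole infinite list in GHCi.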


Multiple main methods in Java
When I first learned Java programming, I did a very poor job because I tried to learn some advanced features (Swing and AWT) without mastering the basics. Till recently, I was under the false impression that a Java program can contain only one main method. But then I got a rude shock: it is possible to have more than one main method in a Java program. For example, consider the Java program AAA.java shown below, with two different classes AAA and BBB, each having one main method.

class AAA
{
    void disp()
    {
        System.out.println("Hello From AAA...");
    }

    public static void main(String[ ] args)
    {
        AAA a = new AAA();
        a.disp( );
    }
}

class BBB
{
    public static void disp()
    {
        System.out.println("Hello from BBB...");
    }

    public static void main(String[ ] args)
    {
        BBB b = new BBB();
        b.disp( );
    }
}

Figure 6 shows the output of the program AAA.java. From the figure, it is clear that the name of the class you use to invoke the Java Virtual Machine (JVM) decides the main method to be called. You can see that the output is different when you execute the commands java AAA (the main method of AAA is called) and java BBB (the main method of BBB is called). Though surprising, it reminds us that the main method is in no way special.

Figure 6: Output of the Java program AAA.java

Matrix indexing in MATLAB/Scilab
Imagine students learning their first programming language, be it Python, Java, C or C++. The first time they come across arrays, I am sure they will have some difficulty adjusting to indexing that starts at 0 (the famous computer scientist Edsger W. Dijkstra has convincingly argued that this should be the case, and it would be nice if you could go through his paper). But after a while, the students master the language and become comfortable with array indices starting at 0. Years later, array indices starting at 0 become second nature to them. And then one day they have to learn and use MATLAB/Scilab for some mathematical computations. All of a sudden, they realise that indexing starts at 1 in MATLAB/Scilab. You can imagine the number of errors they are going to make within a short span of time. At least, that is what happened to me and it was very frustrating. Figure 7 shows an example of array indexing in the Scilab console: x(0) fails with an 'Invalid index' error, while x(1) returns the first element.

--> x = [111 222 333];
--> x(0)
Invalid index.
--> x(1)
 ans = 111.

Figure 7: Output on the Scilab console

I think this clearly illustrates the matter. In addition to MATLAB and Scilab, there are other programming languages, like Fortran, Julia and Mathematica, in which arrays start at index 1.
A lot more could have been added to this discussion but, since I don't know whether or not these so-called surprises are indeed surprising to others, let us stop for the time being. The selection of surprising features is based on my personal experiences and preferences, so it is quite possible that the universally surprising feature Y of programming language X is missing from this list. But the most important takeaway from this discussion is that, while learning a new programming language, if you are really surprised by a particular feature, then that feature might be the very reason for learning the language. All the features present in your old programming language will make learning the new one easy, but the surprising new features will test you and might even give you a career boost. So, the next time you come across a surprising feature in a programming language, be ready; mastering it might be the break you need.

By: Deepu Benson
The author is a free software enthusiast whose area of interest is theoretical computer science. The open source tools of his choice include ns-2 and ns-3. He maintains a technical blog at www.computingforbeginners.blogspot.in.


Image Feature Processing in Deep Learning using Convolutional Neural Networks: An Overview
David H. Hubel and Torsten Wiesel laid the foundations for building the CNN
(convolutional neural network) model after their studies and experiments on the human
visual cortex. Since then, CNN models have been built with near human accuracy. This
article explores image processing with reference to the handling of image features in
CNN. It covers the building blocks of the convolution layer, the kernel, feature maps and
how the activations are calculated in the convolution layer. It also provides insights into
various types of activation feature maps and how these can be used to debug the CNN
model to reduce computations and the size of the model.


“More than 50 per cent of the cortex, the surface of the brain, is devoted to processing visual information,” said William G. Allyn, a professor of medical optics.

The human visual cortex is made up of approximately 140 million neurons. In addition to other tasks, it is the key organ responsible for detecting, segmenting and integrating the visual data received by the retina. This data is then sent to other regions in the brain, where it is processed and analysed to perform object recognition and detection, and is subsequently retained as applicable.

In the early 1950s, David H. Hubel and Torsten Wiesel conducted a series of experiments on cats, and won two Nobel prizes for their amazing findings on the structure of the visual cortex. They revolutionised our understanding of the human visual cortex and paved the way for much further research on this topic.

Their studies showed that in the visual cortex, many neurons have small local receptive fields. A local receptive field means that these neurons react to visual stimuli only if the stimuli are located in a specific region of the visual field of perception. In other words, these neurons are fired or activated only if the visual stimulus is located at a particular place on the retina or visual field. They found that some neurons have larger receptive fields, and are fired, activated or react to complex features in the visual field, which in a way are a combination of the low-level features to which other neurons react. The low-level features mean horizontal and vertical lines, edges, corners and different angles of lines, while the high-level features are the simple or complex combinations of these low-level features. Figure 1 illustrates the local receptive fields of different cells in the human visual cortex.

Hubel and Wiesel also discovered that there are three types of cells in the visual cortex, and each has distinct characteristics based on the features they learn or react to. These are: simple, complex and hyper-complex. Simple cells are responsible for learning basic features like lines and colours; complex cells for learning features like edges, corners and contours; while hyper-complex cells are responsible for learning combinations of the features learnt by simple and complex cells. This powerful insight paved the way to understanding how perception is built – the receptive fields of multiple neurons may overlap, and together they tile the whole field. These neurons work together, with a few neurons learning features on their own and others learning by combining the features learnt by other neurons, finally integrating to detect all forms of complex features in any region of the visual field.

For a video of the Hubel and Wiesel experiment, visit https://www.youtube.com/watch?v=y_l4kQ5wjiw.

Figure 1: Illustration of local receptive fields and cells in the human visual cortex (simple cells, complex cells, hypercomplex cells; local receptive fields; the final object seen)

CNN architecture
CNNs are the most preferred deep learning models for image classification or image related problems. Of late, CNNs have also been used to handle problems in fields as diverse as natural language processing, voice recognition, action recognition and document analysis. The most important characteristic of CNNs is that they automatically learn the important features without any guided supervision. Given the images of two classes, for example, dogs and cats, a CNN will be able to learn the distinctive features of dogs and cats by itself.

Compared to other deep learning models for image related problems, CNNs are computationally efficient, which makes them the preferred option, since they can be configured to run on many devices. Ever since the arrival of AlexNet in 2012, researchers have successfully built new CNN architectures that have very good accuracy, with powerful and efficient models. Popular CNN model architectures include VGG-16, the Inception models, ResNet-50, Xception and MobileNet.

In general, all CNN models have architectures that are built using the following building blocks. Figure 2 depicts the basic architecture of a CNN model.
• Input layer
• Convolution layers
• Activation functions
• Pooling layers
• Dropout layers
• Fully connected layers
• Softmax layer
In the CNN architecture, the building blocks involving the convolution layer, activation function, pooling layer and dropout layer are repeated, before ending with one or more fully connected layers and the softmax layer, as the sketch following this list illustrates.
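To make this stacking concrete, here is a minimal sketch in Keras. This is an illustrative assumption on my part: the article does not prescribe a framework at this point, and the layer counts and sizes below are arbitrary rather than taken from the text.

from tensorflow.keras import layers, models

# Building blocks in the order listed above: convolution + activation,
# pooling and dropout repeated, then fully connected layers and softmax.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),                   # pooling layer
    layers.Dropout(0.25),                          # dropout layer
    layers.Conv2D(64, (3, 3), activation='relu'),  # repeated block
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),          # fully connected layer
    layers.Dense(2, activation='softmax'),         # softmax layer (two classes)
])
model.summary()

Running model.summary() prints the blocks in order, which makes the repeated convolution, pooling and dropout pattern easy to see.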




Figure 2: Illustration of CNN model architecture (input image → convolution layer → activation function → pooling layer → dropout layer → fully connected layers → softmax layer)

Figure 3: Convolution operation (a kernel applied to the input image produces the feature map)

Convolution layer
The convolution layer is the most important building block of the CNN architecture. The neurons in the first convolution layer are not connected to all the pixels in the input image, but are connected to the pixels of their local receptive fields, as shown in Figure 2. Mathematically, the convolution layer merges two sets of information to produce a feature map. It applies the convolution filter or kernel to the input and produces the feature map. Let us understand this better with an example. Let us suppose that the input image size is 6 x 6 and the kernel size is 3 x 3. To draw a parallel to the visual cortex, the visual field size is 6 x 6 and the local receptive field size is 3 x 3. To generate the feature map, the kernel is made to slide over each pixel location of the input image. Hence, a single value is generated for each convolution operation (the summation of the multiplications of the pixel values with the corresponding kernel values in the same locations). The area on the input image that is used during each convolution operation is the receptive field, the size of which is equivalent to the kernel size. Since the kernel is made to slide over the entire input image, a feature map of the same size as the input image is produced after the convolution operation is applied to the entire image. The same procedure is followed for 3D images or multi-colour channel input images; an independent feature map is produced for each channel in that case. Since there is one feature map for each convolution operation using a kernel, for a normal 2D image in grey scale, the number of feature maps produced in each layer is equal to the number of kernels used at that layer. For 3D images, it is three times the number of kernels, due to the three colour channels.

During a convolution operation, for each row x and column y in the feature map, the value at position (x, y) will be the total of the multiplications of the kernel with the values in rows x to x + kh – 1 and columns y to y + kw – 1, where kh and kw are the height and width of the receptive field. Hence, to produce a feature map equal to the input image size, suitable padding has to be done around the input image. In general, zero padding is applied to the input image.

It is also possible to slide the kernel over the input image by spacing out the pixel locations. As described above, the kernel is normally made to slide over each pixel location of the input image to generate the feature map, so the distance between each successive receptive field is one (based on pixel locations). This spacing out, or the distance between two successive receptive fields, is called the stride. As the kernel is made to slide over rows and columns, the stride can be different for sliding across rows (vertical direction) and columns (horizontal direction). With a stride value of more than one, it is possible to produce a feature map that is smaller than the input size. With strides, the neuron value in the feature map located at (x, y) is produced from rows x * sh to x * sh + kh – 1 and columns y * sw to y * sw + kw – 1, where sh and sw are the vertical and horizontal strides.

Each location (x, y) in the feature map will be a neuron generated using the specific kernel in that convolution layer. The neuron’s value at a location (x, y) will hence be the value obtained after the convolution operation is applied to rows x to x + kh – 1 and columns y to y + kw – 1 of the input, where kh and kw are the height and width of the receptive field of the input.
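The sliding-window arithmetic described above can be verified with a small NumPy sketch of my own (it is not code from the article); it computes one feature map for a single-channel input at a given stride:

import numpy as np

def feature_map(image, kernel, sh=1, sw=1):
    # Convolve a 2D image with a 2D kernel at vertical/horizontal strides sh, sw.
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh = (ih - kh) // sh + 1   # output height
    ow = (iw - kw) // sw + 1   # output width
    out = np.zeros((oh, ow))
    for x in range(oh):
        for y in range(ow):
            # Receptive field: rows x*sh .. x*sh+kh-1, columns y*sw .. y*sw+kw-1.
            patch = image[x*sh : x*sh + kh, y*sw : y*sw + kw]
            out[x, y] = np.sum(patch * kernel)  # sum of element-wise products
    return out

image = np.arange(36.0).reshape(6, 6)          # the 6 x 6 input of the example
kernel = np.ones((3, 3))                       # a 3 x 3 kernel
print(feature_map(image, kernel).shape)        # (4, 4) without padding
print(feature_map(image, kernel, 2, 2).shape)  # (2, 2) with a stride of 2

For the 6 x 6 input and 3 x 3 kernel of the example, this yields a 4 x 4 map without padding; zero padding of width one around the input would restore a 6 x 6 map, the equal-sized case described above.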

The CNN model is built to learn the equation WX + b, where W is the weights of the neurons, X is the activation values of the neurons, and b is the bias term. All neurons in a feature map share the same weights (kernel) and bias term, but have different activation values that correspond to the values of the receptive field corresponding to each neuron in the feature map. This is one of the reasons why there are fewer computations in a CNN compared to a DNN (deep neural network). A feature learnt by a neuron at one particular location will allow it to detect the same feature anywhere else in the input, regardless of the location. This is another important property of the CNN as compared to a DNN, which is not translation invariant.

Next, consider the convolution layers after the first convolution layer. The neuron at location (x, y) in the feature map k of a convolutional layer l gets its input from all the neurons in the previous layer l – 1 that are located in rows x * sh to x * sh + kh – 1 and columns y * sw to y * sw + kw – 1, and from all the feature maps in the layer l – 1. These are the same inputs for all the neurons located in the same row x and column y, in all the feature maps. This explains how the features learnt in one layer get integrated to learn combined features or complex patterns in successive layers. It also explains how a CNN learns distinct features at the initial layers, and convolves to integrate basic features into complex distinct features or patterns in the intermediate convolution layers, before finally learning the objects present in the input in the last convolution layers. As discussed earlier, any feature, pattern or object learnt by the CNN at any layer will allow it to detect this anywhere else in the input, independent of the location. So one needs to be extra careful when using a CNN to detect combined objects that are similar in the input. For example, in the case of face detection, if one is interested in studying the features of the eye and eyebrows together as a combination, then a normal CNN would learn the features of the eyebrow and the eye as two different features, and it may not help in learning the minor differences of the combination of features. In that case, the convolution layers have to be tweaked to learn those features appropriately.

Activation feature maps in CNN
Although activations are generated for each neuron, they may not all be propagated to the final prediction. Let us now get the equations to determine the output of a neuron at a convolutional layer. For a grey scale image (single channel), the output of the neuron in the first convolution layer located at (x, y) of the feature map k is given by the following:

z(x, y) = Σ(m = 1 to kh) Σ(n = 1 to kw) a(i, j) * w(m, n) + b(k), with i = x * sh + m – 1 and j = y * sw + n – 1

…where a(i, j) are the pixel values of the input image, kh and kw are the height and width of the kernel, and sh and sw are the vertical and horizontal strides, respectively.

The output of the neuron in any convolutional layer l (where l is not the first convolutional layer), for the feature map k, is given by the following:

z(x, y, k) = Σ(m = 1 to kh) Σ(n = 1 to kw) Σ(fm = 1 to k(l–1)) a(i, j, fm) * w(m, n, fm, k) + b(k), with i = x * sh + m – 1 and j = y * sw + n – 1

…where:
• z(x, y, k) is the output of the neuron located at row x, column y in the feature map k of the convolutional layer l;
• k(l–1) is the number of feature maps in the layer l – 1 (the previous layer);
• a(i, j, fm) is the activation output of the neuron located in layer l – 1, at row i, column j, in feature map fm;
• b(k) is the bias term for feature map k (in layer l);
• w(m, n, fm, k) is the weight (kernel value) connecting any neuron in the feature map k of the layer l to its input located at row m, column n, relative to the neuron’s receptive field, in feature map fm;
• kh and kw are the height and width of the kernel, and sh and sw are the vertical and horizontal strides, respectively.

The feature maps produced in the convolution layer are input to the next layer (the pooling layer) after applying the activation function. The feature map generated after applying the activation function is called the activation feature map. The visualisation of the activation feature map for a filter in a particular convolution layer depicts the regions of the input that are activated in the feature map after applying the filter. In each activation feature map, the neurons have a range of activation values, with the maximum value representing the most activated neuron, the minimum value representing the least activated neuron, and the value zero representing a neuron that is not activated.
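One practical way to obtain such activation feature maps is to read out an intermediate layer of a trained model. The sketch below assumes a trained tf.keras model object and a convolution layer named ‘conv5’; both are hypothetical stand-ins, since the article does not prescribe a framework:

import numpy as np
import tensorflow as tf

# Build a helper model that outputs the feature maps of one convolution layer.
activation_model = tf.keras.Model(
    inputs=model.input,
    outputs=model.get_layer('conv5').output,
)

# `image` is a single preprocessed input; add a batch dimension before predicting.
maps = activation_model.predict(image[np.newaxis, ...])[0]
print(maps.shape)                              # (height, width, number_of_filters)
print(maps[..., 0].max(), maps[..., 0].min())  # most/least activated neuron of filter 0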


Let us consider an illustration. The AlexNet model is used for transfer learning to build a binary classifier that classifies a given flight image as either a passenger flight or a fighter flight. AlexNet has five convolutional layers — the first four convolutional layers are frozen, and the model is trained to obtain a very good accuracy of around 98 per cent. Figures 4 and 5 show the visualisations of the activation feature maps of the fifth convolutional layer, which has 256 filters and hence 256 activation feature maps. Figures 4 and 5 show the following details:
• The activation feature maps for the passenger flight image and the fighter flight image for the newconv5 convolution layer (the fifth convolutional layer of the AlexNet model, renamed newconv5 since it has been retrained during transfer learning).
• The passenger and fighter flight test images (top left corner in the figure) for which the visualisations are shown.
• The activation feature maps, arranged as a 16 x 16 matrix, with index 1 given to the activation feature map at row 1, column 1; index 17 to the activation feature map at row 2, column 1; and so on.
• The higher resolution image of the selected activation feature map (green coloured box), shown just below the test image.

Figure 4: Activation feature maps of a passenger flight

Figure 5: Activation feature maps of a fighter flight

In Figure 4, check the 137th activation feature map (the green coloured box), which is shown as a high resolution image below the test image. The white coloured pixels in this image show only those features that have been learnt by this filter or, in other words, these are the activated neurons of the feature map, with a value greater than zero. The remaining pixel values are zero, and hence black in colour, so the neurons corresponding to those pixels are not activated. From a comparison between the activation feature map and the original image, it can be seen that the features in the activation feature map correspond to the multiple windows in the input image. This can also be justified by checking the top nine highly activated images shown at the top right corner of the figure; these show objects that have multiple window patterns or features. Refer to http://yosinski.com/deepvis for more details on the visualisation of the CNN model. A similar analogy is applicable to the neurons in all the feature maps shown above.

Now, among the 16 x 16 activation feature maps of the convolution layer shown in Figures 4 and 5, it can be seen that many of these maps are not showing any activations. None of the neurons in these feature maps are activated.
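Spotting such inactive maps can be automated. As a rough sketch of my own (reusing the hypothetical maps array from the earlier sketch), a feature map is inactive when none of its neurons has an activation greater than zero:

import numpy as np

# Indices of the filters whose activation feature maps are inactive.
inactive = [f for f in range(maps.shape[-1])
            if not np.any(maps[..., f] > 0)]
print(len(inactive), 'of', maps.shape[-1], 'feature maps are inactive')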


CNN theory states that each filter represents a distinct feature (or features) at each layer and, in these figures, each of the 256 filters represents features of the passenger or the fighter flight that are learnt. If there are no activations, this means that the filter does not learn any feature. Since the model is a binary classifier for classifying passenger and fighter flights, each of the filters in the convolution layer will be learning either passenger or fighter features. Hence these feature maps can be classified into four types – class 0, class 1, mixed and inactive.
• Class 0: In these feature maps, the features learnt belong to class 0 only.
• Class 1: In these feature maps, the features learnt belong to class 1 only.
• Mixed: In these feature maps, the features learnt belong to both class 0 and class 1.
• Inactive: These feature maps do not learn any features of class 0 or class 1.
The types of activation feature maps in each layer can be easily visualised using the Deepvis tool specified above. Many a time, complex standard models are used to build a simple binary classifier as above, or a classifier for up to ten classes, using transfer learning. In those scenarios, as seen above, many filters may not be required, since they will not be learning any features at all, and fewer filters will be sufficient to learn the features of all the classes. In these cases, if the requirement is to retain the original model architecture for some reason, then the inactive filters can be removed to reduce the computations and the size of the model.

The types of filters will also help to debug when a test image is mis-classified. The types of filters at each layer can be studied for both the classes during training and during evaluation, and the filters activated can be compared between the two. When a correct prediction is made for a test image, the count of each type of filter for its class will approximately match the average count of each type of filter for that class during training. This means that if a test image is correctly classified as the passenger flight class, then its count of the types of filters (class 0, class 1, mixed and inactive) must be approximately equal to the average count of the types of filters of the passenger flight determined during training, and the same will be true for an image correctly classified as the fighter flight class.

These findings can help to debug mis-classified images. For example, if the test image is mis-classified as the fighter class, then the count of the types of filters will not match the count of the types of filters for the fighter class during training, which is quite obvious. The potential reason for the mis-classification could be any of the following:
• The model has evaluated the features of both the fighter and passenger flight classes, and found the fighter features dominating the passenger features during classification; hence the image is mis-classified.
• The model has evaluated and found more of the fighter features matching during classification, and hence the image is mis-classified.
• The model has evaluated and found comparable fighter and passenger features matching during classification and, due to minor differences in the features learnt, the image has been mis-classified.
To understand the above reasoning, the activation feature maps of each convolution layer need to be analysed, and the average count of the types of filters at each layer may have to be considered to come to a conclusion. There are approaches to handle this, which will be discussed in the next article.

This article provides an overview of how the visual cortex perceives objects. It explains the parallels between CNNs and the visual cortex, and the different building blocks of the CNN architecture. It also briefly covers each building block and how the convolution layer processes the image features from the input. It then highlights the importance of activation during image classification. Knowing the details about the activations of each filter in each layer allows model developers to understand the importance of each filter, and to update their CNN architectures or build new CNNs with reduced parameters and computations. The article also gives details on the types of activation feature maps, and how they can be used for debugging CNN models and detecting potential mis-classification. Further exploration and research on activation feature maps for types of models other than CNNs will provide many insights, especially for object detection and recognition models. The same can be extended to natural language processing and voice recognition models, the activation feature maps for which would be a bit more complex to analyse.

References
[1] https://towardsdatascience.com/applied-deep-learning-part-4-convolutional-neural-networks-584bc134c1e2
[2] http://yosinski.com/deepvis
[3] https://towardsdatascience.com/wtf-is-image-classification-8e78a8235acb
[4] https://www.tensorflow.org/

By: B.N. Chandrashekar, Dr Manjunath Ramachandra and Shashidhar Soppin
B.N. Chandrashekar is a principal consultant and researcher with hands-on programming experience. He has around two decades of industry experience. With a Masters from IISc, he specialises in embedded, retail, e-commerce and AI driven technology.
Dr Manjunath Ramachandra has over two decades of work experience in the overlapping verticals of artificial intelligence, computer vision, healthcare and wireless/mobile technologies. He figures in the list of the ‘2000 outstanding intellectuals of the 21st century’, brought out by the International Biographical Center, UK.
Shashidhar Soppin, DMTS (distinguished member of the technical staff) senior member of Wipro, has 19+ years of experience in the IT industry. He specialises in virtualisation, Docker, the cloud, AI, ML, deep learning and OpenStack.



Using spaCy for Natural Language Processing and Visualisation
spaCy is an open source Python library that lets you break down textual data into machine
friendly tokens. It has a wide array of tools that can be used for cleaning, processing and
visualising text, which helps in natural language processing.

Natural language processing (NLP) is an important precursor to machine learning (ML) when textual data is involved. Textual data is largely unstructured, and requires cleaning, processing and tokenising before a machine learning model can be applied to it. Python has a variety of NLP libraries available for free, such as NLTK, TextBlob and Gensim. However, spaCy stands out when it comes to its speed of processing text and applying beautiful visualisations to aid in understanding the structure of text. spaCy is written in Cython; hence the upper case ‘C’ in its name.

This article discusses some key features of the spaCy library in action. By the end of this article, you will have hands-on experience in using spaCy to apply all the basic level text processing techniques in NLP. If you are looking for a lucrative career in machine learning and NLP, I would highly recommend adding spaCy to your roster.

Installing spaCy
As with most of the machine learning libraries in Python, installing spaCy requires a simple pip install command. While it is recommended that you use a virtual environment when experimenting with a new library, for the sake of simplicity, I am going to install spaCy without a virtual environment. To install it, open a terminal and execute the following code:

pip install spacy
python -m spacy download en_core_web_sm

spaCy has different models that you can use. For English, the standard model is en_core_web_sm. For the coding part, you will have to select a Python IDE. I personally like to use Jupyter Notebook, but you can use any IDE that you are comfortable with. Even the built-in Python IDLE works fine for the code covered in this article.

Once you are on your IDE, test if the installation has been successful by running the following code. If you get no errors, then you are good to go.

import spacy
myspacy = spacy.load('en_core_web_sm')

The myspacy object that we have created is an instance of the language model en_core_web_sm. We will use this instance throughout the article for performing NLP on text.

Reading and tokenising text
Let’s start off with the basics. First create some sample text and then convert it into an object that can be understood by spaCy. We will then apply tokenisation to the text. Tokenisation is an essential characteristic of NLP, as it helps us in breaking down a piece of text into separate units.



This is very important for applying functions to the text, such as NER and POS tagging.

#reading and tokenizing text
some_text = "This is some example text that I will be using to demonstrate the features of spacy."
read_text = myspacy(some_text)
print([token.text for token in read_text])

The following will be the output of the code:

['This', 'is', 'some', 'example', 'text', 'that', 'I', 'will', 'be', 'using', 'to', 'demonstrate', 'the', 'features', 'of', 'spacy', '.']

We can also read text from a file as follows:

#reading and tokenizing text from a file
file_name = 'sample_text.txt'
sample_file = open(file_name).read()
read_text = myspacy(sample_file)
print([token.text for token in read_text])

Sentence detection
A key feature of NLP libraries is detecting sentences. By finding the beginning and end of a sentence, you can break down text into linguistically meaningful units, which can be very important for applying machine learning models. It also helps you in applying parts of speech tagging and named entity recognition. spaCy has a sents property that can be used for sentence extraction.

#sentence detection
sample_passage = "This is an example of a passage. A passage contains many sentences. Sentences are denoted using the dot sign. It is important to detect sentences in nlp."
read_text = myspacy(sample_passage)
sentences = list(read_text.sents)
for sentence in sentences:
    print(sentence)

As you can see in the following output, we have successfully broken down the sample_passage into discernible sentences.

This is an example of a passage.
A passage contains many sentences.
Sentences are denoted using the dot sign.
It is important to detect sentences in nlp.

Removing stop words
An important function of NLP is to remove stop words from the text. Stop words are the most commonly repeated words in a language. In English, words such as ‘are’, ‘they’, ‘and’, ‘is’, ‘the’, etc, are some of the common stop words. You cannot form sentences that make semantic sense without the usage of stop words. However, when it comes to machine learning, it is important to remove stop words, as they tend to distort the word frequency count, thus affecting the accuracy of the model. spaCy has a list of stop words in its library for English. To be precise, there are 326 stop words in English. You can remove them from the text using the is_stop property of spaCy.

#removing stopwords
print([token.text for token in read_text if not token.is_stop])

After removing the stop words, the following will be the output of our sample text:

['example', 'passage', '.', 'passage', 'contains', 'sentences', '.', 'Sentences', 'denoted', 'dot', 'sign', '.', 'important', 'detect', 'sentences', 'nlp', '.']

Lemmatisation of text
Lemmatisation is the process of reducing the inflected forms of a word such that we are left with the root of the word. For example, ‘characterisation’, ‘characteristic’ and ‘characterise’ are all inflected forms of the word ‘character’. Here, ‘character’ is the lemma or the root word. Lemmatisation is essential for normalising text. We use the lemma_ property in spaCy to lemmatise text.

#lemmatisation of text
for word in read_text:
    print(word, word.lemma_)

The following is the lemmatised output of the sample text. We output each word along with its lemmatised form. To preserve page space, I am sharing the output of a single sentence from our sample text.

Sentences sentence
are be
denoted denote
using use
the the
dot dot
sign sign
. .

Finding word frequency
The frequency at which each word occurs in a text can be vital information when applying a machine learning model. It helps us to find the main topic of discussion in a piece of text, and helps search engines provide users with relevant information. To find the frequency of words in our sample text, we will import the Counter method from the collections module. Note that we count the token texts rather than the token objects themselves; every token object is unique, so counting the objects directly would report a frequency of 1 for every token.

#finding word frequency
from collections import Counter
word_frequency = Counter([token.text for token in read_text])
print(word_frequency)

The following is the output for the frequency of words in our sample text:

Counter({'.': 4, 'is': 2, 'passage': 2, 'sentences': 2, 'This': 1, 'an': 1, 'example': 1, 'of': 1, 'a': 1, 'A': 1, 'contains': 1, 'many': 1, 'Sentences': 1, 'are': 1, 'denoted': 1, 'using': 1, 'the': 1, 'dot': 1, 'sign': 1, 'It': 1, 'important': 1, 'to': 1, 'detect': 1, 'in': 1, 'nlp': 1})


POS tagging
Parts of speech (POS) tagging helps us in breaking down a sentence and understanding what role each word in the sentence plays. There are eight parts of speech in the English language, i.e., noun, pronoun, adjective, verb, adverb, preposition, conjunction and interjection. In POS tagging, we apply a tag to each word in a sentence that defines what part of speech that word represents in the context of the given sentence. In spaCy, we will make use of two properties — tag, which gives the fine-grained part of speech, and pos, which gives the coarse-grained part of speech. The spacy.explain() method provides a description of a given tag.

#POS tagging
for word in read_text:
    print(word, word.tag_, word.pos_, spacy.explain(word.pos_))

The following is the output of the POS tagging of the sample text. In order to preserve page space, I am providing only a portion of the output.

This DT DET determiner
is VBZ AUX auxiliary
an DT DET determiner
example NN NOUN noun
of IN ADP adposition
a DT DET determiner
passage NN NOUN noun

Visualising POS tagging using displaCy
spaCy comes with a built-in visualiser called displaCy, using which we can apply and visualise parts of speech (POS) tagging and named entity recognition (NER). To visualise POS tagging for a sample text, run the following code:

#using displacy for POS tag visualisation
from spacy import displacy
sample_text = "This is an example sentence."
read_text = myspacy(sample_text)
displacy.serve(read_text, style="dep")

Figure 1 shows the output of the code, which you can view by using your browser on the localhost link provided by displaCy.

Figure 1: POS tagging visualisation using displaCy (‘This is an example sentence.’ tagged DET, AUX, DET, NOUN, NOUN, with nsubj, det, amod and attr dependency arcs)

Visualising NER using displaCy
Named entity recognition (NER) is the process of identifying the named entities in a piece of text and tagging them with a pre-defined category, such as the name of a person, a location, an organisation, a percentage, a time, etc. spaCy has the property ents, which we can use to apply NER on text. We will also use displaCy to visualise the NER output.

#using displacy for NER visualisation
sample_text = "Every year, Hyderabad hosts the biggest exhibition in India. It has fun rides and food stalls. At least a million people visit it every year."
read_text = myspacy(sample_text)
displacy.serve(read_text, style="ent")

Figure 2 shows the output of the NER visualisation.

Figure 2: NER visualisation using displaCy (‘Every year’ tagged as DATE, ‘India’ as GPE, ‘At least a million’ as CARDINAL)

In this article, we have discussed the basic functionalities most commonly used for NLP with spaCy and its built-in visualiser displaCy. There are many more advanced features in this library that are absolutely worth exploring and mastering if you want a solid foundation in NLP. However, those features are beyond the scope of this article; hence, this is a good stopping point. I have shared a few links as references to help you get a deeper understanding of the spaCy library. I recommend you go through them.

References
[1] https://spacy.io/models
[2] https://spacy.io/usage

By: Mir H.S. Quadri
The author is a scholar and researcher in the fields of artificial intelligence and machine learning. He shares a deep love for Web development and has worked on multiple projects using a wide array of frameworks. He is also a FOSS enthusiast and actively contributes to several open source projects. You can find his works on codelatte.site.



SPA JS: Building Cross-Platform SPAs with Less Code
SPA JS is a simplified JavaScript framework for developing single page applications (SPAs). It can be used to create cross-platform applications: you can develop a Web app, and the same code can also be used to build desktop apps. If the code is bundled with the Electron software, the result is a desktop app, and if you put the code under a Cordova stack, you get Android or iOS applications. So you can use the same SPA JS code to develop applications for the desktop as well as for mobile devices.

The primary aim of SPA JS is to keep development simple and eradicate repetitions. It also follows the crucial principle of YAGNI (You Ain’t Gonna Need It), which states that you should “Always implement things only when you need them, but never when you just think that you may need them.” For example, in open source software, contributors may develop different modules by assuming that you may need them. However, these modules may not be useful at all.

Why SPA JS?
Many of you may be familiar with Angular, React and the Vue.js framework, but very few may be aware of frameworks like Knockout, Ember and Backbone. It’s not that Angular, React, etc, are really game changers — we already had the concept of building a single page application using these frameworks. So, let us examine the problems with Angular, React or Vue.js, as well as other programming options.

Problem 1: The question is, why do we have so many different frameworks? This is because developers try to build some applications on one of these frameworks, but the latter do not solve their particular problem in the way they want. So developers try to create their own frameworks that solve their individual problems, and this leads to a lot of frameworks.

Problem 2: Earlier, we had frontend designers and backend developers. The former designed the page, while the latter used programming languages like C, C++, Java and .NET to build a page and call APIs. JavaScript was then introduced on the frontend for micro-interactions, but frontend designers faced difficulties designing pages with it. To address this situation, backend developers were brought to the frontend. But unfortunately, backend developers are generally not good at design. So companies and developers are facing the issue of how to create a perfect bridge between the frontend and the backend. Recently, the term ‘full-stack developer’ has emerged, describing those who know all the backend development, including databases.



Figure 1: SPA folder structure

However, the full-stack developer doesn’t know all the development related to every platform or framework, and that’s the current problem being faced.

Problem 3: Earlier, developers used to program in BASIC or FORTRAN. For them, the Web was just CGI or Perl. They used to dump all the code in a particular Perl file, including HTML, Perl scripts, etc, but it used to be a nightmare to manage this. Then developers started using Java, but even Java servlets were painful — they wrote HTML code under double quotes, which was a horrendous task for HTML developers. So they needed a separation, like the MVC framework. But after the launch of JSP, which is similar to HTML with the integration of simple JS, developers are now using JSP with a lot of JavaScript and less HTML, leading to complexity of the code. The problem is that developers seem to be mixing JS, HTML and even CSS, making things even more complex.

The first law of software quality is e = mc² (errors = (more code)²). This implies that the more the code, the more the errors. Just take the example of Angular or React. The seed project that we download, with one package.json and one or two files like index.html, does not allow us to start our project. We now need to do an npm install, which downloads approximately 120MB of code. And all this needs to be done even if you need to just demonstrate ‘Hello World’. We think that we are writing less code but, unfortunately, we are not — it is more likely that we are writing less code because someone else has already written it, and we have installed it along with our seed project.

Steve Jobs explained the problem like this: “The way you get programmer productivity is not by increasing the lines of code per programmer per day. That doesn’t work. The way you get programmer productivity is by eliminating the lines of code that you have to write. The goal here is to eliminate 80 per cent of the code that you have to write for your app. That’s the goal. I’ve seen a lot of demos that try to take it all the way back into the algorithmic part of the code base, and none of them have ever been any good.” So, the less the code, the fewer the errors, and the higher the productivity.

Let us get back to SPA JS
To get started, visit https://spa.js.org/. The site has all that is needed to learn and use SPA JS. Also, it is built on a pure MVC framework that provides flexibility to developers.

The structure is well organised, which helps to reduce code complexity. Even when developers try to mix up the code, SPA JS will not allow it. It has a clear view that is separated into three layers — one layer with pure HTML content, the second with all the controls, and the data layer (which is entirely API driven) that helps to build the perfect bridge. One of the significant problems with Angular.js is that the features keep updating periodically, and it is challenging to teach developers these features and frameworks. SPA JS has tried to address this issue. The site also has all the guides, so developers don’t need to Google for a solution.

Disclaimer: The views presented in the article are personal and do not represent the views of the organisation the author works for.

By: Kumar P.
The author is a consulting engineer and chief frontend architect at Unisys. This article is based on the talk he presented during Open Source India 2019.



A Headless CMS: Delivering Pure Content in the Age of Mobile-first Internet

A headless CMS (content
management system) is a back-
end-only version built from the
ground up as a content repository
that makes content accessible via
a RESTful API for display on any
device. It is designated ‘headless’
because it does not have a
front-end. A headless CMS is
focused on storing and delivering
structured content.

What started on October 29, 1969, with ARPANET (under the US Department of Defence) delivering a message from one computer to another, became the Internet as we know it today. It is the hub of information that’s available and accessible around the globe to everyone. About 60 per cent of the world’s population has access to the Internet, and there are innumerable apps, software, distributed systems and smart devices that use it — enabling what was once in the realm of sci-fi movies and stories. The Internet today is one of the most profitable mediums to make money. From small time creators to large Internet companies, everyone depends on content to generate revenue. This is coupled with analytics to understand what people like, individually and as part of communities, so that the content can be created to suit their tastes and moods.

The need for CMSs
Much of what you experience as the Internet is powered by the information that is stored in the back-end systems of the apps that you use today. Internet companies, schools, universities and even governments store information about you to understand your behaviour, share content with you and to function in meaningful ways.

When the World Wide Web became a phenomenon, it allowed people to share information without boundaries. Anyone with a basic understanding of HTML could create Web pages and host them on a website. It was just Web pages initially and, when things caught on, we needed something better to create, store and present the content.

This gave birth to content management systems (CMSs), which revolutionised the way we create information, store it and present it to users. CMSs enabled people to collaborate to create content and share it with others, in an instant. Soon, people didn’t need to understand HTML in order to create and share content, as the CMS evolved to accommodate complex content creation requirements and provided the building blocks to enable collaboration, document management and storage.

As content became king, and with more users and complex content, there was the need to ensure that users could access the content whenever they wanted and from wherever they were. Content delivery networks (CDNs) evolved to meet this need.



The CDNs try to cache the static content as close as possible to the users, so that the content servers don’t have to serve the information that doesn’t change much every time there is a request for it.

Content management systems
As technology moved forward, computing moved from large mainframes to smaller PCs. And the devices just kept getting smaller — to mobile phones, and smart devices such as smart watches and digital assistants.

A traditional CMS includes two major components – a content storage and creation/management system that allows a content creator to create, manage and modify the content and content types; and a content delivery system that builds the content and delivers it to the users when they request it. The former provides the content storage and management, and the latter is the presentation layer that fetches the raw content, and applies the styles and the template to compile the final layout and deliver it to users. It is also possible to add more functionality by way of plugins or addons that complement the content delivery with more features, such as advertising.

This worked when PCs were the primary devices used to consume content. Think of a blog, video blog or even a news site – a traditional CMS ensures that the content writers can push content, collaborate with each other and then the content gets delivered to users in the right styles and formatting, without the creators doing anything about it.

Pure content management
However, content consumption moved from large screens to smaller ones — think mobile phones and smart watches — and even smart devices and digital assistants that do not have a screen. Traditional CMSs now needed to also provide a way to deliver content on these new devices and form factors. This was tackled by adding support for responsive themes that can adapt to any screen size and layout. For devices without a screen, an API layer was added on top of the content storage engine. The apps that run on these devices can call these APIs and deliver the content as needed.

This works, but it’s only a matter of ‘when’ rather than ‘if’ computing and consumption are due for a change – just like smartphones arrived a little over a decade ago and changed our lives. We are not sure what this change will be; if you follow the innovations in hardware and technology, you are bound to have heard of things like transparent displays, foldable phones and even wall-sized panels.

That’s where a pure content management system comes into the picture – a system that can pave the way for any means of delivery on any kind of device with any size or type of display – even ones without a display! Open source developers (and others) have already done that, and these systems are called headless CMSs – named so since they do not have a dependency on, or a coupling with, a presentation layer found in the traditional CMSs. Since headless CMSs store content in a structured format and without any notion about how the content will be consumed or presented to the users, they are considered ‘pure’. Figure 1 illustrates the basic design of a headless CMS.

Figure 1: Basic design of a headless CMS (database → API → clients)

A number of popular headless CMSs are based on Jamstack (https://jamstack.org/), which stands for JavaScript, APIs and mark-up; any software that doesn’t have a tight coupling between the server and client qualifies. However, you will also find headless CMSs that use frameworks or languages other than JavaScript. Jamstack can be used on a variety of operating systems, Web servers, databases, back-ends or front-ends. It provides a higher degree of flexibility, extensibility and scale, as well as a great developer experience.

One thing is for sure — a headless CMS does not include a content-serving mechanism of any kind, and that leads us to the following question.

How do I deliver content?
The traditional systems deliver content with their content delivery component. As a content creator, you are limited by their capabilities in this area. Additional engineering is needed to accommodate any other forms of content delivery that you may need. This means that you need to spend time and resources to ensure all your users can get the content in the same consistent way.

A headless CMS is meant to be designed API-first, and it exposes APIs for you to consume and serve the content from any medium (a small sketch of such a call is shown below). Some of the traditional CMSs, like WordPress for example, also provide APIs to bridge this gap. But they are designed to be all-in-one solutions, so decoupling the parts you don’t need is not possible. You can choose to ignore them though.
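As an illustration of this API-first consumption, here is a small Python sketch. The endpoint, token and field names are hypothetical, since every headless CMS defines its own REST (or GraphQL) routes:

import requests

API_BASE = 'https://cms.example.com/api'       # hypothetical endpoint
headers = {'Authorization': 'Bearer <token>'}  # hypothetical auth token

response = requests.get(API_BASE + '/articles', headers=headers)
response.raise_for_status()

for article in response.json():
    # The CMS returns pure structured content; the presentation is up to us,
    # whether we render it on the Web, in a mobile app or through a chatbot.
    print(article['title'], '-', article['summary'])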


That’s a Catch-22 situation – you either end up wasting computing resources on components you don’t need to run, or you use them as best you can and fork your code to accommodate different consumption mediums.

With a headless system, you gain complete control over how the content stored in it will be used, and you can design a pipeline that doesn’t need to be refactored any time you want to support a new medium for your users, be it a smart watch or an AR system.

If you have run into the challenges mentioned with a traditional CMS, you are most likely using one of the workarounds mentioned as well. If not, you need to start thinking ahead about when you will run into them and how you will address them, starting today.

Leveraging a headless CMS
Headless CMSs can be used not only in places where a CMS is needed, but also in other scenarios where you need to store content in a structured and consistent format. Let’s explore this further!

Almost any scenario that you believe requires a CMS is a candidate for a headless CMS too. However, headless CMSs also have uses beyond their traditional content serving functions. If you look at the tech trends of the last couple of years, you might have realised that chatbots, smart devices and digital assistants are topping the charts. These interfaces are powered by the innovation in hardware, NUI (natural user interfaces) and, of course, the content that can be served in a variety of ways.

A good chatbot is powered by state-of-the-art NLP models and, optionally, a voice interface that allows it to understand what is being asked in natural language, using either text or voice inputs. Once the processing for this task is over, the bot makes itself useful by surfacing the relevant content. This is similar for the digital assistants — in their hardware as well as software avatars.

The content that you consume this way is usually small snippets of information that can be sourced from emails, calendars, third party services such as flight tracking and weather, etc. But other content, such as the witty responses that you get, is best served from a headless content repository. This isn’t only limited to jokes, etc, but includes meaningful content as well.

As an example, consider a food ordering or a shopping portal. These services take a large number of partners and sellers onto their platform, who list their services or products. A traditional approach is to store this content in a NoSQL DB and fetch the content as the UI is being rendered. On a high level, this approach requires that you build an API wrapper that does the following:
• Builds a UI to take onboard the partners, sellers, items and services
• Validates the content before it is saved in the database with the right structure
• Fetches the content when it is being served to the users
Add a chatbot or a digital assistant (or a skill) to the mix, and you can serve the content on those mediums too, if your APIs are designed efficiently. With a headless CMS, you are left with just the work needed for Point 1 listed above, as a headless CMS provides first class APIs to interact with the data and data types, so you don’t have to worry about database handling. You can get started with how you will integrate these APIs with your application surfaces, be it a mobile or Web app, a chatbot or a skill for a digital assistant.

Strapi (https://strapi.io/) is one of the headless CMSs that I am working with, and it includes a UI to add content, as well as user roles and permissions to control access to the content. Ghost (https://ghost.org/) and Netlify CMS (https://www.netlifycms.org/) are others that I have tried out a little bit. You can look at other options at https://headlesscms.org/.

A headless CMS also makes an excellent case for Web apps or websites, even the ones that are a good mix of static and dynamic content. You can front-end your headless CMS with a static site generator such as Hugo or Gatsby, and achieve much greater performance while lowering your computing costs. Static site generators also support plugins to fetch data from dynamic sources such as APIs. This combination offers a number of benefits when used in conjunction with a CDN. Dynamic site acceleration solutions such as Azure Front Door or CloudFront add an additional layer of performance benefits that can further reduce the load on your content repository. A prime example of this scenario is GitHub Pages (https://pages.github.com/), which allows you to host a static website directly from your GitHub repo, free of cost. Figure 2 is a schematic for this scenario, and a small build-time sketch follows below.

Figure 2: A schematic for GitHub Pages (headless CMS and dynamic content → static site generators → CDN → clients)

Variations of this architecture can be useful in other scenarios where serving content is part of the solution.
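As a hedged sketch of that build pipeline, the following Python snippet (my own illustration, with a hypothetical endpoint and fields) pulls content out of a headless CMS and writes it as markdown files for a static site generator such as Hugo to compile:

import pathlib
import requests

# Build-time step: turn CMS content into files a static site generator can use.
articles = requests.get('https://cms.example.com/api/articles').json()

content_dir = pathlib.Path('content/posts')
content_dir.mkdir(parents=True, exist_ok=True)

for article in articles:
    post = content_dir / (article['slug'] + '.md')
    post.write_text('---\ntitle: ' + article['title'] + '\n---\n\n'
                    + article['body'] + '\n')

# The generator then builds the static site, and a CDN serves the output.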



you need to account for when deciding on whether to opt for a headless CMS.
A traditional CMS covers your Web and mobile bases out-of-the-box, with themes to customise the look and feel. To get started with content delivery on a headless CMS, developers will need to build a user experience (UX) for the content. This is also true for the content creators – a traditional CMS includes content creation and collaboration workflows, which are non-existent with the other choice. A pure content management system will also be a culture shock to content authors, since well-known concepts such as pages and other UI constructs do not exist at all, and a UX for authors will need to be built from scratch too.

So, is it a traditional CMS or a headless one?
Traditional CMSs have been around for a long time, with quite an ecosystem having been built around them. For example, the popular CMS WordPress is estimated to have over a 60 per cent share of the overall CMS market and a 35 per cent share of all websites functioning today. One of the reasons is the extensibility and the plugin marketplace this CMS offers, to customise it from a simple blog to a full-fledged e-commerce website. With responsive themes and plugins, you can convert WordPress installations to PWAs and push them into the App Store and Play Store to provide a more engaging experience for the users. Drupal is another contender in this domain.
Headless CMSs are a newer breed in the content management space, and they do tick all the boxes for building scalable and extensible content delivery pipelines that can support virtually any kind of device available today and tomorrow. The user experience, though, needs to be built before you can realise the value they bring to the table.
Moving to a headless CMS may not be a straightforward activity, and will require planning and effort to transition your workflows and users seamlessly. Before you start planning on this, take a step back and see if your existing CMS satisfies your current and future content delivery requirements. If you see yourself extending your CMS because it does not support certain delivery surfaces that either your business plans dictate you use or you believe your users are moving to, it is time to consider evaluating a headless CMS.

By: Ashish Sahu
The author is a cloud solutions architect working with Microsoft India. He helps ISVs and startups overcome technical challenges, adopt the latest technologies and take their solutions to the next level.
The Benefits of Using Terraform as a Tool for Infrastructure-as-Code (IaC)
DevOps tools have enabled software engineers to deploy application source code in a
better way. Including Infrastructure-as-Code (IaC) in the DevOps cycle helps the transition
to the cloud model, accommodating the shift from ‘static’ to ‘dynamic’ infrastructure.

The advent of DevOps helped minimise the dependence on sysadmins who used to set up infrastructure manually, while seated at some unknown corner of the office. Managing servers and services manually is not a very complicated task in a data centre. But when we move to the cloud, scale up and start working with many resources from multiple providers (AWS, GCP, Azure, etc), manually setting up and configuring to achieve on-demand capacity slows things down. Being repetitive, the manual process is most likely error-prone. And it cannot be managed and automated when working with resources from different service providers together.

Infrastructure-as-Code (IaC)
This approach is the management and configuration of infrastructure (virtual machines, databases, load balancers and connection topology) in a descriptive cloud operating model. Infrastructure can be maintained just like application source code, under the same version control. This lets engineers maintain, review, test, modify and reuse their infrastructure, and avoid direct dependence on the IT team. Systems can be deployed, managed and delivered fast, and automatically, through IaC. There are many tools available for IaC, such as CloudFormation (only for AWS), Terraform, Chef, Ansible and Puppet.

What is Terraform?
Terraform is an open source provisioning tool from HashiCorp (more can be read at http://terraform.io/), written in the Go language. It is used for building, changing and versioning infrastructure, safely and efficiently. Provisioning tools are responsible for the creation of servers and associated services, rather than for configuration management (the installation and management of software) on existing servers. Terraform acts as a provisioner, and focuses on the higher abstraction level of setting up the servers and associated services.
The infrastructure Terraform can manage includes low level components such as compute instances, storage,
and networking, as well as high level components such as DNS entries, SaaS features, etc. It leaves configuration management (CM) to tools such as Chef that do the job better. It lays the foundation for automation in infrastructure (both cloud and on-premise) using IaC. Its governance policy makes the cloud operating model system-compliant, which otherwise is only known internally to the IT team.
Terraform is cloud-agnostic and uses a high level declarative style language called HashiCorp Configuration Language (HCL) for defining infrastructure in ‘simple for humans to read’ configuration files. Organisations can use public templates and can have a unique private registry. Templates are a maintained repository containing pre-made modules for infrastructure components, kept under version control systems (like Git).

Installation
Installing Terraform is very simple; just follow the steps mentioned below.
1. Download the archive (https://releases.hashicorp.com/terraform/${VER}/terraform_${VER}_linux_amd64.zip):

export VER="0.12.9"
wget https://releases.hashicorp.com/terraform/${VER}/terraform_${VER}_linux_amd64.zip

2. Once downloaded, extract the archive:

unzip terraform_${VER}_linux_amd64.zip

3. The last step created a Terraform binary in the working directory. For Terraform to be accessible everywhere, move it to /usr/local/bin:

sudo mv terraform /usr/local/bin

4. Confirm the Terraform installation:

terraform -v    # prints v0.12.9

Life cycle
A Terraform template (code) is written in the HCL language and stored as a configuration file with a .tf extension. HCL is a declarative language, which means our goal is just to provide the end state of the infrastructure, and Terraform will figure out how to create it. Terraform can be used to create and manage infrastructure across all major cloud platforms. These platforms are referred to as ‘providers’ in Terraform jargon, and cover AWS, Google Cloud, Azure, DigitalOcean, OpenStack and many others.
Let us now discuss each stage in the IaC lifecycle (Figure 1), which is managed using Terraform templates.

Figure 1: Life cycle of IaC using Terraform

Code
Figure 2 is sample code for starting an EC2 t2.micro instance on AWS. As visible, only a few lines will instantiate an instance on AWS. It is also implicit that the same code can be maintained under VCS and be used to instantiate instances in various regions with different resource configurations, removing error-prone and time-consuming manual work.
• Provider: All major cloud players such as AWS, Azure, GCP and OpenStack have their APIs for Terraform. These APIs are maintained by the community.
  • Username: key given by the provider
  • Password: key given by the provider
  • Region: specify the region of deployment
• Resource: There are many kinds of resources, such as an OpenStack instance, AWS EC2, Droplet and Azure VM, which can be created as follows:
  • resource <provider_instance_type> <identifier>
  • Image ID: This is machine image specific, a tag for an image we need to install (Ubuntu or Windows).
  • Flavour type: This is the type of instance, governing the CPU, memory and disk space.
Here, the template defines the provider as AWS, and provides the access key, secret key and region for being able to connect to AWS. After that, the resource to be created is specified, i.e., an aws_instance, named “example”. The count and the instance type (size of the instance) are also mentioned in the code, as can be seen in Figure 2.
The same code can be used to set up an instance in another region. Also, Terraform offers users the power of using variables and other logical statements, such as if-else and the for loop, which can optimise the setting up of infrastructure even more.

Figure 2: Terraform template code for provisioning an EC2 instance
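Figure 2 shows this template as an image; a minimal HCL sketch consistent with its description (the keys and the AMI ID below are placeholders, not working values) would look like this:

provider "aws" {
  access_key = "YOUR_ACCESS_KEY"        # key given by the provider
  secret_key = "YOUR_SECRET_KEY"        # key given by the provider
  region     = "us-east-1"              # region of deployment
}

resource "aws_instance" "example" {
  count         = 1                         # number of instances to create
  ami           = "ami-0123456789abcdef0"   # placeholder machine image tag
  instance_type = "t2.micro"                # flavour type: CPU, memory and disk
}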

Maintain and reuse


Configuration templates, i.e., pre-made
modules for infrastructure components
are written, reviewed, managed, and
reused after storing them under VCS.
Organisations can use public templates
contributed by the community as well
as have unique private templates stored
in a central registry.
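As a hypothetical sketch of consuming such a shared template (the repository URL, module path and input variable below are placeholders, not a real module):

module "base_network" {
  # assumption: a module your organisation maintains in a Git repository
  source     = "git::https://github.com/example-org/terraform-modules.git//network"
  cidr_block = "10.0.0.0/16"
}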

Init
The Terraform binary contains the basic
functionality and everything else is
downloaded as and when required. The
‘terraform init’ step analyses the code,
figures out the provider and downloads
all the plugins (code) needed by the
provider (here, it’s AWS). Provider
plugins are responsible for interacting
over APIs provided by the cloud
platforms using the corresponding CLI
tools. They are responsible for the life
cycle of the resource, i.e., create, read,
update and delete. Figure 3 shows
the checking and downloading of the
provider ‘aws’ plugins after scanning
the configuration file.

Figure 3: The ‘terraform init’ step initialises all the required resources and plugins
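Assuming the template above is saved in the current directory, the stages discussed in the following sections boil down to a handful of commands (a sketch; the exact output varies with the Terraform version):

$ terraform init       # download the provider plugins
$ terraform plan       # dry run: compute the required changes
$ terraform apply      # execute the reviewed plan
$ terraform destroy    # tear down the tracked resources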

Plan
The ‘terraform plan’ is a dry run for
our changes. It builds the topology of
all the resources and services needed
and, in parallel, handles the creation of
dependent and non-dependent resources.
It efficiently analyses the previously
running state and resources, using a
resource graph to calculate the required
modifications. It provides the flexibility
of validating and scanning infrastructure
resources before provisioning, which
otherwise would have been risky. The
‘+’ symbol signifies the new resources
that will be added to the already
existing ones, if any. Figure 4 shows the
generation of the plan and shows the
changes in resources, with ‘+’ indicating
addition and ‘-’ indicating deletion.

Figure 4: ‘terraform plan’ dry runs the instantiation
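Since this dry run is what administrators review, the plan is often captured in a file so that exactly the reviewed plan gets applied later; a small sketch (the file name tfplan is our own choice):

$ terraform plan -out=tfplan    # save the computed plan for review
$ terraform apply tfplan        # apply exactly what was reviewed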

Validation
Administrators can validate and
approve significant changes produced
in the ‘terraform plan’ dry run. This
completely prevents specific workspaces from exceeding predetermined thresholds, lessening costs and increasing productivity. Often, no standards or governing policy for the cloud operating model have been put in place, and no policy has been codified; rather, certain practices are just internally known among teams and organisations. Terraform enforces Sentinel policies as code before provisioning the workflow, to minimise the risks through active policy enforcement.

Apply
The ‘terraform apply’ step executes the exact same provisioning plan defined in the last step, after it has been reviewed. Terraform transforms the configuration files (.tf) into the appropriate API calls to the cloud provider(s), automating resource creation (using the provider’s CLI) seamlessly. This will create the resources (here, an EC2 server) on AWS in a flexible and straightforward manner. Figure 5 shows the resources that will be created, and Figure 6 confirms the changes that took place.

Figure 5: ‘terraform apply’ instantiates the infrastructure validated in the planning step
Figure 6: Terraform showing the absolute changes made to the infrastructure

If we proceed to the AWS console to verify the instantiation, a new EC2 instance will be up and running, as shown in Figure 7.

Figure 7: The EC2 instance can be seen in the initialising state

Destroy
After resources are created, there may be a need to terminate them. As Terraform tracks all the resources, terminating them is also simple. All that is needed is to run ‘terraform destroy’. Again, Terraform will evaluate the changes and execute them after you give permission.

Additional features of Terraform
1. It provides a GUI to manage all the running services. It also provides an access control model based on the organisation, teams and users. Its audit logging emits logs whenever a change (here, a change signifies a sensitive write to existing IaC) happens in the infrastructure.
2. Existing pipeline integration: Terraform can be triggered from within most continuous integration/continuous deployment (CI/CD) DevOps pipelines, such as Travis, Circle, Jenkins and GitLab. This enables plugging the provisioning workflow and Sentinel policies into the CI/CD pipeline.
3. Terraform supports many providers (more at https://www.terraform.io/docs/providers/index.html), allowing users to easily manage resources no matter where they are located. Instances can be provisioned on cloud platforms such as AWS, Azure, GCP and OpenStack, using the APIs provided by the cloud service providers.
4. Terraform uses a declarative style, where the desired end state is written directly. The tool is responsible for figuring out and achieving that end state by itself.

Let’s say we want to deploy five Elastic instances on AWS using Chef (Figure 8a) and Terraform (Figure 8b). Observing the scripts given in Figure 8, one can see that both are equivalent and will produce the same results. But let’s assume a festive sale is coming up. The expected traffic will increase, and the infrastructure must scale with our application. Let’s say five more instances are required to handle the predicted traffic.
As Chef’s language is procedural, setting the count as 10 will start an additional 10 instances rather than adding an extra five, thus initiating a total of 15 instances. We must manually remember the previous count, as shown in Figure 9a. Hence, we must write a completely new script, adding one more redundant code file.
As Terraform’s language is declarative, setting the count as 10 (as can be seen in Figure 9b) will start an additional five
instances. We do not have to manually remember the previous count; we have provided the end state, and everything else is handled by Terraform itself. This illustrates the problems that accompany manual intervention, such as slowness and proneness to human error.

Figure 8a and 8b: Sample code for instantiating five instances
Figure 9a and 9b: Sample code for instantiating ten instances
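A minimal sketch of the declarative version that Figure 9b describes (placeholder AMI ID; not the article's exact listing):

resource "aws_instance" "example" {
  count         = 10                        # desired end state: ten instances in total
  ami           = "ami-0123456789abcdef0"   # placeholder machine image tag
  instance_type = "t2.micro"
}

Terraform compares this desired state with the five instances it already tracks, and creates only the missing five.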

Advantages of incorporating IaC using Terraform
1. Prevents configuration drift: Terraform, being provisioning software, binds you to make changes in your container image and only then deploy the new ones across every server. This separates server configuration from any dependency, resulting in identical instances across our infrastructure.
2. Easy collaboration: The Terraform registry (Terraform’s central registry under version control) enables teams to collaborate on infrastructure.
3. No separate documentation needed: The code written for the infrastructure becomes your documentation. By looking at the script, thanks to its declarative nature, we can figure out what’s currently deployed and how it is configured.
4. Flexibility: Terraform not only handles IaaS (AWS, Azure, etc) but also PaaS (SQL, Node.js). It can also store local variables, such as cloud tokens and passwords, in encrypted form on the Terraform registry.
5. Masterless: Terraform is masterless by default, i.e., it does not need a master node to keep track of all the configuration and to distribute updates. This saves the extra infrastructure and maintenance costs we would have to incur in maintaining an extra master node. Terraform directly uses the cloud providers’ APIs, thus saving extra infrastructure costs and other overheads.

Terraform is an open source tool that helps teams manage infrastructure in an efficient, automated and reusable manner. It has a simple modular syntax and supports multi-cloud infrastructure configuration. Enterprises can use Terraform in their DevOps methodology to construct, modify, manage and deliver infrastructure at a faster pace, with less manual intervention.

By: Vaibhav Aggarwal and Prof. B. Thangaraju
The authors are associated with the open source technology lab at the International Institute of Information Technology, Bengaluru.

OSFY Magazine Attractions During 2020-21


MONTH THEME
March 2020 Mobile/Web App Development and Optimisation
April 2020 Cybersecurity, Open Source Firewall, Network Security and Monitoring
May 2020 AI, Deep Learning and Machine Learning
June 2020 DevOps Special
July 2020 Blockchain and Open Source
August 2020 Database Management and Optimisation
September 2020 Open Source and IoT
October 2020 Cloud Special: BigData, Hadoop, PaaS, SaaS, IaaS and Cloud
November 2020 Open Source on Windows
December 2020 Open Source Programming (Languages and Tools)
January 2021 Data Security, Storage and Backup
February 2021 Best in the World of Open Source (Tools and Services)

Lighttpd: A Lightweight HTTP Server for Embedded Systems
This article guides readers through the implementation of Lighttpd, a lightweight
interactive HTTP server for embedded systems that have limited memory and storage but
require real-time performance. Lighttpd is also very useful on Linux desktops.

Lighttpd is an open source Web server optimised for environments in which speed and high performance are critical. It is a viable alternative to Apache and other heavyweight Web servers. Lighttpd is standards-compliant, and has built-in security as well as flexibility. The entire source code has been written in C, and it comes under the BSD ‘three-clause’ licence. It also supports FastCGI (fast Common Gateway Interface) for creating dynamic content.

Prerequisites
For the system: The desktop/embedded system should run Linux. The demonstration here is based on an Ubuntu 16.04 32-bit desktop.
For the reader: The reader should know the basics of HTTP GET/POST/PUT and HTML syntax.

Installation
As the focus is on embedded platforms, the installation shown is from source code.
To start the installation, you can download the source code from https://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.55.tar.gz, either manually or by using the following command:

$ wget https://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.55.tar.gz

Next, use the tar command to untar the tarball:

$ tar -zxvf lighttpd-1.4.55.tar.gz
$ cd lighttpd-1.4.55

After extracting the contents from the tarball, we move to the directory and do the compiling, followed by the installation:

$ ./configure

Note: When you are cross-compiling, you have to mention the build, host, target and other arguments to make use of your cross-compiler:

$ export CC="/path/to/cross-compiler-gcc"
$ export LD="/path/to/cross-compiler-gnu-ld"
$ ./configure --host=ppc-linux-gnu \
    --build=i686-redhat-linux-gnu \
    --target=powerpc-*-elf --includedir=/path/to/sysroot-for-cross

To get help on all the options, type the following command:

$ ./configure --help

Then, to compile and install, use the commands given below:

$ make
$ sudo make install

Note: If the target is an embedded one like PPC or ARM, ‘make install’ is not of any use. Hence, you have to identify the necessary executables and libraries after compilation, and place them in your file system.

Once we have successfully compiled and installed the software, create a lighttpd user and group:

# groupadd lighttpd
# useradd -g lighttpd -d /home/sganguly/Documents/myserver/ -s /sbin/nologin lighttpd

Here, /home/sganguly/Documents/myserver/ is the root directory for the server.
Next, create a log directory for the server to store its logs:

# mkdir /var/log/lighttpd
# chown lighttpd:lighttpd /var/log/lighttpd

Now, create a directory for the documents that the HTTP server will be serving (the document root):

# mkdir /home/sganguly/Documents/myserver

This can be anything of your choice, as long as you mention it properly in the configuration file. Now create an index.html file with any content, and place it in this directory.
We are now ready to run the Lighttpd server:

# /usr/local/sbin/lighttpd -f /home/sganguly/Documents/lighttpd.conf

In the above command, /home/sganguly/Documents/lighttpd.conf is the configuration file used. Its content should be as follows (the modules are loaded before their options are set):

server.document-root = "/home/sganguly/Documents/myserver"

server.port = 80
server.username = "lighttpd"
server.groupname = "lighttpd"
server.bind = "127.0.0.1"
server.tag = "lighttpd"

server.modules = (
    "mod_access",
    "mod_accesslog",
    "mod_fastcgi",
    "mod_rewrite",
    "mod_auth"
)

server.errorlog = "/var/log/lighttpd/error.log"
accesslog.filename = "/var/log/lighttpd/access.log"

# mimetype mapping
mimetype.assign = (
    ".pdf"    => "application/pdf",
    ".tar.gz" => "application/x-tgz",
    ".tgz"    => "application/x-tgz",
    ".tar"    => "application/x-tar",
    ".zip"    => "application/zip",
    ".mp3"    => "audio/mpeg",
    ".css"    => "text/css",
    ".html"   => "text/html",
    ".htm"    => "text/html",
    ".js"     => "text/javascript",
    ".c"      => "text/plain",
    ".conf"   => "text/plain",
    ".text"   => "text/plain",
    ".txt"    => "text/plain",
    ".xml"    => "text/xml",
    ".mpeg"   => "video/mpeg"
)

index-file.names = ( "index.html" )

The Lighttpd server will start at 127.0.0.1:80 and, by default, it will load index.html. So, go to a browser and type the following in the address bar:

http://127.0.0.1:80

Then hit Enter, and you will be shown the index.html page.

Adding support for FastCGI
To generate dynamic behaviour and server-side computation, there are PHP, JSP and many other options. But for an embedded system with limited memory, C is still the best choice for getting fast responses. CGI allows your C code on the server to talk to the HTTP client.
Step 1: Download fcgi2 from https://github.com/FastCGI-Archives/fcgi2.
Step 2: Compile and install:
Figure 1: Web server with CGI architecture (Web client/browser ↔ HTTP ↔ Web server (lighttpd) ↔ FastCGI ↔ CGI binary/database)

$ sudo apt-get install autogen autoconf libtool   # supporting programs on the host PC
$ cd fcgi2-master
$ ./autogen.sh
$ ./configure

Note: As before, the CC and LD variables have to be set, and the necessary options can be given for embedded cross targets.

$ make
$ sudo make install

Note: As before, collect the binaries and place them in a file system for cross targets.

Step 3: Now we are ready to write and execute our first CGI program. But before starting, edit lighttpd.conf to link the CGI binary:

fastcgi.debug = 1
fastcgi.server = (
    "/hellotest" => ((
        #"host" => "127.0.0.1",
        #"port" => 51000,
        "bin-path" => "/home/sganguly/Documents/hello_cgitest.fcgi",
        "socket" => "/tmp/hello_cgitest.sock",
        "check-local" => "disable",
        "max-procs" => 2,
    ))
)

Here, hello_cgitest.fcgi is the name of the binary executable. The socket will be used between the HTTP server and the CGI binary. To get this page, type 127.0.0.1:80/hellotest in a browser.

#include <fcgi_stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    unsigned char val = 0;   /* counter that persists across requests */
    char *env;

    /* FCGI_Accept() waits for the next request from the server */
    while (FCGI_Accept() >= 0) {
        printf("Status: 200 OK\r\n");
        printf("Content-type: text/html\r\n\r\n");
        printf("<!doctype><html>");
        printf("<head><meta http-equiv=\"refresh\" content=\"3\"></head> <body>");
        printf("val %d<br>", val);

        env = getenv("HTTP_USER_AGENT");
        if (env != NULL) {
            printf("User Agent = %s<br>", env);
        }

        env = getenv("CONTENT_TYPE");
        if (env != NULL) {
            printf("Content Type = %s<br>", env);
        }

        printf("</body></html>\n");
        val++;
    }

    return EXIT_SUCCESS;
}

FCGI_Accept() returns successfully for any GET or POST request. Lighttpd also sets some environment variables that are accessible from the CGI code.
Step 4: Let’s compile this code:

sganguly@breakout:~/Documents$ gcc hello_cgitest.c -o hello_cgitest.fcgi -lfcgi -Wl,-rpath /usr/local/lib

/usr/local/lib is the location where the fcgi library is placed after installation.
Once you open the page from the browser, it will look like what’s shown in Figure 2.

Figure 2: A sample Web page with a form (as per the code), showing the val counter, the User Agent string, the Number1/Number2 inputs, a Submit button and the computed Sum
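You can also exercise the endpoint from a terminal first. Assuming the server is running with the configuration above, a plain GET should return the generated HTML:

$ curl http://127.0.0.1:80/hellotest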

For HTTP POST, the entire form’s contents arrive as one big string, which can be read and parsed in the C code, as shown below:

int num1, num2, sum;

/* the form itself is generated by the CGI code */
printf("<!doctype> \
<html> <body> <form method=\"post\" action=\"\"> \
<label for=\"fno\">Number1:</label><br> \
<input type=\"number\" id=\"fno\" name=\"fno\" value=%d><br> \
<label for=\"lno\">Number2:</label><br> \
<input type=\"number\" id=\"ln0\" name=\"ln0\" value=%d><br><br> \
<input type=\"submit\" value=\"Submit\"> \
</form> \
Sum = %d\
</body> \
</html><br>", num1, num2, sum);

env = getenv("REQUEST_METHOD");
if (strcmp(env, "POST") == 0)
{
    char buffer[200] = {'\0'};
    const char delim[] = "&=";
    char *token;
    int tokenNo = 0;
    int inputlen = atoi(getenv("CONTENT_LENGTH"));

    fread(buffer, inputlen, 1, stdin);

    /* parse input such as fno=4&ln0=-2 */
    token = strtok(buffer, delim);
    while (token != NULL)
    {
        tokenNo++;
        if (2 == tokenNo) num1 = atoi(token);   /* value of fno */
        if (4 == tokenNo) num2 = atoi(token);   /* value of ln0 */
        token = strtok(NULL, delim);
    }
    sum = num1 + num2;
}

You can see that the form is also generated in the code, using the printf statement.
The entire source code is available at https://github.com/SupriyoGanguly/tryLighty.
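To try the POST path without a browser, a form submission can be simulated from the command line (assuming the same /hellotest endpoint as above):

$ curl -d "fno=4&ln0=-2" http://127.0.0.1:80/hellotest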
By: Supriyo Ganguly
The author is a senior technical officer at Electronics Corporation of India Limited, Hyderabad.
