What’s New In DevOps
Building the DevOps Pipeline
Gulp: A DevOps Based Tool for Web Applications
Understanding Continuous Integration and Continuous Delivery/Deployment
India Technology Week @ Home
THANK YOU, EARLY INVESTORS
We thank the visionary team members of these brands. They chose to believe in and invest in our concept of an online-only expo-cum-conference, and helped us create history. We will henceforth call them our Early Investors, and will always remain grateful to them.
JUNE EDITION
17th – 19th JUNE, 2020

CONTENTS

FOR U & ME
20 The Rise of AI and its Impact
34 How Technology is Helping Fight Coronavirus
35 Role of Technology in Maintaining Law and Order

FOCUS
40 The Five Best DevOps Tools
43 Understanding Continuous Integration and Continuous Delivery/Deployment
46 Building the DevOps Pipeline with Jenkins
52 Understanding DevOps: A Revolution in Software Development
54 How DevOps Differs from Traditional IT and Why
56 RCloud is DevOps for Data Science
58 How Prometheus Helps to Monitor a Kubernetes Deployment
62 DevOps vs Agile: What You Should Know About Both
65 Gulp: A DevOps Based Tool for Web Applications

DEVELOPERS
89 Using spaCy for Natural Language Processing and Visualisation
92 SPA JS: Building Cross-Platform SPAs with Less Code

COLUMNS
70 CodeSport

ADMIN
98 The Benefits of Using Terraform as a Tool for Infrastructure-as-Code (IaC)

OPEN GURU
103 Lighttpd: A Lightweight HTTP Server for Embedded Systems

REGULAR FEATURES
07 FossBytes

HIGHLIGHTED FEATURES
26, 31 Getting Ready for Remote Learning with FOSS
37, 67 Breaking Down the Buzz Around Quantum Computing; Introduction to Green Computing and its Importance
Ph: (011) 26810602, 26810603; Fax: 26817563
E-mail: info@efy.in

MISSING ISSUES
E-mail: support@efy.in

BACK ISSUES
Kits ‘n’ Spares, New Delhi 110020
Ph: (011) 26371661, 26371662
E-mail: info@kitsnspares.com

NEWSSTAND DISTRIBUTION
Ph: 011-40596600
E-mail: efycirc@efy.in

ADVERTISEMENTS
MUMBAI: Ph: (022) 24950047, 24928520; E-mail: efymum@efy.in
BENGALURU: Ph: (080) 25260394, 25260023; E-mail: efyblr@efy.in
PUNE: Ph: 08800295610/09870682995; E-mail: efypune@efy.in
GUJARAT: Ph: (079) 61344948; E-mail: efyahd@efy.in
JAPAN: Tandem Inc., Ph: 81-3-3541-4166; E-mail: japan@efy.in
SINGAPORE: Publicitas Singapore Pte Ltd, Ph: +65-6836 2272; E-mail: singapore@efy.in
TAIWAN: J.K. Media, Ph: 886-2-87726780 ext. 10; E-mail: taiwan@efy.in
UNITED STATES: E & Tech Media, Ph: +1 860 536 6677; E-mail: usa@efy.in

ALSO IN THIS ISSUE
72 Six Things to Consider for a DevOps Transformation to the Cloud
78 DevOps is the Future of Software Development
83 A Study of Various Open Source Blockchain Platforms
94 A Few Surprising Programming Language Features
Image Feature Processing in Deep Learning using Convolutional Neural Networks: An Overview
A Headless CMS: Delivering Pure Content in the Age of Mobile-first Internet

Delhi 110020. Copyright © 2020. All articles in this issue, except for interviews, verbatim quotes, or unless otherwise explicitly mentioned, will be released under the Creative Commons Attribution-NonCommercial 3.0 Unported Licence a month after the date of publication. Refer to http://creativecommons.org/licenses/by-nc/3.0/ for a copy of the licence. Although every effort is made to ensure accuracy, no responsibility whatsoever is taken for any loss due to publishing errors. Articles that cannot be used are returned to the authors if accompanied by a self-addressed and sufficiently stamped envelope. But no responsibility is taken for any loss or delay in returning the material. Disputes, if any, will be settled in a New Delhi court only.

SUBSCRIPTION RATES
Year   Newsstand Price (₹)   You Pay (₹)   Overseas
Five   7200                  4320          —
Three  4320                  3030          —
One    1440                  1150          US$ 120
Kindly add ₹50 for cheques from outside Delhi. Please send payments only in favour of EFY Enterprises Pvt Ltd. Non-receipt of copies may be reported to support@efy.in; do mention your subscription number.
Getting Ready for Remote Learning with FOSS
The COVID-19 pandemic has made it mandatory to adapt to information and communication technology tools to enable remote learning. This article explores free and open source tools that help academia cater to the needs of students, covering functions such as learning management systems (LMS), video conferencing, building educational resources, and evaluation.
The application of information and communication technology (ICT) has been a topic of discussion in academic circles for the past few decades. However, with the sudden disruption caused by the COVID-19 pandemic, ICT has gone from being a nice-to-have component to one that is mandatory. Earlier, the perception was that ICT could enhance the teaching-learning process. Now it is becoming the platform that enables the teaching-learning process.
With the COVID-19-linked lockdown, academia is now forced to change its standard operating procedures. It has become necessary for teachers, students and parents to adapt themselves to this sudden change with the help of ICT tools, as these facilitate the uninterrupted continuation of the teaching-learning process. This article explores various free and open source tools that enable academia to deliver services in the best possible manner.

Why FOSS?
Though there is much proprietary, paid software to enable the teaching-learning process, it involves recurrent licensing costs that might not be affordable for all in a diverse country like ours. One of the most important factors in selecting an ICT tool is inclusion. We need to make sure every possible learner is included. Free and open source software (FOSS) is certainly better in terms of inclusion because it removes the cost of the software from the scheme of things. Another important advantage is that FOSS can be customised to specific needs and redistributed to the needy without the need for any permissions.

Is it only for video conferencing?
It’s no wonder that video conferencing tools have suddenly become household names with the onset of the COVID-19 pandemic. Teachers across the globe have also started using video conferencing tools to communicate with their students. However, video conferencing alone doesn’t constitute the complete teaching-learning process, which includes many other activities as well. Each of these activities might require specific tools, and hence it is important to know the right tools for each task. The major tasks involved are listed below:
• Building learning resources
• Communicating with the learners
• Conducting the evaluation process
• Coordinating all the tasks to meet learning objectives

Figure 1: Major tasks in remote learning

Building learning resources
Learning resources form the core of any teaching-learning process. Learning content and how it is delivered will determine the success or failure of the teaching-learning process. Though there are various types of learning resources, for the sake of simplicity, let us classify the resources that a teacher can build for remote learning into the following categories:
• Presentations
• Illustrations and mind maps
• Video lectures
• Podcasts
• Interactive content

Figure 2: Various categories of resources

Let’s explore the tools belonging to each of these categories.
Presentations: Presentations are the most used method of delivering content. The popular open source tools to build presentations are listed below:
• Impress (LibreOffice)
• Beamer (LaTeX)
• Reveal.js
If you are already using some sort of proprietary presentation software, then you might find shifting to LibreOffice Impress very simple and effective.
Beamer is LaTeX’s tool for building presentations. If you are teaching a subject that involves equations, mathematical notations or algorithms, then Beamer will make your presentations look elegant and professional.
If you want to build browser based presentations, then you should give Reveal.js a try. If you have introductory knowledge of HTML, you can build presentations that run inside the browser using Reveal.js. A section of the code customised from the official demo, and its output (Figure 3), is shown below:

<div class="slides">
  <section>
    <h2>Open Source For You</h2>
    <h3>Presentation on Remote Learning Tools</h3>
    <p>
      <small>Created by <a href="http://kskuppusamy.in">Dr. K.S. Kuppusamy</a></small>
      <small>Created using <a href="">reveal.js</a></small>
    </p>
  </section>
  <section>
    <h2>Reveal.js</h2>
    <p>
      Reveal.js makes presentation building simple and effective.
    </p>
  </section>
</div>

There are also powerful Web based presentation frameworks such as Impress.js. If you want to check out Web based presentations, try exploring https://github.com/impress/impress.js.
Mind maps: Mind maps make explanations more effective. There are many mind-mapping tools available, like FreeMind, Freeplane and XMind (Figure 4). XMind provides many features and makes the process simple.
Creating a video lecture: Video lectures are an important component of remote teaching. The sequence of steps involved in building a screen-casting based video lecture is as follows:
1. Prepare the presentation slides (using Impress, Beamer or Reveal.js).
2. Use a screen-recording tool to record the slides along with the voice-over (Open Broadcaster Software – OBS).
3. Use audio-editing tools to enhance your audio recording (Audacity).
4. Use video-editing tools to edit your video (OpenShot, Kdenlive, Shotcut).
5. Upload the lecture to the Web or LMS to share it with your students.
Step 1: This process is already explained in the ‘Presentations’ section of this article.
Step 2: This step involves using screen recording software like Open Broadcaster Software (OBS), which is a powerful tool. Indeed, it’s not only for screen recording but also has powerful streaming capabilities that let users set up various scenes, sources, etc. You can include inputs from a webcam, screen contents, a microphone, etc. In my opinion, everyone who wants to create a video lecture must spend time exploring OBS.
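Since Reveal.js slides are plain HTML, a deck can also be generated from data instead of written by hand. The short Python sketch below emits the `<div class="slides">` container that Reveal.js expects; the helper names (`slide`, `deck`) are illustrative and not part of Reveal.js itself.

```python
from html import escape

def slide(title, body):
    """One Reveal.js <section> with an escaped title and body paragraph."""
    return (f"<section>\n  <h2>{escape(title)}</h2>\n"
            f"  <p>{escape(body)}</p>\n</section>")

def deck(slides):
    """Wrap the sections in the slides container Reveal.js expects."""
    sections = "\n".join(slide(t, b) for t, b in slides)
    return f'<div class="slides">\n{sections}\n</div>'

deck_html = deck([
    ("Open Source For You", "Presentation on remote learning tools"),
    ("Reveal.js", "Slides are plain HTML sections."),
])
print(deck_html)
```

Pasting the generated container into a standard Reveal.js page gives a two-slide deck; because the titles and bodies are escaped, the same helper can safely render student-supplied text.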
Figure 4: Mind-mapping tools – XMind, FreeMind and Freeplane
Figure 5: Building a video lecture – #1 Build presentations (Impress, Beamer, Reveal.js); #2 Record screen, capture webcam and capture audio (OBS); #3 Edit audio (Audacity): extract audio from video, do noise removal, equalise and compress to enhance; #4 Edit video: add title, cut out unnecessary portions, add captions; #5 Share with the learners: YouTube link sharing, direct upload to LMS
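FreeMind and Freeplane store mind maps as simple XML (.mm) files, so a map can also be generated programmatically. The sketch below builds such a file in Python; the `map`/`node` element names and the `TEXT` attribute follow FreeMind's file format as I understand it, so verify the output opens in your version before relying on it.

```python
import xml.etree.ElementTree as ET

def mind_map(root_text, branches):
    """Build a FreeMind-style .mm document: a <map> element containing
    nested <node> elements whose labels live in TEXT attributes."""
    map_el = ET.Element("map", version="1.0.1")
    root = ET.SubElement(map_el, "node", TEXT=root_text)
    for branch, leaves in branches.items():
        branch_el = ET.SubElement(root, "node", TEXT=branch)
        for leaf in leaves:
            ET.SubElement(branch_el, "node", TEXT=leaf)
    return ET.tostring(map_el, encoding="unicode")

doc = mind_map("Remote learning tools", {
    "Presentations": ["Impress", "Beamer", "Reveal.js"],
    "Mind maps": ["FreeMind", "Freeplane", "XMind"],
})
print(doc)  # save as lesson.mm and open it in FreeMind/Freeplane
```

This is handy when a course outline already exists as structured data (a syllabus spreadsheet, for instance) and you want a first-draft mind map without manual clicking.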
…the teaching-learning process. The interactive content explained in the earlier section can also be used as evaluation tools at the micro level. There are many open source evaluation tools to conduct holistic evaluation. Three of these are listed below:
• Hot Potatoes (http://hotpot.uvic.ca/)
• TCExam (https://tcexam.org/)
• VirtualX (http://virtualx.sourceforge.net/)
Hot Potatoes is available as freeware. It can be used to build the following types of tests: multiple-choice, short-answer, jumbled-sentences, crosswords, matching/ordering and gap-filling.
The major features of TCExam are that it is open source, platform-independent, has community support, accessibility (as per the Web Content Accessibility Guidelines that support persons with disabilities), and the capability to conduct paper testing with OMR (optical mark recognition) sheets, etc.
The features of VirtualX for conducting tests include the capability to author and organise questions. It has support for 12 different types of questions, for formulas and equations, as well as for graphical analysis.

Figure 9: Jitsi – A free and open source video conferencing tool

Open resources
It is not always a viable option to build each learning resource from scratch. There are many open educational resources (OERs) available. Some of the important OER repositories are listed below.
OER Commons (https://www.oercommons.org/): This is a public digital library of open educational resources. It has features to explore, create and collaborate. The Open Author tools make the creation of resources simple by providing a Web interface. If you have some educational content to share, you should explore Open Author.
MERLOT (https://www.merlot.org/merlot/): This stands for Multimedia Education Resource for Learning and Online Teaching. It provides access to a collection
(Credit: https://i0.wp.com/www.techregister.co.uk/wp-content)
The Rise of AI and its Impact
It is a common phenomenon that if you repeat a word enough times, it loses its meaning. This is happening with artificial intelligence (AI) already. Although AI has finally made it to the mainstream rather too quickly, its journey is going to be rockier than it was for other technologies in the past.
AI, as a concept, is not anything new; it has been around for centuries. However, it took off significantly during the 1950s, when Alan Turing explored it further. Progress on it was, however, limited due to the state of computer hardware available at that time.
When computers became more powerful in later years, they were faster, more affordable and had more power in terms of storage as well as computing speed. Since then, research in AI has been growing steadily. There was a time when we had merely 1MB memory systems housed in a big box. Now, we have 128GB memory systems in a credit-card-sized device. Advancement in hardware has significantly enabled technological augmentation by leaps and bounds.
In the past few years, there has been sudden growth in all activities related to AI, mainly underpinned by the realisation of the Internet of Things (IoT) and other complementary technologies such as Big Data, cloud computing, etc. And since last year, we have been seeing several AI implementations.
There is no doubt that AI is still in its childhood, but it has reached a critical mass, where research as well as application can happen simultaneously. We can undoubtedly say that we have changed gears. AI is already making several decisions that affect our life, whether we like it or not, and has covered significant ground in recent years.

AI is not everywhere yet
While it would be natural to think that AI has penetrated almost every single vertical or market, this is far from the truth. At best, there are only a few technology spot-fires in a few select industries where AI is making its mark. Unfortunately, as always, marketing gimmicks are at play to make everyone feel that AI has covered everything, while several sectors are still untouched.
Many image-recognition systems are now better at detecting cancer or micro-fractures from a patient’s MRI or X-ray reports. Many pattern-recognition systems can correlate several pathological reports and make an almost precise prediction of the health status of the patient. And yet, making medical recommendations without a doctor’s explicit approval is not a commonplace practice. And this is good because, when a human life is at stake, systems should not make the final call, ever. Therefore, as far as the medical field is concerned, AI might only reach the status of assisted intelligence and may not be permitted (should not be allowed) to become a mainstream phenomenon at all.
While companies are continually taking humans out of the customer service sector and replacing them with chatbots or automated responders that are AI-driven, the human touch is becoming expensive. At an event that saw startups pitching their companies and products, one startup’s primary differentiation was that it provided personal support for all queries. Mostly, we are seeing an exciting shift in terms of AI- and non-AI-based offerings.
Self-learning applications are another area where AI is making an entry. Using customised learning, pacing and recommendations, these are becoming quite popular. However, as that happens, teaching, coaching and mentoring will soon become a high-touch service and still be in demand. Therefore it is difficult to say whether AI has truly touched this sector or just morphed it into something else.
Another area that AI has not yet touched, and might not affect, is live entertainment and art. These are such personalised and creative pursuits that, without a human in them, they would not have the same meaning. There have been a few experiments with AI creating art, but those art forms have quite a different flavour to them. AI systems can create art based on what they have been trained for. Several of those works are mainly geometrical and systematic shapes or pictures: nothing that a human would necessarily draw, with its slightly acceptable and natural imbalance. Real authorship of a work of art cannot yet be bestowed on an artificial system.
Creativity is some part process and some part randomness, which is the exact opposite of the rule-based method. It is not likely that AI will be able to contribute directly to the creative industry any time soon.
For U & Me Insight
What the future holds
AI and other emerging technologies, apart from bringing efficiencies, are also bringing new possibilities. These possibilities are creating new business models and opportunities. This will continue to happen in the future as we progress.
Most daily tasks that depend on best estimates or guesswork will also see a significant shift due to the abundance of data. Due to access to more data, the need for devices that can process this data at the edge will increase and will be a key driver in maintaining this progression.
One of the significant drivers of these technological advances is the democratisation of resources. Whether it is the Internet revolution, the open source hardware and software revolution, or anything else, as AI technology becomes a part of our daily lives, we will see more of this democratisation happening. This will be a crucial factor and will keep boosting progress.
As of now, most AI applications follow a supervised learning approach. In years to come, we will start seeing more and more unsupervised learning that will keep systems updated continuously. However, this will have one significant barrier to cross, which is the trust factor. Unless this trust factor improves, supervision will remain a necessity.
There is no accepted or standard definition of good AI. However, good AI is one that can guide users through various options, explain the tradeoffs among multiple possible choices and then help make those decisions. Good AI will always honour the final decision made by humans.
On the consumer front, several virtual support tools will increase in number and become mainstream. It will be almost expected to come across these bots first before talking to any human at all. However, only businesses that demonstrate a customer-centric approach will thrive in these scenarios, while others will struggle to adapt to the right technology. And, most importantly, “What do you want to do when you grow up?” will soon become an obsolete question.
AI will change the job market entirely, as there will be growing requirements for soft skills, since most hard skills will be automated. Especially for the Indian economy, since we have mostly relied on hard skills for local as well as global opportunities, this will pose a significant challenge to keep up with declining demand. We will be forced to come up with new business models, not just as businesses but also as an economy.

Maintaining a balanced approach
Regardless of how the recent or long-term future with AI looks, there are a few points that we must understand and accept in their entirety. Most of these points align with the OECD’s AI principles that were released in early 2019.
AI systems should benefit humans, the overall ecosystem on the planet, and the planet itself by driving inclusive growth, sustainable development and the well-being of all. These systems must always be designed such that they respect and follow the rule of law and the rights of the ecosystem (humans, animals, etc). They should also respect the general human value system and the diversity it exhibits. More importantly, there must be appropriate safeguards in the system such that humans are always in the loop when necessary, or can intervene if they feel the need, regardless of necessity. After all, a fair and just society should be the goal of any advancement.
Creators of AI systems should always demonstrate transparency and responsible disclosure about the functionality and methodology of the system. People involved in and affected by such a system need to understand how outcomes are derived and, if required, should be able to challenge them.
Any AI system should not cause harm to users or living beings in general, and must always function in a robust, secure and safe way throughout its lifecycle. Creators and managers of these systems have the responsibility to continually assess and manage any risks in this regard.
Most importantly, on the accountability front, anyone creating, developing, deploying, operating or managing AI systems must always be held accountable for the system’s functioning and outcomes. Accountability can drive positive behaviours and thereby potentially ensure that all the above general principles are adhered to.
There is a general feeling that over-regulation limits innovation and advancement. However, there is no point in racing to be the first; instead, let us strive to be better. Being fast and first by compromising on ethics and quality is certainly not an acceptable approach by any means.
It is unlikely that in the next ten years or so we will have robots controlling humans. However, technology consuming us, our time, feelings and mindfulness is very much a reality even today; and it is getting worse day by day. Just one wrong turn in this fast lane is all it will take to cause a regression for society. The rise of AI should not lead to the fall of humanity. Let us work towards keeping the technology, AI or otherwise, in our control, always!

By: Anand Tamboli
The author is a serial entrepreneur, speaker, award-winning published author and emerging technology thought leader.

The article was originally published in the February 2020 issue of Electronics For You.
The invention of the computer has undeniably been one of the biggest technological revolutions in the history of mankind. However, classical computing is not the only way that was formulated in the last century to solve complex problems. While it was in 1927 that the physicist Heisenberg introduced the uncertainty principle, it was not until 1970 that the idea of using quantum mechanics as a communication resource, and the term quantum information theory, came into being. So, the question arises: how is quantum computing different from classical computing, and how did it go from being a purely theoretical subject to being used by companies like Google and IBM?

Quantum computers versus classical computers
We already have computers and even supercomputers for faster processing speeds, so why do we need quantum computers? To understand this, we need to understand the difference between the two.
A classical computer’s main purpose is to save and manipulate data. Its chip uses bits to store this information. These bits are like tiny switches with two states, on and off, represented by one and zero, respectively. From every pixel in an image to the texts exchanged between people, everything is ultimately made up of these bits, a language that the computer understands. But even supercomputers cannot define the uncertain state that exists between on and off, especially at atomic and molecular levels. This makes them capable of only analysing simple molecules in practical applications related to biology and chemistry.
This is where scientists and researchers needed to find a better way of computing when probability is involved (such as spinning a coin instead of flipping it). Also, for problems above a certain complexity, more computational power is required, which is only possible with quantum computers.
Dr Martin Laforest, senior product manager and quantum technology expert at Isara, explains, “Quantum computers leverage the surprising and often counterintuitive behaviour of atoms and molecules, making them radically different from today’s computers. They derive their power by utilising quantum mechanics and marvels such as superposition and entanglement. Their quantum behaviour makes them much more powerful, allowing them to perform a variety of computational tasks exponentially faster than classical computers.”

Behind the magic: The working of a quantum computer
Quantum computers use quantum bits (qubits) instead of regular bits. By combining qubits, a lot more data can be processed in less time as compared to basic computers. Underlying quantum computing is the principle of quantum mechanics: fundamental quantum properties like superposition, entanglement and interference are used to manipulate the state of a qubit.
Superposition refers to the overlapping of usually independent states. Real-life examples include the sounds generated while playing an instrument where notes are played simultaneously, or the waves formed on throwing a stone into a lake. Qubits can be in superposition, that is, somewhere between on and off. This means that if there is more than one option, a quantum computer can go through each option simultaneously and choose the ultimate answer, instead of ruling out previous options individually before checking the next one.
Entanglement is a quantum phenomenon where the states of two particles, even if they are physically separated, are tied such that they cannot be described independently of each other. Whatever the result of measuring one of these particles, the outcome of the other will be mathematically related to it. This correlation is necessary for faster computations through special instructions (algorithms) that can only be written with quantum computers. Just like wave interference, quantum states can also change (cancel or add) depending on whether they are in phase or out of phase.

Making of a quantum computer
According to IBM researcher DiVincenzo’s criteria, there are five minimal requirements for creating a quantum computer: a well-defined scalable qubit array; an ability to initialise the state of the qubits to a simple fiducial state; a universal set of quantum gates; long coherence times, much longer than the gate-operation time; and single-qubit measurement.
There are different ways to create a qubit. In one of the most commonly used methods, superconductivity is applied to create and preserve hard-to-maintain quantum states. Quantum computers are isolated from any sort of electrical interference and made to operate in an environment at almost absolute zero temperature to prevent errors while working with qubits. Superconductors also minimise energy loss during transmission.
To achieve results close to absolute zero, the necessary cooling power is made available and multiple other steps are followed. These include attenuation during refrigeration to protect qubits from thermal noise while signals are transmitted to the quantum processor. Also, cryoperm shields protect the qubits from electromagnetic radiation.
To create a fault-tolerant quantum system, it is necessary to increase the computational power of a quantum computer. For this, higher numbers of qubits are preferable, as states increase exponentially with each qubit. Researchers have designed algorithms for sequential quantum operations that can run on fault-tolerant quantum computers for extended periods of time. To ensure that the results are accurate and noise-free, low error rates need to be maintained.

IBM’s quantum computer model (Credit: www.eweek.com)
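The superposition, interference and entanglement described above can be made concrete with a tiny state-vector simulation. The Python sketch below (numpy only) is a toy model for building intuition, not how a real quantum computer is programmed: a qubit is a 2-vector of complex amplitudes, gates are matrices, and measurement probabilities follow the Born rule.

```python
import numpy as np

# Computational basis state |0> as a 2-vector of complex amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return np.abs(state) ** 2

superposed = H @ ket0
print(probabilities(superposed))      # [0.5 0.5]: equally 'on' and 'off'

# Interference: a second H makes the two paths to |1> cancel (out of
# phase) while the paths to |0> add, so the qubit returns to |0>.
print(probabilities(H @ superposed))  # [1. 0.]

# Entanglement: the two-qubit Bell state (|00> + |11>)/sqrt(2).
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)    # amplitudes on |00> and |11>
print(probabilities(bell))            # only |00> and |11> can occur,
                                      # so the two measurements always agree
```

Note how the Bell state assigns zero probability to |01> and |10>: whatever one qubit measures, the other matches, which is exactly the correlation the article describes.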
New opportunities
Quantum computers are not just about saving time and money through better speed and higher efficiency in tasks that can already be performed. They are thought to be useful wherever an uncertain system needs to be simulated.
Quantum chemistry is one of the most promising applications of quantum computing. By determining the lowest energy state among various molecular bond lengths, which represents the equilibrium molecular configuration, it is possible to simulate a molecule. Modelling even simple molecules and predicting particle interactions in chemical reactions can aid the discovery of new life-saving medicines and other compounds useful for making efficient devices, which is not possible with conventional computing memory and processing power.
Another application is cryptography, as these computers can easily generate hard-to-break encryption keys for better cyber security. Dr Laforest says, “Quantum computing promises many positive disruptions. One such possibility is the use of quantum particles called photons to create secure communications channels for the distribution of quantum keys. This has the potential to revolutionise networking and the protection of future data transmission.”
Other possibilities include improved solar panels, financial strategies for the prediction of financial markets, better weather forecasts, and so on.

Problems encountered
Holding an object in a superposition state for a long period is difficult. Interaction with the environment is necessary for quantum measurement. But if a qubit comes in contact with such occurrences as changing magnetic and electric fields, radiation from warm objects nearby, or cross talk between qubits, it undergoes decoherence, and changes in the uncertain state cause errors. For all their processing speed, quantum computers are also much more vulnerable to errors than classical computers would be.
This requires robust quantum processes and designing devices such that the system is only sensitive to the targeted measurement, protecting quantum states from decoherence. But one needs to keep in mind that high sensitivity is crucial for precision.
When it comes to cybersecurity, Dr Laforest says, “It is important to note that quantum is a dual-use technology that also has the capacity to cause security chaos. Imagine a world where our existing encryption is no longer effective. If hackers equipped with quantum computers can break these algorithms, they can determine the private keys used to secure the data and expose it. The combination of quantum computer speed-ups and known algorithms developed by Peter Shor and Lov Grover make this possible. To ensure we are ready, preparation has to start now, as it will take a decade or more to fix many of our complex systems. Experts in the field of quantum computer development agree it is highly probable we are only ten years away from large-scale quantum computing with this capability. Cybersecurity experts are already starting to prepare for potential quantum computer attacks on the encryption algorithms we use to protect data today.”
“The good news is that quantum-safe algorithms exist. The most significant challenge we face is the development of tools and methods that will make it easy and seamless to transition to these quantum-safe algorithms. Key to this success will be successfully completing this transition within a decade without any wholesale disruption to our current security systems and infrastructure,” Dr Laforest adds.
A dearth of the required skill set, a lack of available resources, and cost are some other impediments for this technology in the making.

Can it be a staple technology?
Companies are competing to build reliable quantum computers in a variety of fields like manufacturing, security and financial services. Numerous startups like Isara, IonQ and 1QBit have come up in this field. They are willing to invest money in the technology at this early stage of development because of its great potential. Cloud based quantum computing technology is increasingly being leveraged to make it more easily available for remote access and user-friendly for enterprises, no matter the size of their work teams. In November 2019, Microsoft announced that it would start providing access to quantum computers in its Azure cloud for select customers. IBM Quantum designed and built the world’s first integrated quantum computing system, ‘IBM Q System One’, for commercial use in 2019. Another company, D-Wave, recently announced that it is freely opening up its quantum computers to anyone who has ideas for how to use them to find a cure for COVID-19. For assistance in developing solutions, the company, along with its customers like Cineca, Volkswagen, Denso, Tohoku University, Kyocera and Sigma-i, among others, is offering access to their engineering teams.
Quantum computers could change the world, but they are still not advanced enough to replace the classical method. They are not so useful when it comes to the basic tasks, like storing images, that ordinary computers handle. They are undoubtedly powerful, but not so reliable yet. For now, the most beneficial way will be to give users access to both traditional and quantum computers simultaneously.

By: Ayushee Sharma
The author is a technology journalist at EFY.
the information to be lost. Due to these
interference problems, in spite of high The article was originally published in the May 2020 issue of Electronics For You.
Introduction to Green Computing and its Importance

The foundation of green computing was laid as far back as 1992 with the launch of the Energy Star program in the USA. The success of Energy Star motivated other countries to take up the subject for investigation and implementation.

Any technology that aspires to be nature-friendly ought to be green. Recognition of this fact has led to the development of green generators, green automobiles, green energy and green chemistry, as well as green computing. Green computing is a leap forward for information technology (IT), and more specifically for information and communication technology (ICT). Green computing has emerged as the next wave of ICT.
The motivation for green computing arose from the need to protect the environment against hazards generated at three different states of ICT, namely, information collection (by electronic devices), information processing (through algorithms and storage) and information transportation (through networking and communication). Carbon dioxide accounts for about eighty per cent of global warming. As a rule of thumb, if the world-wide increasing application of ICT is assumed to
of any green initiative should have a direct or indirect motivation to reduce this thermal vibration. Reduced circuitry or a minimal system helps in reducing the number of vibrating particles. Minimal circuit designs, which lead to technologies of very large scale integration (VLSI) or ultra large scale integration (ULSI), are now well-established technical solutions. These solutions meet the objectives of realising low cost and smaller-size systems. It was never thought these
more energy than others. The impact of ICT industries on the emission of carbon dioxide is immense. As shown in Figure 2, India is currently the third largest producer of carbon dioxide.

Figure 2: Global carbon dioxide emissions (Credit: Wikipedia.org)

Urgent solutions required at the level of hardware design management include minimal configuration, adaptive configuration, consolidation by virtualisation, algorithmic efficiency, optimal resource utilisation, optimal data centres, optimal link utilisation, limiting power by reducing cable length, minimising protocol overhead, protocols for compressed headers, green networking, and the management of e-waste, air management and cooling management, among others.
For ICT scientists and engineers, the challenge will be to design technology and algorithms to minimise particle vibration, travel path and heat loss due to input-output mismatch. Design, operational and transmission related thermal losses are core issues of ICT. This makes the production of green ICT a great challenge, although, as parts of its implementation, energy-smart devices, sleep-mode devices, cluster computing, cloud computing, etc, are already in place.
The foundation of green ICT was laid as far back as 1992 with the launch of the Energy Star program in the USA. The success of Energy Star motivated other countries to take up the subject for investigation and implementation. Leading countries working on green ICT now include Japan, Australia, Canada and the European Union. The formalisation of green ICT is in fact due to standards proposed by the IEEE, which has formalised Green Ethernet and 802.3az-enabled devices for green ICT.
Green ICT is a clean-environment-based technology. However, the fruitful realisation of green ICT is equally dependent upon awareness in society. Society needs to practice common ethics such as ‘don’t keep the computer on when not needed’, ‘don’t use the Internet as a free tool, but as a valuable tool of necessity only’, and ‘don’t unnecessarily replace device after device just because you can afford to’. Without societal responsibility, technology alone cannot ensure achieving the objectives of green ICT.

By: Prof. Chandan Tilak Bhunia, Abhinandan Bhunia
Prof. Chandan Tilak Bhunia, PhD in computer engineering from Jadavpur University, is a fellow of the Computer Society of India, the Institution of Electronics & Telecommunication Engineers, and the Institution of Engineers (India).
Abhinandan Bhunia did a BS in computer engineering from Drexel University, USA, and an MBA from the University of Washington, USA.

The article was originally published in the April 2020 issue of Electronics For You.
In the battle against the novel coronavirus (Covid-19), emerging technologies have stood out by making an immense contribution in an unexpected, creative and amazingly responsive way. Delivery drones, disinfecting robots, smart helmets, and thermal imaging cameras are all being deployed in the fight against Covid-19. The latest technologies are being used to predict and combat the spread of the infectious disease. These technologies include artificial intelligence (AI), analytics software, chatbots, apps, telemedicine, blockchain, and advanced facial recognition software.
The better we can track the virus, the better we can fight it. Advanced AI has been used to help diagnose the disease and accelerate the development of a vaccine. Google’s DeepMind division has used its latest AI algorithms and computing power to understand the proteins that might make up the virus, and has published its findings to speed up the process of developing treatments. Several drug companies are also using AI-powered drug discovery platforms to search for possible treatments.
AI-based systems are being used to detect coronavirus infection via CT scans with 96 per cent accuracy. Portable lab-on-chip detection kits are helping medical teams on the ground to quickly identify infected individuals for proper medical care. These tools are helping remote areas with limited medical resources to immediately screen out suspected coronavirus-infected patients for further diagnosis and treatment.
Blockchain-powered services are helping hospitals to spend less time on administrative work and allocate staff to the frontlines. Blockchain platforms speed up claims processing and minimise the need for face-to-face contact amidst the coronavirus outbreak.
Chatbots are being used to share information and offer free online health consultation services. These can answer queries related to the virus, such as symptoms, preventive measures and treatment procedures.
Software solutions that are transforming the healthcare industry include hospital management, mobile healthcare, telemedicine, and wearables. Telemedicine enables remote monitoring and care for patients. It can provide necessary services for people infected with Covid-19. Mobile healthcare solutions empower healthcare providers and patients by creating a platform for interaction and medical care services.
Face recognition technology is being used in surveillance systems that can recognise people, even while they are wearing masks, with a relatively high degree of accuracy. The surveillance systems use facial recognition technology and temperature detection software to identify people who might have fever or are not wearing masks. Facial recognition technology has been integrated with thermal imaging to make fever-detection cameras. Contactless temperature detection software, AI-powered non-contact infrared sensor systems, and smart helmets that can measure the temperature of anyone within a five-metre radius can quickly detect a person who is suspected of having a fever. These are being deployed at stations, airports, schools, malls, community centres and other public places that have large gatherings.
Robots are being used to clean or sterilise hospitals, and perform basic diagnostic functions, to minimise the risk of cross-infection. The robots allow physicians to communicate with the patient via a screen, and are equipped with a stethoscope to help doctors take a person’s vitals while minimising exposure of the staff. They can deliver food and medicine to reduce the amount of human-to-human contact. Robots use ultraviolet light to autonomously kill bacteria and viruses in quarantine wards without human intervention.
Drones are being used for contactless medicine delivery, and for spraying disinfectant around the country, especially in quarantine zones. They also transport medical samples and conduct thermal imaging. Drones are also deployed to check travellers’ temperatures and for the disposal of hospitals’ medical waste.

By: Deepshikha Shukla
The author is a freelance technology journalist.

The article was originally published in the April 2020 issue of Electronics For You.
Role of Technology in Maintaining Law and Order

With criminals becoming tech-savvy, police and the courts also need to know the latest tools and make use of the latest technologies like cyber policing, artificial intelligence, data analytics, blockchain and cloud computing.

(Credit: www.tagesspiegel.de)

Reducing crime rates and delivering speedy justice to the needy is a challenging task globally. It requires smart policing and effectively dealing with roadblocks in legal proceedings. Although adoption is slower when compared to other sectors like finance and insurance, due to the ethics constraints involved, science and technology are being increasingly leveraged by enterprises to accomplish this mission. This is proved by the success of the Global Legal Hackathon 2019, which was organised to develop technical solutions for the industry, and in which the participation of over 6,000 people across 46 cities in 24 countries was seen.
As criminals become more tech-savvy, police personnel need to know the latest tools and scientific methods to keep up with them. Automating manual activities that are rule-based and repetitive through robotic process automation can save time and manpower. Investigating and understanding the intricacies of even the most pressing cases has become easier with the aid of CCTV footage, which also serves as valuable evidence in court, especially in the absence of witnesses. Digitisation of cases ensures full electronic access to the past history of all records. Analysing the data from first-hand evidence at the site in real time provides a good overview of cases from the beginning.
Telangana Police is among those taking diverse initiatives over the past few years to maintain law and order in the state. For instance, mobile apps like TSCOP, ePetty case, Cop Connect, Facial Recognition System, and the e-Challan system launched by the department have played a huge role in data sharing and structuring the system.

Continued on page 39...
Here are six things one should consider when engaging with any individual vendor for a DevOps digital transformation to the cloud.

End-to-end (E2E) solutions
Developers today are looking for an E2E solution and an all-in-one user experience. However, this doesn’t mean that they will compromise on the ‘best of breed’ approach. Therefore, DevOps platform providers should offer a Class A tool stack as part of their platform, in addition to very strong ecosystem integrations and plugins to make developers’ lives easier, respecting the freedom of choice they want.
An E2E platform also requires a vendor commitment to allow a true ‘one-browser solution’ and not simply bundled tools that are integrated. This ensures that the user will have a full experience from a single UI that connects all services.

Universal package management
All of the metadata and dependencies in your myriad technologies must be supported (such as Docker, npm, Maven, PyPI, Golang, NuGet, Conan, etc – but also the 20+ more that you may find in your portfolios). Point solutions for single or limited technology types will only serve to frustrate your development teams and require the adoption of multiple solutions and repositories within your organisation. Large enterprises have not only myriad technologies, but also a long legacy of deployed, mission-critical applications that must be supported at scale with local, remote and virtual repositories.

Both cloud and on-premise solutions need to be integrated 100 per cent
This isn’t about whether or not to have a cloud solution. Many companies that offer cloud solutions don’t have corresponding on-premise/self-hosted options, or vice-versa. More still have completely separate solutions that provide different features and methods that don’t talk to each other, requiring you to learn a new product, user experience and user interface. As you transition to a cloud environment, both cloud and on-premise solutions need to be able to function in the same way 100 per cent of the time to ensure a smooth transition. For instance, as you go through a cloud migration, you will need the same tools and functions in both places in order to keep the business running.

Multi-cloud
While you might think one cloud is enough, you should select a vendor that provides services across and between all major clouds. Keep your options open and your peace of mind intact by avoiding any vendor lock-in and ensuring maximum resilience.

Security
Security is an integrated part of the pipeline that supports all of your package types, and it is now a line-item for many companies. Cloud DevSecOps tools should make it possible to block artifact downloads (or break builds) that contain vulnerabilities, requiring tight integration all the way into the repository. Security policies should be easy to define and manage across your repositories. And any cloud security solution should allow you to easily identify the impact of a vulnerability across the entire DevOps pipeline.
The world of containers also comes with a challenge. Your DevSecOps tools should be able to ‘open’ any container, scan several tiers in, and look across all packages for dependencies that include vulnerabilities. A DevOps platform should always strive to be ahead of any hacker and secure all software packages in the pipeline from build to production.

Cloud-ready CI/CD
Traditionally, application development teams were responsible for creating localised CI/CD (continuous integration/continuous delivery) automation. This approach provides short-term gains for the individual teams but ends up being a constraint in the long run, since enterprises get no economies of scale across their CI/CD implementations.
A modern CI/CD provider should support and scale enterprise-wide workflows (aka the ‘software supply chain’) that span all popular technologies and architectures of today, as well as keep pace with technology evolution. It should provide a way to assemble pipelines from pre-packaged building blocks, rather than developing them from scratch. These pipelines can be templatised and shared as libraries across the organisation, thereby building a knowledge base that is constantly growing and improving. In other words, your CI/CD provider should give you economies of scale over time in the cloud, and help you ship code faster.

By: Jens Eckels
The author is the director, product marketing at JFrog.
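The idea of blocking vulnerable artifacts can be illustrated with a toy scan: compare a build's dependency list against an index of known-vulnerable packages and break the build on any match. This is a hedged sketch only, not how any real DevSecOps product works — real tools query curated CVE databases and unpack container layers. Every package name, version and advisory identifier below is invented for the example.

```python
# Toy dependency scan: fail the build if any dependency appears in a
# known-vulnerability index. All names, versions and the advisory id
# are invented placeholders; real tools consult curated CVE databases.

VULNERABLE = {
    ("libexample", "1.2.0"): "PLACEHOLDER-ADVISORY-0001",
}

def scan(dependencies, index):
    """Return (package, version, advisory) for every vulnerable dependency."""
    return [(pkg, ver, index[(pkg, ver)])
            for (pkg, ver) in dependencies
            if (pkg, ver) in index]

deps = [("libexample", "1.2.0"), ("libother", "2.0.1")]
findings = scan(deps, VULNERABLE)
if findings:
    # A real pipeline step would exit non-zero here to break the build.
    print("BUILD BLOCKED:", findings)
```

The point of the sketch is the placement of the check: it runs between dependency resolution and packaging, so a vulnerable artifact never reaches the repository at all.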
Another such attempt comes from IIIT Delhi, where a research centre has been built to assist the capital’s police for such purposes as criminal identification, cyber policing, traffic management, and combating crimes by using artificial intelligence (AI), biometrics, image processing, Big Data, social media analysis and network forensics.
Technologies like AI, analytics, blockchain, and cloud computing are making their way into the courtrooms, too. AI-powered tools can be used by lawyers for most daily tasks, from reviewing documents and performing legal research rapidly to predicting various outcomes of a case. Businesses can utilise AI to review contracts for partnerships without any bias, and perform background checks before hiring new employees to avoid getting into legal trouble later.
Several companies in countries like the US, Singapore, the UK, Canada and Australia are using technologies to solve issues in this space. Canada-based Kira Systems uses machine learning (ML) for contract analysis. The UK’s Tessian employs AI to secure confidential data and emails for law firms.
In India, researchers from IIT Kharagpur have recently developed an AI-powered method to automate the reading of legal case judgements and case law analysis, and to enhance legal search across several domains. Deep neural models enable understanding the rhetorical roles of sentences or jargon in a judgement when adequate data is available, and hence aid in organising legal documents.
Blockchain finds application in email encryption, verifying processes and securing evidence, as in financial transactions, and many other purposes. For example, Legaler’s blockchain and developer tools provide infrastructure to build decentralised applications for legal services. In 2017, the Global Legal Blockchain Consortium was formed to drive the standardisation of blockchain technology in the legal sector. It has already surpassed the 150-member mark, which includes law firms, software companies, and universities, among others.
According to a 2019 report titled LawTech Adoption Research by tech analysts TechMarketView, the number of lawtech companies has grown over the past few years, but the adoption rate is not that high among legal practitioners. One of the major reasons noted behind this is the partnership model, in which spending is done from the partners’ profit pool. To change this scenario, US states like North Carolina and Florida have already mandated technology training for CLE (continuing legal education) credits. Pressure from clients for cheaper offerings is also pushing law firms to move to cloud computing and other tech solutions.

By: Ayushee Sharma
The author is a technology journalist at EFY.

The article was originally published in the April 2020 issue of Electronics For You.
DevOps, which began by uniting engineers and tasks, has now turned out to be a key tool in the most basic parts of the software development life cycle. With the introduction of cloud computing and virtualisation, the requirement for new systems administration processes has increased. The DevOps mantra is, “Automate and monitor the procedure of software creation, extending from integration, testing and releasing, to deploying and overseeing it.”

Stages of the DevOps life cycle
The following are the five phases of the DevOps life cycle, and the popular tools used in each phase.
1. Continuous integration – Jenkins
2. Configuration management – Ansible, Chef and Puppet
3. Continuous inspection – Selenium
4. Containerisation – Kubernetes
5. Virtualisation – Parasoft Virtualize
Now, let’s list the top five tools among these.

Ansible
This open source tool monitors application deployment, configuration management, orchestration and so on. The Ansible development steps are given in Figure 1.
Key features
- It has an agentless design.
- It is powerful due to the work process arrangement.
- It is straightforward and simple to use.

Chef
This tool is used for checking the software designs. Figure 2 shows how to create, test and deploy Chef code.
Key features
- It guarantees that your configuration strategies will stay adaptable, versionable, testable and intelligible.
- It helps to normalise configurations.
- It automates the entire procedure of guaranteeing that all frameworks are accurately designed.

Figure 5: Old vs new ways of Kubernetes

Tool | Pros | Cons
– | 2. Easy to implement, learn and use on a daily basis for configuration management | 2. Other platforms have better pre-configured deployment scripts
Docker | 1. Docker produces an API for container management | 1. Containers don’t work at bare-metal rates
Puppet | 1. Well-established support community through Puppet Labs. 2. Simple installation and initial setup | 1. For more advanced tasks, you will need to use the CLI (Command Line Interface), which is Ruby based. 2. Because of the DSL (Domain Specific Language) and a design that does not focus on simplicity, the Puppet code base can grow large and unwieldy
Kubernetes | 1. Microservices rolling updates. 2. Makes it a lot easier to establish effective CI/CD pipelines | 1. Businesses need a certain degree of reorganisation when using Kubernetes with an existing app. 2. Pods sometimes need a manual start/restart before they start working as intended; this can happen in certain situations, such as when running near full capacity
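Ansible, Chef and Puppet all share one property worth understanding: they converge a machine towards a declared state, and re-running them changes nothing if that state is already correct (idempotence). The sketch below illustrates only that property, in Python, with a single "ensure this line is present in a file" task; the file name and configuration line are arbitrary examples, and this is not how any of these tools is actually implemented.

```python
# Idempotent "ensure line present in file" task, in the spirit of a
# configuration management resource. The path and the config line are
# arbitrary examples for illustration.
import os
import tempfile

def ensure_line(path, line):
    """Append 'line' to the file at 'path' only if it is missing.
    Returns True if the file was changed, False if already converged."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False                 # already converged: no change made
    with open(path, "a") as f:
        f.write(line + "\n")
    return True                      # state was corrected

demo = os.path.join(tempfile.mkdtemp(), "demo.conf")
print(ensure_line(demo, "max_connections=100"))  # True: line was added
print(ensure_line(demo, "max_connections=100"))  # False: second run is a no-op
```

Reporting "changed" versus "unchanged" is exactly what these tools surface in their run summaries; it is what makes repeated runs safe.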
Understanding Continuous Integration and Continuous Delivery/Deployment

This article discusses continuous integration (CI) and continuous delivery/deployment (CD), which are part and parcel of the DevOps software development culture. The goal of all developers is to produce software that is reliable, reusable, extendable, flexible, correct and efficient. DevOps ensures this, with CI and CD as integral parts of the process.

In simple terms, integrate when you commit. Implementing continuous integration doesn’t mean fewer bugs. Rather, it highlights the issues or bugs in the early stages and hence is useful, because the earlier in the development cycle you fail, the faster you recover!

The benefits of detecting bugs early
In the case of discovering failures or bugs, it becomes the priority of the respective stakeholders to focus on solving build or continuous integration issues at the earliest and fix the broken build.
Continuous integration (CI) is a popular DevOps practice that requires the development team to commit code into a shared repository (centralised version control or distributed version control) as and when a feature is completed or a bug is fixed. Each commit goes through a build validation process using an automated build process with any automation tool feasible, based on the knowledge or the culture of the organisation.
It is important for the development team to commit frequently, whenever a feature is implemented or a bug is fixed. There are still some developers who commit code even if it is not properly tested or is not working fine. There are also instances when code is committed

Figure 1: Continuous integration (stages: development code commit; static code analysis (SCA); SCA quality gate; compilation; unit test execution; code coverage quality gate; build package)
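Reduced to its essence, each stage in the Figure 1 flow is a gate: the commit is promoted to packaging only if every earlier stage passed and the quality thresholds are met. The following is a toy Python illustration of that gating logic only; it is not tied to Jenkins or any particular CI server, and the stage names and the 80 per cent coverage threshold are assumptions made for the sketch.

```python
# Sketch of a CI quality gate: a build is promoted to packaging only if
# all prior stages passed and code coverage meets a threshold.
# Stage names and the 80% default threshold are illustrative assumptions.

def quality_gate(stage_results, coverage, min_coverage=80.0):
    """Return (passed, reason) for one build validation run."""
    for stage, ok in stage_results.items():
        if not ok:
            return False, f"stage '{stage}' failed"
    if coverage < min_coverage:
        return False, f"coverage {coverage}% below {min_coverage}%"
    return True, "build can be packaged"

results = {"static_analysis": True, "compile": True, "unit_tests": True}
print(quality_gate(results, 86.5))   # passes every gate
print(quality_gate(results, 72.0))   # stopped at the coverage gate
```

Failing fast at the first broken gate is exactly the "earlier you fail, faster you recover" point made above: the feedback arrives minutes after the commit, not at release time.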
Figure 4: CI/CD

Getting Started
Instance Configuration
Jenkins URL: http://localhost:9999/
The Jenkins URL is used to provide the root URL for absolute links to various Jenkins resources. This means the value is required for the proper operation of many Jenkins features, including email notification, PR status updates, and the BUILD_URL environment variable provided to build steps.
The proposed default value shown is not saved yet and is generated from the current request, if possible. The best practice is to set this value to the URL that users are expected to use. This will avoid confusion when sharing or viewing links.

Figure 6: Upstream and downstream projects
Figure 3: Jenkins instance configuration

application life cycle management activities. There are two types of pipelines in Jenkins, as of today. This means that a Jenkinsfile can contain two different styles/syntaxes and yet achieve the same thing.
Scripted pipelines follow the imperative programming model. They are written in Groovy script in Jenkins. Groovy blocks/constructs help to manage flow as well as error reporting.

node {
    /* Stages and Steps */
}

Declarative pipelines, in contrast, use a predefined structure built around the pipeline block. For example, CI and CD stages can be defined as follows:

pipeline {
    agent any
    stages {
        stage('CI') {
            steps {
                //
            }
        }
        stage('CD') {
            steps {
                //
            }
        }
    }
}
Blue Ocean
Blue Ocean provides an easy way to create a declarative pipeline using the new user experience available in its dashboard. It is like creating a script by selecting components, steps or tasks.
Open the Blue Ocean dashboard and click on Create Pipeline. Connect with the required repository; then create stages, select the steps and configure them.
Blue Ocean is a new user experience, and it provides an easy way to access unit test results (Figure 10).
Click on Pipeline to get the status of the pipeline. Click on the specific stages to access the logs of these stages. Click on Artifacts to access the package file and other artifacts (Figure 12).
The following list of open source tools can be integrated in the pipeline to implement DevOps practices.

Figure 9: Blue Ocean repository
Figure 10: Blue Ocean tests
Figure 11: Blue Ocean automated deployment
Tool Description
Travis CI Hosted continuous integration service that supports integration with BitBucket and GitHub.
https://travis-ci.org/
GoCD GoCD is a build and release tool that helps to perform end-to-end orchestration for application life
cycle management activities.
https://www.gocd.org/
Nagios Nagios is an open source tool that can be used to monitor network and infrastructure.
https://www.nagios.org/
Docker This is a very popular container management tool. Kubernetes supports Docker as a container provider.
https://www.docker.com/
Ansible Ansible is used for automation, such as for configuration management and continuous delivery.
https://www.ansible.com/
Collectl This is used to gather the performance data of systems such as CPU, network, and data.
http://collectl.sourceforge.net/
GitHub This provides a repository for public and private access to maintain version control. It is hosted.
https://github.com/
Kubernetes This is one of the most popular container orchestration tools available in the market.
https://kubernetes.io/
Artifactory This provides community and enterprise versions of artifact management tools.
http://www.jfrog.com/artifactory/
Selenium Selenium is a popular automated functional testing tool that is used for Web applications. It is open source.
https://www.selenium.dev/
Appium Appium is a popular automated functional testing tool that is used for mobile applications.
It is open source.
http://appium.io/
SonarQube This is used to analyse the code to track bugs, security vulnerabilities, and code smells. It supports
more than 15 programming languages.
https://www.sonarqube.org/
SaltStack SaltStack is an open source tool for IT automation and configuration management.
https://www.saltstack.com/
Apache JMeter Apache JMeter is an open source tool designed to load test applications and measure performance.
https://jmeter.apache.org/
OWASP ZAP This is used to scan security issues of applications for penetration testing. It is an active open source
Web application security scanner.
https://www.zaproxy.org/
Ant Apache Ant is an XML based build management tool for Java based projects.
https://ant.apache.org/
Gradle This is a popular build management tool for Android based projects. It is also used in Java based applications. It supports domain-specific languages.
https://gradle.org/
Maven Apache Maven is one of the most popular build tools, with multiple goals for an application’s life cycle
phases such as build, test, and deploy. It is mainly used for Java projects.
http://maven.apache.org/
Hygieia This is a one-of-a-kind DevOps dashboard that helps to integrate with tools such as Bamboo,
Jenkins, Jenkins-codequality, Jenkins Cucumber, Sonar, AWS, uDeploy, XLDeploy, Jira, VersionOne,
Gitlab, Rally, Chat Ops, Score, Bitbucket, GitHub, Gitlab, Subversion, GitHub GraphQL, HP Service
Manager (HPSM), AppDynamics, Nexus IQ and Artifactory. It has two types of dashboards — one for
engineers and the other for executives.
https://www.capitalone.com/tech/solutions/hygieia/
CFEngine This is a popular DevOps tool that is used to automate IT infrastructure related operations.
https://cfengine.com/
GitLab This is a Git repository. Now it also provides support for automation pipelines to configure continuous
integration and continuous deployment.
https://about.gitlab.com/
Junit This is a popular yet simple unit testing framework to write unit tests for the Java programming language.
https://junit.org/
Jasmine Jasmine is a popular behaviour driven development framework for testing JavaScript based applications.
http://jasmine.github.io/
References
[1] Agile, DevOps and Cloud Computing with Microsoft Azure. https://www.amazon.in/Agile-DevOps-Cloud-Computing-Microsoft/dp/9388511905
[2] Continuous Integration. https://www.martinfowler.com/articles/continuousIntegration.html

By: Mitesh S
The author has written the book ‘Agile, DevOps and Cloud Computing with Microsoft Azure’. The title of his upcoming book is ‘Hands-on Azure DevOps’.
Understanding DevOps:
A Revolution in Software
Development
In the field of IT and software development, DevOps implementation is gaining
significant popularity. This article takes a quick look at the important concepts of
DevOps and the phases of its life cycle. It also maps the relevant open source tools,
and finally, highlights how it can bring value to the IT industry.
The word DevOps is a combination of development and operations. DevOps is not software: not a tool, not a product and not a programming language, but an approach whereby the development and operations teams work together, instead of waiting for each other to finish tasks, to enable continuous delivery of value to end users. This software development methodology improves the collaboration between the development and the operations teams using various automation tools. These tools are implemented during the different phases that make up the DevOps life cycle.

DevOps is gaining popularity because it bridges the gap between developers and the supporting operations team. The DevOps approach aims at:
- Delivering high-quality software in a shorter development life cycle.
- Deploying in frequent cycles.
- Reducing the time to move software into production, from conceptualising an idea.
- Accelerating application delivery across enterprise portfolios.
- Enabling rapid building and delivery of products.
- Delivering applications and services at high velocity.

Various stages in the DevOps life cycle
An understanding of DevOps is incomplete without knowledge of its life cycle. The various phases that comprise the DevOps life cycle are highlighted in Figure 1 and described below.
1. Continuous development: The first phase of the DevOps life cycle involves ‘planning’ and ‘coding’ of the software. Planning includes activities like designing the blueprint of the module, and identifying the resources and the algorithm to be used. Once the plan is finalised, the developers code the application and maintain it using popular tools like Git, Gradle and Maven.
2. Continuous testing: To catch any error and ensure the reliability of the software, the developed modules are continuously tested for bugs. In this phase, testing tools like Selenium, TestNG, JUnit, etc, are used.
3. Continuous integration: This phase is the heart of the DevOps life cycle, in which code supporting new functionality in the Git repository is continuously integrated with the existing code.

Figure 1: Different phases of the DevOps life cycle (plan, code, build, test, deploy, operate, monitor; spanning continuous development, testing, integration, deployment and monitoring)
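The phases described above chain into a single flow: every change is built, tested and only then merged. As a purely illustrative sketch (the stage functions and the change name are hypothetical, not part of Git, Gradle or any real tool), one pass of that loop can be expressed as:

```python
def build(change: str) -> str:
    """Stand-in for the build step (in practice, Gradle or Maven producing an artifact)."""
    return f"artifact:{change}"

def run_tests(artifact: str) -> bool:
    """Stand-in for the continuous-testing phase (in practice, Selenium/TestNG/JUnit suites)."""
    return artifact.startswith("artifact:")

def integrate(change: str) -> str:
    """One pass of continuous integration: build the change, test it, then merge or reject."""
    artifact = build(change)
    return "merged" if run_tests(artifact) else "rejected"

print(integrate("feature-login"))  # prints "merged"
```

Real CI servers run exactly this build-test-merge gate automatically on every push, which is what makes frequent integration safe.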
DevOps is a set of practices that promotes partnership between the development and operations teams to deploy code to production faster, in an automated and continuous manner. The word ‘DevOps’ is a combination of the words ‘development’ and ‘operations’. DevOps helps to increase an organisation’s speed in delivering applications and services. It allows companies to serve their customers better and compete more powerfully in the market. In simple words, DevOps can be interpreted as an arrangement of development and IT operations with better communication and collaboration.

Why it is needed
Before DevOps, the development and operations teams worked in a completely segregated way.
- Testing and deployment were separate activities, done after design and build. Hence, they required more time.
- Using traditional IT practices, team members spent large amounts of their time in testing, deploying and designing, instead of building the project.
- Manually deploying the code leads to human errors in production.
- Different teams have different timelines. This leads to problems in synchronising timelines, causing more delays in deployment.

Comparing DevOps and traditional IT
When comparing traditional IT ops with DevOps, it’s clear how they differ and why the latter is increasingly embraced by organisations worldwide. Given below are some points of comparison.

1. Time
DevOps teams spend 33 per cent more time refining infrastructure against failure than traditional IT ops teams. In addition, DevOps teams spend about 21 per cent less time putting out fires on a
weekly basis and 37 per cent less time handling support cases. DevOps teams also waste less time on administrative support, due to a higher level of automation, self-service tools and scripts for support tasks. With all of this additional time, DevOps teams are able to spend 33 per cent more time improving infrastructure and 15 per cent more time on self-enhancement through further education and training.

2. Data and speed
DevOps teams tend to be small, agile, driven by innovation and focused on addressing tasks in an accelerated manner. According to a Gartner report, they work on the mantra, “Don’t fail fast in production; embed monitoring earlier in your DevOps cycle.” Agility is one of the top five objectives of DevOps. In the case of traditional IT ops, the data count for the feedback loop is confined to the service or application being worked upon. If there’s a downstream effect that is not known or noticed, it can’t be addressed. It’s up to IT ops to pick up the pieces. That is the reason why DevOps is faster in delivering business applications, and the challenge for IT ops is to keep pace with the speed of business.

3. Recovery and crunch time
The average DevOps team sees only two app failures per month, and the recovery time is less than 30 minutes for over 50 per cent of all respondents. Of the DevOps teams surveyed, 71 per cent can recover from failures in less than 60 minutes, while 40 per cent of traditional IT ops teams need over an hour to recover. A key practice of DevOps is to be prepared for the possibility of failures. Continuous testing, alerts, monitoring and feedback loops are put in place so that DevOps teams can react quickly and effectively. Traditional IT ops teams are almost twice as likely to require more than 60 minutes to recover, while recoveries in less than 30 minutes are 33 per cent more likely for DevOps teams. Automated deployments and a programmable infrastructure are the key features for quick recovery.

4. Release of software
When it comes to releasing software, DevOps teams need roughly 36.6 minutes to release an application whereas traditional IT ops teams need about 85.1 minutes. This means that DevOps teams release apps more than twice as fast as traditional IT ops teams.

Why DevOps is better
There are many advantages of using DevOps rather than traditional IT.
- Reduced chances of product failure: Software delivered by DevOps teams is usually more fit-for-purpose and relevant to the market, thanks to the constant feedback loop.
- Improved flexibility and support: Applications developed by DevOps teams are typically more expansive and easy to maintain, due to the use of microservices and cloud technologies (we’ll get to that later).
- Faster time to market: Application deployment becomes quick and dependable, thanks to the advanced continuous integration (CI) and automation tools DevOps teams usually count on.
- Better team efficiency: DevOps means joint responsibility, which leads to better team engagement and productivity.
- Clear product vision within the team: Product knowledge is no longer spread across different roles and departments, which means better process transparency and decision making.

The DevOps culture comes with a variety of rewards, some of which include greater efficiency, security and organisational collaboration. The 2017 State of DevOps Report quantifies this increase in efficiency, reporting that high performing organisations employing DevOps practices spend 21 per cent less time on unplanned work and rework, and 44 per cent more time on new work. More generally speaking, however, successfully implementing DevOps practices can have a profound impact on a company, by improving efficiency and execution in areas that are both essential and decidedly unglamorous.

Fredrik Håård, an engineer with over 12 years of DevOps experience who worked as a senior cloud architect at McKinsey and at Wondersign, articulates this point more fully: “Good DevOps engineers must be champions, and take responsibility for all the areas that might not be prioritised by the organisation, such as data security, disaster recovery, mitigation, and audits.” He adds, “The choices you make in DevOps can have long-lasting effects at a company.”

So the conclusion is: DevOps teams get more time and solve problems faster. They spend more time improving things and less time fixing things, recover from failures faster, and release applications more than twice as fast as traditional IT ops. Through DevOps, all members of different teams work together because they have the same goal, which is to deliver quality software to the market.

References
[1] https://en.wikipedia.org
[2] https://www.toptal.com

By: Neetesh Mehrotra
The author works at NIIT Technologies as a senior test engineer. His areas of interest are Java development and automation testing.
DevOps for data science is gaining popularity as it involves infrastructure, configuration management, integration, testing and monitoring. Hence, it accelerates data analysis insights. DevOps supports data scientists by creating integrated environments for their most vital tasks, such as data exploration and visualisation. Data scientists need varied types of infrastructure to handle any complex project. DevOps does the provisioning and configuration of infrastructure for a variety of environments. Any data science model is iterative in nature, as it involves new data that needs to be trained; based on that, it evolves new models that need to be made available to users. For this, the data scientist applies continuous integration and deployment practices. DevOps bridges the gaps between the training environment and the model deployment environment through continuous integration and continuous deployment pipelines.

RCloud is considered DevOps for data science as it resolves the data analysis development and deployment issues of collaboration, sharing, scalability and reproducibility. RCloud was created at AT&T Labs by Simon Urbanek, Gordon Woodhull and Carlos Scheidegger, and is open source software. It is a Web based platform for all aspects of data science, such as analytics, visualisation and collaboration. It uses the R language for all tasks.

Features of RCloud
- Can be used from anywhere: It is browser based software; so if you have Internet connectivity, you can use it from anywhere.
- Project container facility: An RCloud notebook consists of all the required components and associated data dependencies. It contains the dependencies of a data analysis, which include code, comments, equations, visualisations, etc.
- Association feature: RCloud provides an excellent capability for association. It is browser based software that is installed on a server or any distributed environment such as Hadoop. It provides access to all notebooks so that new users can view, copy, edit and update the analysis and visualisations with new data sets. It is very efficient and easy to use as it only needs a browser.
- Distribution capability: RCloud is based on the URL sharing concept. It delivers value faster, and is agile enough to let us share the URL at any stage of analysis.
- Scalability: RCloud can make parallel connections to multi-server systems. It gives the data scientist the flexibility to run Big Data packages without writing complex code.
- User directory based access: It contains a user directory that provides access to the notebooks of every user registered with RCloud.
- Reproducibility: A data analysis on RCloud can be verified and executed by anyone with access to the notebooks, without any concern about environment variables.
- Live code execution: RCloud notebooks are not static Web pages but code that is executed live.
- Unique RCloud Web service interface: RCloud provides a unique Web service interface through which any notebook asset can be integrated with other technologies by simple means.
- Promotes user engagement: RCloud is platform-independent. Access and control remain constant, which increases user confidence and engagement.

Why RCloud?
For effective results in a data science project, certain information must be shared among team mates. They must be agile enough to address new features and functionalities. They must move their results across the different levels of the data science project, such as data pre-processing, exploratory data analysis, predictive modelling and visualisation. RCloud is the perfect software to address these issues as it has association, distribution, scaling and reproducibility (ADSR) characteristics. RCloud is vital for data science for many reasons:
- RCloud is an open source Web based platform for data science. It has an excellent capability to help you share your ideas or work with your team mates. Figure 1 describes the RCloud components on GitHub.
- RCloud provides a platform for the data scientist to search relevant things without reinventing the wheel.
- It provides fast interaction with data in the Hadoop Distributed File System (HDFS) or similar kinds of distributed file systems. This feature is very well suited to Big Data analytics.
- RCloud differs from other DevOps of data science as it provides browser based access.
- It gives users a lot of flexibility to create any type of complex widgets, notebooks or dashboards.
- Both registered and non-registered users can view or interact with live notebooks of the RCloud environment.
- Communication in RCloud is done with standard communication protocols such as HTTP.
- RCloud provides data scientists with the capability to run Big Data packages without writing complex code.
- RCloud is well suited for Big Data applications due to its scalability feature.
- It provides great security features so that unauthorised clients cannot make calls to the RCloud runtime environment. Notebooks of RCloud can also be encrypted for advanced security. Authenticated client-server channelling is also a unique feature of RCloud.
- RCloud maintains automatic Git based trails of code modifications.

DevOps involves infrastructure, configuration management, integration, testing and monitoring. RCloud is considered DevOps for data science as it resolves the data analysis development to deployment issues of ADSR. In this article, we have described the main features of RCloud in detail, with an emphasis on those that are most essential for data scientists.

References
[1] https://www.forbes.com/sites/janakirammsv/2018/11/04/the-growing-significance-of-devops-for-data-science/#7120d32a7481
[2] http://stats.research.att.com/RCloud/
[3] https://www.kdnuggets.com/2016/11/rcloud-devops-data-science.html

By: Dr Dharmendra Patel and Dr Atul Patel
Both the authors are associated with the Smt. Chandaben Mohanbhai Patel Institute of Computer Applications, Charusat, Gujarat. Their areas of interest are data mining, data science, artificial intelligence, deep learning and image processing.
Development combined with operations leads to the high impact DevOps practices, and microservices based architecture is ubiquitous in such an environment. Although such architecture is not new (it has been around since the 1980s), the DevOps practice is relatively new. The idea of DevOps began in 2008 with a discussion between Patrick Debois and Andrew Clay Shafer concerning the concept of agile infrastructure. In June 2009, the epochal presentation of ‘10+ Deploys a Day: Dev and Ops Cooperation’ was made at Flickr by John Allspaw and Paul Hammond. That year can be treated as the year the DevOps movement began. Docker, based on Linux containerisation technology, came into play in 2013. In 2014, the journey of Kubernetes began at Google, as an attempt to orchestrate multiple Docker containers. Subsequently, Kubernetes came to be maintained by the Cloud Native Computing Foundation (CNCF). The consequent proliferation of Kubernetes made monitoring its deployments in DevOps an absolute necessity. This resulted in open source tools like Prometheus and Sensu, along with the older Nagios. Let’s take a closer look at Prometheus, while getting the basics on Sensu.

Monitoring tools
We typically talk about the following three tools in the context of Kubernetes monitoring.
1. Prometheus: This open source Kubernetes monitoring tool collects the metrics of a Kubernetes deployment from various components, relying on the pull method. The data is stored in an inbuilt time series database.
2. Sensu: This complements Prometheus. It provides the flexibility and scalability to measure telemetry and execute service checks. The collected data can be more contextual here, and it is possible to extend it to provide automatic remediation workflows as well.
3. Nagios: This is the old friend we used to monitor deployments. It serves its purpose well, especially in the context of bare metal deployments, for a host of administrators.

Kubernetes monitoring challenges
When we decide to go with Kubernetes, we are ready to embrace the mantra that the only constant in life is change. Naturally, this adds associated challenges with respect to monitoring Kubernetes deployments. Let’s discuss a few monitoring challenges to understand them better.
1. The apps are ever evolving; they are always moving.
2. There are many moving pieces to monitor. Kubernetes is not really a monolithic architectural tool, as we all know. Those components also keep on changing.
3. Once upon a time, everything was server based, often bare metal. Then we got cloud based deployment. The natural progression makes multi-cloud a reality, which adds an extra facet to the monitoring challenges.
4. We typically annotate and tag the pods and containers in a Kubernetes deployment to give them nicknames. The monitoring should reflect the same.

Kubernetes data sources
Once we understand the challenges of monitoring in a Kubernetes environment, we need to understand the moving parts of a typical, sizable deployment. These moving parts are the data sources from which Prometheus pulls the monitoring metrics.
1. The hosts (nodes) which are running the Kubelets.
2. The processes associated with the Kubelets, i.e., the Kubelet metrics.
3. The Kubelet’s built-in cAdvisor data source.
4. The Kubernetes cluster, i.e., the whole deployment. From a monitoring perspective, these are the kube-state metrics.
The good part is that Prometheus can monitor all of the above, addressing all the monitoring challenges we described earlier.

The Prometheus architecture
The heart of Prometheus is its server component, which has a collector that pulls the data from different data sources and stores it in an inbuilt time series database (TSDB). It also has an API server (HTTP server) to serve API requests and responses. The unique feature of Prometheus is that it is designed for pulling data, which makes it scalable. So, in general, one need not install an agent to the various
[Figure: The Prometheus architecture. The retrieval component pulls metrics over HTTP from node exporters, jobs and other targets, stores them in the TSDB (HDD/SSD), and serves PromQL queries to the Prometheus web UI, Grafana and API clients for data visualisation and export.]
components to monitor them. However, Prometheus also has an optional push mechanism called the Pushgateway. The Alertmanager is the component used to configure alerts and send notifications to the various configured notifiers, like email, etc. PromQL is the query engine that supports the query language. Typically, it is used with Grafana, although Prometheus has an intuitive inbuilt Web GUI. Data can be exported as well as visualised. For the sake of brevity, we are only covering the Prometheus Web GUI, not the Grafana based visualisation.

So let us play around with Prometheus. It is now time to get our hands dirty. A typical Prometheus installation involves the following best practices:
1. Install kube-prometheus.
2. Annotate the applications with Prometheus's instrumentation.
3. Label the applications for easy correlation.
4. Configure the Alertmanager for receiving timely and precious alerts.
Let us configure Prometheus in a demo-like minimalistic deployment to get a taste of it. To configure prometheus.yml, use the following code:

global:
  scrape_interval: 30s
  evaluation_interval: 30s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9090', '127.0.0.1:9100']
        labels:
          group: 'prometheus'
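The scrape configuration above pulls metrics from targets on ports 9090 (Prometheus itself) and 9100 (a node exporter). To make the pull model concrete, here is a minimal, standard-library-only Python sketch of what such a target looks like: an HTTP endpoint serving metrics in the Prometheus text exposition format. The metric name and value are made up purely for illustration; this is not a real exporter.

```python
import http.server
import threading
import urllib.request

class MetricsHandler(http.server.BaseHTTPRequestHandler):
    """Serves a /metrics endpoint in the Prometheus text exposition format."""

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = (
            "# HELP demo_requests_total Total requests served (illustrative).\n"
            "# TYPE demo_requests_total counter\n"
            "demo_requests_total 42\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_once() -> str:
    """Start the server on an ephemeral port, scrape it once as Prometheus would."""
    server = http.server.HTTPServer(("127.0.0.1", 0), MetricsHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        url = f"http://127.0.0.1:{server.server_address[1]}/metrics"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()
    finally:
        server.shutdown()
        server.server_close()

if __name__ == "__main__":
    print(serve_once())
```

Prometheus would hit such an endpoint on every scrape_interval; a real exporter simply exposes many such series, which is why no agent needs to push anything into the server.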
> --net="host" \
> --name=prometheus \
> quay.io/prometheus/node-exporter:v0.13.0 \
> -collector.procfs /host/proc \
> -collector.sysfs /host/sys \
> -collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

Note: The paths shown here are purely local to a demo deployment. You may have to adjust the paths as per your own deployment.

1. Raw metrics can be viewed with curl localhost:9100/metrics; in a browser, they are served at the /metrics endpoint.
2. The scrape targets view can be found at the /targets URL. For production, Grafana is the preferred tool.
3. The graph view can be found at the /graph URL. Any metrics collected (viewable using the /metrics endpoint) can be graphed in /graph.

Sensu: Complementing Prometheus
Prometheus has lots of advantages because of its scalability, pluggability and resilience. It is tightly integrated with Kubernetes and has a thriving, supportive community. However, it has a few disadvantages too. For example, the simplified, constrained data model loses various contexts of the measurements. It is tightly integrated with Kubernetes, so if the deployment is multi-generational, it may face challenges. The data transported is neither authenticated nor encrypted. Sensu complements Prometheus on these aspects aptly. Figure 4 shows how Sensu agents are typically deployed in a Kubernetes environment.

Figure 4: Sensu deployment (Reference: https://blog.sensu.io/monitoring-kubernetes-docker-part-3-sensu-prometheus)

Some of the advantages Sensu provides to complement Prometheus are as follows:
1. It adds more context to the data Prometheus collects, making it more meaningful.
2. Workflow automation of remediation actions based on alerts. So, alerts become actionable, which is of paramount interest for self-healing purposes.
3. Secure transport: Prometheus does not encrypt or authenticate the metrics.

In this article, we discussed how microservices based architectural patterns are becoming contextual in a DevOps environment with the ubiquitous usage of Kubernetes. Monitoring such a sizable deployment becomes challenging because of its highly dynamic nature, so we need tools like Prometheus to overcome some of the monitoring challenges. Sensu acts as a complementary tool to Prometheus, improving the latter's coverage considerably.

References
[1] https://dzone.com/articles/devops-tools-for-monitoring
[2] https://blog.sensu.io/monitoring-kubernetes-part-1-the-challenges-data-sources
[3] https://blog.sensu.io/monitoring-kubernetes-docker-part-2-prometheus
[4] https://prometheus.io/docs/introduction/overview/
[5] https://blog.sensu.io/monitoring-kubernetes-docker-part-3-sensu-prometheus
[6] https://www.katacoda.com/courses/observability-analysis/prometheus

By: Pradip Mukhopadhyay
The author has 19 years of experience across the stack, from low level systems programming to high level GUIs. He is a FOSS enthusiast and currently works for NetApp, Bengaluru.
DevOps vs Agile:
What You Should Know
About Both
DevOps is a practice of bringing the development and operations teams
together, whereas agile is an iterative approach that focuses on collaboration,
customer feedback and small rapid releases. This article highlights the
differences between the two software development approaches.
DevOps is a software development method that focuses on communication, integration and collaboration among IT professionals to enable the rapid deployment of products. On the other hand, the agile methodology involves continuous iteration of development and testing in the SDLC process. In this software development method, the emphasis is on iterative, incremental and evolutionary development.

Key differences
- DevOps is a practice of bringing the development and operations teams together, whereas agile is an iterative approach that focuses on collaboration, customer feedback and small, rapid releases.
- DevOps focuses on constant testing and delivery, while the agile process focuses on constant changes.
- DevOps requires a relatively large team, while agile requires a small team.
- DevOps leverages both shift-left and shift-right principles; on the other hand, agile leverages the shift-left principle.

Figure 2: Agile addresses the communication gaps between the customer and developer
Figure 3: DevOps addresses the communication gaps between the developer and the IT operations teams
What is it?
  Agile: An iterative approach that focuses on collaboration, customer feedback and small, rapid releases.
  DevOps: A practice of bringing the development and operations teams together.

Purpose
  Agile: Helps to manage complex projects.
  DevOps: Its central concept is to manage end-to-end engineering processes.

Task
  Agile: The agile process focuses on constant changes.
  DevOps: Focuses on constant testing and delivery.

Implementation
  Agile: Can be implemented within a range of tactical frameworks like ‘sprint’, ‘SAFe’ and ‘scrum’.
  DevOps: Its primary goal is collaboration, so it doesn’t have any commonly accepted framework.

Team skillset
  Agile: Emphasises training all team members to have a wide variety of similar and equal skills.
  DevOps: Divides and spreads the skillset between the development and operations teams.

Team size
  Agile: A small team is at the core of agile, because the smaller the team, the faster it can move.
  DevOps: A relatively larger team, as it involves all the stakeholders.

Duration
  Agile: Development is managed in units of ‘sprints’, each much less than a month long.
  DevOps: Strives for deadlines and benchmarks with major releases. The ideal goal is to deliver code to production daily, or every few hours.

Feedback
  Agile: Feedback is given by the customer.
  DevOps: Feedback comes from the internal team.

Emphasis
  Agile: Emphasises the software development methodology used to create software. Once the software has been developed and released, the agile team does not care what happens to it.
  DevOps: Is all about taking software that is ready for release and deploying it in a reliable and secure manner.

Cross-functional teams
  Agile: Any team member should be able to do what’s required for the progress of the project. Also, when each team member can perform every job, it increases understanding and bonding between them.
  DevOps: The development and operations teams are separate, so communication is quite complex.

Communication
  Agile: Scrum is the most common method of implementing agile software development. Daily Scrum meetings are carried out.
  DevOps: Communications involve specs and design documents. It’s essential for the operations team to fully understand the software release and its hardware/network implications to run the deployment process adequately.

Documentation
  Agile: Gives priority to the working system over complete documentation. It is ideal when you’re flexible and responsive. However, it can hurt when you’re trying to turn things over to another team for deployment.
  DevOps: Process documentation is foremost, because the software is sent to the operations team for deployment. Automation minimises the impact of insufficient documentation. However, in the development of complex software, it’s difficult to transfer all the knowledge required.

Automation
  Agile: Doesn’t emphasise automation, though it helps.
  DevOps: Automation is the primary goal of DevOps. It works on the principle of maximising efficiency when deploying software.

Goal
  Agile: Addresses the gap between customer needs, and the development and testing teams.
  DevOps: Addresses the gap between development plus testing, and operations.

Focus
  Agile: Focuses on functional and non-functional readiness.
  DevOps: Focuses more on operational and business readiness.

Importance
  Agile: Developing software is inherent to agile.
  DevOps: Developing, testing and implementation are all equally important.

Speed vs risk
  Agile: Teams using agile support rapid change and a robust application structure.
  DevOps: The teams must make sure that the changes made to the architecture never pose a risk to the entire project.

Quality
  Agile: Produces better application suites with the desired requirements, and can easily adapt to changes made in time, during the project’s life.
  DevOps: Along with automation and early bug removal, contributes to creating better quality software. Developers need to follow coding and architectural best practices to maintain quality standards.

Tools used
  Agile: JIRA, Bugzilla and Kanboard are some popular agile tools.
  DevOps: Puppet, Chef, TeamCity, OpenStack and AWS are popular DevOps tools.

Challenges
  Agile: The agile method needs teams to be more productive, which is difficult to manage every time.
  DevOps: The DevOps process needs the development, testing and production environments to streamline work.

Advantage
  Agile: Offers a shorter development cycle and improved defect detection.
  DevOps: Supports agile’s release cycle.
The target area of agile is software development, whereas the target area of DevOps is delivering end-to-end business solutions quickly. DevOps focuses more on operational and business readiness, whereas agile focuses on functional and non-functional readiness.

By: Harsukhdeep Singh
The author has worked as a QA engineer for the last four years and is an open source enthusiast.
Software development teams need to work collaboratively on numerous tasks that are time consuming. DevOps is very popular in the corporate world for the automation, collaboration and real-time control that it enables on different types of software projects. Many corporate giants including Amazon, Oracle, Red Hat, SaltStack, Nagios, etc, use DevOps tools for real-world projects.

DevOps refers to the approach used to correlate the software development process with the various IT tasks, to provide better software quality and streamlined deliveries. It speeds up the development, testing and deployment of high quality code. As the name indicates, DevOps is a combination of the development (Dev) and information technology

The design, development and testing teams take more time, and more errors occur during deployment. The architecture of DevOps provides a combined approach for development and operations. This approach involves developing code, testing, planning, monitoring, deployment, operations and release. This architecture is followed by large applications and those that are hosted on cloud platforms. This is because, in the case of large applications, where the development and operations teams do not work in synchronised environments, long delays can occur in the process of design, deployment and testing. DevOps overcomes these delays, maintaining high quality and timely delivery of the product.

…this process, the team identifies bugs in the code with the help of automated testing, which is more reliable and less time consuming than manual testing.
- Continuous monitoring: This monitors all the processes that take place in different phases of the software development life cycle and records information regarding problems.
- Continuous feedback: This improves subsequent versions of the software by removing what is not relevant according to the customer.
- Continuous deployment: This involves code deployment on all the servers.
- Continuous operation: This is the automation of the process and key operations.
DevOps scores over legacy, monolithic and agile software development. This article discusses the various stages of the DevOps software development cycle and delineates the appropriate FOSS tools that can be used at each of the stages.
DevOps is a method of software engineering that combines both software development (Dev) and information technology operations (Ops) to create high quality software within a short period of time. DevOps shortens the time taken for software development through continuous delivery and the integration of code. As the name suggests, DevOps is a combination of development and operations.

Traditionally, software development is done by a team of stakeholders comprising business analysts, software engineers, programmers and software testers. The software development is done by the development team through a software life cycle comprising various stages like customer requirement(s), planning, modelling, construction and deployment. However, such a development cycle can take a lot of time and collaborative effort to successfully deliver software to the customer. This is also known as agile software development.

DevOps can help overcome the drawbacks of the agile software development life cycle. In the agile method, every stage of the life cycle needs to be completed prior to the final release, and it can take a lot of time for the software to reach maturity. In the case of DevOps, instead of delivering the whole software, small chunks of code are updated, and the updated software is released to the operations team continuously, which speeds up the software development cycle. The DevOps life cycle can be automated using various development tools and requires less manual activity. The DevOps concept first originated in 2008 during a discussion between Patrick Debois and Andrew Clay Shafer. However, the idea only started to spread in 2009 with the advent of the first DevOpsDays event, held in Belgium. Now, most tech giants like Facebook, Google, Amazon and Netflix have adopted the DevOps culture.
Dev Ops
Test Plan Monitor
products and services with the updated
software. This is done by either using
proprietary or FOSS tools.
Release 8. Monitor: This is the final stage for
the IT operations team and also for the
Deve DevOps life cycle. Here the customer
lopment Operations
Continuous Delivery
requirements are gathered and the
data is sent to the development team
Figure 1: DevOps life cycle to update the software product/service
for the next iteration of DevOps.
DevOps life cycle and tools with the codebase, it is tested in a FOSS tools like Nagios are used by
The DevOps life cycle consists of eight virtual environment, using a VM the operations team to automate the
stages – plan, code, build, test, release, or Kubernetes. A series of both monitoring process.
deploy, operate and monitor. There are manual and automated tests are
various free and open source software conducted at this stage. This is Benefits of DevOps
(FOSS) tools that can be used to improve the most crucial stage of DevOps DevOps has many more benefits than
and automate the DevOps life cycle. and should go through without the agile software development process.
The eight stages of DevOps are failure. This stage could also In case of the latter, the development
explained below. cause bottlenecks and increase the process progresses through four main
1. Plan: This is the starting stage of timeline. In order to complete the stages – planning, coding, testing and
DevOps where the planning for the stage successfully without much release. The planning for the software
software development is done by delay, various continuous testing development is done based on customer
the development team. Everything tools are used. FOSS tools like requirements and involves assigning
from the software requirements to the CruiseControl and Selenium are used various tasks to the stakeholders,
development timeline is planned and to automate the process. like designing flowcharts, writing
the team works accordingly. FOSS 5. Release: After the code is tested documentation, estimating timelines, etc.
tools like Redmine, Trac and Git are successfully, it is prepared for After the planning is done, the coding
used at this stage. release. At this stage the development for the software development starts
2. Code: After everything is planned, team decides which features of the and a working prototype is created.
the coding for the software software product should be enabled The prototype is then tested in the third
development is done. Coding can be or disabled by default and when it (testing) stage. The testing stage is the
done from scratch or can be reused, should be released. This is the final most important part of the agile software
depending on the requirements. Many stage for the DevOps development development life cycle because it can
FOSS tools can be used for coding. cycle, and after this the software/ affect the overall quality of the software.
Git is one such tool that is used to code is delivered to the operations This stage can also cause bottlenecks
automate the process. team. This stage can also be because the time required for software
3. Build: After the coding is done, it is automated using FOSS tools like testing cannot be estimated precisely and
shared with other software engineers Jenkins and Bamboo. could go beyond the project’s timeline.
of the development team. The code is 6. Deploy: This stage belongs to the After successfully testing the
either approved or rejected after it is operations team and it starts after software, it is then released to the
reviewed. If it’s approved, then it is the release of the software by the customer in the fourth stage. However,
merged with the main codebase of the development team. The software if the customer finds any problems with
repository. FOSS tools like Gradle is then deployed by the operations the software, then the development
can be used to automate this process. team using various tools. The release team takes the customer’s feedback for
4. Test: Once the new code is merged is configured according to the another iteration of the development
cycle. Then the whole development process takes place again, through the four stages, to improve the software, and it is released again with another version number. Once again the customer's feedback is taken, and if any improvements are required, the development cycle goes into another iteration; the cycle keeps continuing until the software reaches maturity (a stable release). This type of development cycle can take an indefinite amount of time and causes delays in software development. That is why using DevOps makes the development cycle quicker and more effective.

With DevOps, the development team communicates with the operations team and does not deal with the customers directly. In DevOps, only small chunks of code are continuously coded and tested, which prevents bottlenecks or delays in delivering the code. DevOps provides continuous delivery and integration of the code. This is why it is better than agile software development.

As DevOps has many benefits over agile software development, many software companies are adopting the DevOps culture. Because it provides continuous delivery and integration, DevOps can help software products and services reach maturity in a very short period of time. DevOps is mainly suitable for the development of cloud computing products and services, because these need collaboration between the development team and the IT operations team. As there is a growing demand for cloud computing products and services, DevOps is the right choice for most software companies and has a great future.

By: Debojit Acharjee
The author is a software engineer and writer.
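The eight-stage cycle described above can be sketched as a toy automation loop in Python. This is purely illustrative: the stage names come from the article, but the functions and change labels are hypothetical stand-ins, not calls into Jenkins, Gradle or any other real tool.

```python
# Toy model of the DevOps cycle: every small change batch flows through
# all eight stages automatically, mirroring the article's point that
# small chunks of code are updated and released continuously,
# rather than one big release at the end.
STAGES = ["plan", "code", "build", "test",
          "release", "deploy", "operate", "monitor"]

def run_stage(stage, change):
    # A real pipeline would invoke a tool here (Git, Gradle, Selenium,
    # Jenkins, Nagios, ...); this sketch only records the step taken.
    return f"{stage}:{change}"

def run_pipeline(changes):
    history = []
    for change in changes:           # one small chunk at a time
        for stage in STAGES:
            history.append(run_stage(stage, change))
    return history

log = run_pipeline(["feature-1", "bugfix-2"])
print(len(log))  # 16 steps: 2 changes x 8 stages
```

The point of the sketch is only the shape of the loop: each change traverses the full cycle on its own, so feedback from the monitor stage of one chunk can seed the plan stage of the next.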
There are typically a number of patterns in coding interview questions. Some of these include dynamic programming, sliding window, monotonic queue/stack, divide and conquer patterns, etc. Problems that appear difficult at first turn out to be quite simple once the underlying pattern is identified. Let us talk about a few examples of each of these patterns.

Dynamic programming (DP) continues to be quite popular in coding interviews. Dynamic programming questions are difficult unless you have practised earlier on similar types of questions. While the solution, once framed into a DP equation, looks quite simple, there is quite a leap from the problem statement to framing that equation. One way to navigate the gap, typically, is by thinking about the problem first in terms of a top-down recursive solution. Once we can find the recursive equation, extend the solution by applying memoisation so that you don't have to recompute the same sub-problems over and over again. Once we have the memoised solution, try and see if you can frame it as a DP equation by looking at the sub-problems. Let us look at a sample question.

There is a frog that needs to cross a river. There are stones on the crossing path, which is divided into units of one, on which the frog can jump and cross the river. The positions of these stones are given as integers in a sorted array. The frog is sitting on the first stone, which is at position 0, and the first jump must be of one unit. At any point during the journey, when the frog jumps, it needs to land on a stone, else it can't cross the river. At any stone, it can only jump (k-1), k or (k+1) units from the current stone, where k is the size of the previous jump. The frog needs to land on the last stone for it to cross the river. You need to write code to determine whether the frog can actually cross the river, given an array representing the positions of the stones on the path.

The brute force solution would be to frame this problem as a recursive solution. Given the position of the current stone and the size of the last jump, we can recursively keep checking whether we can reach a stone from the current position, and from then onwards the next stone, and so on, until we end up at the last stone or run out of stones. Given the size of the last jump, k, the possible sizes of the new jump are (k-1), k and (k+1). We need to check if there exists a stone at the current position plus the new jump size. If so, we can then again recursively do the check from that stone (with that stone as the starting point and the jump size with which that stone was reached). If we can reach the last stone through any of the paths, the crossing was successful. If there is no successful path to the last stone, the frog fails to cross.
The brute force recursive solution has many redundant computations, since we can end up with the same starting stone and jump size along many of our recursive paths. Hence, an obvious improvement is to memoise these computations. We can cache the result of the recursive function for each stone-index and jump-size combination, and if the same recursive call is made again, we can return the cached result instead of making the recursive function call.

As we compute the recursive solution, it becomes clear that at each stone, we are trying to find out what the possible jump sizes are from this stone. Once we have the set of jump sizes known for each stone, we can then check if there exists a stone at any of these jump distances. If there exists a stone at that position, we can then update the new jump sizes for the newly reached stone. If we end up reaching the last stone during our traversal of all stones, then we have figured out that the frog can cross successfully; else the frog cannot cross. We create a map which remembers the set of possible jump sizes associated with each stone's position. Hence, given a stone and its set of jump sizes, the DP solution computes all reachable stones from the current stone, and their possible jump sizes. Given that this is a reachability problem, it is also possible to formulate it in terms of graph reachability: given the starting stone and the destination stone, the idea is to find out whether a possible path exists between them while obeying the constraints on the jump sizes. I will leave it to our readers to come up with the depth first search/breadth first search method of computing this reachability solution.

Another pattern that appears often in coding interviews is the sliding window pattern. Let us look at an example. Given a source string S and a target string t, write a function that returns the minimum length substring of S which contains all the characters in the target. The brute force approach is to enumerate all possible substrings of S, see which of them contain all characters in target t, and choose the minimum length substring among these. Instead, we can apply a sliding window approach to this problem.

Typically, a sliding window has a left pointer and a right pointer. We first expand the right pointer over the source string till the window includes all characters in target t. Now we have a valid substring containing all of the target characters, which is represented by the window. We can now move the left pointer forward to shrink the window, so that we can reduce the length of the substring. We keep repeating this operation over the complete source string and keep updating the minimum length substring among all the valid substrings we have encountered. I will leave it to our readers to write and test the actual code for this problem.

The sliding window pattern is typically applicable to problems over a sequential, single dimensional data structure such as an array or a string, though it can be applied with some modifications to matrix problems also. Here are a few sample problems for our readers to practise.
(1) Given an array of integers A of size N, find the contiguous sub-array with the minimum sum. A brute force approach would be to consider all sub-arrays (which is N^2), compute the sum for each of them and choose the minimum. Can you do better than that?
(2) Given a binary array containing 1s and 0s, find the length of the longest consecutive run of 1s in the array if you can replace k 0s with 1s. First assume that k is 1; then solve this problem for any arbitrary k.
(3) Given K sorted arrays, write a function to form a single sorted array from all these arrays.
(4) You are given an array of N integers. Write a function that computes the maximum absolute difference between the nearest smaller left element and the nearest smaller right element among all the array elements. If there is no left smaller element for a given array element, take the left smaller element as 0 (and do the same for the right smaller element).
(5) You are given an array of N integers, each of which represents the height of a histogram bar. Each of the bars has unit width. You are asked to find the area of the largest rectangle in the histogram. Remember that for any two histogram bars, the smallest bar lying between them decides the height of the rectangle.
(6) You are given an integer N. You need to find another integer M that has the same set of digits present in N but is greater in value than N. M should be the smallest such integer. If no such integer exists, return -1.

Feel free to reach out to me over LinkedIn/email if you need any help with your coding interview preparations. If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com.

I request our readers to follow the public health guidelines and stay safe. Please don't worry if you are not able to focus fully on learning new technologies, programming, etc. These are difficult times. So first take care of your mental and physical health, help others wherever you can, and stay safe. Wishing all our readers happy coding until next month! Stay healthy and stay safe.

By: Sandya Mannarswamy
The author is an expert in natural language processing and is currently working as an independent researcher. Her interests include natural language processing, machine learning and AI.
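As an addendum, the minimum window substring technique described above could be coded along these lines (one possible sketch; the identifiers are mine, not from the column):

```python
from collections import Counter

def min_window(s, t):
    """Smallest substring of s containing every character of t
    (with multiplicity); returns '' if no such window exists."""
    need = Counter(t)          # counts of characters still missing in the window
    missing = len(t)           # total number of missing characters
    best = ""
    left = 0
    for right, ch in enumerate(s):
        if need[ch] > 0:       # this character was still needed
            missing -= 1
        need[ch] -= 1
        while missing == 0:    # window is valid: shrink from the left
            if not best or right - left + 1 < len(best):
                best = s[left:right + 1]
            need[s[left]] += 1
            if need[s[left]] > 0:   # dropped a needed character
                missing += 1
            left += 1
    return best

print(min_window("ADOBECODEBANC", "ABC"))  # BANC
```

Each character is visited at most twice (once by each pointer), so the whole scan is linear in the length of S, versus the quadratic number of substrings in the brute force approach.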
The term 'blockchain technology' typically refers to the transparent, trustless, publicly accessible ledger that securely transfers the ownership of units of value using public key encryption and proof of work methods. The technology uses decentralised consensus to maintain the network, which means it is not centrally controlled by a bank, corporation or government. In fact, the larger the network grows and the more decentralised it becomes, the more secure it is. Blockchain technology is picking up at a fast pace nowadays. The technology that emerged in 2009 as the underlying platform for Bitcoin exchange has now evolved into a mainstream technology. It finds application in various fields like healthcare and finance. Many companies venturing into this domain are developing blockchain based applications that help in making business operations more transparent and efficient.

What exactly is blockchain?
At the most basic level, blockchain is just a chain of blocks, but not in the traditional sense of those words. When we use the words 'block' and 'chain' in this context, we are actually talking about digital information (the block) stored in a public database (the chain). Blocks on the blockchain are made up of digital pieces of information, and they have three main parts.
Part 1: Blocks store information about the transaction, like the date, time and amount involved in the transaction of a recent purchase.
Part 2: Blocks store information about the participants in the transactions. For example, a block for the purchase of some items from Amazon would record the name along with other information to amazon.com. Instead of using the actual name, the purchase is recorded without any identifying information, using a unique 'digital signature', like a sort of user name.
Part 3: Blocks store information that distinguishes them from other blocks, much like the names of people. Each block stores a unique code called a hash that makes every block different from the others. Hashes are cryptographic codes created by special algorithms.

Blockchain variants
Public: The ledgers of these blockchains are visible to everyone on the Internet. They allow anyone to verify and add a block of transactions to the blockchain. Public networks have incentives for people to join and are free to use. Anyone can use a public blockchain network.
Private: Private blockchains exist within a single organisation, allowing only specific people of the organisation to verify and add transaction blocks. However, everyone on the Internet is generally allowed to view them.
Consortium: In this blockchain variant, only a group of organisations can verify and add transactions. Here, the ledger can be open or restricted to select groups. A consortium blockchain is used across organisations and is only controlled by pre-authorised nodes.

Pillars of blockchain platforms
With many public and private blockchain platforms available in the marketplace, one has to carefully choose a suitable option based on one's requirements. This should be evaluated against the pillars of the blockchain platform, as listed below:
Decentralised network
Platform security
Immutability of the record state
Decentralised network: One of the key architectural principles of a blockchain platform is its decentralised nature. This means the transaction in the blockchain network is copied across all the nodes of the network, and all the nodes are connected. This makes the platform highly reliable, as there is no possibility of tampering with the transaction record: it is copied to all the nodes, and tampering with all the nodes in the network is practically impossible at any given time.
Platform security: Though the blockchain platform is decentralised in nature, with many users being a part of the workflow process and participating in the various stages of executing the transaction, a higher level of security is ensured in any blockchain platform due to its decentralised nature and multi-node record copies. Also, different blockchain platforms ensure higher levels of platform security through features like permissionless ledgers, the consensus algorithm, the use of cryptocurrency for transactions and the smart contract facility, to name a few.
Record immutability: In any blockchain platform, the decentralised network and the ledger copied across all the nodes ensure that all the records kept in the ledger for any transaction are secure: a change in a record is accepted only if it is accepted by all participants across nodes, thus changing the record unanimously across all the ledger copies in the network. This ensures that the records in the transaction processing stages are immutable in nature. Any correction to a record is appended as a new transaction processing stage in the network, thus ensuring all the record processing transaction stages are kept as distinct entries in the ledger.

Blockchain platforms
Blockchain platforms assist us in creating applications that implement the concepts of blockchain. Not every person or company has the resources or the time to develop their own blockchain from scratch, and hence companies leverage blockchain platforms developed by tech giants for faster and easier application development. Any organisation deciding to implement blockchain may choose from the prominent frameworks available, i.e., Ethereum, Hyperledger Fabric, Quorum, Corda, Ripple, etc. The final decision should be based upon the suitability of each to the organisation. Let's now discuss the most popular open source blockchain platforms.

Ethereum
Ethereum (a public blockchain network) was developed by Vitalik Buterin and is considered an efficiently developed platform that has smart contract features, flexibility and multi-industry adaptability. Ethereum acts as a base component in building and developing most decentralised applications. ERC20 is the most popular token standard among cryptocurrencies. Stability, security,
permissions entities have over the system. In a public BigchainDB, any participant is able to access the network or deploy their own MongoDB+Tendermint node and connect it to the database federation, while a permissioned BigchainDB could be managed by a consortium or a governing entity, where every member of the consortium manages their own node in the network and no one can join without permission.

BigchainDB's transaction model is analogous to that of Bitcoin, as shown in Figure 3, in the sense that an asset transaction receives an asset input, which is then transformed into an output that may be used in the future as an input for a new transaction. Asset outputs can only be used once as input for a transaction. There are two types of transactions in BigchainDB, as given below.
Create transactions generate a new asset in the system (as a JSON document in MongoDB) with two types of information in it: asset information, which is immutable and can't be modified once the asset is created; and metadata, which can be modified through subsequent transfer transactions.
Transfer transactions allow the transfer of ownership of an asset or the modification of the metadata. The only one entitled to perform this transaction over an asset is its owner. These transactions use as input an unused output of the asset, generating as a result a new output with the corresponding modifications.
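This create/transfer model can be imitated with a toy in-memory ledger. The sketch below is an illustration of the idea only: the class, method names and hashing scheme are invented for this example and are not the BigchainDB API.

```python
import hashlib
import json

def tx_hash(doc):
    """Hash a JSON-serialisable transaction document (a stand-in
    for the real platform's transaction id scheme)."""
    return hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    """Toy asset ledger: CREATE makes an asset with immutable asset data
    plus mutable metadata; TRANSFER spends the asset's latest unspent
    output, so each output is used at most once as an input."""
    def __init__(self):
        self.transactions = {}   # tx id -> tx document
        self.unspent = {}        # asset id -> id of its latest (unspent) tx

    def create(self, owner, asset_data, metadata=None):
        tx = {"op": "CREATE", "owner": owner,
              "asset": asset_data, "metadata": metadata}
        txid = tx_hash(tx)
        self.transactions[txid] = tx
        self.unspent[txid] = txid        # the asset id is its create-tx id
        return txid

    def transfer(self, asset_id, current_owner, new_owner, metadata=None):
        spent = self.unspent[asset_id]   # the single unspent output
        if self.transactions[spent]["owner"] != current_owner:
            raise PermissionError("only the current owner may transfer")
        tx = {"op": "TRANSFER", "input": spent, "owner": new_owner,
              "asset": asset_id, "metadata": metadata}
        txid = tx_hash(tx)
        self.transactions[txid] = tx
        self.unspent[asset_id] = txid    # the old output is now spent
        return txid

ledger = ToyLedger()
bike = ledger.create("alice", {"serial": "abc123"}, {"condition": "new"})
ledger.transfer(bike, "alice", "bob", {"condition": "used"})
print(ledger.transactions[ledger.unspent[bike]]["owner"])  # bob
```

Note how the asset data passed to create() is never rewritten; each transfer only appends a new transaction whose metadata supersedes the previous one, mirroring the immutable-asset/mutable-metadata split described above.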
Key highlights
Activity: Active on GitHub
Type of ledger: Multi-ledger integration
Pricing: Open source
Supported languages: JavaScript and Python
GitHub repo: bigchaindb

Figure 4: BigChainDB architecture

HydraChain
HydraChain, an extension of the Ethereum platform, is fully compatible with the Ethereum Protocol on the API and contract level. It supports distributed ledgers and private chains, which are mainly set up for the financial industry. The infrastructure of HydraChain allows users to develop smart contracts in Python, which can improve development and debugging efficiency enormously.
HydraChain has a well-defined configuration system, as shown in Figure 5, which provides flexible customisation adjustments such as transaction fees, gas limits, genesis allocation or block time.

Figure 5: HydraChain architecture

Features
Fully compatible with the Ethereum Protocol
Accountable validators and native contracts
Fully customisable, easy to deploy, and open source with commercial support available

Key highlights
Activity: Low but actively updated on GitHub
Type of ledger: Private
Pricing: Open source
Supported language: Python
GitHub repo: Hydrachain (Python)

Corda
Corda is an open source blockchain platform that enables businesses to transact directly and in strict privacy using smart contracts, reducing transaction and record-keeping costs while streamlining business operations. In a world of permissionless blockchain platforms, in which all data is shared with all parties, Corda's strict privacy model allows businesses to transact securely and seamlessly, as shown in Figure 6.

Figure 6: Corda architecture

R3 delivers two interoperable and fully compatible distributions of the platform – Corda, a free download based on the code available on GitHub; and Corda Enterprise, an enterprise blockchain platform that offers features and services fine-tuned for modern-day businesses.

Corda provides three main tools to achieve global distributed consensus:
Smart contract logic, which specifies constraints that ensure state transitions are valid according to pre-agreed rules, described in the contract code as part of CorDapps.
Uniqueness and timestamping services, known as notary pools, to order transactions temporally and eliminate conflicts.
A unique component called the flow framework, which simplifies the process of writing complex multi-step protocols between multiple mutually distrusting parties across the Internet.

Key highlights
Activity: Actively updated on GitHub
Type of ledger: Distributed
Pricing: Open source
Supported language: Kotlin
GitHub repo: Corda

MultiChain
MultiChain is a platform that helps users establish a private blockchain that can be used by organisations for financial transactions. MultiChain provides a simple API and a command-line interface, which help to preserve and set up the chain. MultiChain is for creating new blockchains with their own native currencies and/or issued assets. Users cannot transact existing cryptocurrencies on MultiChain unless someone trusted acts as a bridge in the middle, holding some cryptocurrency and issuing tokens on MultiChain to represent it, as shown in Figure 7.

Figure 7: MultiChain architecture

MultiChain is an off-the-shelf platform for the creation and deployment of private
I
learned C more than 20 years ago discussed later in this article), and I was example, a Java (an object-oriented
and as a C programmer at heart, I literally surprised. This is something programming language) programmer
found it difficult to adjust to other that has happened to me a lot while learning Haskell (a functional
languages. Nevertheless, I had learning new programming languages. programming language) might come
to learn or use many others like C++, While learning Python, it was a surprise across a large number of features that
Java, x86 assembly language, Python, to learn that you don’t need to explicitly are surprising and a few which might
etc, over the years. I am no expert in declare the type of a variable before even look counter intuitive. Similarly,
all these programming languages. If I storing data into it. compiled and interpreted languages
can order food, ask for water and call This article discusses some often differ a lot in their features. An
a taxi in some (natural) language, I of the unique features of different example for this is a C++ (a compiled
assume proficiency in that language. programming languages that might language) programmer learning Python
Often, the same standards apply while surprise programmers well versed in (an interpreted language).
claiming proficiency in a programming some other programming language. Similar surprises might await
language. Recently, while learning Now let us also try to find out why this a person switching from a general-
Haskell, a functional programming occurs. Major surprises tend to occur purpose programming language
language, I came across a feature called when a programmer learns a language to a domain-specific one — for
lazy evaluation of expressions (to be that has a different paradigm. For example, a C (a general-purpose
programming language) programmer as an argument and returns a pointer The array num contains the
learning JavaScript (a domain- to an array of pointers to integers. first five numbers in the geometric
specific programming language). The pointer variable ptr2 is a pointer sequence starting at 1 with common
The programming experience might to a pointer to a pointer to a pointer ratio 5. Figure 1 shows the output of
also depend on the underlying to a pointer to a pointer to a pointer the program octal.cc. But why is the
architecture and operating system. An to an integer (I hope I have counted number 21 printed instead of 25? Well,
x86 assembly language programmer correctly!). There are 50 asterisk in C, C++ and Java, a number like
(from Intel) learning MIPS assembly symbols before the declaration of the 0123 is treated as an octal number and
language (from MIPS Technologies) pointer variable ptr3. But is there a ‘0x123’ is treated as a hexadecimal
or a PowerShell (from Microsoft) user limit to this? I tried up to 1000 asterisk number. In the program octal.cc the
learning Linux shell scripting are bound symbols and it was still compiling fine. numbers 001, 005 and 025 are treated
to encounter a few surprises. Of course, I believe there are no upper limits set as octal numbers due to this reason. But
there could be many other reasons for by the C standard. There may be a limit the numbers 001 and 005 are the same
this, but the above mentioned ones at which the C compiler might fail to in decimal and octal number systems,
seem to be the most obvious. handle this, but I don’t know where whereas the number 025 in octal is 21
For those learning their first that limit is. No programmer will ever in decimal. Thus, the sequence printed
programming language, it’s most use these sorts of pointers in a real is 1, 5, 21, 125, 625. This feature is
probable that none of the features are program. Nevertheless, these features convenient on many occasions, but
surprising. Now let us look at a few are available for us to use. A similar could lead to potential bugs if one is
features of programming languages that C++ program will also compile without not careful. For example, a Python
might surprise someone who learns these any errors. Now back to our business. programmer who is familiar with the
as their second programming language. Imagine the horror of a Java, Python or notation ‘0o25’ might consider ‘025’ as
Haskell (all of which are programming the number 25 with a leading zero.
Complicated pointers in C/C++ languages that do not use pointers)
Most undergraduate programmes in programmer who comes across such
computer science do have a course on monstrosities.
C programming and, to many, the most
difficult section is the one on pointers. Confusing Octal numbers in C/
To add to the misery, you can declare C++/Java
pointers of arbitrary complexity. As This may not be as big a surprise as
an example, consider the C program the previous one. But once, a long
named pointer.c given below. This and time back, this feature of C/C++/Java
all the other programs discussed in gave me quite a headache and I believe Figure 1: Output of the C++ program octal.cc
this article can be downloaded from we should discuss it. Consider the
opensourceforu.com/article_source_ C++ program octal.cc given below. The for loop with an else in
code/June20surprisinglanguage.zip. Similar C and Java programs (octal.c Python
and Octal.java) are also available for As mentioned earlier, a C/C++/Java
#include<Stdio.h> download. What is the output of the programmer newly learning Python
program octal.cc? will be surprised that explicit type
int main() declaration is not required in Python.
{ #include<iostream> Since this dynamic typing feature
int *(*(* ptr)(int *))[2]; using namespace std; of Python is well known to many
int *******ptr1; programmers, I will discuss a simple
int ******************************* int main() yet surprising feature of Python; for
*******************ptr3; { loop with an else part. Consider the
return 0; int num[ ]={001,005,025,125,625}; Python script loop.py shown below.
} for(int i=0;i<5;i++)
{ for i in range(5):
The above C program compiles cout<<num[i]<<endl; print(i)
without any errors. The pointer } else:
variable ptr1 is a pointer to a function return 0; print(“Noraml Exit from for loop”)
that accepts a pointer to an integer } for i in range(10):
The for loop with an else in Python
As mentioned earlier, a C/C++/Java programmer newly learning Python will be surprised that explicit type declaration is not required in Python. Since this dynamic typing feature of Python is well known to many programmers, I will discuss a simple yet surprising feature of Python: the for loop with an else part. Consider the Python script loop.py shown below.

for i in range(5):
    print(i)
else:
    print("Normal Exit from for loop")

for i in range(10):
    print(i)
    if i == 4:
        break
else:
    print("Break from for loop")

Figure 2 shows the output of the Python script loop.py. Notice that the else part of the for loop gets executed only when the loop is exited normally, and not when it is exited through a break statement. Though not essential, for-else is a convenient feature that is absent in most programming languages.

Figure 2: Output of the Python script loop.py
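A common practical use of for-else is a search loop, where the else clause handles the 'not found' case without a separate flag variable. A minimal sketch (the function name and data are illustrative only):

```python
def find_first_even(numbers):
    """Return the first even number in the sequence, or None when there is none."""
    for n in numbers:
        if n % 2 == 0:
            break        # found one; the else clause below is skipped
    else:
        return None      # loop finished without break: nothing was found
    return n

print(find_first_even([3, 7, 8, 5]))  # -> 8
print(find_first_even([3, 7, 5]))     # -> None
```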
Fork bombing with a Linux shell script
What is the smallest program (in terms of source code size) in any language that comes to your mind, which when executed will crash your system? I am sure the winner will be the following shell script.

x( ) { x | x & }; x

It defines a function called x, which calls the function x itself recursively and then pipes this result to another recursive call to x in the background. The fourth x in the script is a call to the function x to begin the bombing. Soon your CPU will be allocating all its time to process just these calls to the function x. Just 11 non-white space characters in the script and your system is down, so imagine the surprise of those programmers whose favourite programming language takes hundreds of characters just to print the message 'Hello World' on the screen.

Warning! If you execute this code on a Linux terminal and do not close the terminal immediately, your system will hang. In such a situation, you will have to force restart your system.

Case-insensitive function names in PHP
Imagine copying the directories named 'SONGS', 'Songs' and 'songs' from your Linux machine to your friend's Windows machine. The Windows file system in general is not case-sensitive, and you will be forced to rename two of these directories before copying them, because all the three directory names (SONGS, Songs and songs) are the same if treated in a case-insensitive manner. Some programming languages behave like this. A classic example is PHP. Consider the PHP script named case.php given below:

<!DOCTYPE html>
<html>
<body>

<?php
function PRINTMSG( ) {
    echo "I AM PHP <br>";
}

printmsg( );
PrintMsg( );
PRINTMSG( );
pRiNtMsG( );
PrInTmSg( );
?>

</body>
</html>

Function names in PHP are case-insensitive, and the lines of code printmsg(), PrintMsg(), PRINTMSG(), pRiNtMsG() and PrInTmSg() are all calling the function PRINTMSG(). Figure 3 shows the output of the PHP script case.php. Do notice that PHP variable names are case-sensitive, like in most other programming languages.

I AM PHP
I AM PHP
I AM PHP
I AM PHP
I AM PHP

Figure 3: Output of the PHP script case.php

Implicit type conversion in JavaScript
Imagine the case of a Java programmer learning JavaScript. Due to the similarity in names, we might think that this will be an easy task. Even though the syntax is somewhat similar, the transition from Java to JavaScript is not that easy. Java is a strongly typed language where type checking is very rigorous, while on the other hand, JavaScript is a weakly typed language with extensive implicit type conversion. This is one feature of JavaScript that should be avoided if possible. To understand the pitfalls in the implicit type conversion of JavaScript, let us go through the JavaScript script named type.js given below. What is the output of the script type.js?

<!DOCTYPE html>
<html>
<body>

<script>
a = '2' + 1
b = '2' - 1
document.write(a + "<br>");
document.write(b + "<br>");
c = '1' + 2 + 3
d = 1 + 2 + '3'
document.write(c + "<br>");
document.write(d);
</script>

</body>
</html>
A seasoned Java programmer will expect a number of errors in the above code. But everything is fine with JavaScript. Figure 4 shows the output of the script type.js. Why does the variable a have the value 21 and the variable b have the value 1? This is due to the implicit type conversion in JavaScript. The operator '-' performs just one function, mathematical subtraction. When a string and a number are the operands of the '-' operator, JavaScript converts the string to a number. So in the line of code b = '2' - 1, the string '2' is converted to the number 2, and the number 1 is subtracted from it to obtain 1. Notice that, here, the variable b contains a number, 1.

Now let us look at what happens with the operator '+'. The operator '+' performs two functions, mathematical addition and string concatenation. When a string and a number are the operands of the '+' operator, instead of converting the string to a number, JavaScript converts the number to a string. So in the line of code a = '2' + 1, the number 1 is converted to the string '1', and the strings '2' and '1' are concatenated to give '21'. Notice that here, the variable a contains a string, '21'.

With that knowledge, can you explain why the variable c contains the string '123' and the variable d contains the string '33'? I will give you a hint: the associativity of the operator '+' is what matters. The operator '+' has left-associativity. Hence in the line of code c = '1' + 2 + 3, the operation '1' + 2 is carried out first, resulting in the string '12' because '1' is a string. Then the expression becomes '12' + 3, resulting in the final string '123' stored in the variable c. But in the line of code d = 1 + 2 + '3', due to the left-associativity of the operator '+', the first operation performed is 1 + 2, resulting in the number 3, because both 1 and 2 are numbers. Then the expression becomes 3 + '3', resulting in the final string '33' stored in the variable d. Imagine the horror of a Java programmer going through these results in his favourite programming language!

21
1
123
33

Figure 4: Output of the JavaScript script type.js
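For contrast, Python, although also dynamically typed, refuses to mix strings and numbers implicitly; the same expressions raise a TypeError instead of silently converting, and any mixing has to be spelled out:

```python
# JavaScript turns '2' + 1 into '21'; Python raises an error instead.
try:
    result = '2' + 1
except TypeError as error:
    print("TypeError:", error)

# Mixing must be made explicit:
print('2' + str(1))   # -> '21' (explicit string concatenation)
print(int('2') + 1)   # -> 3   (explicit arithmetic)
```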
Lazy evaluation in Haskell
Haskell is a general-purpose purely functional programming language, which has a feature called lazy evaluation! Because of lazy evaluation, expressions are not evaluated immediately if they are just bound to variables. Instead, the evaluation of an expression is carried out only when its results are needed by other operations. For this reason, lazy evaluation is often called 'call-by-need'. Lazy evaluation enables infinite lists to be stored in Haskell. Figure 5 shows the processing of one such infinite list in the GHCi interactive environment. The list x represents the infinite list of natural numbers, created by the line of code let x = [1..]. Notice that x is not evaluated at this point. The line of code take 10 x gives the list [1,2,3,4,5,6,7,8,9,10] as output (the first ten elements of the list x). But the third line of code, print x, will lead to the complete evaluation of the list x, resulting in the printing of the natural numbers (from which you have to forcefully exit). Notice that the lazy evaluation feature of Haskell is very powerful, but at the same time it has received some criticism from experts.
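Python can mimic this call-by-need behaviour with generators: itertools.count(1) behaves like an 'infinite list' that is only evaluated as far as a consumer demands, a loose analogy to let x = [1..] followed by take 10 x in GHCi:

```python
from itertools import count, islice

x = count(1)                      # conceptually the infinite list [1..]; nothing is evaluated yet
first_ten = list(islice(x, 10))   # like `take 10 x`: forces exactly ten elements
print(first_ten)                  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Fully evaluating x (e.g., list(x)) would never terminate,
# just like `print x` on the infinite list in GHCi.
```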
Multiple main methods in Java
When I first learned Java programming, I did a very poor job because I tried to learn some advanced features (Swing and AWT) without mastering the basics. Till recently, I was under the false impression that a Java program can contain only one main method. But then I got a rude shock: it is possible to have more than one main method in a Java program. For example, consider the Java program AAA.java shown below, with two different classes AAA and BBB, each having one main method.

class AAA
{
    void disp()
    {
        System.out.println("Hello From AAA...");
    }

    public static void main(String[ ] args)
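The idea is that each class may define its own public static void main, and the class name passed to the java launcher selects which one runs. A loose Python analogy of the same pattern (class names follow the listing above; the BBB message is a hypothetical stand-in):

```python
# Two "classes", each carrying its own main-like entry point.
class AAA:
    @staticmethod
    def main():
        return "Hello From AAA..."

class BBB:
    @staticmethod
    def main():
        return "Hello From BBB..."  # hypothetical message for illustration

# The caller picks the entry point, much like running `java AAA` vs `java BBB`.
print(AAA.main())
print(BBB.main())
```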
“More than 50 per cent of the cortex, the surface of the brain, is devoted to processing visual information,” said William G. Allyn, a professor of medical optics.

The human visual cortex is approximately made up of around 140 million neurons. In addition to other tasks, this is the key organ responsible for detecting, segmenting and integrating the visual data received by the retina. It is then sent to other regions in the brain where it is processed and analysed to perform object recognition and detection, and subsequently retained as applicable.

In the early 1950s, David H. Hubel and Torsten Wiesel conducted a series of experiments on cats and won two Nobel prizes for their amazing findings on the structure of the visual cortex. They revolutionised our understanding of the human visual cortex and paved the way for much further research based on this topic.

Their studies showed that in the visual cortex, many neurons have small local receptive fields. A local receptive field means that these neurons react to visual stimuli only if they are located in a specific region of the visual field of perception. In other words, these neurons are fired or activated only if the visual stimulus is located at a particular place on the retina or visual field. They found that some neurons have larger receptive fields, and they are fired, activated or react to complex features in the visual field, which in a way are a combination of low-level features to which other neurons react. The low-level features mean horizontal and vertical lines, edges and corners and different angles of lines, while high-level features are the simple or complex combinations of these low-level features. Figure 1 illustrates the local receptive fields of different cells in the human visual cortex.

Figure 1: Illustration of local receptive fields and cells (simple, complex and hypercomplex) in the human visual cortex

Hubel and Wiesel also discovered that there are three types of cells in the visual cortex, and each has distinct characteristics based on the features they learn or react to. These are: simple, complex and hyper-complex. Simple cells are responsible for learning basic features like lines and colours; complex cells for learning features like edges, corners and contours; while hyper-complex cells are responsible for learning combinations of features learnt by simple and complex cells. This powerful insight paved the way to understanding how perception is built: the receptive fields of multiple neurons may overlap, and together they tile the whole field. These neurons work together, with a few neurons learning features on their own and others learning by combining the features learnt by other neurons, finally integrating to detect all forms of complex features in any region of the visual field.

For the video of the Hubel and Wiesel experiment, visit https://www.youtube.com/watch?v=y_l4kQ5wjiw.

CNN architecture
CNNs are the most preferred deep learning models for image classification or image related problems. Of late, CNNs have also been used to handle problems in fields as diverse as natural language processing, voice recognition, action recognition and document analysis. The most important characteristic of CNNs is that they automatically learn the important features without any guided supervision. Given the images of two classes, for example, dogs and cats, a CNN will be able to learn the distinctive features of dogs and cats by itself.

Compared to other deep learning models for image related problems, CNNs are computationally efficient, which makes them the preferred option since they can be configured to run on many devices. Ever since the arrival of AlexNet in 2012, researchers have successfully built new CNN architectures that have very good accuracy, with powerful and efficient models. Popular CNN model architectures include VGG-16, Inception models, ResNet-50, Xception and MobileNet.

In general, all CNN models have architectures that are built using the following building blocks. Figure 2 depicts the basic architecture of a CNN model.

Input layer
Convolution layers
Activation functions
Pooling layers
Dropout layers
Fully connected layers
Softmax layer

In the CNN architecture, the building blocks involving the convolution layer, activation function, pooling layer and dropout layer are repeated, before ending with one or more fully connected layers and the softmax layer.
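To make this stacking concrete, a small pure-Python sketch that walks an input through two repeated convolution/pooling blocks and tracks the spatial size, using the usual valid-convolution size formula (n − k)/s + 1. The layer sizes here are illustrative assumptions, not taken from any specific model:

```python
def out_size(n, k, s):
    """Spatial output size of a valid convolution or pooling step:
    input size n, kernel size k, stride s."""
    return (n - k) // s + 1

# Hypothetical stack: two conv+pool blocks, then fully connected + softmax.
n = 28                  # e.g., a 28x28 grey scale input
n = out_size(n, 3, 1)   # conv 3x3, stride 1 -> 26
n = out_size(n, 2, 2)   # max pool 2x2, stride 2 -> 13
n = out_size(n, 3, 1)   # conv 3x3, stride 1 -> 11
n = out_size(n, 2, 2)   # max pool 2x2, stride 2 -> 5
print(n)                # -> 5; this 5x5 map is flattened for the dense + softmax layers
```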
activation values of neurons, and b is the bias term. All neurons in a feature map share the same weights (kernel) and bias term, but have different activation values that correspond to the values of the receptive field corresponding to each neuron in the feature map. This is one of the reasons why computations are less in a CNN compared to a DNN (deep neural network). A feature learnt by a neuron at one particular location will allow it to detect the same feature anywhere else in the input, regardless of the location. This is another important feature of the CNN as compared to a DNN, which is not translation invariant.

Next, consider the convolution layers after the first convolution layer. The neuron at location (x, y) in the feature map k in a convolutional layer l gets its input from all the neurons in the previous layer l − 1 which are located in rows x × sh to x × sh + kh − 1 and columns y × sw to y × sw + kw − 1, and for all feature maps in the layer l − 1. These are the same inputs to all neurons located in the same row x and column y, in all the feature maps. This explains how the features learnt in one layer get integrated to learn combined features or complex patterns in successive layers. This also explains how a CNN learns distinct features at the initial layers, convolutes to integrate basic features into complex distinct features or patterns in the intermediate convolution layers, and finally learns the objects present in the input in the last convolution layers. As discussed earlier, any feature, pattern or object learnt by the CNN at any layer will allow it to detect this anywhere else in the input, independent of the location. So one needs to be extra careful when using a CNN to detect combined objects that are similar in the input. For example, in the case of face detection, if one is interested in studying the features of the eye and eyebrows together as a combination, then a normal CNN would learn the features of the eyebrow and the eye as two different features, and it may not help in learning the minor differences of the combination of features. In that case, the convolution layers have to be tweaked to learn those features appropriately.

Activation feature maps in CNN
Although activations are generated for each neuron, they may not be propagated to the final prediction. Let us now get the equations to determine the output of a neuron at a convolutional layer. For a grey scale image (single channel), the output of the neuron in the first convolution layer located at (x, y) of the feature map k is given by the following:

z(x, y) = Σ(m=1 to kh) Σ(n=1 to kw) (a(i, j) × w(m, n)) + bk, where i = x × sh + m − 1 and j = y × sw + n − 1

Here, a(i, j) are the pixel values of the input image, and kh, kw, sh and sw are the height and width of the kernel and the horizontal and vertical strides, respectively.

The output of the neuron in any convolutional layer l (where l is not the first convolutional layer) and for the feature map k is given by the following:

z(x, y, k) = Σ(m=1 to kh) Σ(n=1 to kw) Σ(fm=1 to k(l−1)) (a(i, j, fm) × w(m, n, fm, k)) + bk, where i = x × sh + m − 1 and j = y × sw + n − 1

Here:
z(x, y, k) is the output of the neuron located at row x, column y in the feature map k of the convolutional layer l;
k(l−1) is the number of feature maps in the layer l − 1 (the previous layer);
a(i, j, fm) is the activation output of the neuron located in layer l − 1, row i, column j, feature map fm;
bk is the bias term for feature map k (in layer l);
w(m, n, fm, k) is the weight (kernel) of any neuron in the feature map k of the layer l for its input located at row m, column n, relative to the neuron's receptive field, in feature map fm;
kh, kw, sh and sw are the height and width of the kernel and the horizontal and vertical strides, respectively.

The feature maps produced in the convolution layer are input to the next layer (the pooling layer) after applying the activation function. The feature map generated after applying the activation function is called the activation feature map. The visualisation of the activation feature map for a filter in a particular convolution layer depicts the regions of the input that are activated in the feature map after applying the filter. In each activation feature map, the neurons have a range of activation values, with the maximum value representing the most activated neuron, the minimum value representing the least activated neuron, and the value zero representing a neuron that is not activated.

Let us consider an illustration. The AlexNet model is used for transfer learning to build a binary classifier that classifies the given flight image to either a passenger flight or a fighter flight. AlexNet has five convolutional
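The summations above are just a dot product between the kernel and the receptive field slid across the input. A small NumPy sketch of the single-channel case (stride 1, no padding, zero bias; the 4x4 input and 2x2 kernel are made-up values for illustration):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D convolution (strictly, cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for x in range(out_h):
        for y in range(out_w):
            # receptive field: rows x*s .. x*s+kh-1, columns y*s .. y*s+kw-1
            field = image[x*stride:x*stride+kh, y*stride:y*stride+kw]
            out[x, y] = np.sum(field * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # made-up 4x4 "pixel" values
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # made-up 2x2 kernel
# Every entry of the 3x3 result is -5.0 here, because each pixel in this
# input differs from its lower-right diagonal neighbour by exactly 5.
print(conv2d(image, kernel))
```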
features learnt belong to class 0 only.
Class 1: In these feature maps, the features learnt belong to class 1 only.
Mixed: In these feature maps, the features learnt belong to both class 0 and class 1.
Inactive: These feature maps do not learn any features of class 0 or class 1.

The type of activation feature maps in each layer can be easily visualised using the Deepvis tool specified above. Many a time, complex standard models are used to build a simple binary classifier as above, or a model for up to ten-class classification, using transfer learning. In those scenarios, as seen above, many filters may not be required since they will not be learning any features at all, and fewer filters will be sufficient to learn the features of all the classes. In these cases, if the requirement is to retain the original model architecture for some reason, then the inactive filters can be removed to reduce the computations and the size of the model.

The types of filters will also help to debug when the test image is mis-classified. The types of filters at each layer can be studied for both the classes during training and during evaluation, and the filters activated can be compared between the two. When making a correct prediction of the test image, the count of each type of filter for each class will approximately match the average count of that type of filter for each class during training. This means that if a test image is correctly classified as the passenger flight class, then its count of the types of filters (class 0, class 1, mixed and inactive) must be approximately equal to the average count of the types of filters of the passenger flight determined during training, and the same will be true for an image correctly classified as the fighter flight class.

These findings can help to debug mis-classified images. For example, if the test image is mis-classified as the fighter class, then the count of the types of filters will not match the count of the types of filters for the fighter class during training, which is quite obvious. The potential reason for mis-classification could be any of the following:
The model has evaluated the features of both the fighter and passenger flight classes, and found the fighter features dominating the passenger features during classification; hence the image is mis-classified.
The model has evaluated and found more of the fighter features matching during classification; hence the image is mis-classified.
The model has evaluated and found comparable fighter and passenger features matching during classification and, due to minor differences in the features learnt, the image has been mis-classified.

To understand the above reasoning, the activation feature maps of each convolution layer need to be analysed, and the average count of the types of filters at each layer may have to be considered to come to a conclusion. There are approaches to handle this, which will be discussed in the next article.

This article provides an overview of how the visual cortex perceives objects. It explains the parallels between CNNs and the different building blocks of the CNN architecture. It also briefly covers each building block and how the convolution layer processes the image features from the input. It then highlights the importance of activation during image classification. Knowing the details about the activations of each filter in each layer allows model developers to understand the importance of each filter and update their CNN architectures or build new CNNs with reduced parameters and computations. The article also gives details on the types of activation feature maps, and how they can be used for debugging CNN models and detecting potential mis-classification. Further exploration and research on activation feature maps for types of models other than CNNs will provide many insights, especially for object detection and recognition models. The same can be extended to natural language processing and voice recognition models, the activation feature maps for which would be a bit more complex to analyse.

References
[1] https://towardsdatascience.com/applied-deep-learning-part-4-convolutional-neural-networks-584bc134c1e2
[2] http://yosinski.com/deepvis
[3] https://towardsdatascience.com/wtf-is-image-classification-8e78a8235acb
[4] https://www.tensorflow.org/

By: B.N. Chandrashekar, Dr Manjunath Ramachandra and Shashidhar Soppin
B.N. Chandrashekar is a principal consultant and researcher with hands-on programming experience. He has around two decades of industry experience. With a Masters from IISc, he specialises in embedded, retail, e-commerce and AI driven technology.
Dr Manjunath Ramachandra has over two decades of work experience in the overlapping verticals of artificial intelligence, computer vision, healthcare and wireless/mobile technologies. He figures in the list of the '2000 outstanding intellectuals of the 21st century', brought out by the International Biographical Center, UK.
Shashidhar Soppin, DMTS (distinguished member of the technical staff) senior member of Wipro, has 19+ years of experience in the IT industry. He specialises in virtualisation, Docker, the cloud, AI, ML, deep learning and OpenStack.
Natural language processing (NLP) is an important precursor to machine learning (ML) when textual data is involved. Textual data is largely unstructured and requires cleaning, processing and tokenising before a machine learning model can be applied to it. Python has a variety of NLP libraries available for free, such as NLTK, TextBlob and Gensim. However, spaCy stands out when it comes to its speed of processing text and applying beautiful visualisations to aid in understanding the structure of text. spaCy is written in Cython; hence the upper case 'C' in its name.

This article discusses some key features of the spaCy library in action. By the end of this article, you will have hands-on experience in using spaCy to apply all the basic level text processing techniques in NLP. If you are looking for a lucrative career in machine learning and NLP, I would highly recommend adding spaCy to your roster.

Installing spaCy
As with most of the machine learning libraries in Python, installing spaCy requires a simple pip install command. While it is recommended that you use a virtual environment when experimenting with a new library, for the sake of simplicity, I am going to install spaCy without a virtual environment. To install it, open a terminal and execute the following code:

pip install spacy

The small English model used below can then be fetched with python -m spacy download en_core_web_sm. Once that is done, import spaCy and load the model:

import spacy
myspacy = spacy.load('en_core_web_sm')

The myspacy object that we have created is an instance of the language model en_core_web_sm. We will use this instance throughout the article for performing NLP on text.

Reading and tokenising text
Let's start off with the basics. First create some sample text and then convert it into an object that can be understood by spaCy. We will then apply tokenisation to the text. Tokenisation is an essential characteristic of NLP as it helps us in breaking down a piece of text into separate units. This is very important
for applying functions to the text such as NER and POS tagging.

#reading and tokenizing text
some_text = "This is some example text that I will be using to demonstrate the features of spacy."
read_text = myspacy(some_text)
print([token.text for token in read_text])

The following will be the output of the code:

['This', 'is', 'some', 'example', 'text', 'that', 'I', 'will', 'be', 'using', 'to', 'demonstrate', 'the', 'features', 'of', 'spacy', '.']

We can also read text from a file as follows:

#reading and tokenizing text from a file
file_name = 'sample_text.txt'
sample_file = open(file_name).read()
read_text = myspacy(sample_file)
print([token.text for token in read_text])

Sentence detection
A key feature of NLP libraries is detecting sentences. By finding the beginning and end of a sentence, you can break down text into linguistically meaningful units, which can be very important for applying machine learning models. It also helps you in applying parts of speech tagging and named entity recognition. spaCy has a sents property that can be used for sentence extraction.

#sentence detection
sample_passage = "This is an example of a passage. A passage contains many sentences. Sentences are denoted using the dot sign. It is important to detect sentences in nlp."
read_text = myspacy(sample_passage)
sentences = list(read_text.sents)
for sentence in sentences:
    print(sentence)

As you can see in the following output, we have successfully broken down the sample_passage into discernible sentences.

This is an example of a passage.
A passage contains many sentences.
Sentences are denoted using the dot sign.
It is important to detect sentences in nlp.

Removing stop words
An important function of NLP is to remove stop words from the text. Stop words are the most commonly repeated words in a language. In English, words such as 'are', 'they', 'and', 'is', 'the', etc, are some of the common stop words. You cannot form sentences that make semantic sense without the usage of stop words. However, when it comes to machine learning, it is important to remove stop words as they tend to distort the word frequency count, thus affecting the accuracy of the model. spaCy has a list of stop words in its library for English. To be precise, there are 326 stop words in English. You can remove them from the text using the is_stop property of spaCy.

#removing stopwords
print([token.text for token in read_text if not token.is_stop])

After removing the stop words, the following will be the output for our sample text.

['example', 'passage', '.', 'passage', 'contains', 'sentences', '.', 'Sentences', 'denoted', 'dot', 'sign', '.', 'important', 'detect', 'sentences', 'nlp', '.']

Lemmatisation of text
Lemmatisation is the process of reducing the inflected forms of a word such that we are left with the root of the word. For example, 'characterisation', 'characteristic' and 'characterise' are all inflected forms of the word 'character'. Here, 'character' is the lemma or the root word. Lemmatisation is essential for normalising text. We use the lemma_ property in spaCy to lemmatise text.

#lemmatisation of text
for word in read_text:
    print(word, word.lemma_)

The following is the lemmatised output of the sample text. We output the word along with its lemmatised form. To preserve page space, I am sharing the output of a single sentence from our sample text.

Sentences sentence
are be
denoted denote
using use
the the
dot dot
sign sign
. .

Finding word frequency
The frequency at which each word occurs in a text can be vital information when applying a machine learning model. It helps us to find the main topic of discussion in a piece of text, and helps search engines provide users with relevant information. To find the frequency of words in our sample text, we will import the Counter method from the collections module. Note that we count the token texts (plain strings) rather than the Token objects themselves, so that repeated words are counted together.

#finding word frequency
from collections import Counter
word_frequency = Counter([token.text for token in read_text])
print(word_frequency)

The following is the output for the frequency of words in our sample text:

Counter({'.': 4, 'is': 2, 'passage': 2, 'sentences': 2, 'This': 1, 'an': 1, 'example': 1, 'of': 1, 'a': 1, 'A': 1, 'contains': 1, 'many': 1, 'Sentences': 1, 'are': 1, 'denoted': 1, 'using': 1, 'the':
The primary aim of SPA JS is to keep development simple and eradicate repetitions. It also follows the crucial principle, YAGNI (You Ain't Gonna Need It), which states that you should "Always implement things only when you need them, but never when you just think that you may need them." For example, in open source software, contributors may develop different modules by assuming that you may need them. However, these modules may not be useful at all.

Why SPA JS?
Many of you may be familiar with Angular, React and the Vue.js framework, but very few may be aware of frameworks like Knockout, Ember and Backbone. It's not that Angular, React, etc, are really game changers. We already had the concept of building a single page application using these frameworks. So, let us examine the problem with Angular, React or Vue.js, as well as other programming options.

Problem 1: The question is, why do we have different frameworks? This is because developers try to build some applications on one of these frameworks, but the latter do not solve their particular problem in the way they want. So, developers try to create their own frameworks that solve their individual problem, but this leads to a lot of frameworks.

Problem 2: Earlier, we had frontend designers and backend developers. The former designed the page, while the latter used programming languages like C, C++, Java and .NET to build a page and call APIs. JavaScript was then introduced in frontends for micro-interactions, but frontend designers faced difficulties designing pages with it. To address this situation, backend developers were brought to the frontend. But unfortunately, backend developers are generally not good at design.

So, companies and developers are facing the issue of how to create a perfect bridge between the frontend and backend. Recently, the term 'full-stack developer' has emerged, describing those who know all the backend development,
A Headless CMS: Delivering Pure Content in the Age of Mobile-first Internet

CMS

A headless CMS (content management system) is a back-end-only version built from the ground up as a content repository that makes content accessible via a RESTful API for display on any device. It is designated 'headless' because it does not have a front-end. A headless CMS is focused on storing and delivering structured content.
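In practice, the RESTful delivery described above means a front-end simply fetches JSON and renders it however it likes. A minimal Python sketch of that consumption step (the endpoint shape and field names here are hypothetical, not taken from any specific product):

```python
import json

# Hypothetical JSON body, in the shape a headless CMS might return
# from an endpoint such as GET /articles (names are illustrative only).
response_body = """
[
  {"id": 1, "title": "Mobile-first content", "body": "..."},
  {"id": 2, "title": "Why go headless", "body": "..."}
]
"""

articles = json.loads(response_body)
# Any front-end (Web page, mobile app, smart watch) can render the same
# structured content in its own way; here we just list the titles.
for article in articles:
    print(article["id"], article["title"])
```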
What started on October 29, 1969, with ARPANET (under the US Department of Defence) delivering a message from one computer to another, became the Internet as we know it today. It is the hub of information that's available and accessible around the globe to everyone. About 60 per cent of the world's population has access to the Internet, and there are innumerable apps, software, distributed systems and smart devices that use it, enabling what was once in the realm of sci-fi movies and stories. The Internet today is one of the most profitable mediums to make money. From small time creators to large Internet companies, everyone depends on the content to generate revenue. This is coupled with analytics to understand what people like, individually and as part of communities, so that the content can be created to suit their tastes and mood.

The need for CMSs
Much of what you experience as the Internet is powered by the information that is stored in the back-end systems of the apps that you use today. Internet companies, schools, universities and even governments store information about you to understand your behaviour, share content with you and to function in meaningful ways.

When the World Wide Web became a phenomenon, it allowed people to share information without boundaries. Anyone with a basic understanding of HTML could create Web pages and host them on a website. It was just Web pages initially, and when things caught on, we needed something better to create, store and present the content. This gave birth to content management systems (CMSs), which revolutionised the way we create information, store it and present it to users. The CMSs enabled people to collaborate to create content and share it with others, in an instant. Soon, people didn't need to understand HTML in order to create and share content, as the CMS evolved to accommodate complex content creation requirements and provided the building blocks to enable collaboration, document management and storage.

As content became king, and with more users and complex content, there was the need to ensure that users could access the content whenever they wanted and from wherever they were. The content delivery networks (CDN) evolved to meet this need. The CDNs
need to run, or you use them as best you can and fork your code to accommodate different consumption mediums.
With a headless system, you gain complete control over how the content stored in it will be used, and you can design a pipeline that doesn't need to be refactored any time you want to support a new medium for your users, be it a smart watch or an AR system.
If you have run into the challenges mentioned with a traditional CMS, you are most likely using one of the workarounds mentioned as well. If not, you need to start thinking ahead about when you will run into them and how you will address them, starting today.

Leveraging a headless CMS
Headless CMSs can be used not only in places where a CMS is needed, but also in other scenarios where you need to store content in a structured and consistent format. Let's explore this further!
Almost any scenario that you believe requires a CMS is a candidate for a headless CMS too. However, headless CMSs also have uses beyond their traditional content serving functions. If you look at the tech trends of the last couple of years, you might have realised that chatbots, smart devices and digital assistants are topping the charts. These interfaces are powered by innovation in hardware, NUI (natural user interfaces) and, of course, the content that can be served in a variety of ways.
A good chatbot is powered by state-of-the-art NLP models and, optionally, a voice interface that allows it to understand what is being asked in natural language, using either text or voice inputs. Once the processing for this task is over, the bot makes itself useful by surfacing the relevant content. This is similar for digital assistants, in their hardware as well as software avatars.
The content that you consume this way is usually small snippets of information that can be sourced from emails, calendars, third party services such as flight tracking and weather, etc. But other content, such as the witty responses that you get, is best served from a headless content store. These aren't limited to jokes, but include meaningful content as well.
As an example, consider a food ordering or a shopping portal. These services take a large number of partners and sellers onto their platform, who list their services or products. A traditional approach is to store this content in a NoSQL DB and fetch the content as the UI is being rendered. At a high level, this approach requires that you build an API wrapper that does the following:
1. Builds a UI to onboard the partners, sellers, items and services
2. Validates the content before it is saved in the database with the right structure
3. Fetches the content when it is being served to the users
Add a chatbot or a digital assistant (or a skill) to the mix, and you can serve the content on those mediums too, if your APIs are designed efficiently.
With a headless CMS, you are left with just the work needed for Point 1 listed above, as a headless CMS provides first class APIs to interact with the data and data types, so you don't have to worry about database handling. You can get started with how you will integrate these APIs with your application surfaces, be it a mobile or Web app, a chatbot or a skill for a digital assistant.
Strapi (https://strapi.io/) is one of the headless CMSs that I am working with, and it includes a UI to add content as well as user roles and permissions to control access to the content. Ghost (https://ghost.org/) and Netlify CMS (https://www.netlifycms.org/) are others that I have tried out a little bit. You can look at other options at https://headlesscms.org/.
A headless CMS also makes an excellent case for Web apps or websites, even ones that are a good mix of static and dynamic content. You can front-end your headless CMS with a static site generator such as Hugo or Gatsby, and achieve much greater performance while lowering your computing costs. Static site generators also support plugins to fetch data from dynamic sources such as APIs. This combination offers a number of benefits when used in conjunction with a CDN. Dynamic site acceleration solutions such as Azure Front Door or CloudFront add an additional layer of performance benefits that can further reduce the load on your content repository. A prime example of this scenario is GitHub Pages (https://pages.github.com/), which allows you to host a static website directly from your GitHub repo, free of cost. Figure 2 is a schematic for this scenario.
Variations of this architecture can be
useful in other scenarios where serving
content is part of the solution.
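The 'first class APIs' point made earlier is easiest to see with a concrete shape. Below is a sketch of the structured JSON a headless CMS typically returns for a single content entry over its REST API; the field names and values are hypothetical, loosely modelled on what a Strapi-style content type might expose:

```json
{
  "id": 42,
  "title": "Masala Dosa",
  "price": 120,
  "seller": { "id": 7, "name": "Udupi Kitchen" },
  "updated_at": "2020-06-01T10:30:00Z"
}
```

Whatever surface consumes this entry — a mobile app, a chatbot or a static site generator — works against the same structured data, rather than against a database of its own.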
Figure 2: A schematic for GitHub Pages (headless CMS and static site generators feeding static and dynamic content through a CDN to clients)

Variations of this architecture can be useful in other scenarios where serving content is part of the solution.

Challenges with a headless CMS
When looking at the reference architecture diagram, it is evident that the lack of a presentation layer in a headless CMS introduces the flexibility to use any front-end. But it also means that you have another piece of the puzzle to account for when deciding whether to opt for a headless CMS. A traditional CMS covers your Web and mobile bases out-of-the-box, with themes to customise the look and feel. To get started with content delivery on a headless CMS, developers will need to build a user experience (UX) for the content. This is also true for content creators: a traditional CMS includes content creation and collaboration workflows, which are non-existent with the other choice. A pure content management system will also be a culture shock to content authors, since well-known concepts such as pages and other UI constructs do not exist at all, and a UX for authors will need to be built from scratch too.

So, is it a traditional CMS or a headless one?
Traditional CMSs have been around for a long time, with quite an ecosystem having been built around them. For example, the popular CMS WordPress is estimated to have over a 60 per cent share of the overall CMS market and a 35 per cent share of all websites functioning today. One of the reasons is the extensibility and the plugin marketplace this CMS offers to customise it, from a simple blog to a full-fledged e-commerce website. With responsive themes and plugins, you can convert WordPress installations to PWAs and push them into the App Store and Play Store to provide a more engaging experience for the users. Drupal is another contender in this domain.
Headless CMSs are a newer breed in the content management space, and they do tick all the boxes for building scalable and extensible content delivery pipelines that can support virtually any kind of device available today and tomorrow. The user experience, though, needs to be built before you can realise the value they bring to the table.
Moving to a headless CMS may not be a straightforward activity, and will require planning and effort to transition your workflows and users seamlessly. Before you start planning on this, take a step back and see if your existing CMS satisfies your current and future content delivery requirements. If you see yourself extending your CMS because it does not support certain delivery surfaces that either your business plans dictate you use or you believe your users are moving to, it is time to consider evaluating a headless CMS.

By: Ashish Sahu
The author is a cloud solutions architect working with Microsoft India. He helps ISVs and startups overcome technical challenges, adopt the latest technologies and take their solutions to the next level.
The advent of DevOps helped minimise the dependence on sysadmins who used to set up infrastructure manually, while seated at some unknown corner of the office. Managing servers and services manually is not a very complicated task in a data centre. But when we move to the cloud, scale up and start working with many resources from multiple providers (AWS, GCP, Azure, etc), manually setting up and configuring to achieve on-demand capacity slows things down. Being repetitive, the manual process is also error-prone. And it cannot be managed and automated when working with resources from different service providers together.

Infrastructure-as-Code (IaC)
This approach is the management and configuration of infrastructure (virtual machines, databases, load balancers and connection topology) in a descriptive cloud operating model. Infrastructure can be maintained just like application source code, under the same version control. This lets engineers maintain, review, test, modify and reuse their infrastructure, and avoid direct dependence on the IT team. Systems can be deployed, managed and delivered fast, and automatically, through IaC.
There are many tools available for IaC, such as CloudFormation (only for AWS), Terraform, Chef, Ansible and Puppet.

What is Terraform?
Terraform is an open source provisioning tool from HashiCorp (more can be read at http://terraform.io/) written in the Go language. It is used for building, changing and versioning infrastructure, safely and efficiently. Provisioning tools are responsible for the creation of servers and associated services, rather than configuration management (installation and management of software) on existing servers. Terraform acts as a provisioner, and focuses on the higher abstraction level of setting up the servers and associated services.
The infrastructure Terraform can manage includes low level components such as compute instances, storage and networking, as well as high level components such as DNS entries, SaaS features, etc. It leaves configuration management (CM) to tools such as Chef that do the job better. It lays the foundation for automation in infrastructure (both cloud and on-premise) using IaC. Its governance policy makes the cloud operating model compliant with standards that otherwise are only known internally to the IT team.
Terraform is cloud-agnostic, and uses a high level declarative language called HashiCorp Configuration Language (HCL) for defining infrastructure in configuration files that are simple for humans to read. Organisations can use public templates and can have a unique private registry. Templates are a maintained repository containing pre-made modules for infrastructure components, kept under version control systems (like Git).

Installation
Installing Terraform is very simple; just follow the steps mentioned below.
1. Download the archive (https://releases.hashicorp.com/terraform/${VER}/terraform_${VER}_linux_amd64.zip):

export VER="0.12.9"
wget https://releases.hashicorp.com/terraform/${VER}/terraform_${VER}_linux_amd64.zip

2. Once downloaded, extract the archive:

unzip terraform_${VER}_linux_amd64.zip

3. Move the extracted binary to a directory on your PATH, for example:

sudo mv terraform /usr/local/bin/

4. Confirm the Terraform installation:

terraform -v    // v0.12.9

Life cycle
A Terraform template (code) is written in HCL and stored as a configuration file with a .tf extension. HCL is a declarative language, which means our goal is just to describe the end state of the infrastructure, and Terraform will figure out how to create it. Terraform can be used to create and manage infrastructure across all major cloud platforms. These platforms are referred to as 'providers' in Terraform jargon, and cover AWS, Google Cloud, Azure, DigitalOcean, OpenStack and many others.
Let us now discuss each stage in the IaC life cycle (Figure 1), which is managed using Terraform templates.

Code
Figure 2 is sample code for starting an EC2 t2.micro instance on AWS. As is visible, only a few lines will instantiate an instance on AWS. It is also implicit that the same code can be maintained under a VCS and be used to instantiate instances in various regions with different resource configurations, removing error-prone and time-consuming manual work.
Provider: All major cloud players such as AWS, Azure, GCP and OpenStack have their APIs for Terraform. These APIs are maintained by the community.
• Username: Key given by the provider
• Password: Key given by the provider
• Region: The region of deployment
Resource: There are many kinds of resources, such as an OpenStack instance, an AWS EC2 instance, a basic Droplet and an Azure VM, which can be created as follows:
• resource <provider_instance_type> <identifier>
• Image ID: This is machine-image specific, a tag for the image we need to install (Ubuntu, Windows, etc).
• Flavour type: This is the type of instance, governing the CPU, memory and disk space.
Here, the template defines the provider as AWS, and supplies the access key, secret key and region for connecting to AWS. After that, the resource to be created is specified, i.e., an aws_instance here, named "example". The count and instance type (size of the instance) are also mentioned in the code, as can be seen in Figure 2.
The same code can be used to set up an instance in another region. Also, Terraform offers users the power of variables and other logical constructs such as if-else and the for loop, which can optimise the setting up of infrastructure even more.
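Figure 2 itself is not reproduced here; a minimal template of the kind it describes would look roughly like the following (the access keys, region and AMI ID are placeholders; hard-coded credentials mirror the figure, but environment variables are preferable in practice):

```hcl
provider "aws" {
  access_key = "YOUR_ACCESS_KEY"            # placeholder key
  secret_key = "YOUR_SECRET_KEY"            # placeholder key
  region     = "us-east-1"                  # region of deployment
}

resource "aws_instance" "example" {
  count         = 1
  ami           = "ami-0123456789abcdef0"   # placeholder image ID
  instance_type = "t2.micro"                # flavour type
}
```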
Init
The Terraform binary contains the basic functionality, and everything else is downloaded as and when required. The 'terraform init' step analyses the code, figures out the provider and downloads all the plugins (code) needed by the provider (here, it's AWS). Provider plugins are responsible for interacting with the APIs provided by the cloud platforms, using the corresponding CLI tools. They are responsible for the life cycle of the resource, i.e., create, read, update and delete. Figure 3 shows the checking and downloading of the provider 'aws' plugins after scanning the configuration file.

Figure 3: The 'terraform init' step initialises all the required resources and plugins
Plan
The 'terraform plan' is a dry run for our changes. It builds the topology of all the resources and services needed and, in parallel, handles the creation of dependent and non-dependent resources. It efficiently analyses the previously running state and resources, using a resource graph to calculate the required modifications. It provides the flexibility of validating and scanning infrastructure resources before provisioning, which otherwise would be risky. Figure 4 shows the generation of the plan and the changes in resources, with '+' indicating new resources that will be added to the already existing ones, if any, and '-' indicating deletion.

Figure 4: 'terraform plan' dry runs the instantiation
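In practice, the whole life cycle discussed here is driven by a handful of commands, typically run from the directory containing the .tf files (the plan output file name below is illustrative):

```shell
terraform init               # download the provider plugins
terraform plan -out=tfplan   # dry run; review the '+' and '-' changes
terraform apply tfplan       # execute the reviewed plan
terraform destroy            # tear the resources down when no longer needed
```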
Validation
Administrators can validate and approve significant changes produced in 'terraform plan's' dry run. This prevents specific workspaces from exceeding predetermined thresholds, lessening costs and increasing productivity. Otherwise, standards or a governing policy for the cloud operating model are often not put in place; a policy is not codified, and certain practices are merely internally known among teams and organisations. Terraform enforces sentinel policies as code before provisioning the workflow, minimising risks through active policy enforcement.

Apply
The 'terraform apply' step executes the exact provisioning plan defined in the last step, after it has been reviewed. Terraform transforms the configuration files (.tf) into the appropriate API calls to the cloud provider(s), automating resource creation (using the provider's CLI) seamlessly. This will create the resources (here, an EC2 server) on AWS in a flexible and straightforward manner. Figure 5 shows the resources that will be created, and Figure 6 confirms the changes that took place.
If we proceed to the AWS console to verify the instantiation, a new EC2 instance will be up and running, as shown in Figure 7.

Figure 5: 'terraform apply' instantiates the infrastructure validated in the planning step
Figure 6: Terraform showing the absolute changes made to the infrastructure

Destroy
After resources are created, there may be a need to terminate them. As Terraform tracks all the resources, terminating them is also simple. All that is needed is to run 'terraform destroy'. Again, Terraform will evaluate the changes and execute them after you give permission.

Additional features of Terraform
1. It provides a GUI to manage all the running services. It also provides an access control model based on the organisation, teams and users. Its audit logging emits logs whenever a change (here, a change signifies a sensitive write to existing IaC) happens in the infrastructure.
2. Existing pipeline integration: Terraform can be triggered from within most continuous integration/continuous deployment (CI/CD) DevOps pipelines such as Travis, Circle, Jenkins and GitLab. This enables plugging the provisioning workflow and sentinel policies into the CI/CD pipeline.
3. Terraform supports many providers (more at https://www.terraform.io/docs/providers/index.html), allowing users to easily manage resources no matter where they are located. Instances can be provisioned on cloud platforms such as AWS, Azure, GCP and OpenStack using the APIs provided by the cloud service providers.
4. Terraform uses a declarative style, in which the desired end state is written directly. Here, the tool is responsible for figuring out and achieving that end state by itself.
Let's say we want to deploy five Elastic instances on AWS using Chef (Figure 8a) and Terraform (Figure 8b). Observing the scripts given in Figure 8, one can see that both are equivalent and will produce the same results. But let's assume a festive sale is coming up: the expected traffic will increase, and the infrastructure must scale our application. Let's say five more instances are required to handle the predicted traffic.
As Chef's language is procedural, setting the count as 10 will start an additional 10 instances rather than adding an extra five, thus initiating a total of 15 instances. We must manually remember the previous count, as shown in Figure 9a; hence, we must write a completely new script, adding one more redundant code file. As Terraform's language is declarative, setting the count as 10 (as can be seen in Figure 9b) will start just the additional five instances.
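Figures 8 and 9 are not reproduced here, but the declarative side of the contrast can be sketched as follows (a hypothetical template with a placeholder AMI ID): bumping count declares the new desired end state, and Terraform launches only the instances that are missing.

```hcl
resource "aws_instance" "example" {
  count         = 10                        # was 5; Terraform adds only the missing five
  ami           = "ami-0123456789abcdef0"   # placeholder image ID
  instance_type = "t2.micro"
}
```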
Figure 9a and 9b: Sample code for instantiating ten instances

Advantages of incorporating IaC using Terraform
1. Prevents configuration drift: Terraform, being provisioning software, binds you to make changes in your container image and only then deploy the new ones across every server. This separates server configuration from any dependency, resulting in identical instances across our infrastructure.
2. Easy collaboration: The Terraform registry (Terraform's central registry under version control) enables teams to collaborate on infrastructure.
3. No separate documentation needed: The code written for the infrastructure becomes your documentation. By looking at the script, thanks to its declarative nature, we can figure out what's currently deployed and its configuration.
4. Flexibility: Terraform not only handles IaaS (AWS, Azure, etc) but also PaaS (SQL, NodeJS). It can also store local variables such as cloud tokens and passwords in encrypted form on the Terraform registry.
5. Masterless: Terraform is masterless by default, i.e., it does not need a master node to keep track of all the configuration and distribute updates. This saves the extra infrastructure and maintenance costs we'd have to incur in maintaining an extra master node. Terraform directly uses the cloud providers' APIs, thus saving extra infrastructure costs and other overheads.

Terraform is an open source tool that helps teams manage infrastructure in an efficient, automated and reusable manner. It has a simple modular syntax and supports multi-cloud infrastructure configuration. Enterprises can use Terraform in their DevOps methodology to construct, modify, manage and deliver infrastructure at a faster pace, with less manual intervention.

By: Vaibhav Aggarwal and Prof. B. Thangaraju
The authors are associated with the open source technology lab in the International Institute of Information Technology, Bengaluru.
Lighttpd: A Lightweight
HTTP Server for
Embedded Systems
This article guides readers through the implementation of Lighttpd, a lightweight
interactive HTTP server for embedded systems that have limited memory and storage but
require real-time performance. Lighttpd is also very useful on Linux desktops.
Lighttpd is an open source Web server optimised for environments in which speed and high performance are critical. It is a viable alternative to Apache or other heavyweight Web servers. Lighttpd is standards-compliant, and has built-in security as well as flexibility. The entire source code is written in C, and comes under the BSD 'three-clause' licence. It also supports FastCGI (fast Common Gateway Interface) for creating dynamic content.

Prerequisites
For the system: The desktop/embedded system should run Linux. The demonstration here is based on an Ubuntu 16.04 32-bit desktop.
For the reader: The reader should know the basics of HTTP GET/POST/PUT and HTML syntax.

Installation
As the focus is on embedded platforms, the installation shown is from source code.
To start the installation, you can download the source code from https://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.55.tar.gz. You can download it manually or by using the following command:

$ wget https://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.55.tar.gz

Next, use the tar command to untar the tarball:

$ tar -zxvf lighttpd-1.4.55.tar.gz
$ cd lighttpd-1.4.55

After extracting the contents from the tarball, we move to the directory and do the compiling, followed by the installation:

$ ./configure

Note: When you are cross-compiling, you have to mention the build, host, target and other arguments to make use of your cross-compiler:

$ CC="/path/to/cross-compiler-gcc"
$ LD="/path/to/cross-compiler-gnu-ld"
$ ./configure --host=ppc-linux-gnu \
--build=i686-redhat-linux-gnu \
--target=powerpc-*-elf --includedir=/path/to/sysroot-for-cross

To get help on all the options, type the following command:

$ ./configure --help

Then, to compile and install, use the commands given below:

$ make
$ sudo make install

Note: If the target is an embedded one like PPC or ARM, 'make install' is not of any use. Hence, you have to identify the necessary executables and libraries after compilation, and place them in your file system.

The server is driven by a configuration file; the relevant entries set the log locations and the modules to load:

server.errorlog = "/var/log/lighttpd/error.log"
accesslog.filename = "/var/log/lighttpd/access.log"
server.modules = (
  "mod_access",
  "mod_accesslog",
  "mod_fastcgi",
  "mod_rewrite",
  "mod_auth"
)
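With mod_fastcgi loaded, requests still need to be mapped to the FastCGI program in the same configuration file. A minimal sketch, in which the URL prefix, binary path and socket are hypothetical:

```
fastcgi.server = (
  "/adder" => ((
    "bin-path"    => "/var/www/adder.fcgi",   # hypothetical binary path
    "socket"      => "/tmp/adder.sock",       # hypothetical socket
    "check-local" => "disable"
  ))
)
```

lighttpd spawns the binary and proxies matching requests to it over the socket; setting "check-local" to "disable" stops lighttpd from requiring a matching file in the document root.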
Figure 2: A sample Web page with form (as per code), showing two number inputs, a Submit button and the computed 'Sum = 6'

For HTTP POST, the entire form's contents come in as one big string (e.g., fno=4&ln0=-2), which can be read and parsed in the C code, as shown below:

#include <fcgi_stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Each loop iteration handles one request */
    while (FCGI_Accept() >= 0) {
        int num1 = 0, num2 = 0, sum = 0;
        char *env = getenv("REQUEST_METHOD");

        if (env != NULL && strcmp(env, "POST") == 0) {
            char buffer[200] = {'\0'};
            const char delim[] = "&=";
            char *token;
            int tokenNo = 0;
            int inputlen = atoi(getenv("CONTENT_LENGTH"));

            if (inputlen >= (int)sizeof(buffer))
                inputlen = sizeof(buffer) - 1;
            fread(buffer, inputlen, 1, stdin);

            /* Parse input such as fno=4&ln0=-2; the values sit at
               token positions 2 and 4 when split on '&' and '=' */
            token = strtok(buffer, delim);
            while (token != NULL) {
                tokenNo++;
                if (2 == tokenNo) num1 = atoi(token);
                if (4 == tokenNo) num2 = atoi(token);
                token = strtok(NULL, delim);
            }
            sum = num1 + num2;
        }

        printf("Content-Type: text/html\r\n\r\n"); /* HTTP header before the body */
        printf("<!doctype html> \
<html> <body> <form method=\"post\" action=\"\"> \
<label for=\"fno\">Number1:</label><br> \
<input type=\"number\" id=\"fno\" name=\"fno\" value=%d><br> \
<label for=\"ln0\">Number2:</label><br> \
<input type=\"number\" id=\"ln0\" name=\"ln0\" value=%d><br><br> \
<input type=\"submit\" value=\"Submit\"> \
</form> \
Sum = %d \
</body> \
</html><br>", num1, num2, sum);
    }
    return 0;
}

You can see that the form is also generated in the code, using the printf statement.
The entire source code is available at https://github.com/SupriyoGanguly/tryLighty.

By: Supriyo Ganguly
The author is a senior technical officer at Electronics Corporation of India Limited, Hyderabad.