Through chat bots one can communicate via a text or voice interface and receive replies generated by artificial intelligence. Typically, a chat bot communicates with a real person. Chat bots are used in applications such as e-commerce customer service, call centres and Internet gaming. They are programs built to engage automatically with received messages, and can be programmed to respond the same way each time, to respond differently to messages containing certain keywords, or even to use machine learning to adapt their responses to fit the situation. A growing number of hospitals, nursing homes and even private centres now deploy online chat bots for healthcare services on their websites. These bots engage with potential patients visiting the site, helping them find specialists, book appointments and gain access to the correct treatment. An ML model has to be created to which we can give any text input, and which, on the basis of its training data, analyses the symptoms. A supervised logistic regression machine learning algorithm can be implemented to train the model on data sets containing CSV files of various diseases. The goal is to compare the outputs of various models and suggest the best model for handling symptoms in real-world inputs. The data set is a CSV file with all diseases compiled together. The logistic regression algorithm allows us to process the data efficiently; the goal is to model the underlying structure or distribution of the data in order to learn more from the training set. Even so, the use of artificial intelligence in an industry where people's lives could be at stake still raises misgivings, and brings up the question of whether the task described above ought instead to be assigned to human staff. This healthcare chat bot system will help hospitals provide healthcare support online 24x7; it answers deep as well as general questions. It also helps to generate leads and automatically delivers lead information to sales. By asking questions in series, it guides patients to exactly what they are looking for.
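The supervised logistic regression step described above can be sketched in pure Python. The binary symptom vectors and the single "flu" label below are illustrative stand-ins for the disease CSV files mentioned in the text, not real medical data.

```python
import math

# Hypothetical training rows: binary symptom vectors
# [fever, cough, headache, rash] -> label 1 = "flu", 0 = "not flu".
# A real system would load such vectors from the disease CSV files.
X = [[1, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 0],
     [0, 0, 0, 1], [1, 0, 1, 0], [0, 1, 0, 0]]
y = [1, 1, 0, 0, 1, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                              # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

w, b = train(X, y)

def predict(symptoms):
    # Probability that the given symptom vector corresponds to "flu".
    return sigmoid(sum(wj * xj for wj, xj in zip(w, symptoms)) + b)

print(round(predict([1, 1, 1, 0])))  # fever + cough + headache -> 1 ("flu")
print(round(predict([0, 0, 0, 1])))  # rash only -> 0 ("not flu")
```

A production model would use a library implementation (e.g. scikit-learn) and one classifier per disease; the hand-written gradient descent above only illustrates the mechanics.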
LITERATURE REVIEW
2.1 A review on Chat Interface
This unit is the front end of the system. It is responsible for collecting queries from the user, which are the input to the system, and for displaying the system-generated results back to the user. The chat interface can therefore be called the face of the system, through which all communication takes place: it is the mediator of the conversation between the system and the user. The query that the user fires on the chat interface is passed on to the chatting backend, which acts as a message-delivery system between the chat interface and the machine learning layer. This interface can be accessed either as a website or as a smartphone app; the type of interface depends on the user requirements the system must satisfy. If the system is accessed from a smartphone, the interface will take the form of an app, and if it is accessed from a website, the interface will be a website. Building a smartphone app requires Android development for Android phones or Swift for iOS. In that case only the interfacing platform is programmed on Android, and the complete backend processing of the system takes place on the server on which the system is deployed. For a website, either Java or Python web frameworks can be used: Java provides Spring and Struts as its most advanced and current web frameworks, and Python offers the Django and Flask frameworks. The criteria for selecting a programming language depend on the functionality the system intends to provide, the requirements of the users who will use it, the algorithms it will employ, and so on. Selecting an appropriate programming language makes it simpler for developers to build a system that provides maximum functionality to the user with high accuracy and minimum complexity.
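The interface → chatting backend → ML layer flow described above can be sketched as three plain functions. The function names and the canned replies are illustrative, not taken from any particular framework; in a real deployment the interface would be a web or app front end and the ML layer the prediction pipeline described later.

```python
# Minimal sketch of the message flow: the chat interface collects the raw
# user query, the chatting backend delivers it to the ML layer, and the
# reply travels back the same way.

def ml_layer(query: str) -> str:
    # Placeholder for the real NLU + prediction pipeline.
    if "appointment" in query.lower():
        return "Sure - which specialist would you like to see?"
    return "Could you describe your symptoms?"

def chatting_backend(query: str) -> str:
    # Message-delivery system between the interface and the ML layer.
    return ml_layer(query)

def chat_interface(user_text: str) -> str:
    # Front end: collects the query and displays the system's reply.
    return chatting_backend(user_text)

print(chat_interface("I want to book an appointment"))
```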
Natural Language Understanding (NLU) is a subpart of NLP (Natural Language Processing) which enables the system to understand natural, conversational language as spoken by users. The conversational language humans use day to day is not as well formed as formal language: it does not pay much attention to vocabulary or grammar, which makes it difficult for a system to understand the intent of a sentence. The input received from the user is unstructured text, which the system cannot understand directly; it understands input only in structured formats. The unstructured text received from the user is converted to a structured format by extracting important words and patterns using NLU techniques. Humans can cope with mispronunciations, homophones, swapped words, shortened forms of words (like "it's" for "it is"), slang words and phrases, and words that are not part of the formal vocabulary but occur in everyday conversation. NLU techniques enable the system to recognise these quirks when the user employs them while conversing with the chatbot, so that the user feels the conversation is taking place between two humans and not between a human and a bot.
An NLU system does not grasp the meaning of a user sentence directly; a sequence of processes is needed to derive its actual intent. To understand a complete sentence, the NLU system needs to understand each word of that sentence, so the initial task is segmenting the sentence into individual words. Next, to understand each word, the system needs to understand the grammar of the sentence, which is done by identifying the part of speech of each word; this is where the POS (Part-Of-Speech) tagger comes into the picture. After the grammatical weight of each word is known, all the words are parsed to determine the dependencies among them. This is the most important step: the word with the highest dependency is extracted, from which the intent of the sentence can be determined. The knowledge base is unlikely to contain the exact sentence the user has sent; it might contain a sentence with the same intent but expressed in different words. To match such synonymic sentences, synonym determination and sentence matching are required. The tasks to be implemented in the NLU engine, and the methods for doing so, are discussed below.
Segmentation, also referred to as tokenization, is the process of splitting text into smaller, meaningful units. These units can be paragraphs, sentences, clauses, phrases, words or letters; the smallest units are letters. Word segmentation is the splitting of sentences into individual words separated by blank spaces, and the resulting units are called tokens. Tokenizers split sentences into words and punctuation marks as independent units. The most commonly used tokenizer is the space type, i.e. it splits sentences into words at the blank spaces. A tokenizer should also handle abbreviations, acronyms, dates, numbers in decimal format, etc., which must not be split at punctuation marks or blank spaces, as they would lose their meaning.
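A space-type tokenizer with the refinements just described can be sketched with a regular expression; the abbreviation lexicon below is illustrative, and a library tokenizer would replace all of this in practice.

```python
import re

# Abbreviations that must keep their trailing period (illustrative lexicon).
ABBREV = {"dr.", "mr.", "mrs.", "e.g.", "i.e.", "etc."}

# Match, in order of priority: decimal numbers, dotted acronyms (U.S.),
# words with a trailing period, ordinary words, lone punctuation marks.
PATTERN = re.compile(r"\d+\.\d+|(?:[A-Za-z]\.){2,}|[A-Za-z]+\.|\w+|[^\w\s]")

def tokenize(sentence):
    tokens = []
    for tok in PATTERN.findall(sentence):
        if (tok[-1] == "." and len(tok) > 1
                and tok.lower() not in ABBREV
                and not re.fullmatch(r"\d+\.\d+|(?:[A-Za-z]\.){2,}", tok)):
            tokens.extend([tok[:-1], "."])  # split an ordinary final period
        else:
            tokens.append(tok)              # keep abbreviation/number intact
    return tokens

print(tokenize("Dr. Rao noted a temperature of 98.6 today."))
# ['Dr.', 'Rao', 'noted', 'a', 'temperature', 'of', '98.6', 'today', '.']
```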
Mohammed Javed et al. [1] [2015] explained a method for word segmentation. The proposed algorithm calculates the character spaces in a sentence, including all types of gaps between units: between letters, punctuation marks and words. The algorithm works on the amount of gap, or character space, between each unit in the sentence. After the character spaces are calculated, their average is taken to find the mean gap between characters in the sentence. This average gap distance is then applied to the sentence to be segmented: the places where the character space exceeds the average are the points of tokenization. Since the gap between words is always larger than the average gap, tokenization takes place at the blank spaces between words in the sentence.
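The average-gap rule can be sketched directly: given the x-position of each character (as would be measured in a scanned text line), compute the gap after every character, take the mean, and split wherever a gap exceeds it. The positions below are made up for illustration.

```python
def segment_by_average_gap(chars, positions):
    # Gap after each character, from consecutive x-positions.
    gaps = [positions[i + 1] - positions[i] for i in range(len(chars) - 1)]
    mean_gap = sum(gaps) / len(gaps)
    words, current = [], chars[0]
    for ch, gap in zip(chars[1:], gaps):
        if gap > mean_gap:          # tokenization point: larger-than-average gap
            words.append(current)
            current = ch
        else:
            current += ch
    words.append(current)
    return words

chars = list("chatbot")
# Intra-word gaps of 1 unit; a 4-unit gap between "chat" and "bot".
positions = [0, 1, 2, 3, 7, 8, 9]
print(segment_by_average_gap(chars, positions))  # ['chat', 'bot']
```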
Naeun Lee et al. [2] [2017] proposed implementing word segmentation using NLTK. The Natural Language Toolkit (NLTK) is a Python package that provides services for NLP and has inbuilt tokenizers. Users need only import the package and call the required tokenizer, each of which is available as a function. NLTK includes a wide range of tokenizers: standard, letter, word, classic, lowercase, N-gram, pattern, keyword, path, etc. The most commonly used is the word/punkt tokenizer, which splits sentences at the blank spaces. The accuracy, speed and efficiency of the NLTK tokenizers are commendable, and no algorithm needs to be implemented by hand, as the package executes them in the backend.
Tao Jaing [3] [2011] explains the use of the CRF (Conditional Random Fields) algorithm for word segmentation. This algorithm trains the system on the spaces between characters. Using this training, the system identifies the gaps between characters in a test sentence and keeps a threshold value for the gap distance: if a gap in the test sentence exceeds the specified threshold, the sentence splits at that point. CRF requires a lot of training, which makes the process time-consuming.
Comparing the three methods illustrated above, NLTK proves the most efficient in all respects. Using NLTK requires no algorithm implementation, as everything is taken care of by the package itself, and the accuracy, speed and diversity it provides are better than those of the other two algorithms.
POS tagging is the process of assigning grammatical annotations, the part-of-speech tags, to individual words in a sentence. These tags denote the grammatical role of a word in the sentence based on its dependency on the other words in the phrase, clause, sentence or paragraph. Common POS tags are noun, verb, pronoun, etc. There are a number of ways to perform POS tagging; some of them are explained below.
Jerome R. Bellegarda [4] [2010] proposed a method called latent analogy for POS tagging. This algorithm uses the latent semantic mapping (LSM) technique, which requires training on an available corpus. The LSM maintains a feature space of the tagged training corpus. New sentences are then provided to the LSM for tagging, and an analysis determines which sentences from the training data are closest to the test sentence; this is called the sentence neighbourhood. Two sentences are neighbours if they share the same intent. Once intent-matching sentences are found in the training data, the POS tags attached to those sentences are mapped onto the test sentence.
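The sentence-neighbourhood step can be sketched with a crude similarity measure: pick the tagged training sentence sharing the most words with the test sentence. A tiny illustrative corpus and word-overlap stand in for the LSM feature space, which is far more sophisticated.

```python
# Illustrative tagged corpus; the real method trains LSM on a large corpus.
TAGGED_CORPUS = [
    [("i", "PRON"), ("have", "VERB"), ("a", "DET"), ("fever", "NOUN")],
    [("book", "VERB"), ("an", "DET"), ("appointment", "NOUN")],
]

def nearest_sentence(test_words):
    # The "neighbour" is the training sentence with the largest word overlap.
    def overlap(sent):
        return len(set(test_words) & {w for w, _ in sent})
    return max(TAGGED_CORPUS, key=overlap)

neighbour = nearest_sentence(["i", "have", "a", "headache"])
print([w for w, _ in neighbour])  # ['i', 'have', 'a', 'fever']
```

The tags of the neighbour would then be mapped onto the test sentence's overlapping words.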
Liner Yang et al. [5] [2018] put forth a technique for implementing the POS tagger using neural networks. The algorithm consists of 'n' hidden layers, where the number of layers is determined by the number of iterations or combinations required to tag the sentence correctly. At each layer, every word in the sentence is tagged with an appropriate POS tag and then passed to the next layer, which checks the correctness of the tags. This continues until a layer produces the same tags as the previous one. Another technique for implementing a POS tagger follows the traditional approach of maintaining a dictionary of tags for the given language. Python NLTK provides an inbuilt tagger that can be used simply by importing the NLTK package; it has a predefined tag set and its own trained data, and applies an appropriate tag to each word of the test sentence. Comparing the three approaches, the NLTK tagger proves the most efficient in speed and ease of use, but the highest accuracy is provided by the neural network algorithm, since it undergoes many iterations.
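The traditional dictionary approach mentioned above can be sketched in a few lines: a hand-built tag lexicon with a fallback tag for unknown words. The lexicon entries are illustrative; NLTK's tagger replaces all of this with a pre-trained model.

```python
# Illustrative tag lexicon; a real dictionary tagger would cover the
# whole vocabulary of the language.
TAG_LEXICON = {
    "the": "DET", "a": "DET",
    "patient": "NOUN", "fever": "NOUN", "doctor": "NOUN",
    "has": "VERB", "is": "VERB", "visits": "VERB",
    "high": "ADJ", "severe": "ADJ",
}

def pos_tag(tokens):
    # Unknown words default to NOUN, a common fallback heuristic.
    return [(tok, TAG_LEXICON.get(tok.lower(), "NOUN")) for tok in tokens]

print(pos_tag(["The", "patient", "has", "severe", "fever"]))
```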
2.2.3 A review on Dependency Parsing
A dependency parser is used to establish the relationships between the words in a sentence based on the grammatical tags attached to them. It is the next step after POS tagging. A dependency tree or graph is created for every sentence, called the parsing tree or dependency tree. There are a number of ways in which parsing can be implemented; they are compared below.
Bo Chen [6] [2011] proposed a method for building the dependency tree. It first finds the dependencies among the words in the sentence: each word is checked for its relationship with, or dependency on, the other words. The word with the highest dependency is selected as the root, and the other words related to the root node are attached to it as child nodes. This continues until all the words are placed in the tree. The tree form of the sentence is the dependency parse tree. The dependencies among the words are determined using the POS tags.
Zhenghua Li [7] [2014] provided a further improved model of the dependency parser. In the traditional method mentioned above, the parser creates a parse tree for the sentence. In the graph-based dependency parser, that tree is converted into a graph in which the words of the sentence are the vertices and the dependencies between words are represented by the edges. This data structure gives a better representation of the parsed sentence. The parsing itself is still performed by the traditional method, but the graph-based parser improves the visibility, readability and understandability of the result.
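The tree-building step can be sketched as follows. The (head, dependent) pairs are hand-written here; a real parser derives them from the POS tags. As one simple way of realising the root-selection idea, the word that is never itself a dependent becomes the root, and the tree is stored as an adjacency map.

```python
from collections import defaultdict

# Illustrative dependencies for "patient has high fever":
# "has" governs "patient" and "fever"; "fever" governs "high".
dependencies = [("has", "patient"), ("has", "fever"), ("fever", "high")]

def build_tree(dependencies):
    children = defaultdict(list)
    dependents = set()
    for head, dep in dependencies:
        children[head].append(dep)
        dependents.add(dep)
    # The root is the head that never appears as a dependent.
    root = next(h for h in children if h not in dependents)
    return root, dict(children)

root, tree = build_tree(dependencies)
print(root, tree)  # has {'has': ['patient', 'fever'], 'fever': ['high']}
```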
2.2.4 A review on Synonym and Pattern Recognition
For information retrieval, no matter how big the data is, no sentence sent by the user will be exactly the same as any sentence in the database, but there can be sentences with the same intent. After the intent of the user sentence is understood, the database is checked for a sentence with the same intent. Such matched sentences differ in the words used to express the same content: they use alternative words, or synonyms. This makes synonym detection necessary for the system. Synonyms for a particular word may be domain independent or domain dependent: domain-independent synonyms hold across the entire vocabulary, whereas domain-dependent synonyms hold only within the respective domain. Various algorithms are used for the detection and extraction of synonyms, some of which are reviewed below.
LinHua Gao et al. [8] [2018] explain the traditional dictionary method of synonym extraction. In this method, the system database maintains a dataset of synonyms for the important keywords of the domain. The sentence sent by the user is mapped onto that synonym dataset: the keywords detected in the sentence are checked against the synonym set for matching intent, and all possible synonyms of each keyword are then looked up for a match in the main database. The sentence closest to the user sentence is extracted. This method is time-consuming and requires more storage and complexity.
Sijun Qin
[9] [2015] proposed a feature selection method for synonym extraction. In this method, among all the part-of-speech tags, words tagged as nouns, verbs and adjectives are marked as positive and the others as negative. The polarity of each feature (word) is then determined using its POS tag. If the overall feature polarity is positive, the feature can be identified categorically. All the positive features are then grouped together, and synonym detection for the group of features is relatively strong, since an entire clause is checked for its synonymic meaning. The information gain of each synonym set extracted for that clause of features is then calculated, and the set with the highest information gain is the strongest synonym extracted.
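The dictionary method can be sketched as a per-keyword synonym set used to decide whether a user sentence matches a stored one. The synonym sets and sentences below are illustrative.

```python
# Domain-dependent synonym dataset (illustrative).
SYNONYMS = {
    "doctor": {"doctor", "physician", "specialist"},
    "fever": {"fever", "temperature", "pyrexia"},
}

def same_intent(user_words, stored_words):
    # Every stored keyword must appear in the user sentence, either
    # literally or through one of its synonyms.
    for word in stored_words:
        allowed = SYNONYMS.get(word, {word})
        if not allowed & set(user_words):
            return False
    return True

print(same_intent(["find", "a", "physician"], ["find", "doctor"]))  # True
```

A full implementation would score every stored sentence this way and return the closest match, which is exactly where the method's time and storage costs arise.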
2.3 A review on Decision or ML Engine
Scripted or monotonous chatbots have predefined replies: they answer the user from a set of predefined responses categorised by the type of query. Including ML in a chatbot enables it to compute replies from scratch. ML is used to predict responses to user queries and to update the system from its experience; the system keeps updating its databases whenever it encounters something new from the user. The engine uses supervised or unsupervised techniques, or both, to analyse what the user requires, and then uses a model to interpret the intent of the user and provide appropriate results. The results may be predictions or any other form of analysis based on the execution of mathematical models. Most machine learning models rest on statistical and probabilistic evaluation of observed instances, from which a prediction is calculated for the test instance. The decision engine includes not only models for prediction but also algorithms for information retrieval such as entity extraction and multiple text classification. The machine learning layer in a chatbot system is also used to build an ontological relationship among the extracted entities and to associate them with context-specific questions along with their alternatives, synonyms and machine-learned classes. These features of machine learning convert a static, basic FAQ system into a smart and more personalised conversational experience. For chatbots that provide services in diverse domains, the machine learning layer extends the services they can offer: it increases the accuracy of the responses provided to users and broadens the scope of the system. Because the system can update itself by learning from its experience, it becomes less prone to false predictions.
Chatbots used in the healthcare domain for disease prediction can draw on a wide range of algorithms, including clustering, Bayesian networks and decision trees; their methods of execution, and a comparison for selecting among them, are briefed here. The decision engine is the brain of the system: it incorporates ML algorithms for prediction and for statistical and probabilistic calculation, and ML enables the system to learn from past experience so as to provide better, revised results. Healthcare chatbots require a disease prediction algorithm, and prediction can be carried out in many ways, some of which are reviewed below.
Sachin S. Gavankar et al.
[10] [2017] proposed the eager decision tree algorithm for prediction. This type of decision tree is an improved version of the traditional decision tree: it is built at runtime from the users' queries and keeps being updated as new user messages arrive. Consider its working for disease prediction. In this algorithm, the symptoms detected in the user query are added as child nodes of the root node, and nodes keep being added as new symptoms are detected. For every symptom, the algorithm then finds the second symptom that co-occurs most often with it and asks the user about that symptom. If the user says yes, the system traces that path to check for the associated disease. This keeps iterating for all users, and the tree keeps being updated with new entries or traces the paths already available.
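The question-selection step just described can be sketched as follows: given the symptoms confirmed so far, ask next about the unconfirmed symptom that co-occurs most often with them, and predict the disease whose symptom set best overlaps the confirmed ones. The disease table is illustrative, and this co-occurrence counting only approximates the runtime tree construction of the paper.

```python
# Illustrative disease -> symptom table; a real system would derive this
# from the disease CSV data.
DISEASES = {
    "flu": {"fever", "cough", "body ache"},
    "migraine": {"headache", "nausea"},
    "common cold": {"cough", "sneezing"},
}

def next_question(confirmed):
    # Count unconfirmed symptoms of diseases that share a confirmed symptom.
    counts = {}
    for symptoms in DISEASES.values():
        if confirmed & symptoms:
            for s in symptoms - confirmed:
                counts[s] = counts.get(s, 0) + 1
    return max(counts, key=counts.get) if counts else None

def predict(confirmed):
    # Disease whose symptom set overlaps most with the confirmed symptoms.
    return max(DISEASES, key=lambda d: len(DISEASES[d] & confirmed))

print(next_question({"cough"}))     # a symptom that co-occurs with "cough"
print(predict({"fever", "cough"}))  # flu
```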
HARDWARE DESCRIPTION
RASPBERRY PI
LCD DISPLAY
POWER SUPPLY
RASPBERRY PI:
The Raspberry Pi 3 Model B+ is the latest product in the Raspberry Pi 3 range, boasting a 64-bit quad-core processor running at 1.4GHz, dual-band 2.4GHz and 5GHz wireless LAN, Bluetooth 4.2/BLE, faster Ethernet, and PoE capability via a separate PoE HAT. The dual-band wireless LAN comes with modular compliance certification, allowing the board to be designed into end products with significantly reduced wireless LAN compliance testing, improving both cost and time to market. The Raspberry Pi 3 Model B+ maintains the same mechanical footprint as both the Raspberry Pi 2 Model B and the Raspberry Pi 3 Model B.
Hardware
• Low cost
• Low power
• High availability
• High reliability
– Tested over the millions of Raspberry Pis produced to date
– Module IO pins have 15 micro-inch hard gold plating over 2.5 micron nickel
2.2 Peripherals
• 48x GPIO
• 2x I2C
• 2x SPI
• 2x UART
• 2x SD/SDIO
• 1x HDMI 1.3a
• 1x USB2 HOST/OTG
• 1x 2-lane DSI Display Interface (up to 1Gbps per lane)
2.3 Software
• Mature and stable Linux software stack
– Latest Linux kernel support
– Many drivers upstreamed
– Stable and well supported userland
– Full availability of GPU functions using standard APIs
Mechanical Specification
The CM3+ modules conform to the JEDEC MO-224 mechanical specification for 200-pin DDR2 (1.8V) SODIMM modules and therefore should work with the many DDR2 SODIMM sockets available on the market. (Please note that the pinout of the Compute Module is not the same as a DDR2 SODIMM module; they are not electrically compatible.) The SODIMM form factor was chosen as a way to provide the 200 pin connections using a standard, readily available and low-cost connector compatible with low-cost PCB manufacture. The maximum component height on the underside of the Compute Module is 1.2mm; on the top side it is 2.5mm. The Compute Module PCB thickness is 1.0mm +/- 0.1mm. Note that the location and arrangement of components on the Compute Module may change slightly over time due to revisions for cost and manufacturing considerations; however, maximum component heights and PCB thickness will be kept as specified. Figure 2 gives the CM3+ mechanical dimensions.
The Compute Module 3+ has six separate supplies that must be present and powered at all times; you cannot leave any of them unpowered, even if a specific interface or GPIO bank is unused. The six supplies are as follows:
1. VBAT powers the BCM2837 processor core. It feeds the SMPS that generates the chip core voltage.
2. 3V3 powers various BCM2837 PHYs, IO and the eMMC Flash.
3. 1V8 powers various BCM2837 PHYs, IO and SDRAM.
4. VDAC powers the composite (TV-out) DAC.
5. GPIO0-27 VREF powers the GPIO 0-27 IO bank.
6. GPIO28-45 VREF powers the GPIO 28-45 IO bank.
I2C 16x2
Arduino LCD Display Module
This is a 16x2 LCD display screen with an I2C interface. It is able to display 16x2 characters on 2 lines, white characters on a blue background.
Usually, Arduino LCD display projects run out of pin resources easily, especially with the Arduino Uno, and the wire soldering and connection are complicated. This I2C 16x2 Arduino LCD screen uses an I2C communication interface, meaning it needs only 4 pins for the LCD display: VCC, GND, SDA and SCL. This saves at least 4 digital/analog pins on the Arduino. All connectors are standard XH2.54 (breadboard type), so you can connect with jumper wires directly.
To avoid I2C address conflicts with other I2C devices, such as an ultrasonic sensor, IMU, accelerometer or gyroscope, the I2C address of the module is configurable from 0x20 to 0x27. Its contrast can also be adjusted manually.
An alternative option is the I2C 20x4 Arduino LCD Display Module, if more characters are required.
The I2C LCD component drives an I2C-interfaced 2-line by 16-character LCD. The component is a wrapper around an I2C Master component and makes use of an existing one; if a project does not already have an I2C Master component, one must be added for the LCD component to operate. When one of the API functions is called, that function calls one or more of the I2C Master functions in order to communicate with the LCD.
The I2C LCD component is used in applications that require a visual or textual display. It is also used where a character display is needed but seven consecutive GPIOs on a single GPIO port are not available. In cases where the project already includes an I2C master, no additional GPIO pins are required.
POWER SUPPLY:
TRANSFORMER:
This document presents the solution for a 12V 1A flyback converter based on the Infineon OPTIREG™ TLE8386-2EL controller and the IPD50N08S4-13 OptiMOS™-T2. The user is guided through the component selection and the circuit design, and finally an overview of the experimental results is presented. The TLE8386-2EL is part of the Automotive OPTIREG™ family and implements a low-side-sense current-mode controller with built-in protection features; the device is AEC-Q100 qualified. The IPD50N08S4-13 is an AEC-Q101 qualified 80V N-channel enhancement-mode MOSFET from the OptiMOS™-T2 family.
Intended audience: power supply design engineers, application engineers, students, etc., who need to design a flyback converter for automotive power applications where galvanic isolation between two voltage domains is required. In particular, the focus is on a battery-connected flyback that delivers up to 12W at 12V output voltage; the intention is to provide the user with all the information needed to fully design and characterise the SMPS, bringing it from an engineering concept to production. Specific features and applications are:
- 48V to 12V automotive applications
- Isolated current-mode SMPS
- Flyback regulators with auxiliary sensing
Centre Tapped Transformer Specifications
A centre-tapped transformer, also known as a two-phase three-wire transformer, is normally used for rectifier circuits. When a digital project has to work from AC mains, a transformer is used to step down the voltage (in our case, to 24V or 12V), which is then converted to DC using a rectifier circuit. In a centre-tapped transformer the peak inverse voltage is twice that of a bridge rectifier, hence this transformer is commonly used in full-wave rectifier circuits.
The operation and theory behind a centre-tapped transformer are very similar to those of a normal two-winding transformer. The AC voltage applied to the primary coil (I1 and I3) is transferred to the secondary coil by magnetic induction. In the secondary coil of a centre-tapped transformer there is an additional wire (T2) placed exactly at the centre of the coil, so the voltage at this point is always zero.
If we take this zero-potential wire (T2) together with either T1 or T3, we get a voltage of 12V AC. If the centre wire is ignored and the voltage across T1 and T3 is considered, we get 24V AC. This feature is very useful for the operation of a full-wave rectifier. Let us denote the voltage given by the first half of the secondary coil as Va and the voltage across the second half as Vb, as shown.
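The 12V/24V relationship, and the DC level a full-wave rectifier extracts from one half-winding, can be checked numerically. For a full-wave rectifier the average output is 2Vm/π, where Vm is the peak voltage of one half-winding; the 12V figure below follows the text.

```python
import math

v_half_rms = 12.0                   # T2 to T1 (or T2 to T3), volts RMS
v_full_rms = 2 * v_half_rms         # T1 to T3, ignoring the centre tap

v_peak = v_half_rms * math.sqrt(2)  # peak voltage of one half-winding
v_dc = 2 * v_peak / math.pi         # average output of a full-wave rectifier

print(round(v_full_rms, 1))  # 24.0
print(round(v_peak, 2))      # 16.97
print(round(v_dc, 2))        # 10.8
```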
RECTIFIER CIRCUIT:
We have learnt in rectifier circuits about converting a sinusoidal AC voltage into its corresponding pulsating DC. Apart from the DC component, this pulsating DC voltage has unwanted AC components at the supply frequency and its harmonics, together called ripple. The ripple is highest for a single-phase half-wave rectifier, lower for a single-phase full-wave rectifier, and minimum for 3-phase rectifier circuits. Such a supply is not useful for driving complex electronic circuits: for most purposes a constant DC voltage is required rather than the pulsating output of the rectifier, and feeding a circuit directly from a rectifier makes its operation poor. If the rectifier output is smoothed into a steady voltage before being passed on as the supply, the overall operation of the circuit improves. Thus the output of the rectifier has to be passed through a filter circuit to remove the AC components. The filter is a device that passes the DC component to the load and blocks the AC component of the rectifier output, so that the output of the filter circuit is a steady DC voltage. The filter circuit can be built from combinations of components such as capacitors, resistors and inductors: an inductor is used for its property of passing DC and blocking AC signals, while a capacitor blocks DC and passes AC. The combinations and their working are explained in detail below.
Series Inductor Filter
The circuit diagram of a full-wave rectifier with a series inductor filter is
given below. As the name of the filter suggests, the inductor L is connected in series between the rectifier circuit and the load. The inductor has the property of opposing any change in the current that flows through it; in other words, it offers high impedance to the ripple and almost no impedance to the desired DC component, so the ripple components are attenuated. When the rectifier output current rises above a certain value, energy is stored in the inductor in the form of a magnetic field, and this energy is given up when the output current falls below the average value. Thus all sudden changes in circuit current are smoothed by placing the inductor in series between the rectifier and the load. The waveform below shows the effect of the inductor in the circuit. For the zero-frequency (DC) component, the choke resistance Ri in series with the load resistance RL forms a voltage divider, so the DC voltage across the load is Vdc = RL/(Ri + RL) × Vin, where Vin is the DC output of the full-wave rectifier; in practice Ri is negligibly small compared with RL. The higher harmonic voltages can safely be neglected because they are filtered even more strongly: as the frequency increases, the reactance of the inductor also increases. Note that a decrease in load resistance, i.e. an increase in load current, decreases the ripple, so the series inductor filter is mostly used where the load current is high or the load resistance is small. A simple series inductor filter is rarely used on its own; it is better to add a shunt capacitor (C) to the series inductor (L) to form an LC filter.
Shunt Capacitor Filter
As the name suggests, a capacitor
is used as the filter and this high value capacitor is shunted or placed across the load impedance. This
capacitor, when placed across a rectifier gets charged and stores the charged energy during the
conduction period. When the rectifier is not conducting, this energy charged by the capacitor is
delivered back to the load. Through this energy storage and delivery process, the time duration during
which the current flows through the load resistor gets increased and the ripples are decreased by a great
amount. Thus, for a ripple component with a frequency of 'f' hertz, the capacitor 'C' will offer a
very low impedance. The value of this impedance can be written as: Shunt Capacitor Impedance
Xc = 1/(2πfC). Thus only the dc component of the input signal, along with a few residual ripple
components, is allowed to pass through the load resistance RLoad. The large ripple component of the current gets
bypassed through the capacitor C. Now let us look at the working of Half-wave rectifier and Full-wave
rectifier with Capacitor filters, their output filtered waveform, ripple factor, merits and demerits in
detail.
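The two filter relations above can be checked numerically. In the sketch below, the component values are illustrative assumptions, not taken from this section; only the formulas V'dc = RL/(Ri + RL) × Vdc and Xc = 1/(2πfC) come from the text.

```python
import math

def filtered_dc_voltage(v_dc_rect, r_choke, r_load):
    """Series inductor filter: dc voltage across the load, V'dc = RL / (Ri + RL) * Vdc."""
    return r_load / (r_choke + r_load) * v_dc_rect

def capacitive_reactance(f_hz, c_farads):
    """Shunt capacitor filter: impedance offered to ripple, Xc = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

# Illustrative values: 12 V rectified output, 5-ohm choke resistance, 1-kohm load.
print(round(filtered_dc_voltage(12.0, 5.0, 1000.0), 2))   # 11.94 -- Ri << RL, so V'dc stays close to Vdc

# 100 Hz ripple (full-wave rectified 50 Hz mains) across a 1000 uF capacitor.
print(round(capacitive_reactance(100.0, 1000e-6), 2))     # 1.59 ohms -- ripple is bypassed to ground
```

This shows numerically why the choke drops almost none of the dc while the shunt capacitor presents only an ohm or two to the ripple.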
SOFTWARE DESCRIPTION
PYTHON:
PYTHON 3.7:
The Python interpreter and its extensive standard library are freely available in source
or binary form for all major platforms from the Python web site, and
may be freely distributed. The same site also contains distributions of and pointers to
many free third party Python modules, programs and tools, and additional
documentation. The Python interpreter is easily extended with new functions and data
types implemented in C or C++ (or other languages callable from C). Python is also
suitable as an extension language for customizable applications. This tutorial
introduces the reader informally to the basic concepts and features of the Python
language and system. It helps to have a Python interpreter handy for hands-on
experience, but all examples are self-contained, so the tutorial can be read offline as
well. For a description of standard objects and modules, see library-index.
Reference-index gives a more formal definition of the language. To write extensions
in C or C++, read extending-index and c-api-index. There are also several books covering
Python in depth. This tutorial does not attempt to be comprehensive and cover every
single feature, or even every commonly used feature. Instead, it introduces many of
Python’s most noteworthy features, and will give you a good idea of the language’s
flavor and style. After reading it, you will be able to read and write Python modules
and programs, and you will be ready to learn more about the various Python library
modules described in library-index. If you do much work on computers, eventually
you find that there’s some task you’d like
to automate. For example, you may wish to perform a search-and-replace over a
large number of text files, or rename and rearrange a bunch of photo files in a
complicated way. Perhaps you’d like to write a small custom database, or a
specialized
GUI application or a simple game. If you’re a professional software
developer, you may have to work with several C/C++/Java libraries but find
the usual write/compile/test/re-compile cycle is too slow. Perhaps you’re
writing a test suite for such a library and find writing the testing code a
tedious task. Or maybe you’ve written a program that could use an extension
language, and you don’t want to design and implement a whole new
language for your application.
There are tools which use doc strings to automatically produce online or
printed documentation or to let the user interactively browse through code;
it’s good practice to include doc strings in code that you write, so make a
habit of it. The execution of a function introduces a new symbol table
used for the local variables of the function. More precisely, all variable
assignments in a function store the value in the local symbol table;
whereas variable references first look in the local symbol table, then in the
local symbol tables of enclosing functions, then in the global symbol table,
and finally in the table of built-in names. Thus, global variables cannot be
directly assigned a value within a function (unless named in a global
statement), although they may be referenced. The actual parameters
(arguments) to a function call are introduced in the local symbol table of the
called function when it is called; thus, arguments are passed using call by
value (where the value is always an object reference, not the value of the
object).1 When a function calls another function, a new local symbol table is
created for that call. A function definition introduces the function name in
the current symbol table. The value of the function name has a type that is
recognized by the interpreter as a user-defined function. This value can be
assigned to another name which can then also be used as a function.
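The rules above can be demonstrated with a short illustrative sketch (the function and variable names are made up for the example):

```python
def greet(name):
    """Return a greeting; documentation tools can pick up this docstring."""
    message = "Hello, " + name          # assignment goes into the local symbol table
    return message

counter = 0

def bump():
    global counter                      # without this, the assignment would create a local variable
    counter = counter + 1

bump()
print(counter)                          # 1: the global was updated

alias = greet                           # a function name is just a value in the symbol table
print(alias("world"))                   # Hello, world
print(greet.__doc__)                    # the docstring is attached to the function object
```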
The comparison operators in and not in check whether a value occurs (does
not occur) in a sequence. The operators is and is not compare whether two
objects are really the same object; this only matters for mutable objects like
lists. All comparison operators have the same priority, which is lower than
that of all numerical operators. Comparisons can be chained. For
example, a < b == c tests whether a is less than b and moreover b equals c.
Comparisons may be combined using the Boolean operators and and or, and
the outcome of a comparison (or of any other Boolean expression) may be
negated with not. These have lower priorities than comparison operators;
between them, not has the highest priority and or the lowest, so that A and
not B or C is equivalent to (A and (not B)) or C. As always, parentheses can
be used to express the desired composition. The Boolean operators and and or are
so-called short-circuit operators: their arguments are evaluated from left to
right, and evaluation stops as soon as the outcome is determined. For
example, if A and C are true but B is false, A and B and C does not evaluate
the expression C. When used as a general value and not as a Boolean, the
return value of a short-circuit operator is the last evaluated argument.
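These behaviors can be seen in a few lines (the variable names below are only for illustration):

```python
a, b, c = 1, 2, 2
print(a < b == c)              # True: chained comparison, means (a < b) and (b == c)
print(3 in [1, 2, 3])          # True: membership test
print("d" not in "abc")        # True

def loud():                    # would raise if it were ever evaluated
    raise RuntimeError("evaluated anyway")

print(False and loud())        # False: short-circuit, loud() is never called
print("" or "default")         # default: the last evaluated argument is returned
```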
Objects have individuality, and multiple names (in multiple scopes) can be
bound to the same object. This is known as aliasing in other languages. This
is usually not appreciated on a first glance at Python, and can be safely
ignored when dealing with immutable basic types (numbers, strings,
tuples). However, aliasing has a possibly surprising effect on the semantics of
Python code involving mutable objects such as lists, dictionaries, and most
other types. This is usually used to the benefit of the program, since aliases
behave like pointers in some respects. For example, passing an object is
cheap since only a pointer is passed by the implementation; and if a function
modifies an object passed as an argument, the caller will see the change —
this eliminates the need for two different argument passing mechanisms as
in Pascal.
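Aliasing of mutable objects, and the safety of immutable ones, can be shown in a small sketch:

```python
xs = [1, 2, 3]
ys = xs                     # no copy is made: both names are bound to the same list object
ys.append(4)
print(xs)                   # [1, 2, 3, 4] -- the change is visible through either name

def add_flag(bucket):
    bucket.append("seen")   # mutates the caller's object, like a pointer argument

items = []
add_flag(items)
print(items)                # ['seen'] -- the caller sees the change

s = "abc"                   # immutable objects can safely be treated as plain values
t = s
t = t + "d"                 # rebinds t to a new string; s is untouched
print(s)                    # abc
```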
modules have an opt- tag and are usually smaller. Future releases may
change the effects of optimization.
• A program doesn’t run any faster when it is read from a .pyc file
than when it is read from a .py file; the only thing that’s faster about .pyc
files is the speed with which they are loaded.
• The module compileall can create .pyc files for all modules in a
directory.
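The compileall module can be driven from Python as well as from the command line; a minimal sketch (the temporary directory and module name are made up for the example):

```python
import compileall
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    module = pathlib.Path(d) / "hello.py"
    module.write_text("GREETING = 'hi'\n")
    ok = compileall.compile_dir(d, quiet=1)       # byte-compiles every .py file under d
    cached = list(pathlib.Path(d).glob("__pycache__/*.pyc"))
    print(bool(ok), len(cached))                  # one .pyc written under __pycache__
```

The same effect is available from a shell as `python -m compileall <directory>`.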
THONNY IDE:
Thonny is a small and lightweight Integrated Development Environment.
It was developed to provide a small and fast IDE which has only a few
dependencies on other packages. Another goal was to be as independent
as possible of a particular desktop environment such as KDE or GNOME, so
Thonny only requires the GTK2 toolkit, and therefore you only need the
GTK2 runtime libraries installed to run it.
For compiling Thonny yourself, you will need the GTK (>= 2.6.0) libraries
and header files. You will also need the Pango, Glib and ATK libraries and
header files. All these files are available at http://www.gtk.org. Furthermore
you need, of course, a C compiler and the Make tool; a C++ compiler is also
required for the included Scintilla library. The GNU versions of these tools
are recommended.
% ./configure
% make
% make install
The configure script supports several common options, for a detailed list,
type
% ./configure --help
There are also some compile-time options which can be found in
src/Thonny.h. Please see Appendix C for more information. In case your
system lacks dynamic linking loader support, you probably want to pass the
option --disable-vte to the configure script. This prevents Thonny from
being built with virtual terminal emulation (VTE) support.
At startup, Thonny loads all files from the last time Thonny was launched.
You can disable this feature in the preferences dialog (see Figure 3-4). If
you specify some files on the command line, only these files will be opened,
but you can find the files from the last session in the file menu under the
"Recent files" item. By default this contains the last 10 recently opened
files. You can change the number of recently opened files in the preferences
dialog. You can start several instances of Thonny, but only the first will
load files from the last session. To run a second instance of Thonny, do not
specify any file names on the command line, or disable opening files in a
running instance using the appropriate command line option.
Thonny detects an already running instance of itself and opens files from
the command-line in the already running instance. So, Thonny can be used
to view and edit files by opening them from other programs such as a file
manager. If you do not like this for some reason, you can disable using the
first instance by using the appropriate command line option
Project files are saved when the project is closed, or when Thonny is shut
down. When restarting Thonny, the project file that was in use at the end of
the last session will be reopened.
Execute will run the corresponding executable file, shell script or interpreted
script in a terminal window. Note that the Terminal tool path must be
correctly set in the Tools tab of the Preferences dialog - you can use any
terminal program that runs a Bourne-compatible shell and accepts the "-e"
command line argument to start a command. After your program or script
has finished executing, you will be prompted to press the return key. This
allows you to review any text output from the program before the terminal
window is closed.
By default the Compile and Build commands invoke the compiler and linker
with only the basic arguments needed by all programs. Using Set Includes
and Arguments you can add any include paths and compile flags for the
compiler, any library names and paths for the linker, and any arguments you
want to use when running Execute.
Thonny has basic printing support. This means you can print a file by passing the
filename of the current file to a command which actually prints the file.
However, the printed document contains no syntax highlighting.
BLOCK DIAGRAM
WORKING:
The Raspberry Pi is interfaced with a chatbot activator button. Once the button is
activated, the chatbot starts asking questions; as the user replies to each
question, the Raspberry Pi applies machine learning to compare the answers with
the symptom database.
If any symptom matches, the respective symptom is shown on the LCD, and the
chatbot also suggests the respective doctor to visit.
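The matching step can be sketched as follows. The tiny in-memory symptom table, the function name, and the doctor mapping are all hypothetical, standing in for the project's actual CSV-based database and trained model:

```python
# Hypothetical symptom table: symptom keyword -> (condition, doctor to suggest).
SYMPTOM_DB = {
    "fever": ("Flu", "General Physician"),
    "chest pain": ("Angina", "Cardiologist"),
    "rash": ("Allergy", "Dermatologist"),
}

def match_symptom(answer):
    """Compare a user's reply against the symptom database."""
    text = answer.lower()
    for symptom, (condition, doctor) in SYMPTOM_DB.items():
        if symptom in text:
            return f"{condition}: please visit a {doctor}"
    return "No symptom matched"

print(match_symptom("I have had a fever since yesterday"))   # Flu: please visit a General Physician
```

In the real system the lookup would be driven by the trained model rather than plain keyword matching, but the input/output shape is the same.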
METHODOLOGY
The Health-Care Chat Bot System should be written in Python, with GUI links and a simple,
accessible network API. The system must provide a capacity for parallel operation, and the
system design should not introduce scalability issues with regard to the number of surface
computers, tablets or displays connected at any one time. The end system should also allow
for seamless recovery, without data loss, from individual device failure. There must be a
strong audit chain, with all system actions logged. As for interfaces, it is worth noting that
this system will most likely have to conform to what is available. With that in mind, the most
adaptable and portable technologies should be used for the implementation. The system is
critical in so far as it is a live system: if the system goes down, customers must either not
notice, or notice that the system recovers quickly (within seconds). The system must be
reliable enough to run crash- and glitch-free more or less indefinitely, or provide error
recovery strong enough that glitches are never revealed to its end users.
In machine learning, support-vector machines (SVMs) are supervised learning models with
associated learning algorithms that analyse data used for classification and regression
analysis. Given a set of training examples, each marked as belonging to one or the other of
two categories, an SVM training algorithm builds a model that assigns new examples to one
category or the other, making it a non-probabilistic binary linear classifier (although methods
such as Platt scaling exist to use SVMs in a probabilistic classification setting). An SVM
model is a representation of the examples as points in space, mapped so that the examples of
the separate categories are divided by a clear gap that is as wide as possible. New examples
are then mapped into that same space and predicted to belong to a category based on the side
of the gap on which they fall. In addition to performing linear classification, SVMs can
efficiently perform a non-linear classification using what is called the kernel trick, implicitly
mapping their inputs into high dimensional feature spaces.
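A real SVM solves this optimization in high-dimensional spaces (for example via scikit-learn's SVC), but the "widest gap" idea can be illustrated with a one-dimensional toy. The sketch below is not an actual SVM implementation; all names and values are illustrative:

```python
def max_margin_threshold(negatives, positives):
    """Place the decision boundary midway between the closest pair of
    opposite-class points, so the gap on each side is as wide as possible.
    Assumes the classes are separable, with negatives below positives."""
    return (max(negatives) + min(positives)) / 2.0

neg = [0.5, 1.0, 1.5]            # training examples of class -1
pos = [4.0, 5.0, 6.0]            # training examples of class +1
threshold = max_margin_threshold(neg, pos)
print(threshold)                 # 2.75: midway between 1.5 and 4.0

classify = lambda x: 1 if x > threshold else -1
print(classify(3.0))             # 1: the new example falls on the positive side of the gap
```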
In recent years, a great variety of hardware solutions for real-time TSR has been proposed.
These include conventional (general purpose) computers, custom ASIC (application-specific
integrated circuit) chips, field programmable gate arrays (FPGAs), digital signal processors
(DSPs) and also graphics processing units (GPUs). A software-based solution running on a
Linux or Windows system with a 2.4-GHz dual-core CPU has also been presented.
Conclusion:
Thus, we can conclude that this system gives accurate results. As we are using a large
dataset, better performance is ensured. Thus we have built a system which is useful
for people to detect a disease by typing in their symptoms.
Future Scope:
Chat bots are a technology of the future which is yet to uncover its full potential, but with
their rising popularity and the craze among companies, they are bound to stay here for long.
Machine learning has changed the way companies communicate with their customers. With
new platforms for building various types of chat bots being introduced, it is of great
excitement to witness the growth of a new domain in technology as it surpasses previous
thresholds.