2014 Paper
(2)
System Software:
System software is the type of software that acts as the interface between application software
and the hardware. It is written in low-level languages, manages the system's resources, and
provides the platform on which application software runs. An important point is that without
system software, the system cannot run. It is general-purpose software.
Application Software:
Application software is the type of software that runs as per a user's request. It runs on the
platform provided by the system software and is written in high-level languages. It is
special-purpose software.
The main difference between system software and application software is that a system cannot
run without system software, whereas it can still run without application software.
0316‐4121225
(3)
Soft copy:
It is an output copy of a document stored in memory that can be viewed on a screen.
It can be modified easily.
It needs an electronic medium for display.
It is intangible.
It is a digital version.
It can be transmitted electronically.
Hard copy:
It is printed on paper.
It cannot be modified easily.
It does not need an electronic medium for display.
It is tangible.
It is a physical version.
It can be transmitted only physically.
(4)
The following is a summary of the boot process or startup process in a PC:
1. The power button activates the power supply in the PC, sending power to the
motherboard and other components.
2. The PC performs a power-on self-test (POST). The POST is a small computer program
within the BIOS that checks for hardware failures. A single beep after the POST signals
that everything's okay. Other beep sequences signal a hardware failure, and PC repair
specialists compare these sequences with a chart to determine which component has
failed.
3. The PC displays information on the attached monitor showing details about the boot
process. These include the BIOS manufacturer and revision, processor specs, the amount
of RAM installed, and the drives detected. Many PCs have replaced displaying this
information with a splash screen showing the manufacturer's logo. You can turn off the
splash screen in the BIOS settings if you'd rather see the text.
4. The BIOS attempts to access the first sector of the drive designated as the boot disk. The
first sector is the very beginning of the disk when it is read sequentially,
starting with the first available storage address. The boot disk is typically the same hard
disk or solid-state drive that contains your operating system. You can change the boot
disk by configuring the BIOS or interrupting the boot process with a key sequence (often
indicated on the boot screens).
5. The BIOS confirms there's a bootstrap loader, or boot loader, in that first sector of the
boot disk, and it loads that boot loader into memory (RAM). The boot loader is a small
program designed to find and launch the PC's operating system.
6. Once the boot loader is in memory, the BIOS hands over its work to the boot loader,
which in turn begins loading the operating system into memory.
7. When the boot loader finishes its task, it turns control of the PC over to the operating
system. Then, the OS is ready for user interaction.
(5)
A byte is a data measurement unit that contains eight bits, or a series of eight zeros and ones. A
single byte can be used to represent 2^8, or 256, different values. The byte was originally created
to store a single character, since 256 values is sufficient to represent all lowercase and uppercase
letters, numbers, and symbols in Western languages. However, since some languages have more
than 256 characters, modern character encoding standards, such as UTF-16, use two bytes, or 16
bits, for each character. With two bytes, it is possible to represent 2^16, or 65,536, values.
Byte:
1. A combination of 8 bits is known as 1 byte.
2. The storage capacity of memory is expressed in terms of the number of bytes.
3. It is used to store a character.
Bit:
1. A binary digit, 1 or 0, is called a bit.
2. It is the basic unit for storing data in a computer.
3. It can store either a 1 or a 0.
(6)
A computer virus is a malicious program that self-replicates by copying itself to another
program. In other words, the computer virus spreads by itself into other executable code or
documents. The purpose of creating a computer virus is to infect vulnerable systems, gain
administrative control, and steal sensitive user data. Hackers design computer viruses with
malicious intent and prey on online users by tricking them.
One of the most common ways viruses spread is through email: opening an attachment in
the email, visiting an infected website, clicking on an executable file, or viewing an infected
advertisement can cause the virus to spread to your system. Infections also spread through
already infected removable storage devices, such as USB drives.
(7)
A touch screen is a computer display screen that serves as an input device. When a touch screen
is touched by a finger or stylus, it registers the event and sends it to a controller for processing. A
touch screen may contain pictures or words that the user can touch to interact with the device.
How a touch screen event is registered depends on the touch screen's inherent technology. The
three main touch screen technologies are:
Resistive: This screen has a thin metallic layer that is conductive and resistive, so that
touching results in a change in the electrical current sent to the controller. Pros: More
affordable, not damaged by dust or water, responds to finger or stylus. Cons: Only 75%
clarity and susceptible to damage by sharp objects.
Surface Acoustic Wave (SAW): Ultrasonic waves pass over this screen. Touching it
results in absorption of part of the wave, registering the position of the touch, which is
sent to the controller. Pros: Responds to finger or stylus. Cons: May be damaged by dust
or water.
Capacitive: This screen is coated with an electrically-charged material. Touching it
causes a change in capacitance, which allows the location to be determined and sent to
the controller. Pros: Not damaged by dust or water and has high clarity. Cons: Must be
touched with a finger or a conductive stylus - an ordinary plastic stylus cannot be used.
(8)
Input devices
An input device can send data to another device, but it cannot receive data from another device.
Examples of input devices include the following.
Keyboard and Mouse - Accept input from a user and send that data (input) to the
computer. They cannot accept or reproduce information (output) from the computer.
Microphone - Receives sound generated by an input source and sends that sound to a
computer.
Webcam - Receives images generated by whatever it is pointed at (input) and sends
those images to a computer.
Output devices
An output device can receive data from another device and generate output with that data, but it
cannot send data to another device. Examples of output devices include the following.
Monitor - Receives data from a computer (output) and displays that information as text
and images for users to view. It cannot accept data from a user and send that data to
another device.
Projector - Receives data from a computer (output) and displays, or projects, that
information as text and images onto a surface, like a wall or a screen. It cannot accept
data from a user and send that data to another device.
Speakers - Receive sound data from a computer and play the sounds for users to hear.
They cannot accept sound generated by users and send that sound to another device.
(9)
Input/output devices
An input/output device can receive data from users, or another device (input), and send data to
another device (output). Examples of input/output devices include the following.
CD-RW drive and DVD-RW drive - Receives data from a computer (input), to copy
onto a writable CD or DVD. Also, the drive sends data contained on a CD or DVD
(output) to a computer.
USB flash drive - Receives, or saves, data from a computer (input). Also, the drive sends
data to a computer or another device (output).
(10)
Following are some tips to Increase Computer Speed:
Reduce Items that Run on Startup
Use Microsoft Fix It
Remove Programs No Longer in Use
Clean Your Drive
Consider Switching from Internet Explorer to Chrome
Scan for and Remove Malware
Scan for and Remove Viruses and Spyware
Make Software Adjustments for Better Performance
Defrag Your Drive
Add More RAM
Upgrade to an SSD
from some “higher-level” language. Although there are many computer languages, relatively few
are widely used.
Machine and assembly languages are “low-level,” requiring a programmer to manage explicitly
all of a computer’s idiosyncratic features of data storage and operation. In contrast, high-level
languages shield a programmer from worrying about such considerations and provide a notation
that is more easily written and read by programmers.
Language Types:
1. Machine and assembly languages
A machine language consists of the numeric codes for the operations that a particular computer
can execute directly. The codes are strings of 0s and 1s, or binary digits (“bits”), which are
frequently converted both from and to hexadecimal (base 16) for human viewing and
modification. Machine language instructions typically use some bits to represent operations, such
as addition, and some to represent operands, or perhaps the location of the next instruction.
Machine language is difficult to read and write, since it does not resemble conventional
mathematical notation or human language, and its codes vary from computer to computer.
Assembly language is one level above machine language. It uses short mnemonic codes for
instructions and allows the programmer to introduce names for blocks of memory that hold data.
One might thus write “add pay, total” instead of “0110101100101000” for an instruction that
adds two numbers.
Assembly language is designed to be easily translated into machine language. Although blocks
of data may be referred to by name instead of by their machine addresses, assembly language
does not provide more sophisticated means of organizing complex information. Like machine
language, assembly language requires detailed knowledge of internal computer architecture. It is
useful when such details are important, as in programming a computer to interact with
input/output devices (printers, scanners, storage devices, and so forth).
2. Algorithmic languages
Algorithmic languages are designed to express mathematical or symbolic computations. They
can express algebraic operations in notation similar to mathematics and allow the use of
subprograms that package commonly used operations for reuse. They were the first high-level
languages.
FORTRAN
The first important algorithmic language was FORTRAN (formula translation), designed in 1957
by an IBM team led by John Backus. It was intended for scientific computations with real
numbers and collections of them organized as one- or multidimensional arrays. Its control
structures included conditional IF statements, repetitive loops (so-called DO loops), and a GOTO
statement that allowed nonsequential execution of program code. FORTRAN made it convenient
to have subprograms for common mathematical operations and built libraries of them.
FORTRAN was also designed to translate into efficient machine language. It was immediately
successful and continues to evolve.
ALGOL
ALGOL (algorithmic language) was designed by a committee of American and European
computer scientists during 1958–60 for publishing algorithms, as well as for doing computations.
Like LISP (described in the next section), ALGOL had recursive subprograms—procedures that
could invoke themselves to solve a problem by reducing it to a smaller problem of the same kind.
ALGOL introduced block structure, in which a program is composed of blocks that might
contain both data and instructions and have the same structure as an entire program. Block
structure became a powerful tool for building large programs out of small components.
ALGOL contributed a notation for describing the structure of a programming language, Backus–
Naur Form, which in some variation became the standard tool for stating the syntax (grammar)
of programming languages. ALGOL was widely used in Europe, and for many years it remained
the language in which computer algorithms were published. Many important languages, such
as Pascal and Ada (both described later), are its descendants.
LISP
LISP (list processing) was developed about 1960 by John McCarthy at the Massachusetts
Institute of Technology (MIT) and was founded on the mathematical theory of
recursive functions (in which a function appears in its own definition). A LISP program is a
function applied to data, rather than being a sequence of procedural steps as in FORTRAN and
ALGOL. LISP uses a very simple notation in which operations and their operands are given in a
parenthesized list. For example, (+ a (* b c)) stands for a + b*c. Although this appears awkward,
the notation works well for computers. LISP also uses the list structure to represent data, and,
because programs and data use the same structure, it is easy for a LISP program to operate on
other programs as data.
LISP became a common language for artificial intelligence (AI) programming, partly owing to
the confluence of LISP and AI work at MIT and partly because AI programs capable of
“learning” could be written in LISP as self-modifying programs. LISP has evolved through
numerous dialects, such as Scheme and Common LISP.
C
The C programming language was developed in 1972 by Dennis Ritchie at Bell Laboratories,
part of the AT&T Corporation, for programming computer operating systems. Its capacity to structure
data and programs through the composition of smaller units is comparable to that of ALGOL. It
uses a compact notation and provides the programmer with the ability to operate with the
addresses of data as well as with their values. This ability is important in systems programming,
and C shares with assembly language the power to exploit all the features of a computer’s
internal architecture. C, along with its descendant C++, remains one of the most common
languages.
3. Business-oriented languages
COBOL
COBOL (common business oriented language) has been heavily used by businesses since its
inception in 1959. A committee of computer manufacturers and users and U.S. government
organizations established CODASYL (Committee on Data Systems and Languages) to develop
and oversee the language standard in order to ensure its portability across diverse systems.
COBOL uses an English-like notation—novel when introduced. Business computations organize
and manipulate large quantities of data, and COBOL introduced the record data structure for
such tasks. A record clusters heterogeneous data such as a name, ID number, age, and address
into a single unit. This contrasts with scientific languages, in which homogeneous arrays of
numbers are common. Records are an important example of “chunking” data into a single object,
and they appear in nearly all modern languages.
SQL
SQL (structured query language) is a language for specifying the organization of databases
(collections of records). Databases organized with SQL are called relational because SQL
provides the ability to query a database for information that falls in a given relation. For
example, a query might be “find all records with both last_name Smith and city New York.”
Commercial database programs commonly use a SQL-like language for their queries.
4. Education-oriented languages
BASIC
BASIC (beginner’s all-purpose symbolic instruction code) was designed at Dartmouth College in
the mid-1960s by John Kemeny and Thomas Kurtz. It was intended to be easy to learn by
novices, particularly non-computer science majors, and to run well on a time-sharing
computer with many users. It had simple data structures and notation and it was interpreted: a
BASIC program was translated line-by-line and executed as it was translated, which made it easy
to locate programming errors.
Its small size and simplicity also made BASIC a popular language for early personal computers.
Its recent forms have adopted many of the data and control structures of other contemporary
languages, which makes it more powerful but less convenient for beginners.
Pascal
About 1970 Niklaus Wirth of Switzerland designed Pascal to teach structured programming,
which emphasized the orderly use of conditional and loop control structures without GOTO
statements. Although Pascal resembled ALGOL in notation, it provided the ability to define data
types with which to organize complex information, a feature beyond the capabilities of ALGOL
as well as FORTRAN and COBOL. User-defined data types allowed the programmer to
introduce names for complex data, which the language translator could then check for correct
usage before running a program.
During the late 1970s and ’80s, Pascal was one of the most widely used languages for
programming instruction. It was available on nearly all computers, and, because of its
familiarity, clarity, and security, it was used for production software as well as for education.
Logo
Logo originated in the late 1960s as a simplified LISP dialect for education; Seymour Papert and
others used it at MIT to teach mathematical thinking to schoolchildren. It had a more
conventional syntax than LISP and featured “turtle graphics,” a simple method for
generating computer graphics. (The name came from an early project to program a turtlelike
robot.) Turtle graphics used body-centred instructions, in which an object was moved around a
screen by commands, such as “left 90” and “forward,” that specified actions relative to the
current position and orientation of the object rather than in terms of a fixed framework. Together
with recursive routines, this technique made it easy to program intricate and attractive patterns.
Hypertalk
Hypertalk was designed as “programming for the rest of us” by Bill Atkinson for Apple’s
Macintosh. Using a simple English-like syntax, Hypertalk enabled anyone to combine text,
graphics, and audio quickly into “linked stacks” that could be navigated by clicking with
a mouse on standard buttons supplied by the program. Hypertalk was particularly popular among
educators in the 1980s and early ’90s for classroom multimedia presentations. Although
Hypertalk had many features of object-oriented languages (described in the next section), Apple
did not develop it for other computer platforms and let it languish; as Apple’s market share
declined in the 1990s, a new cross-platform way of displaying multimedia left Hypertalk all but
obsolete.
Object-oriented languages
Object-oriented languages help to manage complexity in large programs. Objects package data
and the operations on them so that only the operations are publicly accessible and internal details
of the data structures are hidden. This information hiding made large-scale programming easier
by allowing a programmer to think about each part of the program in isolation. In addition,
objects may be derived from more general ones, “inheriting” their capabilities. Such an
object hierarchy made it possible to define specialized objects without repeating all that is in the
more general ones.
Object-oriented programming began with the Simula language (1967), which added information
hiding to ALGOL. Another influential object-oriented language was Smalltalk (1980), in which a
program was a set of objects that interacted by sending messages to one another.
C++
The C++ language, developed by Bjarne Stroustrup at AT&T in the mid-1980s, extended C by
adding objects to it while preserving the efficiency of C programs. It has been one of the most
important languages for both education and industrial programming. Large parts of many
operating systems, such as the Microsoft Corporation’s Windows 98, were written in C++.
Ada
Ada was named for Augusta Ada King, countess of Lovelace, who was an assistant to the 19th-
century English inventor Charles Babbage, and is sometimes called the first computer
programmer. Ada, the language, was developed in the early 1980s for the U.S. Department of
Defense for large-scale programming. It combined Pascal-like notation with the ability to
package operations and data into independent modules. Its first form, Ada 83, was not fully
object-oriented, but the subsequent Ada 95 provided objects and the ability to
construct hierarchies of them. While no longer mandated for use in work for the Department of
Defense, Ada remains an effective language for engineering large programs.
Java
In the early 1990s, Java was designed by Sun Microsystems, Inc., as a programming language
for the World Wide Web (WWW). Although it resembled C++ in appearance, it was fully
object-oriented. In particular, Java dispensed with lower-level features, including the ability to
manipulate data addresses, a capability that is neither desirable nor useful in programs for
distributed systems. In order to be portable, Java programs are translated by a Java Virtual
Machine specific to each computer platform, which then executes the Java program. In addition
to adding interactive capabilities to the Internet through Web “applets,” Java has been widely
used for programming small and portable devices, such as mobile telephones.
Visual Basic
Visual Basic was developed by Microsoft to extend the capabilities of BASIC by adding objects
and “event-driven” programming: buttons, menus, and other elements of graphical user
interfaces (GUIs). Visual Basic can also be used within other Microsoft software to program
small routines.
5. Declarative languages
Declarative languages, also called nonprocedural or very high level, are programming languages
in which (ideally) a program specifies what is to be done rather than how to do it. In such
languages there is less difference between the specification of a program and its implementation
than in the procedural languages described so far. The two common kinds of declarative
languages are logic and functional languages.
i. Logic programming languages, of which PROLOG (programming in logic) is the best known,
state a program as a set of logical relations (e.g., a grandparent is the parent of a parent of
someone). Such languages are similar to the SQL database language. A program is executed by
an “inference engine” that answers a query by searching these relations systematically to make
inferences that will answer a query. PROLOG has been used extensively in natural language
processing and other AI programs.
ii. Functional languages have a mathematical style. A functional program is constructed by
applying functions to arguments. Functional languages, such as LISP, ML, and Haskell, are
used as research tools in language development and in automated theorem proving.
6. Document formatting languages
PostScript
PostScript is a page-description language used to describe documents in terms that can be
interpreted by a personal computer to display the document on its screen or by a
microprocessor in a printer or a typesetting device.
PostScript commands can, for example, precisely position text, in various fonts and sizes, draw
images that are mathematically described, and specify colour or shading. PostScript uses postfix,
also called reverse Polish notation, in which an operation name follows its arguments. Thus,
“300 600 20 270 arc stroke” means: draw (“stroke”) a 270-degree arc with radius 20 at location
(300, 600). Although PostScript can be read and written by a programmer, it is normally
produced by text formatting programs, word processors, or graphic display tools.
The success of PostScript is due to its specification’s being in the public domain and to its being
a good match for high-resolution laser printers. It has influenced the development of printing
fonts, and manufacturers produce a large variety of PostScript fonts.
SGML
SGML (standard generalized markup language) is an international standard for the definition of
markup languages; that is, it is a metalanguage. Markup consists of notations called tags that
specify the function of a piece of text or how it is to be displayed. SGML emphasizes descriptive
markup, in which a tag might be “<emphasis>.” Such a markup denotes the document function,
and it could be interpreted as reverse video on a computer screen, underlining by a typewriter, or
italics in typeset text.
SGML is used to specify DTDs (document type definitions). A DTD defines a kind of document,
such as a report, by specifying what elements must appear in the document—e.g., <Title>—and
giving rules for the use of document elements, such as that a paragraph may appear within a table
entry but a table may not appear within a paragraph. A marked-up text may be analyzed by a
parsing program to determine if it conforms to a DTD. Another program may read the markups
to prepare an index or to translate the document into PostScript for printing. Yet another might
generate large type or audio for readers with visual or hearing disabilities.
7. World Wide Web display languages
HTML
The World Wide Web is a system for displaying text, graphics, and audio retrieved over
the Internet on a computer monitor. Each retrieval unit is known as a Web page, and such pages
frequently contain “links” that allow related pages to be
retrieved. HTML (hypertext markup language) is the markup language for encoding Web pages.
It was designed by Tim Berners-Lee at the CERN nuclear physics laboratory in Switzerland
around 1990 and is defined by an SGML DTD. HTML markup tags specify document
elements such as headings, paragraphs, and tables. They mark up a document for display by
a computer program known as a Web browser. The browser interprets the tags, displaying the
headings, paragraphs, and tables in a layout that is adapted to the screen size and fonts available
to it.
HTML documents also contain anchors, which are tags that specify links to other Web pages. An
anchor has the form <A HREF= “http://www.britannica.com”> Encyclopædia Britannica</A>,
where the quoted string is the URL (uniform resource locator) to which the link points (the Web
“address”) and the text following it is what appears in a Web browser, underlined to show that it
is a link to another page. What is displayed as a single page may also be formed from multiple
URLs, some containing text and others graphics.
XML
HTML does not allow one to define new text elements; that is, it is not
extensible. XML (extensible markup language) is a simplified form of SGML intended for
documents that are published on the Web. Like SGML, XML uses DTDs to define document
types and the meanings of tags used in them. XML adopts conventions that make it easy to parse,
such as that document entities are marked by both a beginning and an ending tag, such as
<BEGIN>…</BEGIN>. XML provides more kinds of hypertext links than HTML, such as
bidirectional links and links relative to a document subsection.
Because an author may define new tags, an XML DTD must also contain rules that instruct a
Web browser how to interpret them—how an entity is to be displayed or how it is to generate an
action such as preparing an e-mail message.
(2)
Variable Declaration in C++
A variable declaration provides assurance to the compiler that a variable exists with the
given type and name, so that the compiler can proceed with further compilation without
needing complete details about the variable. A variable declaration has its meaning at
compile time only; the compiler needs the actual variable definition at the time of linking
of the program.
A variable declaration is useful when you are using multiple files and you define your variable in
one of the files, which will be available at the time of linking of the program. You
use the extern keyword to declare a variable at any place. Though you can declare a variable
multiple times in your C++ program, it can be defined only once in a file, a function, or a
block of code.
To declare a variable, you need to know what data type it is going to be of and what its name
would be. The variable name has constraints on what you can name it. Following are the rules
for naming variables −
Variable names in C++ can range from 1 to 255 characters.
All variable names must begin with a letter of the alphabet or an underscore (_).
After the first character, variable names can also contain letters and digits.
Variable names are case sensitive.
#include <iostream>
using namespace std;

int main()
{
    int x = 5;
    int y = 2;
    int Result;
    Result = x * y;
    cout << Result;   // prints 10
    return 0;
}
Parallelogram: Represents the Inputs given to the process or an Output generated by the
process.
#include <cstdio>

int main()
{
    int radius = 5;             /* radius must be given a value before use */
    float PI = 3.14f, ci;
    ci = 2 * PI * radius;
    printf("\n Circumference of a Circle is : %f ", ci);
    return 0;
}
#include <iostream>
using namespace std;

int main()
{
    int firstNumber, secondNumber, sumOfTwoNumbers;

    // Reads the two numbers, then computes and prints their sum
    cout << "Enter two integers: ";
    cin >> firstNumber >> secondNumber;
    sumOfTwoNumbers = firstNumber + secondNumber;
    cout << firstNumber << " + " << secondNumber << " = " << sumOfTwoNumbers;
    return 0;
}
2015 Paper
MCQs
1. C
2. C
3. B
5. D
6. D
7. C
8. B
9. B
10. C
Q.2. Short Answers
(1)
See previous paper.
(2)
The gigabyte is a multiple of the unit byte for digital information. The prefix giga means 10^9 in
the International System of Units (SI). Therefore, one gigabyte is one billion bytes. The unit
symbol for the gigabyte is GB. This definition is used in all contexts of science, engineering,
business, and many areas of computing, including hard drive, solid-state drive, and tape
capacities, as well as data transmission speeds.
See previous paper for the rest of the answer.
(3)
In computing, a device driver is a computer program that operates or controls a particular type of
device that is attached to a computer. A driver provides a software interface to hardware devices,
enabling operating systems and other computer programs to access hardware functions without
needing to know precise details about the hardware being used.
A driver communicates with the device through the computer bus or communications subsystem
to which the hardware connects. When a calling program invokes a routine in the driver, the
driver issues commands to the device. Once the device sends data back to the driver, the driver
may invoke routines in the original calling program. Drivers are hardware dependent and
operating-system-specific. They usually provide the interrupt handling required for any
necessary asynchronous time-dependent hardware interface.
(4)
Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-
malware, is a computer program used to prevent, detect, and remove malware. Antivirus
software was originally developed to detect and remove computer viruses, hence the name.
However, with the proliferation of other kinds of malware, antivirus software started to provide
protection from other computer threats. In particular, modern antivirus software can protect users
from: malicious browser helper objects (BHOs), browser hijackers, ransomware, keyloggers,
backdoors, rootkits, trojan horses, worms, malicious LSPs, dialers, fraudtools, adware and
spyware.
(5)
See previous paper.
(6)
45.
(7)
382.
(8)
I/O devices are the pieces of hardware used by a human (or other system) to communicate with a
computer. For instance, a keyboard or computer mouse is an input device for a computer, while
monitors and printers are output devices. Devices for communication between computers, such
as modems and network cards, typically perform both input and output operations.
Mice and keyboards take physical movements that the human user makes and convert them
into input signals that a computer can understand; the output from these devices is the computer's
input. Similarly, printers and monitors take signals that a computer outputs as input, and they
convert these signals into a representation that human users can understand. From the human
user's perspective, the process of reading or seeing these representations is receiving output; this
type of interaction between computers and humans is studied in the field of human–computer
interaction. I/O devices are thus essential for humans to use computers.
(9)
The basic difference between desktop and laptop computers is that laptops are designed for
portability. Laptops can easily be carried in a case or bag, while a desktop is typically set up on a
desk, table or counter with the intention that it stay there for use.
(10)
Comments can be used to explain C++ code and to make it more readable. They can also be used
to prevent execution when testing alternative code. Comments can be single-line or multi-line.
Single-line comments start with two forward slashes ( // ); multi-line comments are enclosed between /* and */.
Comments are normally stripped out during preprocessing, so the compiler itself never sees them
at all. They can (and normally do) slow compilation slightly, because the preprocessor has to read
through the entire comment to find its end before the subsequent code can be passed through to
the compiler. Unless you include truly gargantuan comments (e.g., megabytes), the difference
probably won't be noticeable.
2. Now square the digit at unit's place. The square will be one of
{0, 1, 4, 9, 16, 25, 36, 49, 64, 81}. The unit's-place digit of this square is the unit's-place
digit of the final answer, so write it down. If the square is a two-digit number
(16 to 81 in the set above), write only its unit's-place digit in the final answer and
carry the remaining digit.
3. Multiply the actual number to be squared by part 'B' (the remaining part, other than the
digit at unit's place, as described in step 1).
4. Multiply parts 'A' and 'B'.
5. Add the result of step 3 to the result of step 4.
6. Add the digit carried in step 2 to the sum from step 5.
7. Now write this sum in front of the digit we wrote at the unit's place of the final answer in step 2.
8. The number obtained in step 7 is the square of our number.
3. EEPROM (Electrically Erasable Programmable Read-Only Memory) – The data can
be erased by applying an electric field; there is no need for ultraviolet light. We can erase
selected portions of the chip.
A program is usually not limited to a linear sequence of instructions. During execution it may
branch, repeat code or take decisions. For that purpose, C++ provides control structures that
serve to specify what has to be done by our program, when, and under which circumstances.
// Volume and surface area of a sphere (assumed from the single radius parameter)
#include <iostream>
using namespace std;
// Initializing value of PI
const float pi = 3.14159;
float volume(float r) { return (4.0f / 3.0f) * pi * r * r * r; }
float surface_area(float r) { return 4 * pi * r * r; }
// Driver function
int main()
{
float radius = 12;
float vol, sur_area;
// Function calls
vol = volume(radius);
sur_area = surface_area(radius);
cout << "Volume = " << vol << endl;
cout << "Surface area = " << sur_area << endl;
return 0;
}
(ii)
(ii)
#include<iostream>
#include<conio.h>
using namespace std;
int main()
{
long int i,j;
long int sum=0;
float avg;
cout<<"series of numbers from 1 to 100:\n";
for(i=1;i<=100;i++)
{
cout<<i<<endl;
sum=sum+i;
}
cout<<"\nsum of integers in the range of 1 to 100="<<sum<<endl;
avg=sum/100.0; // divide by 100.0 to avoid integer division
cout<<"average="<<avg;
cout<<"\nprint even series in the range of 1 to 100:\n";
for(j=1;j<=100;j++)
{
if(j%2==0)
cout<<j<<endl;
}
getch();
return 0;
}
Paper 2016
Q.2. Short Answers
(1)
See previous paper.
(2)
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol
is MB. The unit prefix mega is a multiplier of 1000000 in the International System of Units.
Therefore, one megabyte is one million bytes of information.
See previous paper for rest of the answer.
(3)
A CPU's clock speed represents how many cycles per second it can execute. Clock speed is also
referred to as clock rate, PC frequency and CPU frequency. This is measured in gigahertz, which
refers to billions of pulses per second and is abbreviated as GHz.
A PC’s clock speed is an indicator of its performance and how rapidly a CPU can process data
(move individual bits). A higher frequency (bigger number) suggests better performance in
common tasks, such as gaming. A CPU with higher clock speed is generally better if all other
factors are equal, but a mixture of clock speed, how many instructions the CPU can process per
cycle (also known as instructions per clock cycle/clock, or IPC for short) and the number of
cores the CPU has all help determine overall performance.
(4)
Microsoft Word (or simply Word) is a word processor developed by Microsoft. It was first
released on October 25, 1983 under the name Multi-Tool Word for Xenix systems. Word
contains rudimentary desktop publishing capabilities and is the most widely used word
processing program on the market. Word files are commonly used as the format for sending text
documents via e-mail because almost every user with a computer can read a Word document by
using the Word application, a Word viewer or a word processor that imports the Word format.
(5)
See previous paper.
(6)
35.
(7)
3E6.
(8)
See previous paper.
(9)
See previous paper.
(10)
The getch() function reads a single character from the keyboard but does not echo it on the
screen. Because of this behaviour, getch() is commonly used to hold the output window open
until any key is pressed on the keyboard.
Paper 2017
MCQs
1. D
2. C
3. B
5. C
6. D
7. C
8. A
9. D
10. B
Q.2. Short Answers
(1)
Algorithm
An algorithm is used to provide a solution to a particular problem in the form of well-defined steps.
Whenever you use a computer to solve a particular problem, the steps which lead to the solution
should be properly communicated to the computer. While executing an algorithm on a computer,
several operations such as additions and subtractions are combined to perform more complex
mathematical operations. Algorithms can be expressed using natural language, flowcharts, etc.
Program
A program is a set of instructions for the computer to follow. The machine can't read a program
directly, because it only understands machine code. But you can write instructions in a programming
language, and a compiler or interpreter then makes them understandable to the computer.
(2)
The kilobyte is a multiple of the unit byte for digital information. The International System of
Units (SI) defines the prefix kilo as 1000 (103); per this definition, one kilobyte is 1000
bytes. The internationally recommended unit symbol for the kilobyte is kB. In some areas
of information technology, particularly in reference to digital memory capacity, kilobyte instead
denotes 1024 (210) bytes.
See previous paper for the remaining answer.
(3)
(4)
See previous paper.
(5)
Int, short for "integer," is a fundamental variable type built into the compiler and used to
define numeric variables holding whole numbers. Other data types include float and
double. C, C++, C# and many other programming languages recognize int as a data type.
Float is a shortened term for "floating point." By definition, it's a fundamental data type
built into the compiler that's used to define numeric values with floating decimal points.
C, C++, C# and many other programming languages recognize float as a data type. Other
common data types include int and double.
A for loop is a repetition control structure that allows you to efficiently write a loop that
needs to execute a specific number of times.
The if statement allows you to control if a program enters a section of code or not based on
whether a given condition is true or false.
(ii)
Program:
#include<iostream>
#include<conio.h>
using namespace std;
int main()
{
long int i,j;
long int sum=0;
float avg;
cout<<"series of numbers from 1 to 100:\n";
for(i=1;i<=100;i++)
{
cout<<i<<endl;
sum=sum+i;
}
cout<<"\nsum of integers in the range of 1 to 100="<<sum<<endl;
avg=sum/100.0; // divide by 100.0 to avoid integer division
cout<<"average="<<avg;
cout<<"\nprint even series in the range of 1 to 100:\n";
for(j=1;j<=100;j++)
{
if(j%2==0)
cout<<j<<endl;
}
getch();
return 0;
}
#include<iostream>
#include<conio.h>
#include<math.h>
using namespace std;
int main()
{
const float pi=3.14, g=9.8;
long double T[50];
int l,a=0;
cout<<"values of Time period are:\n";
for(l=1;l<=20;l=l+5)
{
T[a]=2*pi*sqrt(l/g); // simple pendulum: T = 2*pi*sqrt(l/g)
cout<<"Time period against length l=:\t"<<l<<"\t\t T="<<T[a]<<endl;
a++;
}
getch();
return 0;
}
(ii)
Program:
#include<iostream>
#include<conio.h>
#include<math.h>
using namespace std;
int main()
{
const float pi=3.14;
int r;
float A,C;
cout<<"enter the radius:\n";
cin>>r;
A=pi*r*r; //Area of circle formula
C=2*pi*r; //Circumference of circle formula
cout<<"Area of circle having Radius r="<<r<<" is equal to : A="<<A<<endl;
cout<<"Circumference of circle having Radius r="<<r<<" is equal to : C="<<C<<endl;
getch();
}
2018 Paper
MCQs
1. B
2. A
3. A
4. D
5. D
6. A
7. B
9. D
10. B
Q.2. Short Answers
(1)
See previous paper.
(2)
202
(3)
Monitors
Monitors, commonly called Visual Display Units (VDU), are the main output device of a
computer. A monitor forms images from tiny dots, called pixels, that are arranged in a rectangular
form. The sharpness of the image depends upon the number of pixels.
There are two kinds of viewing screen used for monitors.
Cathode-Ray Tube (CRT)
Flat-Panel Display
Printers
Printer is an output device, which is used to print information on paper.
There are two types of printers −
Impact Printers
Non-Impact Printers
A monitor is a constantly refreshing output device designed for dynamic images. Monitors refresh
themselves at certain rates; if a picture changes 24 times a second it looks like continuous
motion to the eye, which is the standard used for films. Most inexpensive monitors refresh at
at least twice that rate.
A printer, on the other hand, is a device that builds one static image and captures it
permanently on a suitable medium, paper being the most common. Once done, the image
on the paper does not refresh and does not move; it remains the same at any two points in
time.
Both are output devices.
(4)
See previous paper.
(5)
Loops are used for executing a block of program statements repeatedly until the given loop
condition returns false.
How does the while loop work?
In a while loop, the condition is evaluated first; if it returns true, the statements inside the
loop execute. This repeats until the condition returns false, at which point control comes out
of the loop and jumps to the next statement in the program after the while loop.
for Loop
A for loop is a repetition control structure which allows us to write a loop that is executed a
specific number of times. The loop enables us to perform n steps together in one line.
In a for loop, a loop variable is used to control the loop. First initialize this loop variable to some
value, then check whether it is less than or greater than the counter value. If the condition is
true, the loop body is executed and the loop variable is updated. These steps repeat until the
exit condition is met.
#include<iostream>
#include<conio.h>
using namespace std;
int main()
{
int i, sum=0;
float avg;
for(i=2;i<=88;i++)
{
if(i%2==0)
sum=sum+i;
}
avg=sum/44.0; // there are 44 even numbers between 2 and 88
cout<<"\nsum of even integers in the range of 2 to 88="<<sum<<endl;
cout<<"average="<<avg;
cout<<"\nprint odd series in the range of 2 to 88:\n";
for(i=2;i<=88;i++)
{
if(i%2==1)
cout<<i<<endl;
}
getch();
return 0;
}
y2=pow(e,((2*t-4)/4));
y3=cos(sin(pow(e,t)));
cout<<"value of y1 is ="<<y1<<endl;
cout<<"value of y2 is ="<<y2<<endl;
cout<<"value of y3 is ="<<y3<<endl;
getch();
}
b) See previous paper.
main() Function in C++
The main() function is the entry point of any C++ program. It is the point at which execution of
the program starts. When a C++ program is executed, control goes directly to the
main() function. Every C++ program must have a main() function.
2019 Paper
MCQs
1. D
2. D
3. B
4. C
5. D
6. A
8. B
9. D
10. B
Q.2. Short Answers
(1)
#include<conio.h>
#include<math.h>
#include<iostream>
#include<windows.h>
using namespace std;
// gotoxy() is not a standard function; on Windows it can be defined
// with the console API as follows.
void gotoxy(int x, int y)
{
COORD c = { (SHORT)x, (SHORT)y };
SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), c);
}
void drawAxis()
{
for(int x=0;x<=78;x++)
{
gotoxy(x,100);
cout<<"_";
}
for(int y=0;y<=200;y++)
{
gotoxy(39,y);
cout<<"|";
}
}
Input device
In computing, an input device is a piece of computer hardware equipment used to provide data
and control signals to an information processing system such as a computer or information
appliance. Examples of input devices include keyboards, mice, scanners, digital cameras,
joysticks, and microphones.
Tactile input device
Tactile input device means a device such as a keyboard with which an elector provides
information to a voting system by touching the device.
(2)
199
(3)
See previous paper.
(4)
// Examination Hall
#include<iostream>
#include<conio.h>
#include<math.h>
using namespace std;
int main()
{
long int i;
cout<<"number\tsquare\tcube\n";
for(i=1;i<=500;i++)
{
cout<<i<<"\t"<<pow(i,2)<<"\t"<<pow(i,3)<<endl;
}
cout<<"\nprint odd series in the range of 1 to 500:\n";
for(i=1;i<=500;i++)
{
if(i%2==1) // print only the odd numbers
cout<<i<<endl;
}
getch();
return 0;
}
#include<iostream>
#include<conio.h>
#include<math.h>
using namespace std;
int main()
{
int n;
const float pi=3.14;
long double t,y1,y2,y3;
cout<<"enter the number:\n";
cin>>n;
t=(n*pi)/180; // convert degrees to radians
y1=sin(2*t);
y2=cos(4*t);
y3=y1+y2;
cout<<"value of y1 is ="<<y1<<endl;
cout<<"value of y2 is ="<<y2<<endl;
cout<<"value of y3 is ="<<y3<<endl;
getch();
return 0;
}
b) Program:
#include<iostream>
#include<conio.h>
using namespace std;
int main()
{
int i,a,n;
n=5;
cout<<"Table of number 5 :\n";
for(i=1;i<=10;i++)
{
a=n*i;
cout<<n<<"x"<<i<<"="<<a<<endl;
}
getch();
}
Recommended books:
1. Numerical recipes: the art of scientific computing - Cambridge University Press
2. Mathematica for physics by Robert L. Zimmerman