


Part A Unit I
1. Define Artificial Intelligence.
Artificial Intelligence is the exciting new effort to make computers think: machines with minds, in the full and literal sense. It is a field of study that seeks to explain and emulate intelligent behavior in terms of computational processes; the study of the computations that make it possible to perceive, reason, and act.

2. List the characteristics of an intelligent agent.
Rationality, adaptability, autonomy.

3. Define an agent.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon the environment through effectors.

4. Define a rational agent.
A rational agent is one that does the right thing, where the right thing is whatever causes the agent to be most successful. That leaves the problem of deciding how and when to evaluate the agent's success.

5. Define an omniscient agent.
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.

6. What are the factors on which a rational agent should depend at any given time?
1. The performance measure that defines the degree of success.
2. Everything that the agent has perceived so far.
3. What the agent knows about the environment.
4. The actions that the agent can perform.

7. Define an ideal rational agent.
For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

8. Define an agent program.
An agent program is a function that implements the agent's mapping from percepts to actions.

9. List the various types of agent programs.
Simple reflex agent program.
Agent that keeps track of the world.
Goal-based agent program.
Utility-based agent program.

10. State the various properties of environments.
Accessible vs. inaccessible: If an agent's sensory apparatus gives it access to the complete state of the environment, the environment is accessible to the agent.
Deterministic vs. nondeterministic: If the next state of the environment is completely determined by the current state and the actions selected by the agent, the environment is deterministic.
Episodic vs. nonepisodic: The agent's experience is divided into episodes, each consisting of the agent perceiving and then acting. The quality of an action depends only on the episode itself, because subsequent episodes do not depend on actions taken in previous episodes.
Discrete vs. continuous: If there is a limited number of distinct, clearly defined percepts and actions, the environment is discrete.

11. What are the phases involved in designing a problem-solving agent?
The three phases are: problem formulation, searching for a solution, and execution.

12. List the basic elements to be included in a problem definition.
Initial state, operators, successor function, state space, path, goal test, path cost.

13. Mention the criteria for the evaluation of a search strategy.
There are four criteria: completeness, time complexity, space complexity, and optimality.

14. List the various uninformed search strategies.
a. Breadth-first search (BFS)
b. Uniform-cost search
c. Depth-first search (DFS)
d. Depth-limited search
e. Iterative deepening search
f. Bidirectional search

15. List the various informed search strategies.
Best-first search: greedy search, A* search.
Memory-bounded search: iterative deepening A* (IDA*) search, simplified memory-bounded A* (SMA*) search.
Iterative improvement search: hill climbing, simulated annealing.

16. Define iterative deepening search.

Iterative deepening is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits: first depth 0, then depth 1, then depth 2, and so on.

17. Define CSP.
A constraint satisfaction problem (CSP) is a special kind of problem that satisfies some additional structural properties beyond the basic requirements for problems in general. In a CSP, the states are defined by the values of a set of variables, and the goal test specifies a set of constraints that the values must obey.

18. Give the drawback of DFS.
The drawback of DFS is that it can get stuck going down the wrong path. Many problems have very deep or even infinite search trees, so DFS will never be able to recover from an unlucky choice at one of the nodes near the top of the tree. DFS should therefore be avoided for search trees with large or infinite maximum depths.

19. What is bidirectional search?
The idea behind bidirectional search is to search simultaneously both forward from the initial state and backward from the goal, and to stop when the two searches meet in the middle.

20. Explain depth-limited search.
Depth-limited search avoids the pitfalls of DFS by imposing a cutoff on the maximum depth of a path. This cutoff can be implemented by a special depth-limited search algorithm or by using the general search algorithm with operators that keep track of depth.

21. Give the procedure of IDA* search.
Each iteration of IDA* is a depth-first search, just as in regular iterative deepening, but the depth-first search is modified to use an f-cost limit rather than a depth limit. Thus each iteration expands all nodes inside the contour for the current f-cost.

22. What is the advantage of memory-bounded search techniques?
We can reduce the space requirements of A* with memory-bounded algorithms such as IDA* and SMA*.

23. What do you mean by local maxima with respect to search techniques?
A local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum.
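The local-maximum pitfall above is easy to demonstrate with a minimal hill-climbing sketch. This is an illustrative toy (the landscape values and function names are invented here, not taken from any textbook code):

```python
def hill_climb(start, neighbors, value):
    """Greedy hill climbing: keep moving to the best-valued neighbor
    until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # a peak -- possibly only a local maximum
        current = best

# A 1-D landscape with a local maximum at x=2 (value 5) and the
# global maximum at x=8 (value 9).
landscape = {0: 1, 1: 3, 2: 5, 3: 2, 4: 1, 5: 4, 6: 6, 7: 8, 8: 9, 9: 7}
value = lambda x: landscape[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if n in landscape]

print(hill_climb(0, neighbors, value))  # 2 -- stuck on the local maximum
print(hill_climb(5, neighbors, value))  # 8 -- reaches the global maximum
```

Restarting from several random initial states, or occasionally accepting downhill moves as in simulated annealing, are the standard escapes from this trap.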

Unit II

1. Define a knowledge base.
A knowledge base is the central component of a knowledge-based agent; it is a set of representations of facts about the world.

2. Define a sentence.
Each individual representation of facts is called a sentence. Sentences are expressed in a language called a knowledge representation language.

3. Define an inference procedure.
Given a knowledge base and a sentence, an inference procedure reports whether or not the sentence is entailed by the knowledge base. An inference procedure i can be described by the sentences it can derive: if i can derive a sentence alpha from the knowledge base KB, we say that alpha is derived from KB, or that i derives alpha from KB.

4. What are the three levels in describing a knowledge-based agent?
Logical level, implementation level, and knowledge level (or epistemological level).

5. Define syntax.
Syntax is the arrangement of words. The syntax of a knowledge representation language describes the possible configurations that can constitute sentences, i.e., how to make sentences.

6. Define semantics.
The semantics of the language defines the truth of each sentence with respect to each possible world. With this semantics, when a particular configuration exists within an agent, the agent believes the corresponding sentence.

7. Define logic.
A logic consists of:
i. A formal system for describing states of affairs, consisting of (a) syntax and (b) semantics.
ii. A proof theory: a set of rules for deducing the entailments of a set of sentences.

8. What is entailment?
Entailment is the relation between a set of sentences and a sentence that necessarily follows from them.

9. Define a complete inference procedure.
An inference procedure is complete if it can derive all true conclusions from a set of premises.

10. Define interpretation.
An interpretation specifies exactly which objects, relations, and functions are referred to by the constant, predicate, and function symbols.

11. What are the basic components of propositional logic?
i. Logical constants (True, False)
ii. Proposition symbols and logical connectives.

12. Define the Modus Ponens rule in propositional logic.
Modus Ponens is the standard pattern of inference that can be applied to derive chains of conclusions leading to the desired goal: from alpha => beta and alpha, infer beta.

13. Define term.
A term is a logical expression that refers to an object. Constant symbols are therefore terms.

14. Define atomic sentences.
An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms.

15. What is the use of unification?
Using unification to identify appropriate substitutions for variables eliminates the instantiation step in first-order proofs, making the process much more efficient.

16. What is meant by memoization?
Backward chaining suffers from redundant inferences and infinite loops; these can be alleviated by memoization (caching previously derived results).

17. What factor determines the selection of a forward or backward reasoning approach for an AI problem?
Forward chaining is a form of data-driven reasoning. It can be used by an agent to derive conclusions from incoming percepts, often without a specific query in mind. Backward chaining is a form of goal-directed reasoning. It is useful for answering specific questions such as "What shall I do now?" and "Where are my keys?"

18. What are the limitations of using propositional logic to represent a knowledge base?
Propositional logic does not scale to environments of unbounded size because it lacks the expressive power to deal concisely with time, space, and universal patterns of relationships among objects.

19. What is the need for resolution?
The generalized resolution inference rule provides a complete proof system for first-order logic, using knowledge bases in conjunctive normal form.
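The unification step from question 15 can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions (variables are lowercase strings, compound terms are tuples, constants are capitalized strings, and the occurs check is omitted), not the textbook algorithm verbatim:

```python
def is_var(t):
    # Convention assumed by this sketch: variables are lowercase strings
    # such as 'x'; constants are capitalized strings such as 'John';
    # compound terms are tuples such as ('Knows', 'John', 'x').
    return isinstance(t, str) and t[0].islower()

def substitute(t, s):
    """Apply substitution s (a dict) to term t, following chains."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def unify(x, y, s=None):
    """Return a most general unifier of x and y, or None on failure."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# Knows(John, x) unifies with Knows(y, Mother(y))
# via the substitution {y: John, x: Mother(John)}.
s = unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y')))
print(substitute('x', s))  # ('Mother', 'John')
```

This is exactly the substitution-finding step that lets a first-order prover skip explicit instantiation.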

Unit III

1. What is meant by planning?
The task of coming up with a sequence of actions that will achieve a goal is called planning.

2. What are the functions of planning systems?
Planning systems are problem-solving algorithms that operate on explicit propositional (or first-order) representations of states and actions. These representations make possible the derivation of effective heuristics and the development of powerful and flexible algorithms for solving problems.

3. Write notes on the STRIPS and ADL languages.
The STRIPS language describes actions in terms of their preconditions and effects, and describes the initial and goal states as conjunctions of positive literals. The ADL language relaxes some of these constraints, allowing disjunction, negation, and quantifiers.

4. What is the need for POP algorithms?
Partial-order planning (POP) algorithms explore the space of plans without committing to a totally ordered sequence of actions. They work back from the goal, adding actions to the plan to achieve each subgoal. They are particularly effective on problems amenable to a divide-and-conquer approach.

5. Write notes on planning graphs.
A planning graph can be constructed incrementally, starting from the initial state. Each layer contains a superset of all the literals or actions that could occur at that time step and encodes mutual exclusion (mutex) relations among literals or actions that cannot co-occur. Planning graphs yield useful heuristics for state-space and partial-order planners and can be used directly in the GRAPHPLAN algorithm.

6. What is the need for the GRAPHPLAN algorithm?
The GRAPHPLAN algorithm processes the planning graph, using a backward search to extract a plan. It allows for some partial ordering among actions.

7. What is the function of the SATPLAN algorithm?
The SATPLAN algorithm translates a planning problem into propositional axioms and applies a satisfiability algorithm to find a model that corresponds to a valid plan. Several different propositional representations have been developed, with varying degrees of compactness and efficiency.

8. Write notes on HTN planning.
Hierarchical task network (HTN) planning allows the agent to take advice from the domain designer in the form of decomposition rules. This makes it feasible to create the very large plans required by many real-world applications.

9. Explain the importance of resources.
Many actions consume resources, such as money, gas, or raw materials. It is convenient to treat these resources as numeric measures in a pool rather than try to reason about, say, each individual coin and bill in the world. Actions can generate and consume resources, and it is usually cheap and effective to check partial plans for satisfaction of resource constraints before attempting further refinements.

10. Compare conditional planning and conformant planning.
Incomplete information can be dealt with by planning to use sensing actions to obtain the information needed. Conditional plans allow the agent to sense the world during execution to decide which branch of the plan to follow. In some cases, sensorless or conformant planning can be used to construct a plan that works without the need for perception. Both sensorless and conditional plans can be constructed by search in the space of belief states.

11. What is the use of execution monitoring?

Incorrect information results in unsatisfied preconditions for actions and plans. Execution monitoring detects violations of the preconditions for successful completion of the plan.

12. What is a replanning agent?
A replanning agent uses execution monitoring and splices in repairs as needed.

13. What is a continuous planning agent?
A continuous planning agent creates new goals as it goes and reacts in real time.

14. Compare multiagent planning and multibody planning.
Multiagent planning is necessary when there are other agents in the environment with which to cooperate, compete, or coordinate. Multibody planning constructs joint plans, using an efficient decomposition of joint action descriptions, but must be augmented with some form of coordination if two cooperative agents are to agree on which joint plan to execute.

15. Define a partial-order planner.
A partial-order planner is any planning algorithm that can place two actions into a plan without specifying which comes first.

16. What are the differences and similarities between problem solving and planning?
Both problem solving and planning involve finding sequences of actions that lead to desirable states. But planning is also capable of working back from an explicit goal description to minimize irrelevant actions, possesses autonomy, and can take advantage of problem decomposition.
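The preconditions-and-effects view of actions used throughout this unit can be sketched as follows. States are sets of ground literals; an action has preconditions, an add list, and a delete list. The predicate names and the action below are invented for illustration:

```python
# Minimal STRIPS-style sketch: an action applies when its preconditions
# are a subset of the state; its effect deletes and then adds literals.

def applicable(state, action):
    return action['pre'] <= state  # precondition literals all hold

def apply_action(state, action):
    return (state - action['del']) | action['add']

move_a_b = {
    'name': 'Move(A, B)',
    'pre': {'At(A)', 'Path(A,B)'},
    'add': {'At(B)'},
    'del': {'At(A)'},
}

state = {'At(A)', 'Path(A,B)'}
if applicable(state, move_a_b):
    state = apply_action(state, move_a_b)
print(state)  # {'At(B)', 'Path(A,B)'} (set order may vary)
```

A state-space planner searches over such states; a partial-order planner instead searches over partially built plans of such actions.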

Unit IV

1. Why does uncertainty arise?
Uncertainty arises because of both laziness and ignorance. It is inescapable in complex, dynamic, or inaccessible worlds.

2. What do you mean by uncertainty?
Uncertainty means that many of the simplifications that are possible with deductive inference are no longer valid.

3. What does the full joint probability distribution specify?
The full joint probability distribution specifies the probability of each complete assignment of values to random variables. It is usually too large to create or use in its explicit form.

4. When will an agent behave irrationally?
The axioms of probability constrain the possible assignments of probabilities to propositions. An agent that violates the axioms will behave irrationally in some circumstances.

5. What do you mean by absolute independence?
Absolute independence between subsets of random variables might allow the full joint distribution to be factored into smaller joint distributions. This could greatly reduce complexity, but it seldom occurs in practice.

6. What is the need for Bayes' rule?
Bayes' rule allows unknown probabilities to be computed from known conditional probabilities, usually in the causal direction. Applying Bayes' rule with many pieces of evidence will in general run into the same scaling problems as the full joint distribution.

7. Define conditional independence.
Conditional independence brought about by direct causal relationships in the domain might allow the full joint distribution to be factored into smaller, conditional distributions.

8. How does a naive Bayes model work?
The naive Bayes model assumes the conditional independence of all effect variables given a single cause variable, and grows linearly with the number of effects.

9. Define Bayesian networks.
A Bayesian network is a directed acyclic graph whose nodes correspond to random variables; each node has a conditional distribution for the node given its parents. Bayesian networks provide a concise way to represent conditional independence relationships in the domain.

10. What is the use of Bayesian networks?
A Bayesian network specifies a full joint distribution; each joint entry is defined as the product of the corresponding entries in the local conditional distributions. A Bayesian network is often exponentially smaller than the full joint distribution.

11. Write notes on hybrid Bayesian networks.
Many conditional distributions can be represented compactly by canonical families of distributions. Hybrid Bayesian networks, which include both discrete and continuous variables, use a variety of canonical distributions.

12. Write notes on variable elimination.
Inference in Bayesian networks means computing the probability distribution of a set of query variables, given a set of evidence variables. Exact inference algorithms, such as variable elimination, evaluate sums of products of conditional probabilities as efficiently as possible.

13. Define polytrees.
In polytrees (singly connected networks), exact inference takes time linear in the size of the network. In the general case, the problem is intractable.

14. What is the use of stochastic approximation techniques?
Stochastic approximation techniques such as likelihood weighting and Markov chain Monte Carlo can give reasonable estimates of the true posterior probabilities in a network and can cope with much larger networks than exact algorithms can.

15. List two applications of temporal probabilistic models.
Medical diagnosis, tracking the economic activity of a nation, speech recognition.
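Question 6's use of Bayes' rule in the causal direction amounts to one line of arithmetic, P(h | e) = P(e | h) P(h) / P(e). A minimal sketch with invented numbers:

```python
# Bayes' rule: compute the diagnostic probability P(disease | symptom)
# from the causal direction P(symptom | disease), the prior, and the
# evidence probability. All numbers below are illustrative only.

def bayes(p_e_given_h, p_h, p_e):
    return p_e_given_h * p_h / p_e

p_fever_given_flu = 0.9   # causal direction: flu usually causes fever
p_flu = 0.01              # prior probability of the hypothesis
p_fever = 0.05            # probability of the evidence

print(bayes(p_fever_given_flu, p_flu, p_fever))  # ~0.18
```

Even though flu almost always causes fever, the small prior keeps the diagnostic probability low, which is exactly why the rule is worth computing rather than guessing.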

16. Define Dempster-Shafer theory.
The Dempster-Shafer theory uses interval-valued degrees of belief to represent an agent's knowledge of the probability of a proposition.

17. Write notes on relational probability models (RPMs).
Probability theory can be combined with representational ideas from first-order logic to produce very powerful systems for reasoning under uncertainty. Relational probability models (RPMs) include representational restrictions that guarantee a well-defined probability distribution that can be expressed as an equivalent Bayesian network.
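Question 10's claim, that a Bayesian network defines each joint entry as the product of the local conditional probabilities, can be checked on a toy chain network A -> B -> C. All probability tables here are made up for illustration:

```python
# Local conditional distributions for the chain A -> B -> C.
p_a = {True: 0.2, False: 0.8}
p_b_given_a = {True: {True: 0.7, False: 0.3},
               False: {True: 0.1, False: 0.9}}
p_c_given_b = {True: {True: 0.5, False: 0.5},
               False: {True: 0.2, False: 0.8}}

def joint(a, b, c):
    # Each joint entry is the product of the corresponding local entries:
    # P(a, b, c) = P(a) * P(b | a) * P(c | b)
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# All eight joint entries must sum to 1, as required of a distribution.
total = sum(joint(a, b, c) for a in (True, False)
            for b in (True, False) for c in (True, False))
print(round(total, 10))  # 1.0
```

The network stores 1 + 2 + 2 = 5 independent numbers instead of the 7 needed for the explicit joint over three Boolean variables; the gap grows exponentially with more variables.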

Unit V

1. How is the process of learning achieved?
Learning takes many forms, depending on the nature of the performance element, the component to be improved, and the available feedback.

2. What are the types of learning?
Supervised learning and unsupervised learning.

3. Define supervised learning.
If the available feedback, either from a teacher or from the environment, provides the correct value for the examples, the learning problem is called supervised learning.

4. Define classification and regression.
Learning a discrete-valued function is called classification; learning a continuous function is called regression.

5. What is the use of Ockham's razor?
Inductive learning involves finding a consistent hypothesis that agrees with the examples. Ockham's razor suggests choosing the simplest consistent hypothesis. The difficulty of this task depends on the chosen representation.

6. What is the need for decision trees?
Decision trees can represent all Boolean functions. The information-gain heuristic provides an efficient method for finding a simple, consistent decision tree.

7. How will you measure the performance of a learning algorithm?
The performance of a learning algorithm is measured by the learning curve, which shows the prediction accuracy on the test set as a function of the training set size.

8. What is the need for computational learning theory?
Computational learning theory analyzes the sample complexity and computational complexity of inductive learning. There is a tradeoff between the expressiveness of the hypothesis language and the ease of learning.
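The information-gain heuristic for decision trees mentioned in question 6 can be sketched directly from the definitions of entropy and expected entropy reduction. The four-example dataset below is invented for illustration:

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)

def info_gain(examples, attr, target):
    """Expected entropy reduction from splitting `examples` on `attr`."""
    labels = [e[target] for e in examples]
    remainder = 0.0
    for v in {e[attr] for e in examples}:
        subset = [e[target] for e in examples if e[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

data = [
    {'outlook': 'sunny', 'windy': True,  'play': False},
    {'outlook': 'sunny', 'windy': False, 'play': False},
    {'outlook': 'rain',  'windy': True,  'play': True},
    {'outlook': 'rain',  'windy': False, 'play': True},
]
# 'outlook' separates the classes perfectly, so its gain is the full 1 bit;
# 'windy' tells us nothing, so its gain is 0.
print(info_gain(data, 'outlook', 'play'))  # 1.0
print(info_gain(data, 'windy', 'play'))    # 0.0
```

A decision-tree learner simply splits on the highest-gain attribute at each node and recurses on the subsets.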

9. What are the advantages of Bayesian learning methods?
Bayesian learning methods formulate learning as a form of probabilistic inference, using the observations to update a prior distribution over hypotheses. This approach provides a good way to implement Ockham's razor, but it quickly becomes intractable for complex hypothesis spaces.

10. What are instance-based models?
Instance-based models represent a distribution using the collection of training instances. Thus, the number of parameters grows with the training set. Nearest-neighbor methods look at the instances nearest to the point in question, whereas kernel methods form a distance-weighted combination of all the instances.

11. How does policy search work?
Policy search methods operate directly on a representation of the policy, attempting to improve it based on observed performance. The variance in the performance in a stochastic domain is a serious problem; for simulated domains this can be overcome by fixing the randomness in advance.

12. What is reinforcement learning?
The task of reinforcement learning is to use observed rewards to learn an optimal (or nearly optimal) policy for the environment.

13. What is passive learning?
In passive learning, the agent's policy is fixed and the task is to learn the utilities of states (or state-action pairs); this could also involve learning a model of the environment.

14. Explain the concept of learning from examples.
Learning from examples involves learning a function from examples of its inputs and outputs. Consider, for example, an agent training to become a taxi driver: every time the instructor shouts "Brake!", the agent can learn the condition-action rule for when to brake.

15. How does statistical learning differ from reinforcement learning?
Statistical learning involves learning functions and probability models from examples, whereas reinforcement learning involves learning from the successes and failures of an agent in the absence of any prior examples.
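The nearest-neighbor method from question 10 above can be sketched in a few lines; the training points, labels, and function names are invented for illustration:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs. Returns the majority label
    among the k instances nearest to `query` (squared Euclidean distance,
    which preserves the distance ordering)."""
    by_dist = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2
                                        + (p[0][1] - query[1]) ** 2)
    labels = [label for _, label in by_dist[:k]]
    return Counter(labels).most_common(1)[0][0]

# Two well-separated clusters; the "parameters" are the instances themselves.
train = [((0, 0), 'A'), ((1, 0), 'A'), ((0, 1), 'A'),
         ((5, 5), 'B'), ((6, 5), 'B'), ((5, 6), 'B')]
print(knn_predict(train, (1, 1)))  # 'A'
print(knn_predict(train, (5, 4)))  # 'B'
```

A kernel method would instead weight every instance by a decreasing function of its distance, rather than cutting off at the k nearest.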
PART B QUESTIONS:

IMPORTANT NOTE: All the answers for the given questions are available in the textbook "Artificial Intelligence - A Modern Approach" by Stuart Russell and Peter Norvig, 2nd Edition, Pearson Education / Prentice Hall of India, 2004.

UNIT I

1. Explain in detail about the classification of task environments. (Page No. 38)
Specifying the task environment

Properties of task environments
a. Fully observable vs. partially observable
b. Deterministic vs. stochastic
c. Episodic vs. sequential
d. Static vs. dynamic
e. Discrete vs. continuous
f. Single agent vs. multiagent

2. Explain about various agent programs in detail. (Page No. 44)
Agent programs: definition
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Learning agents
Explanation with neat diagrams

3. Explain in detail about uninformed search strategies. (Page No. 73)
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening depth-first search
Bidirectional search
Comparing uninformed search strategies

4. Explain in detail about informed / heuristic search strategies. (Page No. 94)
Informed search
Best-first search
Greedy best-first search
A* search
Memory-bounded heuristic search

5. Explain in detail about Constraint Satisfaction Problems. (Page No. 137)
Objective function
CSP map-coloring problem
CSP as incremental formulation
Boolean CSP
Constraint language
Cryptarithmetic puzzles

6. Explain how backtracking search is carried out in CSPs. Explain with examples. (Page No. 141)
Commutativity
Backtracking search

Search tree
Variable and value ordering
Propagating information through constraints
Forward checking
Constraint propagation
Arc consistency
Handling special constraints
Intelligent backtracking: looking backward

UNIT II

1. Explain in detail about the syntax and semantics used in first-order logic. (Page No. 245)
Models for first-order logic
Symbols and interpretations
Terms
Atomic sentences
Complex sentences
Quantifiers
i. Universal quantification
ii. Existential quantification
iii. Nested quantifiers
Equality

2. What are the steps involved in the knowledge engineering process? Explain with an example. (Page No. 261)
The knowledge engineering process
The electronic circuits domain

3. What are unification and lifting in first-order logic? (Page No. 275)
Generalized Modus Ponens
Unification
Storage and retrieval

4. Explain briefly about forward chaining with an example. (Page No. 280)
First-order definite clauses
Simple forward-chaining algorithm
Efficient forward chaining
Incremental forward chaining

5. Explain briefly about backward chaining with an example. (Page No. 287)
Composition
Backward chaining algorithm
Proof tree

6. Explain in detail about resolution in first-order logic. (Page No. 295)

Completeness theorem
Incompleteness theorem
Conjunctive normal form for first-order logic
The resolution inference rule
Example proofs
Completeness of resolution

UNIT III

1. Explain in detail about planning with state-space search. (Page No. 382)
2. Explain partial-order planning with an example. (Page No. 387)
3. Explain the GRAPHPLAN algorithm in detail. (Page No. 398)
4. Explain the concept of Hierarchical Task Network planning in detail. (Page No. 422)
5. Explain in detail about multiagent planning. (Page No. 449)

UNIT IV

1. Explain the various axioms of probability. (Page No. 471)
2. Explain in detail about Bayes' rule and its use. (Page No. 479)
3. Explain the semantics of Bayesian networks. (Page No. 495)
4. Explain how inference can be achieved in Bayesian networks. (Page No. 504)
5. Explain in detail about hidden Markov models. (Page No. 549)

UNIT V

1. Explain in detail about learning decision trees. (Page No. 653)
Definition
Decision trees as performance elements
Example
Expressiveness of decision trees
Inducing decision trees from examples

2. Explain in detail about explanation-based learning. (Page No. 690)

Extracting general rules from examples
Improving efficiency

3. Explain in detail how the logical formulation of learning is carried out. (Page No. 678)
Examples and hypotheses
Current-best-hypothesis search
Least-commitment search

4. Write notes on Inductive Logic Programming with an example. (Page No. 697)
ILP example
Top-down inductive learning methods
Inductive learning with inverse deduction
Making discoveries with inductive logic programming

5. Write notes on statistical learning. (Page No. 712)

6. Explain briefly about learning with complete data under statistical learning methods. (Page No. 716)
Maximum-likelihood parameter learning: discrete models
Bayesian network model
Naive Bayes models
Maximum-likelihood parameter learning: continuous models
Bayesian parameter learning
Learning Bayes net structures

7. Explain in detail about learning with hidden variables, or explain the EM algorithm. (Page No. 724)
Unsupervised clustering: learning mixtures of Gaussians
Learning Bayesian networks with hidden variables
Learning hidden Markov models
The general form of the EM algorithm
Learning Bayes net structures with hidden variables

8. Briefly explain about instance-based learning. (Page No. 733)
Nearest-neighbor models
Kernel models

9. Write notes on neural networks. (Page No. 736)
Neuron
Units in neural networks
Network structures
Single-layer feed-forward neural networks / perceptrons
Multilayer feed-forward neural networks

10. Write notes on passive reinforcement learning. (Page No. 765)

Direct utility estimation
Adaptive dynamic programming
Temporal difference learning

11. Write notes on Active reinforcement learning. (Page No. 771)

Exploration
Learning an action-value function

Solved University Question Papers

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
VI SEMESTER
CS1351 ARTIFICIAL INTELLIGENCE

Part A
Answer all the questions.
1. What is meant by a percept sequence?
2. What are actuators?
3. What are learning agents?
4. What do you mean by a relaxed problem?
5. What is meant by effective branching factor?
6. What is a ridge?
7. What are propositional attitudes?
8. What do you mean by reinforcement learning?
9. Define strings.
10. Distinguish between terminal and nonterminal symbols.

Part B
Answer all the questions.
11. a. Explain in detail about problem solving.
(ANS: Page number-59)



(or)
b. Explain various agent program structures in detail. (ANS: Page


12. a. Explain about local search algorithms in detail.

(ANS: Page number-111)

(or)
b. Explain in detail about backtracking search. (ANS: Page number-141)

13. a. Explain the syntax and semantics of first-order logic. (ANS: Page number-245)

(or)
b. Explain in detail about backward chaining and forward chaining. (ANS: Page number-280)

14. a. Explain in detail about instance-based learning. (ANS: Page
(or)
b. Explain the concept of learning decision trees and inductive learning. (ANS: Page number-653)

15. a. Briefly explain about: (ANS: Page number-818)
i. Ambiguity and disambiguation
ii. Discourse understanding
(or)
b. Develop a formal grammar for a fragment of English. (ANS: Page number-715)

1. Define AI.


2. What is an agent?
3. What do you mean by a heuristic function?
4. Define A* search.
5. What do you mean by a belief state?
6. What is meant by explanation-based learning?
7. Define knowledge engineering.
8. Define verb categorization.
9. What do you mean by ambiguity?
10. What are the component steps of communication?

Part B
Answer all the questions.
11. a. Give the algorithms for BFS and DFS and explain them in detail. (ANS: Page number-75, 77)

(or)
b. Explain the concept of searching with partial information in detail. (ANS: Page number-83)

12. a. Explain in detail about CSPs. (ANS: Page number-137)
(or)
b. Explain in detail about optimal decisions in games. (ANS: Page

13. a. Explain in detail about forward and backward chaining. (ANS: Page number-280)
(or)
b. Explain first-order predicate logic in detail. (ANS: Page number-240)

14. a. Explain in detail about generalization in reinforcement learning. (ANS: Page number-763)
(or)
b. Explain in detail about neural networks. (ANS: Page number-736)

15. a. Explain in detail about probabilistic language processing. (ANS: Page number-834)
(or)
b. Explain the concepts of information retrieval and information extraction in detail. (ANS: Page number-840)


B.E./B.TECH. DEGREE EXAMINATION, APRIL/MAY 2008
Sixth Semester (Regulation 2004)
Computer Science Engineering
CS 1351 - ARTIFICIAL INTELLIGENCE
(Common to B.E. (Part Time) Fifth Semester, Regulation 2005)
Time: 3 Hours    Maximum: 100 Marks
Answer All Questions

PART A (10 x 2 = 20 marks)
1. Define artificial intelligence.
2. What is the use of heuristic functions?
3. How can the effectiveness of a search-based problem-solving technique be improved?
4. What is a constraint satisfaction problem?
5. What is the unification algorithm?
6. How can you represent resolution in predicate logic?
7. List the advantages of nonmonotonic reasoning.
8. Differentiate between JTMS and LTMS.
9. What are framesets and instances?
10. List the important concepts of a script.

PART B (5 x 16 = 80 marks)
11. (a) (i) Give an example of a problem for which breadth-first search would work better than depth-first search. (ANS: Page number-73)
(ii) Explain the algorithm for steepest-ascent hill climbing. (ANS: Page number-111)

Or
(b) Explain the following search strategies: (ANS: Page number-74)
(i) Best-first search
(ii) A* search

12. (a) Explain the minimax procedure. (ANS: Page number-165)
Or
(b) Describe alpha-beta pruning and give the other modifications to the minimax procedure to improve its performance. (ANS: Page number-167)

13. (a) Illustrate the use of predicate logic to represent knowledge, with a suitable example. (ANS: Page number-240)
Or
(b) Consider the following sentences:
John likes all kinds of food.
Apples are food.

Chicken is food.
Anything anyone eats and isn't killed by is food.
Bill eats peanuts and is still alive.
Sue eats everything Bill eats.
(i) Translate these sentences into formulas in predicate logic.
(ii) Prove that John likes peanuts using backward chaining.
(iii) Convert the formulas above into clause form.
(iv) Prove that John likes peanuts using resolution. (ANS: Page number-253)

14. (a) With an example, explain the logics of nonmonotonic reasoning. (ANS: Page number-358)
Or
(b) Explain how Bayesian statistics provides reasoning under various kinds of uncertainty. (ANS: Page number-492)

15. (a) (i) Construct a semantic net representation of the following:
Pompeian(Marcus), Blacksmith(Marcus)
Mary gave the green flowered vase to her favourite cousin.
(ii) Construct partitioned semantic net representations for the following:
Every batter hit a ball.
All the batters like the pitcher. (ANS: Page number-810)
Or
(b) Illustrate learning from examples by induction, with suitable examples. (ANS: Page number-651)


(Regulation 2004)
Computer Science Engineering
CS 1351 - ARTIFICIAL INTELLIGENCE
(Common to B.E (Part Time) Fifth Semester, Regulation 2005)
Time: 3 Hours                                   Maximum: 100 Marks
Answer ALL Questions
PART A - (10 x 2 = 20 marks)
1. Give the PEAS description of an interactive English tutor system.
2. Write an informal description of the general tree-search algorithm.
3. What is the local minima problem?
4. How does the alpha-beta pruning technique work?
5. What is referential transparency?
6. Name the two kinds of synchronic rules that allow deductions.
7. List the steps in explanation-based learning.
8. Give an example of a linearly non-separable function.
9. What are the seven processes involved in a communication episode?
10. What are the characteristics of an information retrieval system?
PART B - (5 x 16 = 80 marks)
11. (a) What are the four basic steps of an agent program in any intelligent system? (ANS: Page number 44-53)
Or
(b) Explain how you would convert the following into learning agents: breadth-first search, uniform cost search, depth-first search, depth-limited search. (ANS: Page number 73-80)
12. (a) (i) What are constraint satisfaction problems? How can you formulate them as search problems?
    (ii) Discuss the various issues associated with backtracking search for CSPs and how they are addressed. (ANS: Page number 137-141)
Or
(b) Explain the following local search strategies with examples:
    (i) Hill climbing
    (ii) Genetic algorithms
    (iii) Simulated annealing
    (iv) Local beam search (ANS: Page number 110-115)
13. (a) Explain the various steps associated with the knowledge engineering process. Discuss them by applying the steps to a real-world application of your choice. (ANS: Page number 260-262)
Or
(b) (i) What are the various ontologies involved in situation calculus?
    (ii) How would you solve the following problems in situation calculus?
        1. The representational frame problem
        2. The inferential frame problem (ANS: Page number 320)
14. (a) Explain with a proper example how the EM algorithm can be used for learning with hidden variables. (ANS: Page number 724)
Or
(b) Describe how decision trees could be used for inductive learning. Explain their effectiveness with a suitable example. (ANS: Page number 653)
15. (a) Explain a machine translation system with a neat sketch. Analyse its learning of probabilities. (ANS: Page number 850-852)
Or
(b) Perform bottom-up and top-down parsing for the input "The wumpus is dead". (ANS: Page number 798-800)
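As a study aid for Question 12(b)(i) above, steepest-ascent hill climbing can be sketched in a few lines of Python (a minimal illustration with invented names; a full answer would also discuss local maxima, plateaux and ridges, where this loop gets stuck):

```python
def hill_climb(f, start, neighbours, max_steps=1000):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbour; stop when no neighbour improves the current state."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=f)
        if f(best) <= f(current):
            return current  # local (possibly global) maximum reached
        current = best
    return current

# Maximise f(x) = -(x - 3)^2 over the integers; the single peak is x = 3.
f = lambda x: -(x - 3) ** 2
print(hill_climb(f, start=10, neighbours=lambda x: [x - 1, x + 1]))  # 3
```

Because the objective here has a single peak, the climb always reaches x = 3; on a multi-peaked objective the same loop would halt at whichever local maximum lies uphill from the start state.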


(Regulation 2004)
Computer Science Engineering
CS 1351 - ARTIFICIAL INTELLIGENCE
(Common to B.E (Part Time) Fifth Semester, Regulation 2005)
Time: 3 Hours                                   Maximum: 100 Marks
Answer ALL Questions
PART A - (10 x 2 = 20 marks)
1. Define the terms: agent, agent function.
2. Why must problem formulation follow goal formulation?
3. What is the use of online search agents in unknown environments?
4. Specify the complexity of expectiminimax.
5. How are TELL and ASK used in first-order logic?
6. What is ontological engineering?
7. State the reasons why inductive logic programming is popular.
8. What are active and passive reinforcement learning?
9. What is grammar induction?
10. How are machine translation systems implemented?
PART B - (5 x 16 = 80 marks)
11. (a) Discuss the different types of agent programs. (ANS: Page number 44)
(Or)
(b) Explain the following uninformed search strategies: (ANS: Page number 73)
    (i) Depth-first search
    (ii) Iterative deepening depth-first search
    (iii) Bidirectional search
12. (a) (i) Describe A* search and give the proof of optimality of A*. (ANS: Page number 96)
    (ii) Give the algorithm for solving constraint satisfaction problems by local search. (ANS: Page number 137)
(Or)
(b) Explain the Min-Max algorithm and alpha-beta pruning. (ANS: Page number 165, 167)
13. (a) Illustrate the use of first-order logic to represent knowledge. (ANS: Page number 245)
(Or)
(b) Explain the forward chaining and backward chaining algorithms with an example. (ANS: Page number 280-294)
14. (a) (i) Describe decision tree learning. (ANS: Page number 653)
    (ii) Explain explanation-based learning. (ANS: Page number 690)
(Or)
(b) Discuss learning with hidden variables. (ANS: Page number 724)
15. (a) (i) Describe the process involved in communication using the example sentence "The wumpus is dead". (ANS: Page number 792)
    (ii) Write short notes on semantic interpretation. (ANS: Page number 810)
(Or)
(b) Explain briefly about the following: (ANS: Page number 840-849)
    (i) Information retrieval
    (ii) Information extraction
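As a study aid for Question 12(a)(i) above, A* search can be sketched with a priority queue ordered on f(n) = g(n) + h(n) (a minimal Python illustration with an invented toy map; the optimality proof itself rests on the heuristic being admissible):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the frontier node with the smallest
    f(n) = g(n) + h(n).  If h never overestimates the true cost to
    the goal (admissibility), the first goal expanded is optimal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None

# The direct edge A->G costs 10; the detour via B costs only 1 + 2 = 3.
graph = {'A': [('B', 1), ('G', 10)], 'B': [('G', 2)]}
h = {'A': 2, 'B': 1, 'G': 0}.get      # admissible: never overestimates
print(a_star(graph, h, 'A', 'G'))     # (3, ['A', 'B', 'G'])
```

Note how the costly direct edge is pushed onto the frontier but never expanded: the cheaper route through B reaches the goal with a smaller f-value first, which is exactly the behaviour the optimality proof formalises.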

B.E/B.TECH DEGREE EXAMINATION, MAY/JUNE 2009
Sixth Semester (Regulation 2004)
CS 1351 - ARTIFICIAL INTELLIGENCE
(Common to B.E (Part Time) Fifth Semester, Regulation 2005)
Time: Three Hours                                   Maximum: 100 Marks
Answer ALL Questions
PART A - (10 x 2 = 20 marks)
1. Define an ideal rational agent.
2. Define a data type to represent problems and nodes.
3. How does one characterize the quality of a heuristic?
4. Formally define a game as a kind of search problem.
5. Joe, Tom and Sam are brothers: represent this using first-order logic symbols.
6. List the canonical forms of resolution.
7. What is Q-learning?
8. List the issues that affect the design of a learning element.
9. Give the semantic representation of "John loves Mary".
10. Define DCG.
PART B - (5 x 16 = 80 marks)
11. (a) Explain uninformed search strategies. (16) (ANS: Page number 73)
12. (a) Describe alpha-beta pruning and its effectiveness. (16) (ANS: Page number 167)
(Or)
(b) Write in detail about any two informed search strategies. (16) (ANS: Page number 94)
13. (a) Elaborate on forward and backward chaining. (16) (ANS: Page number 217)
(Or)
(b) Discuss the general-purpose ontology with the following elements: (ANS: Page number 328)
    (i) Categories (4)
    (ii) Measures (4)
    (iii) Composite objects (4)
    (iv) Mental events and mental objects (4)
14. (a) Explain with an example learning in decision trees. (16) (ANS: Page number 653)
(Or)
(b) Describe multilayer feed-forward networks. (16) (ANS: Page number 739)
15. (a) (i) List the component steps of communication. (8)
    (ii) Write short notes about ambiguity and disambiguation. (8) (ANS: Page number 818)
(Or)
(b) Discuss in detail syntactic analysis (parsing). (16) (ANS: Page number 798)
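As a study aid for Question 12(a) above, minimax with alpha-beta pruning can be demonstrated on a tiny game tree (a minimal Python sketch; the nested-list tree encoding is invented for this illustration):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning on a game tree given as nested
    lists (internal nodes) with integer leaves (static values).
    A branch is cut off as soon as alpha >= beta, because its exact
    value can no longer affect the decision at the root."""
    if not isinstance(node, list):
        return node                      # leaf: return its static value
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                    # beta cut-off: MIN avoids this line
        return best
    best = float('inf')
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break                        # alpha cut-off: MAX avoids this line
    return best

# Root is MAX over two MIN nodes.  The leaf 9 is never examined:
# after seeing 2, the second MIN node is already worse than the 3
# guaranteed by the first branch.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # 3
```

The pruned leaf illustrates the effectiveness claim in the question: with good move ordering, alpha-beta examines roughly the square root of the leaves plain minimax would, doubling the searchable depth for the same effort.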

B.E/B.TECH DEGREE EXAMINATION, APRIL/MAY 2010
Sixth Semester (Regulation 2004)
CS 1351 - ARTIFICIAL INTELLIGENCE
Time: Three Hours                                   Maximum: 100 Marks
Answer ALL Questions
PART A - (10 x 2 = 20 marks)
1. Define a rational agent.
2. How will you measure problem-solving performance?
3. State the reasons why hill climbing often gets stuck.
4. What is a constraint satisfaction problem?
5. Differentiate between propositional and first-order logic.
6. Define ontological engineering.
7. What is explanation-based learning?
8. State the advantages of inductive logic programming.
9. Give the component steps of communication.
10. What are machine translation systems?
PART B - (5 x 16 = 80 marks)
11. (a) Discuss in detail the structure of agents with suitable diagrams. (pg 44)
(Or)
(b) Explain the following uninformed search strategies: (pg 73)
    (i) Iterative deepening depth-first search
    (ii) Bidirectional search
12. (a) Explain the A* search and give the proof of optimality of A*. (pg 96)
(Or)
(b) Describe the Min-Max algorithm and alpha-beta pruning. (pg 165, 167)
13. (a) (i) Describe the general process of knowledge engineering. (pg 260)
    (ii) Discuss the syntax and semantics of first-order logic. (pg 245)
(Or)
(b) Describe the forward chaining and backward chaining algorithms with a suitable example. (pg 280)
14. (a) (i) Describe the decision tree learning algorithm. (pg 653)
    (ii) Explain relevance-based learning.
(Or)
(b) Discuss active and passive reinforcement learning with suitable examples.
15. (a) (i) Describe semantic interpretation.
    (ii) Illustrate grammar induction with a suitable example.
(Or)
(b) Discuss information retrieval systems and information extraction systems.
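As a study aid for Question 13(b) above, propositional forward chaining fits in a short loop: keep firing any rule whose premises are all known until nothing new can be derived (a minimal Python sketch; the rule symbols below are invented for this illustration, not from the cited textbook pages):

```python
def forward_chain(rules, facts):
    """Propositional forward chaining: repeatedly fire any rule whose
    premises are all already known, adding its conclusion, until a
    whole pass derives nothing new (a fixed point is reached)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)
                changed = True
    return known

# Toy knowledge base with invented symbols.
rules = [
    (['missile'], 'weapon'),
    (['enemy'], 'hostile'),
    (['american', 'weapon', 'sells', 'hostile'], 'criminal'),
]
derived = forward_chain(rules, ['american', 'missile', 'sells', 'enemy'])
print('criminal' in derived)  # True
```

Backward chaining runs the same rules in the other direction: it starts from the query ('criminal'), finds a rule concluding it, and recursively tries to prove each premise, which avoids deriving facts irrelevant to the goal.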

B.E/B.TECH DEGREE EXAMINATION, APRIL/MAY 2011
Sixth Semester (Regulation 2008)
CS 2351 - ARTIFICIAL INTELLIGENCE
Time: Three Hours                                   Maximum: 100 Marks
Answer ALL Questions
PART A - (10 x 2 = 20 marks)
1. List down the characteristics of an intelligent agent.
2. What do you mean by local maxima with respect to search techniques?
3. What factors determine the selection of a forward or backward reasoning approach for an AI problem?
4. What are the limitations of using propositional logic to represent the knowledge base?
5. Define a partial-order planner.
6. What are the differences and similarities between problem solving and planning?
7. Define Dempster-Shafer theory.
8. List down two applications of temporal probabilistic models.
9. Explain the concept of learning from examples.
10. How does statistical learning differ from reinforcement learning?
PART B - (5 x 16 = 80 marks)
11. (a) Explain in detail the characteristics and applications of learning agents.
(Or)
(b) Explain the AO* algorithm with an example.
12. (a) Explain the unification algorithm used for reasoning under predicate logic with an example.
(Or)
(b) Describe in detail the steps involved in the knowledge engineering process.
13. (a) Explain the concept of planning with state-space search using an example.
(Or)
(b) Explain the use of planning graphs in providing better heuristic estimates with a suitable example.
14. (a) Explain the method of hidden Markov models in speech recognition.
(Or)
(b) Explain the method of handling approximate inference in Bayesian networks.
15. (a) Explain the concept of learning using decision trees and the neural network approach.
(Or)
(b) Write short notes on:
    (i) Statistical learning
    (ii) Explanation-based learning
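As a study aid for Question 12(a) above, the unification algorithm can be sketched recursively (a minimal Python illustration with an invented term encoding: variables are lowercase strings, constants are capitalised, and compound terms are tuples; the occurs-check is omitted for brevity):

```python
def is_var(t):
    """Variables are lowercase strings; capitalised strings are constants."""
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, theta=None):
    """Unify two first-order terms.  Returns the most general
    substitution (a dict) that makes them identical, or False."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):         # unify argument lists pairwise,
            theta = unify(xi, yi, theta)  # threading the substitution through
            if theta is False:
                return False
        return theta
    return False

def unify_var(var, val, theta):
    if var in theta:                      # variable already bound: follow it
        return unify(theta[var], val, theta)
    return {**theta, var: val}            # extend the substitution

# Knows(John, x) unifies with Knows(John, Jane) under {x: Jane}.
print(unify(('Knows', 'John', 'x'), ('Knows', 'John', 'Jane')))  # {'x': 'Jane'}
```

Unifying Knows(John, x) with Hates(John, x) fails at the mismatched functors, while Knows(John, x) against Knows(y, Bill) binds both variables at once, which is the behaviour the resolution questions in earlier papers rely on.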