Cochin University Exam Papers BE CS 7th Semester Artificial Neural Networks Nov 2010


EB/CS/IT 705(C) Artificial Neural Networks 

(2006 Scheme)

PART-A

(Answer ALL questions)

I. (a) What do you mean by Linear Separability? Explain any one method to overcome the limitations of Linear Separability.

(b) Draw and explain the McCulloch-Pitts neural net to perform the Logical XOR function.

(c) Draw and explain the architecture of Discrete Hopfield Net.

(d) Write short note on Convex Combination Method.

(e) Describe the basic training steps of the ART networks.

(f) Write a short note on various types of Bi-directional Associative Memory net.

(g) Describe the Architecture of Cognitron.

(h) What is meant by simulated annealing?

PART-B

II.  (a) Realise a Hebb net for the AND function with bipolar inputs and targets.
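A quick way to check the construction in II(a) is to run the Hebb rule (Δwᵢ = xᵢ·t, Δb = t) over the four bipolar AND pairs; the sketch below is a minimal illustration of that rule, not the expected written answer.

```python
# Hebb rule sketch for the bipolar AND function:
# w_new = w_old + x * t for each weight, b_new = b_old + t.

samples = [  # (x1, x2, target) with bipolar values
    ( 1,  1,  1),
    ( 1, -1, -1),
    (-1,  1, -1),
    (-1, -1, -1),
]

w1 = w2 = b = 0
for x1, x2, t in samples:
    w1 += x1 * t
    w2 += x2 * t
    b  += t

print(w1, w2, b)   # final weights and bias after one pass

# verify the trained net reproduces AND via the sign of the net input
for x1, x2, t in samples:
    net = w1 * x1 + w2 * x2 + b
    assert (1 if net > 0 else -1) == t
```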

(b) Develop a perceptron for the AND function with binary inputs and bipolar targets, without bias, up to 2 epochs. Train first with the input (0, 0) included and then without (0, 0).

OR

III. (a) Using the perceptron learning rule, find the weights required to perform the following classifications. Vectors (1 1 1 1), (-1 1 -1 -1) and (1 -1 -1 1) are members of the class (having target value 1); vectors (1 1 1 -1) and (1 -1 -1 1) are not members of the class (having target value -1). Use a learning rate of 1 and starting weights of 0. Using each of the training vectors as input, test the response of the net.
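A rough Python check of the rule in III(a), not part of the paper itself: the sketch below trains with learning rate 1 and zero starting weights, then tests the response on each training vector. Two assumptions are made because the printed question is garbled: only four of the five vectors are used (the third "member" vector duplicates a non-member as printed), and a simple sign activation with threshold 0 is used in place of the textbook dead-zone activation.

```python
# Perceptron learning rule sketch (learning rate 1, zero initial weights).
# Assumption: sign activation with threshold 0; the textbook version often
# uses a dead zone [-theta, theta] instead.

def sign(net):
    return 1 if net > 0 else -1

# four of the training pairs from question III(a) (the garbled fifth is omitted)
data = [
    (( 1,  1,  1,  1),  1),
    ((-1,  1, -1, -1),  1),
    (( 1,  1,  1, -1), -1),
    (( 1, -1, -1,  1), -1),
]

w = [0, 0, 0, 0]
b = 0
lr = 1
changed = True
while changed:                      # repeat epochs until no weight changes
    changed = False
    for x, t in data:
        net = b + sum(wi * xi for wi, xi in zip(w, x))
        if sign(net) != t:          # update only on misclassification
            w = [wi + lr * t * xi for wi, xi in zip(w, x)]
            b += lr * t
            changed = True

# test the response of the net on each training vector
for x, t in data:
    net = b + sum(wi * xi for wi, xi in zip(w, x))
    assert sign(net) == t
print(w, b)
```

With this data and update order the net converges after three epochs.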

(b) How is the perceptron net used in the context of linear separability?

 

IV. (a) Design a Hopfield network for 4-bit bipolar patterns. The training patterns are

I sample S1 = [1 1 -1 1]

II sample S2 = [-1 1 -1 1]

III sample S3 = [-1 -1 -1 1]

Find the weight matrix and the energy for the three input samples. Determine the pattern to which the sample S = [-1 1 -1 -1] associates.
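A minimal sketch of the method asked for in IV(a), assuming the standard Hebb outer-product weights with zero diagonal and the energy function E = -(1/2) x W xᵀ with no external-input term. The asynchronous update order in `recall` is an arbitrary choice, so the state the probe settles into can differ under other orders.

```python
# Discrete Hopfield sketch for question IV(a): Hebbian weight matrix
# W = sum_p s_p^T s_p with zeroed diagonal, energy E = -1/2 x W x^T.
# A minimal illustration of the procedure, not a worked answer key.

patterns = [
    [ 1,  1, -1,  1],   # S1
    [-1,  1, -1,  1],   # S2
    [-1, -1, -1,  1],   # S3
]
n = 4

# Hebb outer-product weights, diagonal forced to zero
W = [[0] * n for _ in range(n)]
for s in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += s[i] * s[j]

def energy(x):
    return -0.5 * sum(x[i] * W[i][j] * x[j] for i in range(n) for j in range(n))

def recall(x, sweeps=10):
    x = list(x)
    for _ in range(sweeps):            # asynchronous updates, fixed unit order
        for i in range(n):
            net = sum(W[i][j] * x[j] for j in range(n))
            if net != 0:               # keep previous state when net == 0
                x[i] = 1 if net > 0 else -1
    return x

print([energy(s) for s in patterns])   # energy of each stored sample
print(recall([-1, 1, -1, -1]))         # state the probe S settles into
```

Note that the negatives of stored patterns are also stable states of a discrete Hopfield net, so a probe can associate to a complement rather than a stored sample.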

(b) Describe the procedure for solving constrained optimization problems using a

continuous Hopfield net.

OR

V. (a) Consider the following full CPN using the input pair x = (1, 1), y = (0, 1). Perform the first phase of training (one step only). Find the activation of the cluster layer units and update the weights using a learning rate of 0.3.
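The first (Kohonen) phase of full counterpropagation training can be sketched as below: pick the cluster unit closest to the pair (x, y) and move only that unit's weights toward the inputs with learning rate 0.3. The initial weight values here are made-up placeholders, since the question statement as transcribed omits them.

```python
# Full CPN phase-1 sketch for question V(a): x = (1, 1), y = (0, 1),
# learning rate 0.3, two cluster units. Initial weights are invented
# placeholders -- the printed question does not supply them.

x = [1.0, 1.0]
y = [0.0, 1.0]
alpha = beta = 0.3

v = [[0.6, 0.2], [0.6, 0.2]]   # v[i][j]: weight from x_i to cluster unit j
w = [[0.4, 0.3], [0.4, 0.3]]   # w[k][j]: weight from y_k to cluster unit j

# winner = cluster unit closest to (x, y) in squared Euclidean distance
def dist(j):
    d = sum((x[i] - v[i][j]) ** 2 for i in range(2))
    d += sum((y[k] - w[k][j]) ** 2 for k in range(2))
    return d

J = min(range(2), key=dist)

# one Kohonen update step for the winning unit only
for i in range(2):
    v[i][J] += alpha * (x[i] - v[i][J])
for k in range(2):
    w[k][J] += beta * (y[k] - w[k][J])

print(J, v, w)
```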

(b) What are the problems involved in Back Propagation Training Algorithms?

 

VI. (a) A heteroassociative net is trained by the Hebb outer product rule for input row vectors s = (x1, x2, x3, x4) to output row vectors t = (t1, t2). Find the weight matrix.
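The outer-product construction asked for in VI(a) can be sketched as follows, assuming the usual form W = Σₚ sₚᵀ tₚ (a 4×2 matrix here). The two training pairs below are made-up placeholders, since the question leaves the data unspecified.

```python
# Hebb outer-product rule sketch for a heteroassociative net mapping
# 4-component inputs to 2-component targets: W = sum over pairs of s^T t.
# The training pairs are invented placeholders for illustration only.

pairs = [
    ((1, 0, 0, 1), (1, 0)),
    ((0, 1, 1, 0), (0, 1)),
]

rows, cols = 4, 2
W = [[0] * cols for _ in range(rows)]
for s, t in pairs:
    for i in range(rows):
        for j in range(cols):
            W[i][j] += s[i] * t[j]   # accumulate the outer product s^T t

for row in W:
    print(row)
```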

(b) Describe the Architecture and Training Algorithm of Kohonen SOM.

OR

VII. (a) Construct a Mexican hat net with seven units. The activation function for the net is

f(x) = 0 if x < 0; f(x) = x if 0 <= x <= 2; f(x) = 2 if x > 2.

Stop the network if the number of contrast-enhancement iterations exceeds 2.

The external signal is given as (0.0, 0.3, 0.7, 1.0, 0.7, 0.3, 0.0). The radius of positive reinforcement (R1) is 1 and the radius of the region of interconnections (R2) is 2. The initial weights are 0.8 and -0.4 respectively.
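The construction in VII(a) can be sketched as below, assuming the usual Mexican hat scheme: cooperative weight 0.8 for neighbours within radius R1, inhibitory weight -0.4 for neighbours between R1 and R2, the external signal used only as the initial activation, and at most two contrast-enhancement iterations.

```python
# Mexican hat sketch for question VII(a): 7 units, weight 0.8 within
# R1 = 1, weight -0.4 out to R2 = 2, ramp activation
# f(x) = 0 for x < 0, x for 0 <= x <= 2, 2 for x > 2.

def f(net):
    return 0 if net < 0 else (net if net <= 2 else 2)

signal = [0.0, 0.3, 0.7, 1.0, 0.7, 0.3, 0.0]
R1, R2 = 1, 2
w_pos, w_neg = 0.8, -0.4
n = len(signal)

x = list(signal)                       # t = 0: activations = external signal
for t in range(2):                     # at most 2 contrast-enhancement steps
    x_old = list(x)
    for i in range(n):
        net = 0.0
        for k in range(-R2, R2 + 1):   # interconnections within radius R2
            j = i + k
            if 0 <= j < n:
                net += (w_pos if abs(k) <= R1 else w_neg) * x_old[j]
        x[i] = f(net)
    print([round(a, 2) for a in x])    # centre of the bump sharpens each step
```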

(b) Describe the Architecture and application procedure of Hamming Net.

 

VIII. (a)Write a note on Support Vector Machine Classifiers.

(b) Explain the ANN technique for image compression and restoration.

OR

IX. (a) Write a note on Neuro-Fuzzy Hybrids.

(b) Explain the architecture and Application algorithm of Boltzmann Machine.
