Supercomputing


ults and other objects in the environment. The learning
mechanisms used by the brain are currently not completely understood.
Artificial neural networks in real-world applications today are usually
trained through some variant of the Backpropagation algorithm (which is
known to be biologically unrealistic). The Backpropagation algorithm
works fine for smallish networks (of up to a few thousand neurons) but it
doesn't scale well. The time it takes to train a network tends to increase
dramatically with the number of neurons it contains. Another limitation of
backpropagation is that it is a form of supervised learning, requiring that
signed error terms for each output neuron be specified during training. It's
not clear how such detailed performance feedback on the level of
individual neurons could be provided in real-world situations except for
certain well-defined specialized tasks.
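
The supervised error signal described above can be made concrete with a minimal sketch of backpropagation for a one-hidden-layer network (using NumPy; the network sizes, learning rate, and toy data are illustrative assumptions, not from the text):

```python
import numpy as np

# Toy supervised task: classify whether the inputs sum to a positive value.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                          # 32 samples, 4 inputs
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # targets (the "teacher")

W1 = rng.normal(scale=0.5, size=(4, 8))               # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))               # hidden -> output weights
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    H = sigmoid(X @ W1)              # forward pass
    Y = sigmoid(H @ W2)
    err = Y - T                      # signed error term for each output neuron
    g2 = err * Y * (1 - Y)           # backpropagate through the output layer
    dW2 = H.T @ g2 / len(X)
    g1 = (g2 @ W2.T) * H * (1 - H)   # ...and through the hidden layer
    dW1 = X.T @ g1 / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

loss = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2))
```

Note that every update requires the teacher signal `T`: without per-output targets there is no `err` to propagate, which is exactly the limitation the text raises.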

A biologically more realistic learning mode is the Hebbian algorithm.
Hebbian learning is unsupervised and it might also have better scaling
properties than Backpropagation. However, it has yet to be explained how
Hebbian learning by itself could produce all the forms of learning and
adaptation of which the human brain is capable (such as the storage of
structured representations in long-term memory - Bostrom 1996).
Presumably, Hebb's rule would at least need to be supplemented with
reward-induced learning (Morillo 1992) and maybe with other learning
modes that are yet to be discovered. It does seem plausible, though, to
assume that only a very limited set of different learning rules (maybe as few
as two or three) operates in the human brain, and that we are not very far
from knowing what these rules are.
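
Hebb's rule itself can be sketched in a few lines. In plain Hebbian learning the weight change is proportional to the product of pre- and post-synaptic activity, with no error signal or teacher. The sketch below uses Oja's self-normalising variant of the rule (an assumption beyond the plain rule, added only to keep the weights bounded); the data and sizes are illustrative:

```python
import numpy as np

# Unsupervised toy data whose main variance lies along the (1, 1) direction.
rng = np.random.default_rng(1)
base = rng.normal(size=(500, 1)) * np.array([1.0, 1.0])
X = base + 0.1 * rng.normal(size=(500, 2))

w = rng.normal(size=2)          # synaptic weights of a single neuron
eta = 0.01                      # learning rate

for x in X:
    y = w @ x                   # post-synaptic activity
    # Oja's rule: Hebbian term (eta * y * x) plus a decay that bounds |w|.
    w += eta * y * (x - y * w)

# The weight vector drifts toward the principal component of the inputs,
# i.e. the neuron learns the dominant correlation with no teacher at all.
direction = w / np.linalg.norm(w)
```

The contrast with the backpropagation case is that no target values appear anywhere: the rule only ever sees the local activities `x` and `y`, which is what makes it biologically more plausible and inherently unsupervised.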
