
dc.contributor.author: Rieger, Chuck
dc.date.accessioned: 2008-08-26T16:16:00Z
dc.date.available: 2008-08-26T16:16:00Z
dc.date.issued: 1975-11
dc.identifier.uri: http://hdl.handle.net/1721.1/41994
dc.description: This is the edited text of the "Computers and Thought Lecture" delivered at the 4th International Conference on Artificial Intelligence, held in Tbilisi, Georgia, USSR, September 1975. Work reported herein was conducted partly at the University of Maryland, under support of a University Research Board grant, and partly at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology research program supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under Contract Number N00014-75-C-0643.
dc.description.abstract: Plan synthesis and language comprehension, or more generally, the act of discovering how one perception relates to others, are two sides of the same coin, because they both rely on a knowledge of cause and effect: algorithmic knowledge about how to do things and how things work. I will describe a new theory of representation for commonsense algorithmic world knowledge, then show how this knowledge can be organized into larger memory structures, as it has been in a LISP implementation of the theory. The large-scale organization of the memory is based on structures called bypassable causal selection networks. A system of such networks serves to embed thousands of small commonsense algorithm patterns into a larger fabric which is directly usable by both a plan synthesizer and a language comprehender. Because these bypassable networks can adapt to context, so can the plan synthesizer and language comprehender. I will propose that the model is an approximation to the way humans organize and use algorithmic knowledge, and as such, that it suggests approaches not only to problem solving and language comprehension, but also to learning. I'll describe the commonsense algorithm representation, show how the system synthesizes plans using this knowledge, and trace through the process of language comprehension, illustrating how it threads its way through these algorithmic structures.
dc.description.sponsorship: MIT Artificial Intelligence Laboratory; Department of Defense Advanced Research Projects Agency; University of Maryland
dc.language.iso: en_US
dc.publisher: MIT Artificial Intelligence Laboratory
dc.relation.ispartofseries: MIT Artificial Intelligence Laboratory Working Papers, WP-114
dc.title: One System for Two Tasks: A Commonsense Algorithm Memory that Solves Problems and Comprehends Language
dc.type: Working Paper

