
Research on cognitive architectures varies widely in the degree to which it attempts to match psychological data. ACT-R (Anderson & Lebiere, 1998) and EPIC (Kieras & Meyer, 1997) aim for quantitative fits to reaction time and error data, whereas Prodigy (Minton et al., 1989) incorporates selected mechanisms like means-ends analysis but otherwise makes little contact with human behavior. Architectures like Soar (Laird, Newell, & Rosenbloom, 1987; Newell, 1990) and Icarus (Langley & Choi, in press; Langley & Rogers, 2005) take a middle position, drawing on many psychological ideas but also emphasizing their strength as flexible AI systems. What they hold in common is an acknowledgement of their debt to theoretical concepts from cognitive psychology and a concern with the same intellectual abilities as humans.


About Pat Langley

Pat Langley (born May 2, 1953) is an American cognitive scientist and AI researcher, Honorary Professor of Computer Science at the University of Auckland, and Director of the Institute for the Study of Learning and Expertise. He coined the term "decision stump" and was the founding editor of the journals Machine Learning and Advances in Cognitive Systems.

Also Known As

Alternative Names: Patrick W. Langley; Pat (Patrick) Wyatt Langley


Additional quotes by Pat Langley

Science is a seamless web: each idea spins out to a new research task, and each research finding suggests a repair or an elaboration of the network of theory. Most of the links connecting the nodes are short, each attaching to its predecessors. Weaving our way through the web, we stop from time to time to rest and survey the view — and to write a paper or a book.

In recent years, researchers have made considerable progress on the theoretical analysis of inductive learning tasks, but for theoretical results to have impact on practice, they must deal with the average case. In this paper we present an average-case analysis of a simple algorithm that induces one-level decision trees for concepts defined by a single relevant attribute. Given knowledge about the number of training instances, the number of irrelevant attributes, the amount of class and attribute noise, and the class and attribute distributions, we derive the expected classification accuracy over the entire instance space. We then examine the predictions of this analysis for different settings of these domain parameters, comparing them to experimental results to check our reasoning.
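The algorithm described in this quote is the one behind the term Langley coined: a one-level decision tree, or decision stump, that tests a single attribute and predicts the majority class for each of its values. A minimal sketch of the induction step (not code from the paper; the function names and toy data are illustrative) might look like this in Python:

```python
from collections import Counter, defaultdict

def induce_stump(instances, labels):
    """Induce a one-level decision tree (decision stump): choose the single
    attribute whose per-value majority vote best fits the training data."""
    n_attrs = len(instances[0])
    best = None  # (training accuracy, attribute index, value -> class map)
    for a in range(n_attrs):
        # Count class labels for each observed value of attribute a.
        votes = defaultdict(Counter)
        for x, y in zip(instances, labels):
            votes[x[a]][y] += 1
        # Each branch predicts the majority class for its attribute value.
        branch = {v: c.most_common(1)[0][0] for v, c in votes.items()}
        correct = sum(branch[x[a]] == y for x, y in zip(instances, labels))
        acc = correct / len(labels)
        if best is None or acc > best[0]:
            best = (acc, a, branch)
    return best

def predict(stump, x, default=None):
    """Classify an instance by looking up the tested attribute's value."""
    _, a, branch = stump
    return branch.get(x[a], default)

# Toy data: attribute 0 is the single relevant attribute; attribute 1 is noise.
X = [(0, 1), (0, 0), (1, 1), (1, 0)]
y = ["neg", "neg", "pos", "pos"]
acc, attr, branch = induce_stump(X, y)  # selects attribute 0
```

The average-case analysis in the quote then asks what classification accuracy such a stump achieves in expectation, given the number of training instances, the noise levels, and the class and attribute distributions.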
