Given a sample of data S, a learning algorithm L, and a feature set A, feature xi is incrementally useful to L with respect to A if the accuracy of the hypothesis that L produces using the feature set {xi} ∪ A is better than the accuracy achieved using just the feature set A. - Pat Langley
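The definition above can be sketched directly in code: train L twice, once on A and once on A ∪ {xi}, and compare accuracies. The following is a minimal Python sketch, not Langley's method; the trivial majority-vote learner and all function names are illustrative assumptions.

```python
from collections import Counter, defaultdict

def majority_rule_accuracy(samples, features):
    """A trivial stand-in for learner L: group samples by the values of the
    chosen features, predict each group's majority class, and return the
    resulting training accuracy. samples is a list of (feature_dict, label)."""
    groups = defaultdict(list)
    for x, y in samples:
        key = tuple(x[f] for f in sorted(features))
        groups[key].append(y)
    correct = sum(Counter(labels).most_common(1)[0][1] for labels in groups.values())
    return correct / len(samples)

def incrementally_useful(samples, A, x_i):
    """Langley's test: is accuracy with {x_i} ∪ A better than with A alone?"""
    return majority_rule_accuracy(samples, A | {x_i}) > majority_rule_accuracy(samples, A)
```

On XOR-like data, for example, neither feature helps alone, but each is incrementally useful with respect to the other, which is exactly the distinction the definition is built to capture.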

About Pat Langley

Pat Langley (born May 2, 1953) is an American cognitive scientist and AI researcher, Honorary Professor of Computer Science at the University of Auckland, and Director of the Institute for the Study of Learning and Expertise. He coined the term decision stump and was founding editor of the journals Machine Learning and Advances in Cognitive Systems.

Also Known As

Alternative Names: Patrick W. Langley; Pat (Patrick) Wyatt Langley


Additional quotes by Pat Langley

A cognitive architecture specifies aspects of an intelligent system that are stable over time, much as in a building’s architecture. These include the memories that store perceptions, beliefs, and knowledge, the representation of elements that are contained in these memories, the performance mechanisms that use them, and the learning processes that build on them. Such a framework typically comes with a programming language and software environment that supports the efficient construction of knowledge-based systems.

In all of these cases, the error arose from accepting “loose” fits of a law to data, and the later, correct formulation provided a law that fit the data much more closely. If we wished to simulate this phenomenon with BACON, we would only have to set the error allowance generously at the outset, then set stricter limits after an initial law had been found.

In recent years, researchers have made considerable progress on the analysis of inductive learning tasks, but for theoretical results to have impact on practice, they must deal with the average case. In this paper we present an average-case analysis of a simple algorithm that induces one-level decision trees for concepts defined by a single relevant attribute. Given knowledge about the number of training instances, the number of irrelevant attributes, the amount of class and attribute noise, and the class and attribute distributions, we derive the expected classification accuracy over the entire instance space. We then examine the predictions of this analysis for different settings of these domain parameters, comparing them to experimental results to check our reasoning.
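The one-level decision tree described in this abstract is the "decision stump" mentioned in the bio above: a tree that tests a single attribute and predicts the majority class for each of its values. The following is a minimal Python sketch of such a learner under that reading; the paper's actual algorithm and analysis are not reproduced here, and all names are illustrative.

```python
from collections import Counter, defaultdict

def train_stump(X, y):
    """Induce a one-level decision tree (decision stump).
    X is a list of attribute-value rows, y the class labels. For each
    attribute, map every value to the majority class among rows with that
    value; keep the attribute whose rule scores best on the training data.
    Returns (attribute_index, {value: predicted_class})."""
    best = None
    for a in range(len(X[0])):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[row[a]].append(label)
        rule = {v: Counter(labels).most_common(1)[0][0] for v, labels in groups.items()}
        acc = sum(rule[row[a]] == label for row, label in zip(X, y)) / len(y)
        if best is None or acc > best[0]:
            best = (acc, a, rule)
    _, a, rule = best
    return a, rule

def predict_stump(stump, row, default=None):
    """Classify a row; fall back to default for unseen attribute values."""
    a, rule = stump
    return rule.get(row[a], default)
```

When the concept really is defined by a single relevant attribute, as the abstract assumes, the stump that selects that attribute classifies the noise-free instance space perfectly; the paper's analysis quantifies how irrelevant attributes and noise erode that accuracy in expectation.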
