Change of Representation and Inductive Bias

One of the most important emerging concerns of machine learning researchers is the dependence of their learning programs on the underlying representations, especially on the languages used to describe hypotheses. The effectiveness of learning algorithms is very sensitive to this choice of language: choosing too large a language permits too many possible hypotheses for a program to consider, precluding effective learning, while choosing too small a language can prevent a program from finding acceptable hypotheses at all. This dependence is not just a pitfall, however; it is also an opportunity. The work of Saul Amarel over the past two decades has demonstrated the effectiveness of representational shift as a problem-solving technique, and an increasing number of machine learning researchers are building programs that learn to alter their language to improve their effectiveness.

At the Fourth Machine Learning Workshop, held in June 1987 at the University of California at Irvine, it became clear that both the machine learning community and the number of topics it addresses had grown so large that the representation issue could not be discussed in sufficient depth. A number of attendees were particularly interested in the related topics of constructive induction, problem reformulation, representation selection, and multiple levels of abstraction. Rob Holte, Larry Rendell, and I decided to hold a workshop in 1988 to discuss these topics. To keep the workshop small, we decided that participation would be by invitation only.