This course offers an overview of learnability in constraint-based phonology (classical and stochastic Optimality Theory, classical and stochastic Harmonic Grammar, Maximum Entropy grammars). The language learning task is broken down into specific, explicitly stated learning problems. For each learning problem, various learning algorithms are investigated and compared, drawn from both the discrete/combinatorial and the numerical/probabilistic approaches pursued in the current literature. The focus is on analytical guarantees rather than simulation results alone.
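As a small illustration of the Maximum Entropy framework mentioned above, the sketch below computes candidate probabilities from constraint weights and violation counts. The constraint weights, candidate names, and violation profiles are invented for illustration, not taken from the course materials.

```python
import math

def maxent_probs(candidates, weights):
    """Compute MaxEnt candidate probabilities from constraint violations.

    candidates: dict mapping candidate name -> list of violation counts
                (one count per constraint)
    weights: list of non-negative constraint weights
    """
    # A candidate's harmony is the negative weighted sum of its violations.
    harmonies = {c: -sum(w * v for w, v in zip(weights, viols))
                 for c, viols in candidates.items()}
    z = sum(math.exp(h) for h in harmonies.values())  # normalizing constant
    return {c: math.exp(h) / z for c, h in harmonies.items()}

# Hypothetical two-candidate tableau with two constraints
# (say, a markedness and a faithfulness constraint).
cands = {"[pat]": [0, 1], "[pad]": [1, 0]}
probs = maxent_probs(cands, weights=[2.0, 1.0])
```

With these illustrative numbers, the candidate with the lower weighted violation total receives the larger share of probability, and the shares sum to one.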
Week 1 will focus on the problem of efficiently learning a grammar consistent with a set of linguistic training data. We will look at batch and error-driven algorithms for constraint ranking and weighting, exploring the learnability implications of different modes of constraint interaction. Week 2 will focus on the problem of learning a restrictive grammar. We will compare the classical approach to restrictiveness based on the Subset Principle with the maximum-likelihood approach in MaxEnt. Week 3 will focus on stochastic methods. We will look at stochastic error-driven learning (such as the Gradual Learning Algorithm, GLA) and its application to Robust Interpretive Parsing (RIP) for the problem of hidden structure. Week 4 will focus on the problem of learning underlying forms. We will address the efficient exploration of candidate lexicons and discuss the classical inconsistency-detection method.
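The stochastic error-driven learning discussed in Week 3 can be sketched as follows: each constraint carries a numeric ranking value, evaluation perturbs those values with Gaussian noise, and on an error the learner demotes the constraints that favored its incorrect output and promotes those favoring the observed form. The code below is a minimal sketch of such a GLA-style update; the constraint names, violation counts, plasticity, and noise values are illustrative assumptions, not the exact formulation covered in the course.

```python
import random

def gla_update(ranking, winner_viols, loser_viols, plasticity=0.1):
    """One error-driven update in the GLA style (a sketch, not the exact
    published formulation).

    ranking: dict constraint -> ranking value
    winner_viols / loser_viols: dict constraint -> violation count for the
    observed correct form (winner) and the learner's erroneous output (loser).
    """
    for c in ranking:
        w, l = winner_viols.get(c, 0), loser_viols.get(c, 0)
        if w > l:
            ranking[c] -= plasticity  # favors the learner's error: demote
        elif l > w:
            ranking[c] += plasticity  # favors the correct form: promote
    return ranking

def stochastic_ranking(ranking, noise=2.0, rng=random):
    """Sample a total ranking by perturbing values with Gaussian noise."""
    return sorted(ranking, key=lambda c: ranking[c] + rng.gauss(0, noise),
                  reverse=True)

# Hypothetical grammar: after one error, Max is promoted and *Coda demoted.
grammar = {"Max": 100.0, "*Coda": 100.0}
gla_update(grammar, winner_viols={"*Coda": 1}, loser_viols={"Max": 1})
```

Because evaluation reranks the constraints afresh on each use, nearby ranking values yield variable outputs, which is what makes this family of grammars stochastic.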