Lars Kotthoff

Assistant Professor

larsko@uwyo.edu

EN 4071A
Department of Computer Science
University of Wyoming
Dept 3315, 1000 E University Ave
Laramie, WY 82071-2000

My research combines artificial intelligence and machine learning to build robust systems with state-of-the-art performance. I develop techniques to induce models of how algorithms for solving computationally difficult problems behave in practice. Such models make it possible to select the best algorithm and the best parameter configuration for solving a given problem. I lead the Meta-Algorithmics, Learning and Large-scale Empirical Testing (MALLET) lab and have acquired more than $400K in external funding to date. I direct the Artificially Intelligent Manufacturing (AIM) center at the University of Wyoming.
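
As a rough illustration of the idea, here is a minimal sketch with synthetic data and a hypothetical pair of solvers "A" and "B" (my actual work uses much richer instance features, performance models, and configuration spaces):

    ## learn which of two solvers to run from simple instance features
    library(rpart)

    set.seed(1)
    instances <- data.frame(nvars   = runif(200, 10, 1000),  # toy instance features
                            density = runif(200))
    runtime_a <- with(instances, 0.01 * nvars + rnorm(200, sd = 2))  # synthetic runtimes
    runtime_b <- with(instances, 5 * density + rnorm(200, sd = 2))

    ## label each instance with the solver that happened to be faster
    instances$best <- factor(ifelse(runtime_a < runtime_b, "A", "B"))

    ## induce a selection model and query it for a new instance
    selector <- rpart(best ~ nvars + density, data = instances)
    predict(selector, newdata = data.frame(nvars = 500, density = 0.2), type = "class")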

More broadly, I am interested in innovative ways of modelling and solving challenging problems and applying such approaches to the real world. Part of this is making cutting-edge research available to and usable by non-experts. Machine learning often plays a crucial role in this, and I am also working on making machine learning more accessible and easier to use.

Interested in coming to beautiful Wyoming and joining MALLET? There are several funded PhD positions available. Please drop me an email or, if you are already here, come by my office. I also have Master's projects available, as well as projects for undergraduates seeking research experience. There are usually several of them posted on the board opposite my office.

Follow me on Twitter.

News

Publications

For citation numbers, please see my Google Scholar page.

2019

  • Schwarz, Hannes, Lars Kotthoff, Holger Hoos, Wolf Fichtner, and Valentin Bertsch. “Improving the Computational Efficiency of Stochastic Programs Using Automated Algorithm Configuration: an Application to Decentralized Energy Systems.” Annals of Operations Research, January 2019. https://doi.org/10.1007/s10479-018-3122-6. preprint PDF bibTeX abstract

    The optimization of decentralized energy systems is an important practical problem that can be modeled using stochastic programs and solved via their large-scale, deterministic-equivalent formulations. Unfortunately, using this approach, even when leveraging a high degree of parallelism on large high-performance computing systems, finding close-to-optimal solutions still requires substantial computational effort. In this work, we present a procedure to reduce this computational effort substantially, using a state-of-the-art automated algorithm configuration method. We apply this procedure to a well-known example of a residential quarter with photovoltaic systems and storage units, modeled as a two-stage stochastic mixed-integer linear program. We demonstrate that the computing time and costs can be substantially reduced by up to 50% by use of our procedure. Our methodology can be applied to other, similarly-modeled energy systems.
  • Lindauer, Marius, Jan N. van Rijn, and Lars Kotthoff. “The Algorithm Selection Competitions 2015 and 2017.” Artificial Intelligence 272 (2019): 86–100. https://doi.org/10.1016/j.artint.2018.10.004. preprint PDF bibTeX abstract

    The algorithm selection problem is to choose the most suitable algorithm for solving a given problem instance. It leverages the complementarity between different approaches that is present in many areas of AI. We report on the state of the art in algorithm selection, as defined by the Algorithm Selection competitions in 2015 and 2017. The results of these competitions show how the state of the art improved over the years. We show that although performance in some cases is very good, there is still room for improvement in other cases. Finally, we provide insights into why some scenarios are hard, and pose challenges to the community on how to advance the current state of the art.

2018

  • Kotthoff, Lars, Alexandre Fréchette, Tomasz P. Michalak, Talal Rahwan, Holger H. Hoos, and Kevin Leyton-Brown. “Quantifying Algorithmic Improvements over Time.” In 27th International Joint Conference on Artificial Intelligence (IJCAI) Special Track on the Evolution of the Contours of AI, 2018. preprint PDF bibTeX abstract

    Assessing the progress made in AI and contributions to the state of the art is of major concern to the community. Recently, Fréchette et al. [2016] advocated performing such analysis via the Shapley value, a concept from coalitional game theory. In this paper, we argue that while this general idea is sound, it unfairly penalizes older algorithms that advanced the state of the art when introduced, but were then outperformed by modern counterparts. Driven by this observation, we introduce the temporal Shapley value, a measure that addresses this problem while maintaining the desirable properties of the (classical) Shapley value. We use the temporal Shapley value to analyze the progress made in (i) the different versions of the Quicksort algorithm; (ii) the annual SAT competitions 2007–2014; (iii) an annual competition of Constraint Programming, namely the MiniZinc challenge 2014–2016. Our analysis reveals novel insights into the development made in these important areas of research over time.
  • Degroote, Hans, Patrick De Causmaecker, Bernd Bischl, and Lars Kotthoff. “A Regression-Based Methodology for Online Algorithm Selection.” In 11th International Symposium on Combinatorial Search (SoCS), 37–45, 2018. preprint PDF bibTeX abstract

    Algorithm selection approaches have achieved impressive performance improvements in many areas of AI. Most of the literature considers the offline algorithm selection problem, where the initial selection model is never updated after training. However, new data from running algorithms on instances becomes available when algorithms are selected and run. We investigate how this online data can be used to improve the selection model over time. This is especially relevant when insufficient training instances were used, but potentially improves the performance of algorithm selection in all cases. We formally define the online algorithm selection problem and model it as a contextual multi-armed bandit problem, propose a methodology for solving it, and empirically demonstrate performance improvements. We also show that our online algorithm selection method can be used when no training data whatsoever is available, a setting where offline algorithm selection cannot be used. Our experiments indicate that a simple greedy approach achieves the best performance.
  • Bhuiyan, Faisal H., Lars Kotthoff, and Ray S. Fertig. “A Machine Learning Technique to Predict Static Multi-Axial Failure Envelope of Laminated Composites.” In American Society for Composites 33rd Annual Technical Conference, 2018. preprint PDF bibTeX abstract

    A machine learning technique was used to predict static failure envelopes of unidirectional composite laminas under combined normal (longitudinal or transverse) and shear loading at different biaxial ratios. An artificial neural network was chosen for this purpose due to its superior computational efficiency and ability to handle nonlinear relationships between inputs and outputs. Training and test data for the neural network were taken from the experimental composite failure data for glass- and carbon-fiber reinforced epoxies provided by the world-wide failure exercise (WWFE) program. A quadratic, stress-interactive Tsai-Wu failure theory was calibrated based on the reported strength values, as well as optimized from the experimental failure data points. The prediction made by the neural network was compared against the Tsai-Wu failure criterion predictions and it was observed that the trained neural network provides a better representation of the experimental data.
See all

Software

  • Maintainer of the FSelector R package.

  • Author and maintainer of LLAMA, an R package to simplify common algorithm selection tasks such as training a classifier as a portfolio selector.

  • Core contributor to the mlr R package (Github) for all things machine learning in R; a short usage sketch follows this list.

  • Leading the Auto-WEKA project, which brings automated machine learning to WEKA.
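
For a flavor of what working with mlr looks like, here is a short sketch of its standard workflow (just the generic task/learner/train/predict pattern on the built-in iris data, not tied to any particular project of mine):

    ## define a task, pick a learner, train, predict, evaluate
    library(mlr)

    task    <- makeClassifTask(data = iris, target = "Species")
    learner <- makeLearner("classif.rpart")
    model   <- train(learner, task)
    pred    <- predict(model, task = task)
    performance(pred, measures = acc)   # accuracy, here on the training data

The same pattern extends to resampling, tuning, and benchmarking via functions such as resample(), tuneParams(), and benchmark().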

Teaching

  • I am teaching COSC 3020 (Algorithms and Data Structures) this semester. Lecture materials, assignments, announcements, etc. are available on WyoCourses.
  • I am teaching a practical machine learning course using mlr. The slides are available here.
  • If you are interested in the AI reading group, check out the list of proposed papers here.

Awards

Other

Apart from my main affiliation, I am a research associate with the Maya Research Program. If I'm not in the office, you may find me in the jungle of Belize excavating and/or mapping Maya ruins. Check out the interactive map.

I am also involved with the OpenML project and a core contributor to ASlib, the benchmark library for algorithm selection.

While you're here, have a look at my overview of the Algorithm Selection literature. For something more visual, have a look at my pictures on Flickr.