Lars Kotthoff

Assistant Professor

larsko@uwyo.edu

EN 4071A
Department of Computer Science
University of Wyoming
Dept 3315, 1000 E University Ave
Laramie, WY 82071-2000

My research combines artificial intelligence and machine learning to build robust systems with state-of-the-art performance. I develop techniques to induce models of how algorithms for solving computationally difficult problems behave in practice. Such models make it possible to select the best algorithm and the best parameter configuration for solving a given problem. I lead the Meta-Algorithmics, Learning and Large-scale Empirical Testing (MALLET) lab and have acquired more than $400K in external funding to date. I direct the Artificially Intelligent Manufacturing (AIM) center at the University of Wyoming.
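
To make this concrete, here is a minimal sketch of how per-instance algorithm selection can be framed as a classification problem in R. This is not code from MALLET or any project below; all data, feature names, and solver names are synthetic, purely for illustration.

    # minimal sketch: per-instance algorithm selection as classification
    # all data, features, and solver names below are synthetic
    library(rpart)

    set.seed(1)
    # hypothetical instance features
    features <- data.frame(f1 = runif(200), f2 = runif(200))
    # hypothetical runtimes of two solvers on the same instances
    runtimes <- data.frame(solverA = features$f1 + rnorm(200, sd = 0.1),
                           solverB = features$f2 + rnorm(200, sd = 0.1))
    # label each instance with its fastest solver
    best <- factor(names(runtimes)[apply(runtimes, 1, which.min)])

    # induce a model that predicts the best solver from the features
    selector <- rpart(best ~ ., data = cbind(features, best = best))
    predict(selector, newdata = features[1:5, ], type = "class")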

More broadly, I am interested in innovative ways of modelling and solving challenging problems and in applying such approaches to the real world. Part of this is making cutting-edge research available to and usable by non-experts. Machine learning often plays a crucial role in this, and I am also working on making machine learning more accessible and easier to use.

Interested in coming to beautiful Wyoming and joining MALLET? There are several funded PhD positions available. Please drop me an email or, if you are already here, come by my office. I also have projects available for Master's students and for undergraduates seeking research experience; several of them are usually posted on the board opposite my office.

Follow me on Twitter.

News

Publications

For citation numbers, please see my Google Scholar page.

2019

  • Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren, eds. Automated Machine Learning: Methods, Systems, Challenges. Springer, 2019. bibTeX

  • Schwarz, Hannes, Lars Kotthoff, Holger Hoos, Wolf Fichtner, and Valentin Bertsch. “Improving the Computational Efficiency of Stochastic Programs Using Automated Algorithm Configuration: An Application to Decentralized Energy Systems.” Annals of Operations Research, January 2019. https://doi.org/10.1007/s10479-018-3122-6. preprint PDF bibTeX abstract

    The optimization of decentralized energy systems is an important practical problem that can be modeled using stochastic programs and solved via their large-scale, deterministic-equivalent formulations. Unfortunately, using this approach, even when leveraging a high degree of parallelism on large high-performance computing systems, finding close-to-optimal solutions still requires substantial computational effort. In this work, we present a procedure to reduce this computational effort substantially, using a state-of-the-art automated algorithm configuration method. We apply this procedure to a well-known example of a residential quarter with photovoltaic systems and storage units, modeled as a two-stage stochastic mixed-integer linear program. We demonstrate that the computing time and costs can be substantially reduced by up to 50% by use of our procedure. Our methodology can be applied to other, similarly-modeled energy systems.
  • Lindauer, Marius, Jan N. van Rijn, and Lars Kotthoff. “The Algorithm Selection Competitions 2015 and 2017.” Artificial Intelligence 272 (2019): 86–100. https://doi.org/10.1016/j.artint.2018.10.004. preprint PDF bibTeX abstract

    The algorithm selection problem is to choose the most suitable algorithm for solving a given problem instance. It leverages the complementarity between different approaches that is present in many areas of AI. We report on the state of the art in algorithm selection, as defined by the Algorithm Selection competitions in 2015 and 2017. The results of these competitions show how the state of the art improved over the years. We show that although performance in some cases is very good, there is still room for improvement in other cases. Finally, we provide insights into why some scenarios are hard, and pose challenges to the community on how to advance the current state of the art.
  • Iqbal, Md Shahriar, Lars Kotthoff, and Pooyan Jamshidi. “Transfer Learning for Performance Modeling of Deep Neural Network Systems.” In USENIX Conference on Operational Machine Learning. Santa Clara, CA: USENIX Association, 2019. preprint PDF bibTeX abstract

    Modern deep neural network (DNN) systems are highly configurable with a large number of options that significantly affect their non-functional behavior, for example inference time and energy consumption. Performance models make it possible to understand and predict the effects of such configuration options on system behavior, but are costly to build because of large configuration spaces. Performance models from one environment cannot be transferred directly to another; usually models are rebuilt from scratch for different environments, for example different hardware. Recently, transfer learning methods have been applied to reuse knowledge from performance models trained in one environment in another. In this paper, we perform an empirical study to understand the effectiveness of different transfer learning strategies for building performance models of DNN systems. Our results show that transferring information on the most influential configuration options and their interactions is an effective way of reducing the cost to build performance models in new environments.

2018

  • Kotthoff, Lars, Alexandre Fréchette, Tomasz P. Michalak, Talal Rahwan, Holger H. Hoos, and Kevin Leyton-Brown. “Quantifying Algorithmic Improvements over Time.” In 27th International Joint Conference on Artificial Intelligence (IJCAI) Special Track on the Evolution of the Contours of AI, 2018. preprint PDF bibTeX abstract

    Assessing the progress made in AI and contributions to the state of the art is of major concern to the community. Recently, Fréchette et al. [2016] advocated performing such analysis via the Shapley value, a concept from coalitional game theory. In this paper, we argue that while this general idea is sound, it unfairly penalizes older algorithms that advanced the state of the art when introduced, but were then outperformed by modern counterparts. Driven by this observation, we introduce the temporal Shapley value, a measure that addresses this problem while maintaining the desirable properties of the (classical) Shapley value. We use the temporal Shapley value to analyze the progress made in (i) the different versions of the Quicksort algorithm; (ii) the annual SAT competitions 2007–2014; (iii) an annual competition of Constraint Programming, namely the MiniZinc challenge 2014–2016. Our analysis reveals novel insights into the development made in these important areas of research over time.
See all

Software

  • Maintainer of the FSelector R package.

  • Author and maintainer of LLAMA, an R package that simplifies common algorithm selection tasks such as training a classifier as a portfolio selector.

  • Core contributor to the mlr R package (GitHub) for all things machine learning in R; a short usage sketch follows this list.

  • Leading the Auto-WEKA project, which brings automated machine learning to WEKA.
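
To give a flavor of what working with mlr looks like, here is a minimal sketch of its basic train-and-evaluate workflow. The dataset and learner are arbitrary examples, not recommendations.

    # minimal sketch of the mlr workflow on a built-in dataset
    library(mlr)

    # define a classification task
    task <- makeClassifTask(data = iris, target = "Species")
    # pick a learner; "classif.rpart" is just an example choice
    learner <- makeLearner("classif.rpart")

    # estimate accuracy with 10-fold cross-validation
    resample(learner, task, cv10, measures = acc)

The same learner objects plug into the rest of the mlr ecosystem; LLAMA, for instance, can use them to build portfolio selectors.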

Teaching

  • I am teaching COSC 3020 (Algorithms and Data Structures) this semester. Lecture materials, assignments, announcements, etc. are available on WyoCourses.
  • I am teaching a practical machine learning course using mlr. The slides are available here.
  • If you are interested in the AI reading group, check out the list of proposed papers here.

Awards

Other

Apart from my main affiliation, I am a research associate with the Maya Research Program. If I'm not in the office, you may find me in the jungle of Belize, excavating and/or mapping Maya ruins. Check out the interactive map.

I am also involved with the OpenML project and am a core contributor to ASlib, the benchmark library for algorithm selection.

While you're here, have a look at my overview of the Algorithm Selection literature. For something more visual, check out my pictures on Flickr.