Coping with NP-hardness

Tuesday, 02/20/2018 4:00pm to 5:00pm
Computer Science Building, Room 151
Seminar

Abstract: The vast majority of optimization problems arising in practical applications are NP-hard. Thus, it is difficult to overstate the importance of understanding how to handle NP-hard problems algorithmically. When a problem is NP-hard, this means that (unless P=NP) no algorithm can solve all instances of the problem optimally in time polynomial in the size of the instance. However, this does not mean that algorithms stand powerless in the face of NP-hardness. Indeed, an NP-hard problem may still admit algorithms of the following kinds.

  • An algorithm that solves all instances of the problem optimally in time O(1.001^n), or even O(2^sqrt(n)), where n is the size of the input instance. Such algorithms are studied in the field of exact exponential-time algorithms, and may still be useful even for moderate-size worst-case instances.
  • An algorithm that solves all instances of the problem within a factor 1.01 of optimality in time polynomial in the size of the instance. In many cases, being just 1% worse than the optimal solution is acceptable. Such algorithms are studied in the field of approximation algorithms (see the first sketch after this list).
  • An algorithm that solves structurally simple instances of the problem optimally using time polynomial in the size of the instance. This is very useful if we know that the instances that we actually want to solve are structurally simple. Algorithms of this kind are studied in the fields of restricted input and parameterized algorithms.
  • An algorithm that works in polynomial time and reduces possibly large yet structurally simple instances to equivalent small instances. This can be very useful provided that the output instance is so small that a solution can be found by brute force (see the second sketch after this list). The performance of such algorithms is studied in the field of kernelization.
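
As a concrete illustration of the approximation-algorithms item, here is a minimal sketch of the textbook 2-approximation for Minimum Vertex Cover. The factor is coarser than the 1.01 mentioned above, but it shows the same idea: a polynomial-time algorithm with a provable guarantee on solution quality. The Python code, the graph representation, and the function name are assumptions made for this sketch, not material from the talk.

    # A classic 2-approximation for Minimum Vertex Cover: repeatedly pick an
    # uncovered edge and take both of its endpoints. The picked edges form a
    # matching, and any optimal cover must contain at least one endpoint of
    # each matched edge, so the returned cover is at most twice the optimum.
    def vertex_cover_2_approx(edges):
        """edges: iterable of (u, v) pairs of a simple undirected graph."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # Example: on this 4-cycle the optimum is 2; the algorithm returns 4.
    print(vertex_cover_2_approx([(1, 2), (2, 3), (3, 4), (4, 1)]))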
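
The kernelization item can be illustrated in the same spirit: the sketch below applies Buss's classical reduction rules for Vertex Cover parameterized by the solution size k, and then brute-forces the shrunken instance. Again, the function names and input representation are assumptions made for this sketch.

    # Buss's kernelization for Vertex Cover with budget k: a vertex whose
    # degree exceeds the remaining budget must be in every cover of size <= k,
    # and a fully reduced graph with more than budget^2 edges has no such
    # cover. The resulting small kernel is then solved by brute force.
    from itertools import combinations

    def buss_kernel(edges, k):
        """Returns (kernel_edges, remaining_budget), or None if the instance
        is provably a no-instance."""
        edges = {frozenset(e) for e in edges}
        budget = k
        changed = True
        while changed and budget >= 0:
            changed = False
            degree = {}
            for e in edges:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            for v, d in degree.items():
                if d > budget:
                    # v is forced into the cover: remove it and its edges.
                    edges = {e for e in edges if v not in e}
                    budget -= 1
                    changed = True
                    break
        if budget < 0 or len(edges) > budget * budget:
            return None
        return edges, budget

    def has_vertex_cover(edges, k):
        """Kernelize first, then brute-force the (small) kernel."""
        kernel = buss_kernel(edges, k)
        if kernel is None:
            return False
        kernel_edges, budget = kernel
        vertices = sorted({v for e in kernel_edges for v in e})
        return any(
            all(e & set(subset) for e in kernel_edges)
            for size in range(budget + 1)
            for subset in combinations(vertices, size)
        )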

In this talk, Daniel Lokshtanov will give a sample from each of the above fields, viewed through the lens of his contributions.

Bio: Daniel Lokshtanov received his PhD in Computer Science (2009) from the University of Bergen. Lokshtanov spent two years (2010-2012) as a Simons Postdoctoral Fellow at the University of California, San Diego, and is now a professor in the Department of Informatics at the University of Bergen. His main research interests are in graph algorithms, parameterized algorithms, and complexity. He is a recipient of the Meltzer Prize, the Bergen Research Foundation young researcher grant, and an ERC starting grant on parameterized algorithms. He is a co-author of the recent Parameterized Algorithms book [http://parameterized-algorithms.mimuw.edu.pl/].
