Shlemiel gets a job as a street painter, painting the dotted lines down the middle of the road. On the first day he takes a can of paint out to the road and finishes 300 yards of the road. "That’s pretty good!" says his boss, "you’re a fast worker!" and pays him a kopeck.

The next day Shlemiel only gets 150 yards done. "Well, that’s not nearly as good as yesterday, but you’re still a fast worker. 150 yards is respectable," and pays him a kopeck.

The next day Shlemiel paints 30 yards of the road. "Only 30!" shouts his boss. "That’s unacceptable! On the first day you did ten times that much work! What’s going on?"

"I can’t help it," says Shlemiel. "Every day I get farther and farther away from the paint can!"


The other day I decided to dig out one of my old university assignments. In my second year we had an AI sub-module, in which the programming assignment was to solve a number of instances of the Travelling Salesman Problem.

The Travelling Salesman Problem

The full background can be found on the Wikipedia page, but essentially, the Travelling Salesman Problem is:

Given a set of cities and the distances between them, find the shortest tour that visits each city exactly once and returns to its starting point.

This turns out to be a hard problem: no known algorithm finds an exact solution in less than exponential time. However, there are a number of approximations which can be used to produce very good solutions much more quickly. The assignment was about choosing some of these approximations to find solutions to a number of instances of the problem.

Digging out the code

The code was previously stored as an Eclipse project on my local machine. It was originally written in 2008 by Past Me, who was young and stupid. As such there are a number of issues with the code:

  • Past Me was only dimly aware of source control and believed it required a server, so there is no version history.
  • Past Me had no idea how to lay out a project, so the folder structure makes no sense.

The first step was to reorganise the code and sort out the folder structure in preparation for committing to GitHub. I set up a Maven module and moved the files around accordingly. The code can now be found on GitHub in repo [1].

The rest of this exercise will involve cleaning up the Greedy Best First Search implementation.

The Greedy Best First Search algorithm works as follows:

  • Choose a starting city
  • Iterate through the graph, at each step taking the shortest edge to a city that has not yet been visited
  • Finish up in the starting city

This process is repeated for each starting city so that the choice of starting point does not unduly influence the final tour.

This algorithm visits each of the n cities, scans up to n edges at each city, and repeats the whole process for each of the n possible starting cities, so a naive implementation should run in O(n³).

Dusting it off

Past Me had a terrible coding style, as he had not yet discovered IntelliJ IDEA and its code inspections. The next job was to use the automated refactorings to clean up some small issues:

  • Fixing access levels
  • Removing dead code
  • Adding final declarations where appropriate (mutable state isn’t your friend)
  • Updating the code from Java 5 to Java 8 to allow the use of (among other things) the diamond notation
  • Using interface types instead of implementations where appropriate

Running the code

Having given the project a cursory polish, it was time to run it. As a first attempt I ran the code without arguments, to be greeted by the following message:

Some sort of error occurred.  Check the file was formatted correctly

Thanks Past Me. Sure enough:

    
    public static void main(String[] args){
      Parser parser = null;
      try{
        parser = new Parser(args[0]);
      }
      catch(Exception e){
        System.out.println("Some sort of error occurred.  Check the file was formatted correctly");
        return;
      }
      // remaining code omitted
    }

Past Me clearly knew about Exceptions, but didn’t always know when not to catch them. Here it would be clearer to just throw the Exception so that useful information about the problem is printed out.
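
A minimal sketch of the fix: declare the checked exception on main and let the failure propagate, so the stack trace reports the real cause. (This matches the signature the search classes end up with later.)

    public static void main(final String[] args) throws IOException {
      final Parser parser = new Parser(args[0]);
      // remaining code omitted
    }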

Clearing this up (along with a similar issue in the code that writes the output at the end) and running the program on the files I still had produced the following running times (I’ve included my times from 10 years ago as a reference):

Filename   2006 MacBook (s)   2017 MacBook Pro (s)
--------   ----------------   --------------------
file12     0                  0
file17     0                  0
file21     0                  0
file26     0                  0
file42     0                  0
file48     0                  0
file58     1                  0
file175    89                 8
file180    99                 25
file535    9401               1385

Modern hardware has clearly gotten much faster.

Modernising the code

Having run the code, it was time to look for improvements. In the intervening 10 years I’ve become much more interested in a functional programming style, so I planned to look through the code and identify places where that style could be applied.

Past Me didn’t know about unit testing, so unfortunately there were no tests to verify that correctness had been maintained, but re-running the code and checking that the output didn’t change would be good enough.

I chose to process the files in execution order. Past Me did a reasonable job of breaking the code up into modules:

  • Parser.java contains code for reading the input files
  • Graph.java contains a graph data structure for processing the input. There are also Node.java and Arc.java containing helper data structures
  • Path.java contains code for managing a solution or part thereof
  • Writer.java contains code for writing the final solution to a file

Some of the APIs were questionable, but the high-level structure was broadly sound. By proceeding through each file in turn, I would be able to improve the code without major breakages.

Parser.java

The initial version of the Parser constructor looked like this:

    @SuppressWarnings("unchecked")
    public Parser(final String filename) throws IOException {
      final BufferedReader reader = new BufferedReader(new FileReader(filename));
      String line = reader.readLine();
      String data = "";
      while(line != null){
        data+=line;
        line = reader.readLine();
      }
      final String[] parts = data.split(",");
      for(int i = 0;i<parts.length;i++){
        parts[i] = parts[i].trim();
      }

      final String[] nameParts = parts[0].split("=");
      name = nameParts[1].trim();

      final String[] sizeParts = parts[1].split("=");
      final String sizeString = sizeParts[1].trim();
      size = Integer.parseInt(sizeString);

      final HashSet<Node<Integer>> nodes = new HashSet<>();
      final Node<Integer>[] nodeArray = new Node[size];
      for(int i = 0;i<size;i++){
        nodeArray[i] = new Node<>(i + 1);
      }
      Collections.addAll(nodes, nodeArray);

      final HashSet<Arc<Integer>> arcs = new HashSet<>();
      int j = 2;
      for(int k = 0;k<size;k++){
        for(int l = k+1;l<size;l++){
          final int weight = Integer.parseInt(parts[j]);
          arcs.add(new Arc<>(nodeArray[k], nodeArray[l], weight));
          arcs.add(new Arc<>(nodeArray[l], nodeArray[k], weight));
          j++;
        }
      }

      graph = new Graph<>(nodes, arcs);
    }
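
For reference, the input format implied by this parsing is a single comma-separated list: a name, a size, and then the upper triangle of the distance matrix, row by row. A hypothetical three-city file (reconstructed from the code above, not one of the actual assignment files) might read:

    name = tiny, size = 3, 5, 9, 4

giving distance(1, 2) = 5, distance(1, 3) = 9 and distance(2, 3) = 4.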

The unclosed BufferedReader aside, this is not brilliant, but not completely terrible by Java 6 standards. However, Java 8 introduced Streams, which give us the tools to clean up things like this.

The reading of the input file can be replaced with Files.lines, which is shorter and, when wrapped in try-with-resources, fixes the unclosed Reader issue.

    final String data;
    try (final Stream<String> lines = Files.lines(Paths.get(filename))) {
        data = lines.collect(Collectors.joining());
    }

The extraction of the values and the generation of the List<Node> can be replaced with a couple of simple map operations.

    final List<String> parts = Arrays.stream(data.split(","))
            .map(String::trim)
            .collect(Collectors.toList());

    // code omitted

    final List<Node<Integer>> nodes = IntStream.range(1, size + 1)
            .boxed()
            .map(Node::new)
            .collect(Collectors.toList());

The double loop at the end can be replaced by an Iterator over the edge weights, and a pair of nested flatMap operations. Determining whether this is better or worse is left as an exercise for the reader.

    final Iterator<Integer> weights = parts.subList(2, parts.size()).stream()
            .map(Integer::parseInt)
            .iterator();

    final Set<Arc<Integer>> arcs = IntStream.range(0, size)
            .boxed()
            .flatMap(k -> IntStream.range(k + 1, size)
                    .boxed()
                    .flatMap(l -> {
                        final int weight = weights.next();
                        return Stream.of(
                                new Arc<>(nodes.get(k), nodes.get(l), weight),
                                new Arc<>(nodes.get(l), nodes.get(k), weight)
                        );
                    }))
            .collect(Collectors.toSet());

None of these changes are likely to boost performance, but they make the code more functional in style.

Graph.java

The first observation was that the Node class added no value: it was merely a wrapper around an instance of type T. So the first job was to expunge this class from the program.
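
The original Node class isn’t shown in the post, but from the way it is used (new Node<>(i + 1), Node::new) it was presumably little more than the following. This is a reconstructed sketch, not the original source:

    // a wrapper class that adds nothing over the value it holds
    public class Node<T> {
        private final T contents;

        public Node(final T contents) {
            this.contents = contents;
        }

        public T getContents() {
            return contents;
        }
    }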

Most of the code in Graph was only used by an experimental subclass, which was never finished. Removing that and the associated methods shortened Graph considerably.

    public class Graph<T> {

        private final Set<T> nodes;
        protected final Set<Arc<T>> arcs;

        public Graph(final Set<T> nodes, final Set<Arc<T>> arcs) {
            this.nodes = nodes;
            this.arcs = arcs;
        }

        public Set<T> getNodes() {
            return nodes;
        }

        public Set<Arc<T>> getArcs() {
            return arcs;
        }

        public Set<Arc<T>> startingAt(final T node) {
            final HashSet<Arc<T>> startingAt = new HashSet<>();
            for (final Arc<T> arc : arcs) {
                if (arc.isStart(node)) {
                    startingAt.add(arc);
                }
            }
            return startingAt;
        }

        public int distance(final T n1, final T n2) throws IllegalArgumentException {
            if (succeeds(n1, n2)) {
                final Iterable<Arc<T>> arcs = startingAt(n1);
                int distance = 0;
                for (final Arc<T> arc : arcs) {
                    if (arc.isEnd(n2)) {
                        distance = arc.getWeight();
                    }
                }
                return distance;
            } else {
                throw new IllegalArgumentException("Nodes not adjacent");
            }
        }
    }

The Graph is an immutable data structure (well done Past Me), but doesn’t take full advantage of this. The startingAt method naively runs through the entire graph, which is n² edges. This is Shlemiel the Painter’s algorithm. This drives the time complexity of the program up to O(n⁴).

One solution here is to build a cache, a Map<T, Set<Arc<T>>> mapping each node to the arcs starting from it, which can be built when the graph is constructed. Building it requires O(n²) time, but this only has to be done once, so it is no worse than the complexity of the program as a whole. The startingAt method then runs in constant time.

The distance method also runs in O(n²) by virtue of calling startingAt, and it calls succeeds, which likewise iterates through the whole graph. Next to these performance issues, the fact that it pointlessly keeps scanning arcs after finding the one it needs (Past Me obviously hadn’t discovered break) is a minor concern. The loop can be replaced with a filter and a findFirst, which short-circuits as soon as the correct edge is found. The absent case of the Optional is irrelevant because Travelling Salesman Problem graphs are complete: every city is adjacent to every other.

The terrible succeeds method can then be removed.

    private final Map<T, Set<Arc<T>>> table = new HashMap<>();

    private final Set<T> nodes;

    public Graph(final Set<T> nodes, final Set<Arc<T>> arcs) {
        this.nodes = nodes;

        // index each arc under its start node, so lookups are constant time
        for (final Arc<T> arc : arcs) {
            table.computeIfAbsent(arc.getStart(), k -> new HashSet<>()).add(arc);
        }
    }

    public Set<Arc<T>> startingAt(final T node) {
        return table.get(node);
    }

    public int distance(final T n1, final T n2) throws IllegalArgumentException {
        final Optional<Integer> weight = startingAt(n1).stream()
                .filter(a -> a.getEnd().equals(n2))
                .findFirst()
                .map(Arc::getWeight);

        return weight.orElseThrow(() -> new IllegalArgumentException("Nodes not adjacent"));
    }

GreedyBestFirstSearch.java

Now we come to the algorithm itself.

    class GreedyBestFirstSearch {

        public static void main(final String[] args) throws IOException {
            final Parser parser = new Parser(args[0]);

            final Collection<Path> paths = new HashSet<>();

            for(final Integer node : parser.getGraph().getNodes()){
                final Path path = new Path();
                final int i = node;
                path.add(i,0);

                while(path.size()<parser.getSize()){
                    final Integer n = path.getLast();
                    final Set<Arc<Integer>> arcs = parser.getGraph().startingAt(n);
                    boolean added = false;

                    while(!added){
                        final Arc<Integer> arc = Collections.min(arcs);

                        if(!path.contains(arc.getEnd())){
                            path.add(arc.getEnd(), arc.getWeight());
                            added = true;
                        }
                        else{
                            arcs.remove(arc);
                        }
                    }
                }

                final Integer n = path.getLast();
                final Iterable<Arc<Integer>> arcs = parser.getGraph().startingAt(n);

                for(final Arc<Integer> arc : arcs){
                    if(arc.isEnd(node)){
                        path.add(i, arc.getWeight());
                    }
                }

                paths.add(path);
            }

            Writer.writeToFile(parser.getName(), parser.getSize(), Collections.min(paths));
        }
    }

The program itself is not bad, aside from its dependence on slow code. That said, there are a couple of improvements that can be made here.

If the Graph is modified to use a TreeSet as the basis of the cache, the code for finding the shortest edge can be simplified to the following. This also stops the search from calling arcs.remove on what is now the Graph’s internal cached set.

    final Optional<Arc<Integer>> shortestEdge = arcs.stream()
            .filter(arc -> !path.contains(arc.getEnd()))
            .findFirst();

This will find the shortest edge from the cache in O(log n) time, which brings the complexity of the program down to O(n² log n).
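
The corresponding change in the Graph constructor is small. A sketch, reusing the cache-building loop from earlier; it assumes Arc<T> implements Comparable by weight, which it must already, given the Collections.min calls:

    // store each node's arcs in a TreeSet, so they are kept sorted by weight
    // and stream() yields them shortest-first
    for (final Arc<T> arc : arcs) {
        table.computeIfAbsent(arc.getStart(), k -> new TreeSet<>()).add(arc);
    }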

We can also replace the code for completing the tour with a single call to the distance method on the graph.
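
That collapses the whole tour-closing loop into a single line, using the names already in scope in the code above:

    // close the tour: walk back from the last city to the starting city
    path.add(i, parser.getGraph().distance(path.getLast(), node));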

Putting it all together

Having modernised and optimised the code, I ran it again to see if there was any improvement.

Filename   2017 MacBook Pro (s)   2017 MacBook Pro, optimised (s)
--------   --------------------   -------------------------------
file12     0                      0
file17     0                      0
file21     0                      0
file26     0                      0
file42     0                      0
file48     0                      0
file58     0                      0
file175    8                      0
file180    25                     0
file535    1385                   3

This is a speed-up of nearly three orders of magnitude on the largest file. This demonstrates the importance of profiling for bottlenecks.