Project Euler #18 and #67

I’m hoping to get back to work on the regular expressions filter soon – should have some time over the Christmas/New Year break – but in the meantime, I took a look at the Euler problems related to number triangles. Both problems ask for the maximal path through adjacent points in the triangle – from problem 18:

By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

3
7 4
2 4 6
8 5 9 3

That is, 3 + 7 + 4 + 9 = 23.

Find the maximum total from top to bottom of the triangle below:

75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

It is of course possible to work out the sum of every single path, by starting at the top and trying every combination of left and right choices to get to the bottom, but for a triangle with n rows this means evaluating 2^(n-1) paths. This makes the brute-force approach impractical for the larger triangle in problem 67 (100 rows, so around 2^99 ≈ 6.3 × 10^29 paths); it also involves a whole lot of unnecessary calculation.
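For comparison, here is a minimal sketch of that brute-force recursion (my own illustration, not code from this post), which visits all 2^(n-1) root-to-base paths:

```python
def brute_force_max(triangle, row=0, col=0):
    """Try every left/right choice; triangle is a list of rows of ints."""
    if row == len(triangle) - 1:
        return triangle[row][col]
    left = brute_force_max(triangle, row + 1, col)        # step down-left
    right = brute_force_max(triangle, row + 1, col + 1)   # step down-right
    return triangle[row][col] + max(left, right)

small = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
print(brute_force_max(small))  # 23
```

Even without memoisation this gives the right answer for small triangles; it is the exponential number of calls that makes it hopeless at 100 rows.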

The key to seeing this is understanding two things: (1) every point in the triangle above the bottom row is itself the apex of a sub-triangle with its own maximal path, and (2) wherever two sub-triangles intersect, the intersection is itself a sub-triangle – so working out all the paths through the two intersecting triangles means working out every path through the intersection twice. For example, in the triangle

3
7 4
2 4 6
8 5 9 3

the two sub-triangles with apexes at 7 and 4 both contain the sub-triangle

4
5 9

with two possible paths. So calculating every possible path through the larger triangle involves traversing both these paths twice, once for each sub-triangle containing them. It’s easy to see that this gets worse quickly as the triangle gets larger.

A smarter approach is what might be termed a ‘bottom-up’ method. Working from the base upwards, calculate the maximal path sum for every point in each row. Once you have the maxima for two adjacent points in a row, the maximum for the point above them is simply that point’s value plus the greater of the two maxima below it. The advantage of this method is that the maximum for any point in the triangle is calculated once and only once – for an n-row triangle that is only n(n+1)/2 calculations. Here’s how this looks in Python.

First, a helper function get_triangle to read the string in and get a list of rows of integers making up the triangle:

def get_triangle(triangle_string):
    string_list = triangle_string.strip().split('\n')
    # reverse to work from the base up
    string_list.reverse()
    return [[int(x) for x in row.split()]
            for row in string_list]

Then, the main routine max_path works through each row, progressively replacing the list of maxima at each point in the previous row with the same for the current row:

def max_path(triangle_string):
    triangle = get_triangle(triangle_string)
    # every point in the base is its own maximum
    max_list = triangle[0]
    for row in range(1, len(triangle)):
        # add the greater of the two adjacent maxima to
        # each point in the row
        max_list = [triangle[row][x] + max(max_list[x], max_list[x + 1])
                    for x in range(len(triangle[row]))]
    return max_list[0]
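Putting the two functions together on the four-row example from problem 18 (definitions repeated here so the snippet stands alone) gives the expected 23:

```python
def get_triangle(triangle_string):
    string_list = triangle_string.strip().split('\n')
    # reverse to work from the base up
    string_list.reverse()
    return [[int(x) for x in row.split()] for row in string_list]

def max_path(triangle_string):
    triangle = get_triangle(triangle_string)
    # every point in the base is its own maximum
    max_list = triangle[0]
    for row in range(1, len(triangle)):
        # add the greater of the two adjacent maxima to each point
        max_list = [triangle[row][x] + max(max_list[x], max_list[x + 1])
                    for x in range(len(triangle[row]))]
    return max_list[0]

small = """
3
7 4
2 4 6
8 5 9 3
"""
print(max_path(small))  # 23
```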

That’s all for now – time permitting I will be posting the next part of the regular expression filter sometime in the next couple of days. Until then, hoping you all had a restful and happy Christmas and are taking some time off to be with family. Take care…

Functional Programming in Python

Before I get started I should acknowledge a number of sources that form the basis for this post. First and foremost, Guido van Rossum and Fred Drake, Jr.’s excellent ‘Functional Programming HOWTO’ for Python 2.6.2, and secondly David Mertz’s book ‘Text Processing in Python’, which both served to convince me of the benefits of functional programming. Check them out, they’re freely available and both extremely well written (although neither is really suitable for beginners). Also the Abelson and Sussman text ‘Structure and Interpretation of Computer Programs’ from which I borrow a couple of examples.

The programming languages I started out learning were mostly procedural: in C, a program solves a problem by having the computer execute a set of instructions in a particular order. I then moved on to object-oriented programming in languages like VB, Ruby and Python – solving a problem is accomplished by designing objects which model the problem, then reading and changing their internal state. Both of these models are imperative, in that they are characterised by telling the computer to do something. There is typically a lot of assignment to the different variables which make up the state machine of the program. For a very simple example, the function sum_to below finds the sum of the integers up to the argument n (the nth triangular number):

def sum_to(n):
  a = 0
  for x in range(n + 1):
    a += x
  return a

>>> sum_to(10)
55

This relies on the repeated assignment of new values to the variable a, which holds the value which is eventually returned. The state of the variables x and a is maintained throughout the program’s execution.

Functional programming, as the name suggests, is centred on functions – a program written in a functional style relies on the evaluation of functions, which turn input into output without the need to waste code on maintaining internal state. The sum_to function could be rewritten in a functional style as follows:

def sum_to(n):
  return sum(range(n+1))

>>> sum_to(10)
55

Python fully supports writing functional code, although as the examples above illustrate, you don’t have to. So why would you? Not having to maintain a potentially complex internal state machine makes functional code easier to write, in my opinion, and there may also be a small performance gain from keeping clutter out of the namespace. However, I find the two best reasons for using functional code are

  1. It encourages functional decomposition. This is a good thing. Rather than make the classic mistake of trying to solve a complex problem with a single monolithic procedure (which is of course usually possible but which will probably have those who maintain your code cursing your name for years to come), the program flow will ideally go from input through a series of small, easily debugged functions to eventual output. This is an approach which I think appeals to me at least in part because it’s similar to what I do when I construct a spreadsheet model. In fact, worksheet functions in any spreadsheet are a very good example of purely functional programming – each function simply takes input and produces output. It also appeals because it improves code reusability and composability.
  2. Higher-order functions. These are like function constructors: functions which take functions as arguments and return functions as outputs. This is an extremely convenient abstraction, as it enables you to concentrate on the concepts involved in the problem, rather than getting lost in the detail of how the concepts are handled.
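As a small illustration of point 2, here’s a toy example of my own (not from the sources above): a function that builds new functions from old ones, alongside reduce(), which takes a function as its first argument:

```python
from functools import reduce  # a builtin in Python 2, functools in Python 3

def compose(f, g):
    """Higher-order function: takes two functions, returns a new one."""
    return lambda x: f(g(x))

double = lambda x: 2 * x
square = lambda x: x * x

double_then_square = compose(square, double)
print(double_then_square(3))  # square(double(3)) = 36

# reduce folds a sequence with a two-argument function,
# e.g. 5! = ((((1*2)*3)*4)*5)
print(reduce(lambda a, b: a * b, range(1, 6), 1))  # 120
```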

Python has a lot of builtin tools to support higher-order functions: generator expressions and list comprehensions provide concise expressions for building large structures, and the reduce() function (functools.reduce in Python 3) iteratively applies a function to a sequence. To take an example from SICP, a useful functional abstraction is the summation of a series of terms. We can construct a general summation routine, which takes as arguments the function used to construct the terms, and the iterable that supplies the arguments to that function:

def sum_terms(term, objIter):
  '''
  sum_terms: general summation of a series
  Usage: term is a function taking a single argument
         objIter is an iterable supplying the arguments for term
  Returns: the sum of term(x) for each x in objIter
  '''
  return sum(term(x) for x in objIter)

Because this abstracts out the process of summation, the problem of constructing a large series is decomposed nicely into two smaller sub-problems. So, for instance, I can construct the triangular sum function from earlier by splitting the task into the construction of the term (the identity function) and the range to apply it to (for an argument n, the range of integers from 1 to n):

>>> identity = lambda x: x
>>> inc = lambda n: range(1, n + 1)
>>> sum_triangle = lambda x: sum_terms(identity, inc(x))
>>> sum_triangle(10)
55

I can also create a function for summing a series of cubes: \sum_{i=1}^n i^3

>>> cube = lambda x: x**3
>>> sum_cubes = lambda x: sum_terms(cube, inc(x))
>>> sum_cubes(10)
3025

A slightly more complex series can give an approximate numerical solution for the definite integral of a function over an interval:

\int_a^b f(x) \, dx \approx \delta\sum_{k=0}^{(b-a)/\delta - 1} f(a + \delta/2 + k\delta)

In this case, the arguments a and b set the limits of the interval and dx (playing the role of δ) sets the width of the subintervals used:

def integral(func, a, b, dx):
  def rng(a, b, dx): 
    return ((a + dx/2 + k*dx) for k in range(int((b-a)/dx)))
  return dx*sum_terms(func, rng(a, b, dx))

This gives a general integration routine which can be applied to any single-argument function. Testing it out on the cube function:

>>> integral(cube, 0, 1, 0.01)
0.24998750000000006

which is very close to the exact value of 1/4.
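The midpoint rule used here has an error that shrinks roughly as the square of dx, so a tenfold reduction in dx should shrink the error about a hundredfold. A quick check (repeating the integral logic inline so the snippet stands alone):

```python
def integral(func, a, b, dx):
    # midpoint rule: evaluate func at the centre of each subinterval
    midpoints = (a + dx/2 + k*dx for k in range(int((b - a)/dx)))
    return dx * sum(func(x) for x in midpoints)

cube = lambda x: x**3

for dx in (0.01, 0.001, 0.0001):
    print(dx, integral(cube, 0, 1, dx))  # approaches 0.25 as dx shrinks
```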

Why I’d like Python at work

One of the main reasons I like Python as a scripting language is its brevity – in particular I enjoy using things like list comprehensions, which let you escape the use of for loops when dealing with lists. Lists in Python are powerful and, used in combination with map and filter, allow you to do a lot more with a lot less code. This is one of the things I really miss when I write code in VBA/VBScript, where you’re often stuck dealing with either arrays or collections from the object model. Dive Into Python offers an interesting perspective on this.
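To make the comparison concrete, here’s a toy example of my own: the same filter-and-transform task as an explicit loop, as a list comprehension, and with map and filter:

```python
words = ['alpha', 'beta', 'gamma', 'delta']

# loop style: four lines of bookkeeping
result = []
for w in words:
    if len(w) > 4:
        result.append(w.upper())

# comprehension style: one line, same result
result2 = [w.upper() for w in words if len(w) > 4]

# or with map and filter
result3 = list(map(str.upper, filter(lambda w: len(w) > 4, words)))

print(result2)  # ['ALPHA', 'GAMMA', 'DELTA']
```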

Dick Kusleika posted a VBA sub to open the newest file in a folder over at Daily Dose of Excel today, which made use of the Scripting library to iterate over .csv files in a folder, find the newest and open it. Dick mentioned wanting an easier way to find the newest than looping over all of them, which got me thinking. If the Scripting.Folder object had a .NewestFile property, that would obviously do it, but failing that, I could not see any way other than looping through all the files to examine their creation dates.

In Python, however, the exercise becomes a short function with no explicit loop:

import os, os.path

def newest(ext, path=None):
  """
  pre: path a valid absolute filepath or None, ext a file
  extension (e.g. '.py')
  returns: newest file from path, by ctime (creation time on
  Windows, last metadata change on *nix)
  """
  if path is None:
    path = os.getcwd()
  file_dict = dict(
    (fname, os.path.getctime(os.path.join(path, fname)))
    for fname in os.listdir(path)
    if os.path.splitext(fname)[1] == ext)
  return max(file_dict, key=lambda x: file_dict[x])

I’m not at all sure whether there would be any performance benefit in doing it this way rather than looping over the files, but the real benefit is that it frees you from worrying about how to construct a for loop over the list of filenames returned from os.listdir(). All you need to worry about is which one of them you want.
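As a footnote, the intermediate dictionary isn’t strictly necessary – max() will happily take the key function directly over the filenames. A variation (my own rewrite, same behaviour assumed):

```python
import os
import os.path

def newest(ext, path=None):
  """Newest file with extension ext in path, by ctime."""
  if path is None:
    path = os.getcwd()
  candidates = (f for f in os.listdir(path)
                if os.path.splitext(f)[1] == ext)
  return max(candidates,
             key=lambda f: os.path.getctime(os.path.join(path, f)))
```

Like the dictionary version, this raises ValueError if no file with the given extension exists, so callers may want to guard against an empty folder.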