Thursday, September 18, 2008

The Limits of Proving Programs Correct

My university required senior computer science students to complete a capstone project before they graduated. The idea was to have every person perform a research project or internship that brought together all that he or she had learned over the last four years.

My capstone project was of the independent research variety. I'm very interested in formal logic and proof systems, and wanted to examine parallels between generating a formal proof and writing a computer program.

What I ended up doing was creating a system that started out with a formal program specification written in Z-notation, and simultaneously generated a working Spark-Ada program and a proof of that program's conformity to the Z spec.

(Since each line of code transforms the program from one state to another, you can figure out pre- and post-conditions for individual lines of code. The problem then becomes a search for a list of statements whose pre- and post-conditions chain together, taking the Z-specified preconditions to the Z-specified postconditions. The sequence of statements makes up the working code, and the chain of line-by-line pre- and post-conditions makes up the correctness proof.)

The programs it could generate were very simple (they could only use a ridiculously small subset of Ada). It could write methods that did things like swapping the values of two variables. But it worked, for the most part. It was a lot of fun to work on, and I learned a lot.
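That search can be sketched in miniature. The Python below (my capstone used Z and Spark-Ada, not Python, and this candidate-statement pool is invented for illustration) does a breadth-first search over simple assignments, tracking a symbolic state as the chained post-condition, until it reaches the swap postcondition:

```python
from collections import deque
from itertools import product

# Symbolic state: which original value each variable currently holds.
# The "spec" precondition: x holds A, y holds B, t is scratch space.
PRE = {"x": "A", "y": "B", "t": None}

def post(state):
    """The "spec" postcondition: the original values are swapped."""
    return state["x"] == "B" and state["y"] == "A"

VARS = ["x", "y", "t"]

def assign(dst, src):
    """One candidate statement, dst := src, as (source text, state transformer)."""
    return (f"{dst} := {src};", lambda state: {**state, dst: state[src]})

# Candidate pool: every simple assignment between distinct variables.
STATEMENTS = [assign(d, s) for d, s in product(VARS, VARS) if d != s]

def synthesize(max_len=4):
    """Breadth-first search for a statement sequence whose line-by-line
    pre/post-conditions chain from PRE to a state satisfying post()."""
    queue = deque([(PRE, [])])
    seen = {tuple(sorted(PRE.items()))}
    while queue:
        state, program = queue.popleft()
        if post(state):
            return program
        if len(program) >= max_len:
            continue
        for text, step in STATEMENTS:
            nxt = step(state)
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                queue.append((nxt, program + [text]))
    return None

print(synthesize())  # e.g. ['t := x;', 'x := y;', 'y := t;']
```

The returned statement list is the program, and the sequence of intermediate symbolic states is the chained line-by-line pre/post-condition proof.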

The biggest thing I learned was the limited utility of proving code correct.

Something a fellow named Benjamin Russell said on the Haskell mailing list today got me thinking about software proofs (and inspired this post). He said that, "[b]asically, if you can prove that the program is correct, then you don't need to test it."

I disagree.

What I learned doing my senior capstone is that when you prove a program correct, what you're actually proving is that the program conforms to some specification. Maybe it's a specification formally defined in a language like Z, or maybe it's a non-formal specification that resides in your head.

But really, a specification is prone to error just like program code. As your specification gets more detailed, it starts to become another programming language. With my capstone, all I really did was write a Z-to-Spark compiler. I'd shifted the opportunity for human error up a level -- now, instead of making mistakes in Ada code, people were free to make mistakes in their Z spec.

It's like saying you can write provably correct assembly code that matches a C specification. Maybe C or Z is an easier language to reason in, but you still have to take a lot of time to examine your specification, to make sure everything in the system meshes together -- really, the formal specification becomes your new coding language.

On the flip side of the coin, as your specification becomes less formal, that opportunity for error shifts back toward the coding process. Fewer constraints in the specification mean more freedom in writing the code, which means more opportunity for mistakes in your actual program.

I guess what I'm saying is that in this approach:

1) come up with an over-arching specification for everything
2) write code that matches the specification

being able to prove code correct (prove it meets some spec) only gets you step two. Step one is just as hard (or harder). Creating a really formal specification means you have a new programming language to deal with, and a less-formal spec means you can have inconsistencies or missing information that bleed into your code base.

There is some essential complexity in developing software that cannot be eliminated by correctness proofs -- you're just moving opportunity for error from the "coding" side of the balance to the "specification" side.

All of that being said, if you're doing something that has to be right, like air traffic control or data encryption, and you're willing to spend a lot of time thinking about your formal specification, then I think provably correct software is the right goal. But correctness proofs are just one part of the process. Even if everything is proved correct, you still have to really test your software. If not to test the code, then to test the spec.

I think this is one of the reasons that test-driven development is such a practical idea for "normal" software development. It recognizes that your specifications are just as incomplete and subject-to-change as your code. With TDD, your formal specification is your suite of tests, and proving your code meets the spec is easy -- just run the tests.
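To make that concrete, here is roughly what a tests-as-spec might look like for a swap routine, sketched in Python (the function and test names are just illustrative):

```python
def swap(pair):
    """The code under test: return the pair with its elements exchanged."""
    x, y = pair
    return (y, x)

# The test suite is the specification. Each test pairs an informal
# precondition (the input) with a postcondition (the expected output).
def test_swaps_distinct_values():
    assert swap((1, 2)) == (2, 1)

def test_is_its_own_inverse():
    # A property a Z spec might state as swap(swap(p)) = p.
    assert swap(swap((3, 4))) == (3, 4)

# "Proving" the code meets this spec is easy -- just run the tests:
test_swaps_distinct_values()
test_is_its_own_inverse()
print("spec satisfied")
```

Unlike a formal proof, this only checks the listed cases -- but also unlike a Z spec, it stays cheap to rewrite when the spec itself turns out to be wrong.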

As far as program specifications go, unit tests are a nice balance between a formal approach (a machine checks conformity) and an informal approach (the tests are always changing and never complete).

For most programming, I think this balance is the best we have.
