Friday, September 28, 2007

assert(Useful);

I've gotten into arguments with other developers on more than one occasion about the value of program assertions.

(By the way, I'm talking about the use of "assert" macros/functions/pragmas in the Ada, C++, and Java programming languages. These languages pay my salary, so while I know about Haskell, Erlang, Eiffel, OCaml, Ruby, etc., I have no call to use any of them. So...YMMV on some of the details.)

Let me try to summarize the anti-assertion position of the last heated discussion I had about this, which was with a very talented C++ programmer:
  1. Assertions are almost always compiled out of the released code, so they're USELESS!
  2. If you're asserting something because you think it might break, there should be an explicit check for it in the code, so assertions are USELESS!
Too many programmers can't think beyond the code, so assertions are thought of as just a coding practice of questionable utility. After all, they're normally not going to be in the released version (see 1), and if you do decide to compile them into the release version, there's nothing to be done if they happen to trigger (2), so what's the point?


You need to step back from thinking of program assertions as just code.

The purpose of effective program assertion practice is to embed information about requirements, design, and implementation assumptions into the code, as code.

Assertions are not there to error-check code execution; they're meant to help verify that the software is implemented in accordance with its requirements.

What is being asserted in the following code?

assert(hnd != NULL);

That a pointer is not null, right?

Wrong!

I'm asserting a part of the design specification, the part that specifies that my function will only ever be called with a valid handle. I encode it as a null pointer check, but the purpose is not to check the pointer, it's to verify my code (and my caller's code) as it pertains to the design spec.

Conscientiously crafting meaningful assertions embeds a portion of my understanding of the functionality of that code...in the code.

The assertions themselves can then be reviewed by inspectors or peer reviewers, or the system engineers or architects or whatever. This gives them a means to verify that I have a correct understanding of what this piece of code is supposed to do. The assertions encode requirements and design aspects in the medium of code, and those notations of my understanding can be checked for accuracy. Do the asserted ranges for guaranteed inputs match the spec? Do my assertions reveal that I'm not allowing for the full range of allowable inputs? Are constraints being asserted that I should in fact be explicitly checking?
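
For example, here's a trivial sketch of what I mean, with the "spec" numbers and names invented purely for illustration:

#include <assert.h>
#include <stdio.h>

/* Hypothetical design spec, made up for this example:
 *   - altitude is supplied in feet, already limited by the caller to 0..60000
 *   - the log handle was opened by the caller and is never NULL
 */
void log_altitude(FILE *out, double altitude_ft)
{
    assert(out != NULL);                                    /* caller contract */
    assert(altitude_ft >= 0.0 && altitude_ft <= 60000.0);   /* spec'd input range */

    fprintf(out, "ALT %.1f\n", altitude_ft);
}

Neither assertion is there to error-check the pointer or the double; each one is a reviewable statement of what the (made-up) spec guarantees this function will be given.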

Embedding design-oriented assertions into the code aids the inspectors or peer reviewers in verifying that the implementation matches the design.

Embedding implementation assumptions as assertions helps inspectors verify that the code corresponds to the requirements, design, and the implementation assumptions.

Assertions are intended to capture the developer's understanding of what a given piece of code is supposed to do--they're not a coding thing, they're a correctness thing.

Tuesday, September 25, 2007

"Siriusly" Drunk Frog

The Night Blooming Sirius (or Cereus) is a straggly looking plant with a big, heavily perfumed white flower. They bloom exactly once a year, at night, usually opening up between 9 and 10 pm, for a few hours.

[Photo: one of the blooms]

We had five go off last night. Here's a view of one with an added audience member (look closely at the center of the pic, above the bloom). These things have a seriously strong perfume, so we're talkin' Grateful Dead ambiance here.

Saturday, September 22, 2007

Confessions of a Terrible Programmer

I’m a terrible programmer. You would think that having done this for nearly 25 years I should be pretty good at it by now. But nope, I still put bugs in my code, use the wrong types for variables or subprogram arguments, forget to make the right library calls, mess up the expression in an if statement, and on and on and on, now for 25 years.

Sometimes the code won’t compile, and I can fix those bugs right away, but the rest of the time I have to actually run the program to be made aware of my failings. If I’m “lucky”, the program will start up and then crash in a blaze of error message glory that I can then pick through and use to figure out what I did wrong. If I’m not so lucky, the program will start and then almost immediately exit (assuming that wasn’t what it was supposed to do), or go off into la-la land, neither doing what it’s supposed to do, nor exiting. Or worse, look like it’s working just fine, and only when I’ve worked my way to some corner of the program’s functionality will something go subtly awry—occasionally so subtly that it’s not even noticed for a while. Then of course there are the non-deterministic bugs that manifest themselves only when a sequence of events (which may originate externally to my program) occurs in a particular order over a particular set of time intervals. The program may run right 999 times out of 1000, but that 1000th run may lock up the entire system. (Been there, done that, found the vendor library’s bug.)

The odd thing, though, about the programs I design and write is that once they’re delivered to the customer or as a baseline release, they usually work like they’re supposed to, which seems surprising given that such a terrible programmer wrote them. I mean, I usually deliver the system on time, with the required functionality, and with very few bugs.

So what’s my secret?

Well, it’s one of those pseudo-Zen, made-up Ancient Software Master’s secrets:

You will never become a Great Programmer until you acknowledge that you will always be a Terrible Programmer.

And as you saw in the very first sentence, I fully concede that I am a terrible programmer. So since I love programming and this is my chosen career, I have to deal with it.

Fortunately there are a lot of ways I can cover up the fact that I’m a terrible programmer:

1) If I get to pick the programming language for a particular program, my “go to” language is Ada. Yeah, I know it’s pretty much unheard of today outside of the defense industry, and even there it's struggling, but it has few peers when it comes to pointing out to a terrible programmer just how bad they really are. With all the language’s typing and run-time checks, it’s little wonder that programmers didn’t like Ada, since it was constantly pointing out bugs in their code!

If I can’t use Ada, well, then Java is the next best choice, since it’s got a lot of the same built-in bug detection capabilities.

But please, please don’t make me use C++ any more, which is definitely the enabler of terrible programmer egos. It lets most everything through, so a) I feel good about getting my code clean compiled and “done”, and b) I get a huge ego boost after the Jolt-fueled marathon debugging session that's needed to successfully get the program mostly working.

2) I use programming assertions, a lot. If some variable should never be null, I assert that it’s not null, even if Grady Booch has sworn on a stack of UML standards that it will never be null. Here’s how paranoid I am about being found out: even when I control both sides of an interface, I’ll assert the validity of what the data provider is providing, and assert what the receiver is receiving (there’s a little sketch of this after item 4). Why? Because I’ve been known to change one side of an interface and forget to alter the other, so the assertions catch those slips for me right away.

3) I’m a testing masochist, i.e., I ruthlessly try to break my own code. No programmer likes doing this; we’re usually happy to just get the damn thing running the way it’s supposed to with the test case data, so why go looking for trouble? I admit that it’s a mental challenge to intentionally attempt to break one’s own code, but if I don’t, somebody else will, and my secret terrible programmer identity would be revealed.

4) I have other programmers inspect my code. Despite everything I do, as a terrible programmer there are still going to be bugs in my code. So I need other programmers, especially other Great Programmers, to look at my code and find those bugs. And I’m serious about these inspections: I don’t want some cursory glance that simply makes sure I’ve included the right file header information, I want an inspection, dammit! I had to pound on one of my inspectors once because he was “doing me a favor” by not recording all the defects he found, because he didn’t want me to “feel bad”. Look, I have no self-image as a programmer, so “feeling bad” isn’t something I get hung up on. I need to end up with Great Code, so as a Terrible Programmer I need all the help I can get!
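
And here’s the sort of thing I meant in item 2 about asserting both sides of an interface I control; the message layout is contrived purely for illustration:

#include <assert.h>
#include <stddef.h>

/* Contrived interface: both sides believe a track message carries an ID
 * in 1..1023 and a length that matches this struct. */
typedef struct {
    unsigned id;
    size_t   length;
} track_msg;

void send_track(const track_msg *msg)
{
    assert(msg != NULL);
    assert(msg->id >= 1 && msg->id <= 1023);    /* what I promise to provide */
    assert(msg->length == sizeof(track_msg));
    /* ...hand the message off to the transport... */
}

void receive_track(const track_msg *msg)
{
    assert(msg != NULL);
    assert(msg->id >= 1 && msg->id <= 1023);    /* what I assume I'm getting */
    assert(msg->length == sizeof(track_msg));
    /* ...process the track... */
}

If I change one side’s idea of the interface and forget the other, one of those assertions trips the first time the code runs, instead of the mismatch surfacing later as some subtle downstream bug.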

With these and other terrible-programmer-obfuscating techniques, my code often does come out looking like it was written by a great programmer. And I’ve done well by those techniques over the years: I’ve had the opportunity to work on the real-time executive and aero data analysis for flight simulations, do a clean sheet redesign of a major defense system’s command and control planning system, take sole responsibility for rehosting a large command and control system, do data mining and event correlation in an XML framework for another large defense system, and replace a legacy wargaming operator console with a net-centric ready version. Outside of my day job I write open source software (in Ada!) and some of it has been picked up and used by a company supporting the European Space Agency. (I’m goin’ to the moon! Helping, anyway.)

So I’ve been successful in hiding the fact that I’m a terrible programmer up to this point, and I need to make sure that I can continue in this career path until my retirement. Heeding Han Solo’s admonition (“Great, kid. Don’t get cocky.”), we come to the second fake-Zen Ancient Software Master italicized aphorism:

You will remain a Great Programmer for only as long as you acknowledge that you are still a Terrible Programmer.

Over the years I’ve learned more ways to hide my programming failings. One technique: let the program crash. (This works, just stay with me.) I’d joined a development project where the original developers had so many problems with run-time exceptions that they simply started including “null exception handlers”, i.e., catch all unexpected exceptions, suppress them, and keep on going. Needless to say, the system would run fine for a while, sometimes hours, and then slowly start to...hmm...“veer off”.

When I got the chance to redesign the system, one thing I immediately forbade was those null exception handlers (though handlers for recognized exceptional conditions were permitted). If an exception occurred, I wanted it to propagate up until it took the application down so that we would know about it and could fix it right then and there. I got my wish, but was almost found out in the process. The program manager got wind that we were seeing a lot of system crashes during testing, and he wanted to know why the redesigned system was crashing so much more often than the one it was replacing. Explaining that we were uncovering errors and getting them fixed now, and fixed right, didn’t really give him the warm fuzzy he was looking for, as there were deadlines approaching and the reported error rate wasn’t declining as fast as he liked. The team stuck with this practice, though, and when we were done (on time and on budget) the system ran reliably and correctly for days under both light and heavy loads.
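
In C++ terms (purely as an illustration, with invented names), the difference between a null exception handler and the kind of handler we permitted looks roughly like this:

#include <stdexcept>
#include <iostream>

void handle_message()
{
    // stand-in for the real work; imagine it can throw all sorts of things
}

// Forbidden: the "null exception handler". Catch everything, suppress it,
// keep going. The system runs for hours and then quietly veers off.
void process_message_badly()
{
    try {
        handle_message();
    } catch (...) {
        // swallowed
    }
}

// Permitted: handle only the exceptional conditions we actually recognize;
// anything unexpected propagates, takes the application down, and gets fixed.
void process_message()
{
    try {
        handle_message();
    } catch (const std::range_error &e) {
        std::cerr << "bad field, message dropped: " << e.what() << "\n";
    }
}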

For me, this cemented my self-image as a terrible programmer. If I could bring this kind of a system to fruition, meeting all its requirements and budget and schedule constraints, just by conscientiously applying a slew of techniques for uncovering my programming failings before they got into the final product, then I knew that I would be able to fake it for the rest of my career.

Now if you look at my code you might find it to be pretty bad, and that’s normal; pretty much any programmer who looks at almost any other programmer’s code will judge it to be a pile of offal and then proclaim that the only way to fix it is to rewrite it. (I wonder why this is?)

But if you want to sit down with me and go over my code, and tell me what’s wrong with it, then I think we can work together to fix those problems and make it truly great code. And who knows, maybe you’ll end up being as terrible a programmer as I am.

Thursday, September 20, 2007

Choose Your Caulk Wisely

You know you've gotten way too much experience with home maintenance when you can not only distinguish between different types of caulk, but have also acquired a preference:

[Photo: the caulk of choice]

GE Silicone II Window and Door Caulk.

Works indoors and out, goes on smooth, and cleans up nicely.

Recommended.

Wednesday, September 19, 2007

"Big T" Technology vs technology

It's certainly no news flash that new technologies get hyped: witness Object Oriented Programming, "Write Once, Run Everywhere" Java, and the Iridium satellite phone system (launched November 1, 1998, went into bankruptcy August 13, 1999). While all of these particular technologies are now in use, expectations for them have been severely cut back to realistic levels.

When technologists and developers fall in love with a technology, what they're really getting enamored of is a "Big T" Technology, which encompasses their vision of what a technology can provide and how it can change the world. And not surprisingly, the passion of this attachment can blind its promoter: while perhaps admitting a surface awareness that not everything about the Technology is perfect, the promoter remains utterly convinced that it can change, and that the result will be a perfect marriage.

(I don't want to convey the impression that all big-T Technologies are overwrought and destined for disappointment; the World Wide Web is certainly such a Technology, and it has changed the world.)

In the POET framework, the 'T' that's being addressed is, if not a world-changing Technology, at least intended to be a business-changing one.

My concern about this is that the small-t technology that's the foundation of the big-T Technology gets insufficient attention paid to it.

As an illustration, business executives doing strategic planning may decide that they need to provide (big T) Web Services employing a Subscription Model for their customers' Mission Critical Data. There, now go work the Politics, Operations, and Economics. The tech'll be purchased off the shelf and we'll have their consultants work with our people to roll out the whole thing. Obviously without the "tech" none of this will work, and Joel Spolsky of Joel On Software provides some insight as to what it really takes to "roll out the whole thing". See also "The Price of Performance" in ACM Queue for another look at how an insufficiently broad view of technology would have had costly real-world business impacts to Google.

In these two references, insufficient awareness of, and attention to, low-level technical issues would have had significant business impacts. But you might argue that these were caught by the POE process, so everything worked out as it should. However, Joel Spolsky is a tech-savvy (note the small 't') CEO who still writes code for his company, and Google is, well, Google.

Other companies do their due diligence to find the right Technology to implement their business processes and system architectures and still wind up with Service and System failures because the software implementing the Technology is full of bugs.

The Technology aspect might be asserted to be "important", but it's still last in priority, which means that to those working the project management issues...it's not important. The result is that the technology gets treated as a commodity, to be selected primarily on the basis of features and price (and marketing).

I'm not saying that a corporate VP needs to ask what programming language or networking protocol a given product is using (though it would certainly catch the vendor's attention if they did), but someone in the technology evaluation hierarchy should, along with asking for a description of the vendor's development process, their QA (Quality Assurance) and CM (Configuration Management) practices, and their customer bug reporting and fixing process. It doesn't need to be CMMI or ISO certified, but it should certainly be coherent and plausibly capable of resulting in the production of good software.

It's more than getting past the marketing hype of Big-T Technology; most everyone tries to do that now. It's a matter of respecting, understanding, and acknowledging the criticality of the foundational technology that underlies the Technologies you're going to use, not treating it as a commodity, and feeding that into your decision making process.

And as I posted previously, I'm concerned about what's happening to the quality of the foundational technologies as they're continuing to evolve.

Friday, September 14, 2007

The "Surge", the "Dollar Auction", and a Useful Concept

Oliver Goodenough, a law professor at Vermont Law School, talks about a standard game economics professors use to "demonstrate how apparently rational decisions can create a disastrous result." The game is called "The Dollar Auction". I won't go into its details, since the Wikipedia link has a nice, succinct explanation.

Goodenough uses the Dollar Auction to illustrate how we got into such a bad situation in Iraq, and how the Surge is effectively just another bid in a now irrational auction.

A Dollar Auction is easy to understand, and seems to be a useful conceptual tool for understanding how certain awful outcomes can result from seemingly (or genuinely!) rational actions.

Wednesday, September 5, 2007

Viable Fusion in our Lifetime?

Here's hopin'. Unfortunately you can't help but take a wait-and-see attitude :-(

70% of the Software You Build is Wasted

Hey, I'm just quoting the title of the article.

He certainly seems to be on the right rant to me, and Dan, I suspect you'll agree as well.

The Fundamental Theory of C

Whenever I describe to someone what I believe to be the Fundamental Theory of C, I always have to first add the caveat that I am not denigrating the language.

So, that the Fundamental Theory of C is "Portable Assembly Language" is not a knock on C. Now let's move on.

There are some tasks where a very low-level programming language really does make a lot of sense, e.g. operating system kernels, device drivers, and graphics buffer drawing primitives. You're working at a very low level, right on top of the hardware where performance is critical, and you're trying to layer just a facade of software across the hardware to provide that first level of software glue that higher level abstractions can then build on. If this bottom layer employs too much abstraction, it becomes difficult to verify correctness, because the boundary between software and hardware functionality begins to blur and it becomes more difficult to map software functions to hardware operations. (Of course this blurring is a good thing as you move up through the layers of abstraction out to distributed systems, Web services, and SOA.)

So in a way the ideal choice of language for this kind of low-level programming of hardware is assembly language, which provides complete, detailed, total control of the hardware. The problem with this, of course, is that assembly language is hardware specific and therefore inherently non-portable.

So this is where the C programming language comes in. Written well, C is highly portable, as far as the language itself is concerned. Yet you have full access to the underlying hardware's interfaces, with very little translation needed between the software and the hardware, giving you straightforward traceability from C to assembly code to hardware. Yet you're not constrained (much) by the specifics of the hardware's architecture: you don't have to work with registers and offsets and addressing modes, so in a way you have the ability to define your own architectural approach for interacting with the hardware (the Linux kernel obviously being the biggest and best example of this).

C is a computer programming language, meaning that it is optimized for programming computers in terms of their computational components. C has built-in primitives for direct memory access, bit shifting and rotation, increment, decrement, indirection, multiple indirection, unsigned arithmetic, etc. When you program in C you're programming a computer, telling the computer what to do. Again, see Linux.
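
As a small, contrived illustration of that "programming the computer" character (the device, addresses, and bit layout are all made up), this is the sort of thing C makes natural:

#include <stdint.h>

/* Imaginary memory-mapped UART: a status register and a data register at
 * fixed addresses. Everything here is invented for illustration. */
#define UART_BASE    0x4000A000u
#define UART_STATUS  (*(volatile uint32_t *)(UART_BASE + 0x0u))
#define UART_DATA    (*(volatile uint32_t *)(UART_BASE + 0x4u))
#define TX_READY     (1u << 5)

void uart_put_char(char c)
{
    while ((UART_STATUS & TX_READY) == 0u)
        ;                             /* spin until the transmitter is free */
    UART_DATA = (uint32_t)(unsigned char)c;
}

Nearly every line maps directly onto a load, a store, a mask, or a branch, which is the "portable assembly language" character at work.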

Understanding that the Fundamental Theory of C is that it is a portable assembly language, and what that means, directs the developer to the kinds of tasks for which the language is best employed and the developer mindset to have in place when writing the code. C is appropriate for tasks where hardware interaction and low-level "bit-twiddling" are called for, but the very characteristics that make it ideal for those kinds of tasks seriously detract from it when dealing with abstracted entities that have little "computerness" about them.

Higher level languages involve writing programs that manipulate classes, records, data structures, components, services, and suchlike abstract entities. Such programs run on computers, but you're not programming the computer for them.

Using C for programming a computer, having consciously selected that language as a portable assembly language suitable for a specific and appropriate set of computer-oriented tasks, maximizes the benefits C provides to the developer, and the correctness and success of the computational foundation upon which higher-level components and systems, using higher level languages, can then be built.

Rudy? Really? When Democrats and Republicans agree

And not just Democrats and Republicans, but New York Democrats and Republicans...

This should give one pause: "A Giuliani presidency would be ..."

Z-Big Raps Political

I was too young at the time to know whether Zbigniew Brzezinski made a good National Security Advisor, but I took pride in not only being able to spell his name, but being able to pronounce it.

He couldn't have been too bad at his job, since he keeps showing up on news shows as a foreign policy expert. Here's his take on how American foreign policy needs to be changed by the next president.



Hint: We'll need more than just a rollback of what will have been 8 disastrous years.

Your Daily Dollop of Escher and the Doctor

From some guys who have too much time on their hands, but spend it well:


Here's the source posting I saw for this.

This reminds me of one of my favorite Dr. Who episodes, Castrovalva. Here's the work that particular episode used for its locale:


Lots of M.C. Escher stuff here.

Tuesday, September 4, 2007

"Picking the Right Tool" is a Tautology

On any given system development project, the individual or team charged with "tool selection" is always going to pick "the right tool". No one is ever going to think "I'm intentionally picking the wrong tool for this job." They might knowingly opt to pick one that is technically inferior to others, but the justification for doing that comes as a result of considering the Political, Operational, Economic, and Technology (POET) factors, making the overall selection the "right" one.

(In hindsight it might be discovered that a choice was "wrong", but that's because some aspect of P, O, E, or T wasn't correctly understood, i.e. "it seemed like a good idea at the time". And it's also possible that those not in on the selection process may assert that a wrong choice was made, but well, then, make the case in the POET framework.)

My concern is that over the last several years the underlying technology has been getting more and more precarious, more and more ad hoc, and less and less respected.

Many years ago I read an article espousing a similar concern. I don't remember the exact paragraph wording, but the gist of it was that there are people that constantly worry a great deal and think very deeply about very complicated and arcane things going on in the bowels of a data center whose failure could bring down the company, or even a chunk of the economy. What they do is not sexy, or cool, or cutting edge, and their work doesn't lead to startups and high-demand IPOs. But without their skills and commitment to do what they do the whole thing starts to fall apart.

That kind of work is at the very bottom of the POET stack, it's "sub-T" in fact. Someone is writing OSes and compilers and JIT JVMs and JDKs and web servers and fault-tolerant middleware and Perl interpreters and JavaScript engines and XML parsers and on and on. Everything in the technology stack, and the system development stack, rests on them.

And "picking the right tool" blithely assumes that these technologies are commodities and it's simply a matter of performing a suitably comprehensive trade study.

But I don't like what I'm seeing down here at the bottom of the stack.

I see programming languages getting hacked up to include new features that show little regard for maintaining a consistent and coherent language design, I see security being "patched in" to systems rather than being built in, and I see more and more features being packed into products that make them increasingly brittle. The systems are arguably "better" than they were before, but I believe it is becoming a matter of plugging holes in dikes and building McMansions on landfills.

As Dan mentioned, technologists and developers almost always naively put their 'T' at the top of the stack, which sets them up for failure in a business-goal-oriented environment. Unfortunately, when 'T' got kicked down the stack, the acknowledgment of its criticality got demoted as well, so much so that technology evaluation becomes a matter of features and price rather than quality engineering.

If you don't recognize the need to truly engineer a technology, and acknowledge that technology choice actually does matter, that tech is not a commodity but rather the foundation on which all your Service Oriented Architectures and Business Processes and Customer Relationship Management are based, your system is going to end up in pretty "POE" shape.

Monday, September 3, 2007

POET - Politics, Operations, Economics, and Technology

One of the best lessons I have learned over the past 7 years as an enterprise architect with Northrop Grumman is the POET acronym. It is the notion of the order of things in the business world: 1) Politics -- getting people what they want; 2) Operations -- making the business run like a well-oiled machine; 3) Economics -- getting the most bang for the buck; 4) Technology -- using machines to make an organization more efficient and accurate. As a technologist, architect, and software engineer, I used to spell this as TOEP, where Technology was #1, followed by Operations, then Economics, then, dead last, Politics. Following this TOEP priority system while working for the government, I set myself and my team up for failure. We were technologists pushing hot technologies in an organization that couldn't care less. We were "technology" in search of a problem.

Putting the POET acronym in order is a good recipe for success. Politics forces you to look at what is driving the business decisions. Operations forces you to focus on efficient, well-targeted business processes. Economics forces you to maximize return on investment and lower the total cost of ownership for a system. Technology is dead last. It's a servant. I have been well served by becoming technology agnostic, meaning that technology serves an organization's politics, operations, and economic constraints.

In my experience, this is not trivial, it's monumental. It pervades many work spaces and many fields of battle. It is like holding your friends close and your enemies even closer. Many technologists are in love with their favorite technologies. They couldn't care less about the business world that surrounds them and pays them to build excellent systems. I used to be one of these people. POET is a wake-up call to put first things first, a recipe for success.