Code That Makes Me Mad: chained function calls

I frequently get called in as a second pair of eyes when strange problems crop up in programs. With disheartening frequency, I run into an exception coming from a line of code where the return value of the first function is used to call a second function, which is used to call a third function, and so on. You know, like this:

function1(arg1a, arg1b).function2(arg2a, arg2b).function3(arg3a, arg3b).function4(arg4a, arg4b);

This is a pain in the neck to troubleshoot, because the exception could have come from any of those functions, so an error on a line like this leaves you with a lot of detective work. Even a good debugger (usually available now in the J2EE world, but not always in older or much newer environments with poor debugging support) doesn’t necessarily help: if you don’t have source code for all the libraries in the chain, you can’t step below this line.

Instead of the “chain of functions” approach, I do it like this:

retval1 = function1(arg1a, arg1b);
retval2 = retval1.function2(arg2a, arg2b);
retval3 = retval2.function3(arg3a, arg3b);
retval4 = retval3.function4(arg4a, arg4b);

This way, if you get a runtime exception, your stack trace will tell you exactly which line generated the problem. If you run the program in a debugger, you can see the values of all the objects as you step through. You do a bit more typing, but if you type as fast as most programmers, the difference is negligible. I’d rather spend 30 extra seconds typing than have to spend hours or days bug-hunting on a problem in a production system. 
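Here’s a concrete illustration in Java (hypothetical; Order, Customer, and Address are made-up types, but the pattern applies to any chain of calls). The chained form:

// If this line throws a NullPointerException, the stack trace points only at
// this one line, and any of the three calls could be the culprit.
String city = order.getCustomer().getAddress().getCity();

versus the unchained form:

// The failing line number now identifies the exact call, and a debugger can
// show each intermediate object along the way.
Customer customer = order.getCustomer();
Address address = customer.getAddress();
String city = address.getCity();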

 

New Feature: Code That Makes Me Mad

I’ve been in a lot of meetings about coding standards. I really don’t care what indentation style you use, or how many spaces you indent, or much at all about how your code is formatted. It’s 2014; I can reformat it in Eclipse to look any way I want.

But there are some things I see in code that make me mad. Generally, things that make me mad in code fall into a number of categories:

  1. Stuff that makes the code hard to read.
  2. Stuff that makes the code hard to debug.
  3. Stuff that makes the code less reliable.

In this series of posts, I’ll rant a bit about some things that make me grind my teeth when I see them.

Programming 101 — linkage and runtime execution

I did some Basic and Pascal programming in high school and college, but I spent most of my early career programming in C on MS-DOS and Windows. C has been described as “combining the speed and power of assembler with the readability of assembler.” Back in those days (DOS 3.3, then Windows 3.1), there really wasn’t much functionality available in the operating system, so you often had to resort to assembler to get things done.

I gather that programmers today tend to start off with higher-level languages and don’t really need to get into assembler at all. For folks who haven’t had to deal with assembler but are mildly curious about lower-level functionality, the article below should be an interesting read.

http://samwho.co.uk/blog/2013/04/13/the-birth-and-death-of-a-running-program/

It walks through what happens when you compile, link, and run a simple C program.

Client-side software development in 2014 is a chaotic mess


I’ve had the feeling that client-side software development is a mess for some time now, but since I’m mostly a server-side guy, I’d dismissed my feelings as being due to unfamiliarity with the tools and practices. But I guess it’s not just me after all.

This article
http://www.itworld.com/software/402484/state-software-2014

led me to this article

https://www.tbray.org/ongoing/When/201x/2014/01/01/Software-in-2014

These articles trace through the various options available for client-side/user-interface programming these days (mobile and Javascript/HTML) and point out that they’re all pretty awful for software development.

Why is this? We all know how crucial the user experience is. So why, in 2014, after all these years of software engineering theory and practice, is it such an incredible pain in the neck to build user interface components?

I don’t really have an answer. I thought that Java applets were a pretty good answer; they could be run without client-side installation and provided a pretty rich set of controls. But they’ve fallen out of favor, and the ground is littered with the corpses of alternatives that attempted to provide the same functionality: Silverlight, Flash, JavaFX, etc.

I think that this type of architecture (most of the business logic on the server, with a lightweight but rich client communicating back to the server) is the right answer, but the implementations seem incapable of garnering market share.

Javascript libraries like jQuery provide similar functionality, but they are all somewhat hackish, and I wonder how forward-compatible they will be over time. Perhaps the Javascript/HTML world will converge on a standard approach, but IMHO things are still pretty chaotic. You can build simple demo apps with the Javascript approach, but the thought of having to build a large enterprise software system with it fills me with dread. There’s just too much code devoted to things that have nothing to do with the user’s functional requirements.

As an enterprise software developer, I don’t need clever hacks. I need a nice, boring environment that lets me pull stuff out of a database, shove it onto a screen, let the user fiddle with it, and put it back into the database, with as little coding required as possible. Given that we’re 20 years into the web software development paradigm, shouldn’t we be there by now?

Who’s driving?

I was reading through another retrospective article on the Ballmer years at Microsoft.

http://www.businessinsider.com/steve-ballmer-interview-2013-12

This quote from Steve Jobs jumped out at me:

 

I have my own theory about why decline happens at companies like IBM or Microsoft. The company does a great job, innovates and becomes a monopoly or close to it in some field, and then the quality of the product becomes less important. The company starts valuing the great salesmen, because they’re the ones who can move the needle on revenues, not the product engineers and designers. So the salespeople end up running the company. John Akers at IBM was a smart, eloquent, fantastic salesperson, but he didn’t know anything about product. The same thing happened at Xerox. When the sales guys run the company, the product guys don’t matter so much, and a lot of them just turn off. It happened at Apple when Sculley came in, which was my fault, and it happened when Ballmer took over at Microsoft. Apple was lucky and it rebounded, but I don’t think anything will change at Microsoft as long as Ballmer is running it.

I think that Steve Jobs had earned the right to have an opinion on this subject, and his opinion seems to match with the reality of a lot of tech companies. The initial success of tech companies is built on innovation. Over time, the success spawns a huge management infrastructure. In many cases, the management infrastructure ends up suffocating the innovation, and the company dies. 

This is not to say that management is bad, or unimportant, or that companies should simply pursue impractical ideas with no thought to business realities. But if a company becomes so focused on MBA concerns that it ignores PhD concerns, it has begun the process of dying. If you’re in the innovation business and you’re not innovating, you’re doomed.

Null considered harmful

I ran across this article recently:

http://peat.org/2013/12/07/null-considered-harmful/

The author points out that attempts to dereference null values are a frequent source of errors in modern programming languages, and proposes that languages should not support null references. He draws a parallel to memory management: languages that require manual memory management (C, for example) have more memory-related issues than languages that manage memory automatically.

It’s a compelling idea. Most of the Java runtime issues I’ve seen over the years have been NullPointerExceptions; a language that made null references impossible would eliminate that entire class of errors.

Of course, there’s a gap between theory and practice, and I’d have to see how such a language would work for real-world development. If a 100% prohibition is impractical, perhaps a “nullable” modifier could be introduced in order to flag fields that can potentially have null values; variables without this flag would be prohibited from holding null values. 
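For a rough sketch of what that might look like in practice, Java 8’s Optional type can approximate the “nullable” flag at the library level (the Customer class below is invented purely for illustration): fields that may legitimately be absent are declared as Optional, and the caller has to deal with the empty case up front.

import java.util.Optional;

public class Customer {
    private final String name;                 // required; assumed never null
    private final Optional<String> middleName; // explicitly flagged as possibly absent

    public Customer(String name, Optional<String> middleName) {
        this.name = name;
        this.middleName = middleName;
    }

    public String displayName() {
        // The "no value" case is handled here, explicitly, rather than
        // showing up later as a NullPointerException somewhere else.
        return middleName.map(m -> name + " " + m).orElse(name);
    }
}

This doesn’t give you the compiler-enforced prohibition the author is after (you can still pass a null Optional if you’re determined to hurt yourself), but it moves the null handling to the one place where absence is actually expected.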

Healthcare.gov 1

I’ve resisted commenting on the healthcare.gov situation for a few reasons:

  1. I don’t want to get dragged into partisanship.
  2. I don’t have any specific knowledge or involvement in the project beyond what I’ve read in the papers.

That said, I think that the healthcare.gov situation is a teachable moment that could potentially be used to begin a larger discussion of the state of the practice of software engineering.

For starters, here’s a link to an NPR story about the project:

http://www.npr.org/blogs/alltechconsidered/2013/11/19/246132770/this-slide-shows-why-healthcare-gov-wouldnt-work-at-launch

The slide shows a lot of problems:

  1. First, a waterfall approach was used. Instead of setting up a series of small releases, each of which implements a specific function, the plan was to deliver the whole system in one iteration. This approach appeals to managers because the project plan is simple to understand, has definite milestones, has a fixed duration, and so forth. But as we’ve seen in this project, these presumed “advantages” can turn out to be illusory.
  2. Second, requirements continued to change throughout the life of the project. This is a common occurrence, but it can easily derail a “waterfall” project. Late changes to requirements mean that the design must change, and the implementation must change as well. So instead of doing an implementation once, you end up reimplementing the same functionality over and over as requirements change. So you end up with a de facto iterative approach, except that instead of implementing a new function with each iteration, you end up reworking the same function over and over. In this situation, it’s possible to keep churning indefinitely and still never end up with a completed project.
  3. They stuck to their end date, even though the requirements changes had continued past the original period when development was planned to be complete. This is another recurring problem in the industry. I’ll bet that there were many, many meetings where the phrase “Failure is not an option” was said. Well, failure is always an option. You recognize that it could happen, you look for things that cause it, and you mitigate those causes. If you don’t do these things, you’re inviting failure. If a project is well behind schedule and the delivery date is looming, the only workable answer is to do triage. Figure out what you’ve got working, realistically figure out what you can get built by the date in question, and defer the rest to another release. The more common approach is to double down — say that all features will be delivered on the original date, and “just work harder.”
  4. They did little testing. This is another classic blunder with waterfall projects that slip their schedule. If your plan is to complete the development process before you  start testing, and your schedule slips, your window for testing gets smaller and smaller, and may even vanish completely. You’re already doing “just work harder.” You’re putting in 14 hour days. Everyone is in a state of barely-suppressed panic. The due date is rapidly approaching, and you still haven’t implemented all the requirements changes. So what happens to testing? More often than not, you just hope for the best and release what you’ve got on the due date.

If you work in this industry, you’ve seen these things happen over and over again. More often than not, they lead to a bad end. So why do we still keep doing these things?

Writer’s block

No, I haven’t really had writer’s block, but it’s been a while since I’ve posted, so I thought I’d talk about writer’s block.

I believe that there are two different things that can be referred to as “writer’s block.” The first type is when you’re in the middle of a piece of writing and you’ve drawn a blank; nothing is coming to you. I’ve found that the best answer to this is to take a break. Get up, go somewhere else, and do something that takes at least 10-15 minutes. When I do that, I usually find that I’ve come up with a way forward by the time I get back into the chair again.

The other kind of writer’s block is where you’re in a long stretch where you’ve got no ideas, no inspiration, and nothing is coming to you. It seems to me that the only way out of that is to simply write something, even if it’s dumb, and keep writing, and eventually you’ll come up with something good.

I can’t recall the specifics of the story, but there was once an author with writer’s block. He was broke, and asked his literary agent for an advance. The agent agreed, but with the stipulation that the author had to come into the agent’s office and sit in front of a typewriter for the full business day, every day, until the book was complete. Soon enough, the author found the block had gone, and the work was completed.

So, if you’re feeling “big-picture writer’s block”, maybe the right answer is to do what the author’s agent got him to do — get rid of your distractions, put yourself in front of a blank page, and write something.

Doing battle with chaos

That last post was pretty discouraging, I suppose. So, given that we have to at least try to plan things, what do we do? Some ideas:

  1. Iterate. Start with a rough idea and refine it as you go. In the meantime, keep track of your actual durations as you implement function points. This way, you gather momentum as you go: you understand the problem space better, you have better data for future estimates, and your customers get more chances to look at your deliverable.
  2. Plan anyway, but expect uncertainty. Don’t expect to lock yourself in a room for a week, write the perfect plan, and then follow it exactly as is to completion.
  3. Have contingency plans in the event that something goes wrong. They don’t necessarily have to be finely detailed, but you should try to come up with likely risks and come up with a way to mitigate each one. IMHO, the plan itself is less important than the fact that you have acknowledged the existence of the risks. The worst thing that can possibly happen in a software project is to proceed happily along toward a risk with no idea that it’s even there.
  4. When you estimate, attach a confidence level to the estimate. In the early stages of the project, your confidence will be low; as you go on, it should get higher.
  5. Separate out the “boring” parts of the project from the “risky” parts. That is, there _are_ some elements of the project that you can estimate with high confidence. There may also be some areas where you are engaging in pure research. If you can keep the critical path made up of mostly predictable, boring stuff, and keep the R&D stuff off of it, you are less likely to be derailed by unforeseen issues.
  6. Attack the biggest unknowns early. Fail early, if you’re going to fail. Ideally, if you have your team in place for the whole project, the architecture and development team can work on prototyping and solving any R&D-type questions in the very early stages of development. By the time you have a solid set of use cases and are ready to get into full-on development, most of your technical issues should be solved.

Freakonomics on “The folly of prediction”

“Freakonomics” is a podcast where economists apply economic principles to social phenomena. This week’s podcast is on “prediction,” across a wide variety of domains — politics, sports, and even witchcraft.

http://www.wnyc.org/articles/freakonomics-podcast/2013/aug/22/folly-prediction-rebroadcast

The short version: while there are some things that are predictable, most of the things that we attempt to make predictions about (sports, stock prices, politics, the economy) are not predictable. 

What does this have to do with software? It should be pretty clear. Developers are asked to make predictions (estimates) all the time, with minimal information, in the presence of high levels of uncertainty and unpredictability. History has shown that we’re not very good at it. How can we get better? A better question is, can we get better? That is, is our chosen field a linear system, or a non-linear system? Are there elements of linearity within the chaos that we can nail down? Is there a “fudge factor” for the chaos that we can apply to make our estimates better?

In my opinion, if you’re in a shop that grinds out the same sort of application over and over again, using the same set of tools, and you have a good set of historic data of actual task durations, you’re probably going to be able to come up with estimates that will be pretty close to reality.
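To make that concrete, here’s a minimal sketch of the kind of calibration I mean (the numbers, the class name, and the 16-hour task are all invented for illustration): use the ratio of actual to estimated hours on past tasks to adjust the next raw estimate.

public class EstimateCalibrator {

    // Average ratio of actual to estimated hours across completed tasks;
    // estimated[i] and actual[i] describe the same historical task.
    static double calibrationFactor(double[] estimated, double[] actual) {
        if (estimated.length == 0) {
            return 1.0; // no history yet, so take estimates at face value
        }
        double sum = 0;
        for (int i = 0; i < estimated.length; i++) {
            sum += actual[i] / estimated[i];
        }
        return sum / estimated.length;
    }

    public static void main(String[] args) {
        // Invented historical data for three completed tasks.
        double[] estimated = { 10, 8, 20 };
        double[] actual    = { 14, 9, 31 };

        double factor = calibrationFactor(estimated, actual);
        // A raw 16-hour estimate, scaled by how far off past estimates have been.
        System.out.printf("Calibrated estimate: %.1f hours%n", 16 * factor);
    }
}

It’s crude, but even a fudge factor this simple reflects the shop’s own history rather than pure optimism, and it only works if you’ve been recording actual durations all along.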

On the other hand, if you’re in a research and development shop, trying to do something that’s never been done before, you’re almost certain to run into issues that you haven’t thought of, and thus haven’t accounted for in your estimate.

In either case, user requirements inject an element of chaos as well. The developer walks a tricky line here. On the one hand, if you build a system that doesn’t do what the user needs it to do, it won’t be used. On the other hand, if your project is subject to a continual stream of requirements changes, you will find yourself rewriting the same code over and over again; in this scenario, it’s entirely possible that you’ll never get to a release.