Larry Tesler: What did you do?

January 7, 2012

To tell you the truth, I had never even heard of Larry Tesler until a few minutes ago, when I looked up Cut and Paste on Wikipedia. Tesler, it turns out, is the guy who popularized Cut and Paste for text editing way back in his Xerox days in the mid-seventies.

I’m sure he meant well, and it probably seemed like a good idea at the time. Back in the seventies, after all, not many people used computers to edit text. Many of them were computer programmers, and you could trust them with powerful tools like Cut and Paste because they bore the brunt of the pain from careless use. Also, in the mid-seventies, mice were still confined to laboratories, although perhaps Larry knew about them.

Nearly forty years later, would he uninvent it?

On the face of it, cut and paste looks like a productivity enhancer. As an author, I can copy a piece of text over and over and change it to quickly generate my content. Using cut and paste, I can produce reams of documentation in the blink of an eye.

The problem is that then someone has to read it. In fact many more people will read it than will write it. All those readers will need to differentiate between blocks of text that may only differ subtly.

One of my favorite examples is the block of text that we currently use to document use cases. This is almost a page long, and tends to get copied then changed ever so slightly from one use case to the next. Sometimes the changes are so subtle I find myself flipping back and forth in a kind of Captain Underpants animation to try to discern the differences!

Now, would anyone spread their information this thinly if they had to write every word? I doubt it: writing like this is the result of cut and paste. If you had to write every word, you’d find a much more compact expression.

More compact documents would mean a lot less waste. Less waste of time for the half dozen readers of the document and less waste of trees for the printers.

I’m all for banning cut and paste, or at least licensing its use. Who’s with me?

Inspired by Atlassian’s Fedex Day

December 21, 2011

My team has been after me for years to implement something like Google’s 20% time.  Well, I’ve never felt I could afford to spend only four days a week delivering what the business asks for; six would be preferable.  So, we never did it.  However, Atlassian came up with the idea for Fedex days a few years ago, and this seemed a much more sellable idea, especially if we did it in December when things are starting to slow down a bit.  This year we tried it out.

We changed it a little from their format, but looking at their FAQ, there are some things we should adopt, like grabbing screenshots on the last morning in case the project breaks a few minutes before the deadline.  We also made it a two-day event, rather than just 24 hours.  Our environment is complex, and it could easily take a day just to get started.

Noon on Wednesday hit and the energy on the development floor went through the roof! Suddenly little teams of two or three formed all over the place, laptops emerged so people could work at each others’ desks, developers were huddling. Work continued into the wee hours of the morning both days. It was great!

Being the director, I decided to lead by example, and came up with my own project.  Part of my time was eaten by meetings that I couldn’t avoid, but for much of those two days I managed to roll up my sleeves and do some development.  True to form, I decided to start by learning a new language and development environment, and implemented my project in Grails.

By the end of Wednesday afternoon, I’d gone through all the tutorials I felt I needed and started on my actual project, which was to call the Innotas API to create a tool to simplify accepting timesheets.  That’s more or less when I found out that Grails is not all that much help for calling web services.  Oh well, I persevered, and thanks to Adrian Brennan, who was working on another integration with Innotas, I got my application to talk to Innotas by the time I went home, around 3 AM.

The Innotas API is a poster child for the worst API ever.  To do remarkably simple things, you need to cross the network tens of times.  It’s like traversing an XML document one node at a time over the Internet.  But I digress.

Thursday dawned earlier than expected and some of the teams were starting to struggle, including me.  I had more than half the day devoted to meetings that I couldn’t avoid.  Worse, there were no good blocks of time to get in the zone.  I was experiencing first-hand the difficulty with context-switching that my developers go through every day.  Indeed, I only got about two hours of productive time during the day, and came back in the evening.  When I left at 2 AM, I wasn’t the last to leave, and I suspect there were more working from home.

Friday morning flew by, and some of the organizational items that I’d left until the last minute became minor crises – mental note for next year!  However, I managed to get a partial demo working, which meant that at least I wouldn’t embarrass myself in the afternoon.

Suddenly it was noon, and a mountain of pizza was being delivered to our largest meeting room, which attracted the whole team very effectively.  Everyone grabbed some pizza and we called into the conference bridge for the handful of remote workers.  The afternoon would be long.

Atlassian limits their demos to three minutes.  We didn’t limit the demos this year, but next year we will.  A couple of people chose to show documents or presentations that they’d worked on, which I feel is counter to the spirit of the event.  We won’t accept those next year either.

One of the things I’d left until the last minute was figuring out exactly how we would finagle our way into the development VLAN from the conference room.  The challenges of seeing demos on various developer machines while simultaneously using join.me or gotomeeting ate up too much time.  So next year we’ll do a little practice in the week before, and we’ll get two computers going so we don’t have to wait for each demo to set up.  Well, lessons learned.

I hoped for team engagement, skills development and demonstration, and we got those in spades.  I thought we might perhaps get a product idea or two, but I was completely blown away by the number of projects that resulted in something that is almost usable in our products. We got way more value out of this initiative than I expected, and I fully expect several projects to graduate into our products after a little refinement.

If you’ve thought about Fedex Days for your organization, I heartily recommend finding a quiet time of the year and going for it.

The Myth of Governance

December 5, 2011

After the previous post regarding requirements, it is tempting to think that you could avoid prescriptive or unnecessary requirements with a proper governance structure in place.  In fact, that is the fashionable reaction when any project artifact is found to have deviated from the path: if only we had a proper review and sign-off procedure, everything would stay the course.

Now, anyone knows that review and signoff takes time.  If you want my time to review a 50 page document, you’ll be waiting 3 days to a week. If it’s 100 pages, I’ll get it back to you in at least a week.

The requirements document in the previous post was about 200 pages long.  Think about that.  200 pages is the length of a novel. Except if you picked up a novel that had the same character development and story arc as a typical work document, you’d put it down after reading the first chapter. The quality of the attention you’re able to give the work drops off significantly after about 40 pages.

Even the author can’t pay attention past page 40. That’s why it’s common to find documents that contradict themselves.

This, along with a desire to parallelize writing and reviewing, is why we often see these big documents released in chapters.  But then we introduce opportunities to miss whole aspects of the subject. The document really needs to be considered as a whole.

So, governance in the form of review and sign-off is slow and error-prone.  You might be able to compensate for the errors and inattention by slowing down further.  Give me more time to review, and maybe I’ll be more careful and won’t miss things.

The real problem, however, is that review-based governance doesn’t scale. If the overall direction sits with one person, and they must review every decision, then the organization is limited to that reviewer’s capacity.

Well, obviously you scale by adding more reviewers.  But how do you ensure that the reviewers all agree on the same direction and vision?  Even if they all think they agree on the direction and vision, they will have to interpret it and apply it in specific circumstances.  Who watches the watchers?

In the end, we introduce documentation and review because we don’t know how else to ensure that our staff are producing what we expect.  However, if we think we’re going to actually ensure they produce what we expect through review, we’re dreaming.

What we really want is self-government, and I think a few organizations have done this well.  With self-government, the leadership clearly communicate a broader vision or path toward the future, and then motivate their staff to work toward the shared goal.  If you can sufficiently communicate the idea, and convince everyone to support it, then you should not need governance.

Most Requirements Aren’t

November 27, 2011

To my ultimate embarrassment, we’re still largely a waterfall development organization. So, I read a lot of requirement definitions, and I’ve come to conclude that there are three types:

1. Requirements that are actually necessary for the product’s success. There is probably a relationship between these and the minimum viable product, but that is another post.

2. Requirements that are desired by someone, but not actually necessary.

3. Prescriptive requirements that define how something should be done rather than what should be done.

Now everyone knows (with the apparent exception of our business analysts) that the third type has no business being stated as a requirement.  Personally, I knew this early on from having it beaten into me by someone who had had it beaten into him in the early days of his career. It was part of the oral tradition of software development.  Maybe it had something to do with job protection.

It turns out that it actually has nothing to do with job security, and there are at least two good reasons why prescriptive requirements are dangerous: one for each of the major audience groups, developers and testers.  For developers, a prescriptive requirement says “You must implement it this way.”  Even if the developer can see a better way, they must still implement the lame design that has been defined and signed off.  Moreover, the prescribed design is difficult to iterate, and we all know that your first idea is rarely the best one.

The best design comes from working through all the scenarios.  We used to think we could do this on paper, but the reality is there is always an unanticipated scenario that turns the design on its head. Test Driven Development and Refactoring are as much an acknowledgement of this reality as they are of changing requirements.  What’s the developer to do if they still have to satisfy the original design requirement?  That’s right, they hack it, and we take on technical debt.

For testers, prescriptive requirements are impossible to test.  How can you tell how something was built without looking deep inside and verifying that it works that way?  At best, they can have the developer place an inspection point in the process somewhere to verify that certain steps comply with the requirements, but why are they doing that?

Last year, we completed a project with a strong prescriptive requirement: files received in one region would be combined into a single file and sent to another region where they would be burst and fed into a processing system.  Not only did the requirements document state that this would happen, but it also defined the format of the combined text file!  The intent was to satisfy a non-functional requirement that 100% of the received files be fed into the processing system.  There was a huge review team for this requirements document, most of whom could not have understood the implications of one design choice or another with regard to this file, and the developers were forced to take on an inefficient design for the non-functional requirement.   The testers were able to verify that the stated requirement was met by inspecting the file; interestingly they never did verify the actual non-functional requirement, because it was never stated.  Altogether, the amount of time wasted by the business, analysts and testers reviewing what should have been implementation details is truly staggering.

Fortunately, while the developers had to create this huge text file, they had tools on the AS400 that make text file manipulation a doddle.  Unfortunately, we were already planning to shut down the AS400.  The result?  We may need to rewrite this component that combines the files (depending on the order in which a few things fall).  If we do indeed rewrite it, we will have incurred 100% interest on the technical debt in one year.  Awesome.

Technical Debt and Interest

August 9, 2011

Since installing Sonar over a year ago, we’ve been working to reduce our technical debt.  In some of our applications, which have been around for nigh on a decade, we have accumulated huge amounts of technical debt.  I don’t hold much faith in the numbers produced by Sonar in absolute terms, but it is encouraging to see the numbers go down little by little.

Our product management team seems to have grabbed onto the notion of technical debt.  Being from a financial institution, they even get the notion that bad code isn’t so much a debt as an un-hedged call option, but they also recognize that it’s much easier to explain (and say) “technical debt” than “technical unhedged call option.”  They get this idea, and like it, but the natural question they should be asking is, “How much interest should we expect to pay if we take on some amount of technical debt?”

In the real world, debt upon which we pay no interest is like free money: you could take that loan and invest it in a sure-win investment, and repay your debt later, pocketing whatever growth you were able to get from the investment.  It’s the same with code: technical debt on which you pay no interest was probably incurred to get the code out faster, leaving budget and time for other money-making features.

How do we calculate interest, then?  The interest is a measure of how much longer it takes to maintain the code than it would if the code were idealized.  If the debt itself, the principal as it were, corresponds to the amount of time it would take to rectify the bad code, the interest is only slightly related to the principal.  And thus you see, product management’s question is difficult to answer.

Probably the easiest technical debt and interest to understand is that from duplicate code.  The principal for duplicate code is the time it would take to extract a method and replace both duplicates with a call to the method.  The interest is the time it takes to determine that duplicate code exists and replicate and test the fix in both places.  The tough part is determining that the duplicate code exists, and this may not happen until testing or even production.  Of course, if we never have to change the duplicate code, then there is no effort for fixing it, and so, in that case, the interest is zero.
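To make the principal and interest concrete, here is a minimal, hypothetical Java sketch; the class names and the discount rule are invented for illustration, not taken from our codebase. Paying off the principal means extracting the duplicated logic into one method; the interest is the time spent finding, re-fixing and re-testing the copies until you do.

```java
// Hypothetical duplicate code: the same discount rule pasted into two services.
class InvoiceService {
    double totalWithDiscount(double subtotal, boolean loyalCustomer) {
        double discount = loyalCustomer ? 0.10 : 0.0;   // duplicate #1
        if (subtotal > 1000) {
            discount += 0.05;
        }
        return subtotal * (1 - discount);
    }
}

class QuoteService {
    double estimateWithDiscount(double subtotal, boolean loyalCustomer) {
        double discount = loyalCustomer ? 0.10 : 0.0;   // duplicate #2: every change must be found, made and tested here too
        if (subtotal > 1000) {
            discount += 0.05;
        }
        return subtotal * (1 - discount);
    }
}

// Paying off the principal: extract the rule once and call it from both services.
final class DiscountPolicy {
    private DiscountPolicy() {}

    static double discountedTotal(double subtotal, boolean loyalCustomer) {
        double discount = loyalCustomer ? 0.10 : 0.0;
        if (subtotal > 1000) {
            discount += 0.05;
        }
        return subtotal * (1 - discount);
    }
}
```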

So, I propose that the technical interest is something like

Technical Interest = Cost of Maintaining Bad Code * Probability that Maintenance is Required

You quickly realize then that it’s not enough to talk about the total debt in the system; indeed, it’s useless to talk about the total debt as some of it is a zero-interest, no down-payment type of loan.  What is much more interesting is to talk about the total interest payments being made on the system, and for that, you really need to decompose the source code into modules and analyze which modules incur the most change.
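As a back-of-the-envelope sketch of that idea, here is a small, hypothetical Java fragment that applies the formula module by module. The module names, hours and probabilities are invented; in practice the change probability would come from version-control history and the remediation cost from a tool like Sonar.

```java
import java.util.List;

// Estimated interest = cost of maintaining the bad code * probability that
// the module will actually need maintenance in the period.
record ModuleDebt(String name, double extraMaintenanceHours, double changeProbability) {
    double estimatedInterest() {
        return extraMaintenanceHours * changeProbability;
    }
}

class TechnicalInterestReport {
    public static void main(String[] args) {
        // Invented numbers, for illustration only.
        List<ModuleDebt> modules = List.of(
                new ModuleDebt("billing-core", 40, 0.9),     // hot module, changes almost every release
                new ModuleDebt("report-export", 25, 0.3),
                new ModuleDebt("legacy-importer", 80, 0.05)  // ugly, but nobody ever touches it
        );

        modules.stream()
               .sorted((a, b) -> Double.compare(b.estimatedInterest(), a.estimatedInterest()))
               .forEach(m -> System.out.printf("%-16s interest ~ %.1f hours%n",
                       m.name(), m.estimatedInterest()));
    }
}
```

Ranked this way, the hot but mildly messy module floats above the much uglier module that never changes, which is exactly the point.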

It’s also useful to look at the different types of debt and decide which of them are incurring the most interest.  Duplicate code in a quickly changing codebase, for example, is probably incurring more interest than even an empty catch block in the same codebase.  However, they both take about the same amount of time to fix.  Which should you fix first?  Because the interest on technical debt compounds, you should always pay off the high-interest loan first.

Magazines on Prezi?

January 15, 2011

This week Prezi announced their iPad app.  If you’re not familiar with Prezi, go check out their innovative approach to presentations.  I’ve used it for a couple of presentations so far, and I have to say I love it.  My audience, of course, didn’t know what was going on until they asked a question and I quickly zoomed out and in again to find the part of my Prezi that spoke to their question.  Try doing that in PowerPoint, and you’ll find yourself fumbling.

Last year, Wired produced the first decent magazine for the iPad.  This largely fulfilled the vision proposed by BERG earlier last year and of course it’s mighty sweet.

But frankly, it’s still the same experience of paging through a document, except now you get to do it by swiping and there are a few bells and whistles.  It’s not quite a reimagining of the reading experience.  That’s where I think Prezi could come in with their new app.  What if, instead of swiping through a document, you zoomed and panned across a map?  You could explore interwoven topics, zoom in to understand detail and zoom out to see the big picture.

Maybe that’s the plan.  Prezi has a few things to add before they’re competitive with Adobe, but I’m looking forward to the real future of magazines.

Joshua Bloch on API Design

December 11, 2010

I was looking for some direction for what people have found works well for API and SPI documentation, when I happened across this great Google Tech Talk on API Design by Joshua Bloch.  Within a couple of minutes I’d started to take notes – which is challenging when you’re trying to eat soup.   It was that good.

To save you (who is likely me as nobody else reads this blog!) from watching the whole 60 minutes again, here were the main things I took away.

The first two were the principles that Joshua wanted to ensure everyone took with them:

  • The API should be as small as possible, but no smaller.  “When in doubt leave it out.”
  • Don’t make the client do anything the module could do.  This causes boilerplate code, full of errors.
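To illustrate that second principle, here is a tiny, entirely hypothetical Java example (the sensor classes are invented): the first API makes every caller repeat the same parsing boilerplate, while the second does that work inside the module.

```java
import java.util.Locale;

// Hypothetical API #1: returns a raw string like "23.5;C", so every client
// must copy the same fiddly parsing (and the same bugs).
class RawSensor {
    String read() { return "23.5;C"; }
}

// Hypothetical API #2: the module does what the module could do.
class Sensor {
    double readCelsius() { return 23.5; }
}

class Clients {
    public static void main(String[] args) {
        // Client of the raw API: boilerplate that belongs inside the module.
        String[] parts = new RawSensor().read().split(";");
        double celsius = Double.parseDouble(parts[0]);
        if (!parts[1].equals("C")) {
            celsius = (celsius - 32) * 5.0 / 9.0; // every client must remember this
        }
        System.out.printf(Locale.ROOT, "raw client:  %.1f C%n", celsius);

        // Client of the kinder API: nothing to get wrong.
        System.out.printf(Locale.ROOT, "kind client: %.1f C%n", new Sensor().readCelsius());
    }
}
```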

I like those two principles, as well as the others he highlighted (names matter, strive for symmetry, document religiously, design for performance without warping the API, coexist peacefully with the platform, minimize mutability, design and document for inheritance or prohibit it, keep the Principle of Least Astonishment, fail early, fail at compile time, provide programmatic access to data available in strings, use consistent parameter ordering, avoid long parameter lists, avoid returning a value that requires exceptional processing).  However, what I found really interesting were some of the ideas he suggested for the approach to API design.

  • Start with a one-page version of the specification. The idea here is that as the specification gets larger and more fully fleshed out, it gets harder to change, and you want to be able to take on feedback from your clients to improve the design as much as possible.  I think it would have a couple of fabulous side effects as well, which make me wonder if you shouldn’t strive for a single-page specification all the time.  First, a single page constrains the size of the API, ensuring that the module does one thing and does it well.  The second benefit might be that keeping a specification on a single page forces the designer to really concentrate on getting the naming right, so that the behavior of the API is apparent without so much documentation.
  • Try coding to the spec before implementing it. There is nothing like using a system to expose its usability, and for APIs, usage is about code.  As with the single-page specification, this idea has the effect of catching problems while they are easy to fix.  I like the way this principle could tie in nicely with test-driven development, as well.  You know, if you wrote all the tests against a stubbed version of the interface, and they were all failing to start, that would be okay, and you’d have an excellent sense of how to use the interface (see the sketch after this list).
  • For service provider interfaces, write three implementations before publishing. For us, service provider interfaces are in fact far more important than application programming interfaces, and so I wish I’d thought of this myself.  That you need to build something three times before it is reusable is one of those well-known tenets of programming, but I’d never put it together with service provider interfaces before.  Now it seems to make perfect sense.  If you write a single example implementation of the SPI, you will design an SPI that presupposes that implementation; two is better, three is really good.  There are diminishing marginal returns after three.
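Here is a minimal sketch of what coding to the spec before implementing it might look like; the RateLimiter interface and its tests are invented for illustration, and the testing framework assumed is JUnit 5.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// The one-page spec expressed as an interface: no implementation yet.
interface RateLimiter {
    /** Returns true if the caller identified by key may proceed right now. */
    boolean tryAcquire(String key);

    /** Remaining permits for the key in the current window. */
    int remaining(String key);
}

// A deliberately unimplemented stub so the tests compile and run (and fail).
class StubRateLimiter implements RateLimiter {
    @Override public boolean tryAcquire(String key) { throw new UnsupportedOperationException("not implemented"); }
    @Override public int remaining(String key) { throw new UnsupportedOperationException("not implemented"); }
}

// Coding to the spec: tests written against the interface, all failing to start,
// which is fine -- the point is to feel how the API reads before committing to it.
class RateLimiterSpecTest {
    private final RateLimiter limiter = new StubRateLimiter();

    @Test
    void firstCallIsAllowed() {
        assertTrue(limiter.tryAcquire("client-42"));
    }

    @Test
    void remainingDecreasesAfterAcquire() {
        int before = limiter.remaining("client-42");
        limiter.tryAcquire("client-42");
        assertEquals(before - 1, limiter.remaining("client-42"));
    }
}
```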

Well, as I say, it was a great talk, well worth watching, and the points I took out of it are well worth remembering.

The Problem with Templates

March 17, 2010

As technical teams mature, one of the remedies for the many ills that come from growth is the addition of process.  These processes call for documentation, and someone generally kicks off a template to make these documents easier to produce.  As we learn more, we add sections to the templates to ensure we don’t repeat mistakes, or at least remember to consider the factors in subsequent initiatives.

So far so good.  The organization is learning and improving with every project.

Unfortunately, document templates often wind up looking a lot like forms.  That makes people want to fill in all the sections (often improperly), and that leads to bloated documents that don’t even fulfill their purpose.

Take, for example, a fairly typical waterfall model of software development.  There is a requirements document, followed by a design document.  Often the design document template will include a section called something like, “architecturally significant use cases.”  It is tempting to simply grab all the use cases from the requirements document and paste them into this section, especially when there are sections on logical, physical, deployment, data and code architecture yet to write.

Apart from the obvious problem with cut and paste, the inclusion of all the use cases fails at the most basic level to communicate the significant use cases.   The document fails.

I don’t have a good answer to this, other than to provide only a high-level template for much of the document along with a description of how the document should work.

For example, that design document starts with architecturally significant use cases that drive the choice of logical components.  The logical components find places to live in executables and libraries, which are documented in the physical architecture section, and those executables find homes in the deployment architecture.  In order to write a sensible design document, an author has to understand this flow; merely seeing the headings in the template isn’t going to help.

In most cases, the document template is not the place to learn.  It should stay high-level, and force its authors to think through the process of writing the document.  We still need a place to ensure projects can impart their wisdom to subsequent projects, but the place to do this is in a checklist, not in a document template.

So, if you’re thinking of creating a template, think about creating a short (!) explanation of how a document of this type should be organized so that it communicates.  Add a checklist to the explanation, and do it all in a wiki so that those who come after you can help the organization learn.

The New Way to Scale

March 13, 2010

Scalability is one of those poorly understood concepts in the computer world, and it is understood even less well in the business world.  Briefly, it is the ability of a system to grow its capacity by adding more resources.  A desirable property is that growing the capacity by a unit should never require a single huge investment.  If each additional unit of capacity requires a constant amount of additional resources, we say the system scales linearly.

Most companies today do not scale

Think about car assembly lines: they represent a huge infrastructure investment of hundreds of millions of dollars, optimized to produce a single model of car at a particular rate of completion.  The line is most efficient when working 24×7, producing a steady stream of new cars.  If the demand does not meet that capacity, then the margin per unit is less than planned, and possibly the car is not viable.  The company’s response is to shutter the plant and throw everyone out of work.

On the other hand, if the demand is greater than the planned capacity, there is no way for the company to meet that demand without taking the risk that the second plant may never run at full capacity, and the product will not be viable.  This risk is especially strong for your typical fad product – the wildly successful Tickle-me-Elmos whose demand outstrips supply for a few short weeks.  Because the manufacturing takes place on the other side of the planet, in a factory of fixed capacity, there is no way to get more Elmos on a ship before Christmas, let alone in stores.  They arrive, instead, after their moment, and we get an Elmo glut in January.

Actually, I have no idea if Elmo production failed to scale, but certainly the weeks it takes to ship from China mean that this type of manufacturing fails to scale quickly.

When we scale computer systems, we talk about scaling out or scaling up.  To simplify, scaling up means you buy a bigger computer; if your single-processor machine no longer serves, you get a four-processor machine, and so on.  The problem with this strategy is that the first computer is now surplus, and each step up requires a bigger investment than the last.  Ultimately, you hit a brick wall, and nobody can supply enough processors in a single frame for your needs (that number is crazy high today, but CPU is only one resource, and other resources hit the wall earlier).

Scaling out is generally favored over scaling up.  When we scale out a system, we start with one low-powered machine, and when demand outstrips its capacity, we add a second identical machine, then a third and so on.   Theoretically, this can go on forever, although practically, you hit design limits on a resource that only scales up; database servers, for example, are often designed to scale up because, well, it’s hard to come up with a database architecture that scales out.

How can we take the idea of scaling out to manufacturing, and how does the small batch platform enable it?

Suppose you are a small-batch manufacturer, creating Elmo dolls and selling them into your community.  Suddenly they become wildly popular.  Your neighbours start complaining about the lineup out the door of your small shop; you’re working all night to craft more Elmos on your single set of machines, and they are snapped up each day before noon.  Customers camp out the night before to ensure being first in line for Elmo in the morning and your neighbours complain even more.

You need a way to scale.  Fortunately, your manufacturing platform consists of flexible machines – CNC mills and lathes, 3D printers, laser cutters, and the like.  So, production is easy to replicate.  You could set up a bigger factory, but there’s no room in your small shop, and so, that would take time.

What you need to do is find a partner with a similar capability and license them to produce and sell your Elmo.  They do this in their own community, getting their own lineup of customers out the door, and pay you a license fee for your design.  Both of you are happy.

Not only are you happy, but your customers are happier.  They no longer have to line up outside your small shop to get an Elmo, but can visit your partner whose shop is closer to them.  The planet benefits from this proximity.

What’s more, you and your partners are ramping Elmo production up and down as demand increases and ultimately wanes.  This means there is no post-yuletide Elmo glut, and again the planet is happier.

Scaling up production by scaling out, and then scaling down again is one of the ways that new companies will be more efficient than old-style manufacturing.  And that is one of the reasons why new-style companies are going to eat old-style companies for lunch.

Future Corporations and the Community

March 1, 2010

This week I’ve noticed a number of writers who identified deep personal schisms for executives of our modern global companies.  As always, it’s interesting to think about how the world will be better one day.

One of my favorite bloggers, Umair Haque, who has been foretelling the end of business as we know it for years, wrote about a crisis of nihilism.  As with almost every post, he points out that the real problem with companies is that they have lost their way by constantly focusing only on the near term bottom line, shareholder value.  Successful corporations of the future will be the ones that value something of deeper meaning, and in this week’s post he posits that this is culture.  He writes, “That means, of course, that tomorrow’s organizations must do more than just sell stuff. They must not be economically full but culturally empty. They must culturally reboot the communities and societies which they’re part of, helping them thrive and prosper in human terms.”

Haque identifies a new breed of CEO who does “not listen to the beancounting consultants who advise him to offshore — and upskill his workers instead…”  They probably all talk to one another over at HBR, and so there is another interesting post from Roger Martin about the inauthentic community of modern executives.  This post points to the erosion of authenticity in executives’ communities.  Martin suggests that in the past, a company operated within a geographic context, and ownership often came from within that community; the shareholders’ values and the corporation’s values were thus naturally aligned by a common interest in their community.  Today’s corporations, on the other hand, are owned – usually indirectly – by shareholders all over the world, and consequently, the only value they share is shareholder value – the bottom line.  While Martin doesn’t go this far, there must surely be a toxic internal conflict for these executives as they rationalize these often conflicting values.

So you’d expect that some executives have pulled out and re-focused on their own values.  Architects and builders, who mostly operate locally anyway, are at the leading edge of this shift, as this short interview with JC Scotts illustrates.  Being small companies, perhaps partnerships or sole proprietors, these firms should have a natural alignment of values with operations, provided their owners are self-aware.

There is reason, then, for optimism as manufacturing becomes increasingly local, distributed and democratic.  Small batchers, with their lower capital needs, should have better cohesion between personal values and corporate values.  Because people in my community share some of my values, maybe that means one day I will be able to choose products that have a positive impact on the things I love.