Archive for the ‘Business’ Category

A Mobile Usability Testing Filming Rig

August 12, 2015

Yesterday, a couple of my interaction design folks came to my office with a webcam and a cheap light from Ikea.  They had had the brilliant idea of mounting the webcam on the light so they could film usability testing of our mobile app.  The masking-tape version they had assembled worked fine for internal testing, but tomorrow they’re heading to a VanCity branch to test with real members, and they wanted something a little more professional-looking.

It turns out this Ikea lamp is made to be hacked with the niceEshop webcam.  All we had to do was take the reflector out, along with the socket, switch and bulb.  Then it was easy to thread the webcam wire through the hole where the lamp switch had been. The webcam wire has a little rheostat along its length to adjust the light brightness, and this needed to be taken apart and reassembled to make it through the hole in the lamp.

I took the reflector home last night to expose it to my hacksaw and Dremel tool for fifteen minutes, to get rid of the parabolic part of the reflector and to make a place where we can reach the camera’s on-off button.  Then this morning, I re-installed the reflector with some Blu-Tack to keep the camera from moving around.  If I wanted to be professional about it, I might have used some black silicone, but nobody will see the Blu-Tack anyway.

Don’t get me wrong, I love managing a team of developers, designers and testers.  But occasionally I get to play MacGyver, and that is really fun.

Structural Quality and the Cost of Maintenance

November 19, 2012


This short (4:40) interview with a couple of Cap Gemini execs largely speaks to the value of measuring structural quality of code.  CG is using a tool called CAST, while here at Central 1, we use SonarGraph, but I expect they accomplish more or less the same thing.  Right at the end of the interview they propose the idea of using CAST to help them predict the cost of maintenance of an application.

This is an interesting idea, and it speaks to structural problems being the more costly type of technical debt.  That is, it is the debt on which we pay the most interest: working within a code base that is poorly designed is slow and error-prone.
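
To make that concrete, here’s a toy example of the kind of structural problem these tools flag (my own sketch, not output from CAST or SonarGraph): a dependency cycle between two packages.  Neither package can be built, tested or reused in isolation, so every change to one risks rippling into the other.

    // Toy illustration (mine, not from CAST or SonarGraph): a package cycle,
    // the classic structural-quality problem.  Two files shown together.

    // File: accounts/Account.java
    package accounts;

    import java.util.List;

    public class Account {
        private List<billing.Invoice> invoices;   // accounts -> billing
    }

    // File: billing/Invoice.java
    package billing;

    public class Invoice {
        private accounts.Account owner;            // billing -> accounts: a cycle
    }

Breaking the cycle, say by extracting an interface that one side owns, is the principal; the interest is every change you make while the cycle is still in place.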

North Shore Outlook Investigates Declining Enrollment

March 9, 2012

The Outlook published an article today that investigates the declining enrollment in North Vancouver schools.

(Thanks to Norwood Queens CA for pointing it out)


Larry Tesler: What did you do?

January 7, 2012

To tell you the truth, I had never even heard of Larry Tesler until a few minutes ago, when I looked up Cut and Paste on Wikipedia.  Tesler, it turns out, is the guy who popularized Cut and Paste for text editing way back in his Xerox days in the mid-seventies.

I’m sure he meant well, and it probably seemed like a good idea at the time. Back in the seventies, after all, not many people used computers to edit text. Many of them were computer programmers, and you could trust them with powerful tools like Cut and Paste because they bore the brunt of the pain from careless use. Also, in the mid-seventies, mice were still confined to laboratories, although perhaps Larry knew about them.

Nearly forty years later, would he uninvent it?

On the face of it, cut and paste looks like a productivity enhancer. As an author, I can copy a piece of text over and over and change it to quickly generate my content. Using cut and paste, I can produce reams of documentation in the blink of an eye.

The problem is that then someone has to read it. In fact many more people will read it than will write it. All those readers will need to differentiate between blocks of text that may only differ subtly.

One of my favorite examples is the block of text that we currently use to document use cases. This is almost a page long, and tends to get copied then changed ever so slightly from one use case to the next. Sometimes the changes are so subtle I find myself flipping back and forth in a kind of Captain Underpants animation to try to discern the differences!

Now, would anyone spread their information so thin if they had to write every word themselves?  I doubt it: writing like this is the result of cut and paste.  If you had to write every word, you’d find a much more compact expression.

More compact documents would mean a lot less waste. Less waste of time for the half dozen readers of the document and less waste of trees for the printers.

I’m all for banning cut and paste, or at least licensing its use. Who’s with me?

Inspired by Atlassian’s Fedex Day

December 21, 2011

My team has been after me for years to implement something like Google’s 20% time.  Well, I’ve never felt we could afford to spend only four days a week delivering what the business asks for — six would be preferable.  So, we never did it.  However, Atlassian came up with the idea for Fedex Days a few years ago, and this seemed a much more sellable idea, especially if we did it in December when things are starting to slow down a bit.  This year we tried it out.

We changed it a little from their format, but looking at their FAQ, there are some things we should adopt, like grabbing screenshots on the last morning in case the project breaks a few minutes before the deadline.  We also made it a two-day event, rather than just 24 hours.  Our environment is complex, and it could easily take a day just to get started.

Noon on Wednesday hit and the energy on the development floor went through the roof!  Suddenly little teams of two or three formed all over the place, laptops emerged so people could work at each other’s desks, developers were huddling.  Work continued into the wee hours of the morning both days.  It was great!

Being the director, I decided to lead by example, and came up with my own project.  Part of my time was eaten by meetings that I couldn’t avoid, but for much of those two days I managed to roll up my sleeves and do some development.  True to form, I decided to start by learning a new language and development environment, and implemented my project in Grails.

By the end of Wednesday afternoon, I’d gone through all the tutorials I felt I needed and started on my actual project, which was to call the Innotas API to create a tool to simplify accepting timesheets.  That’s more or less when I found out that Grails is not all that much help for calling web services.  Oh well, I persevered, and thanks to Adrian Brennan, who was working on another integration with Innotas, I got my application to talk to Innotas by the time I went home, around 3 AM.

The Innotas API is a poster child for the worst API ever.  To do remarkably simple things, you need to cross the network tens of times.  It’s like traversing an XML document one node at a time over the Internet.  But I digress.
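
To give a flavour of what that looks like from the calling side, here’s a sketch.  The names are invented, not the real Innotas API; the point is the shape, where every navigation step costs a round trip.

    // Hypothetical sketch of a chatty API; these are NOT real Innotas calls.
    // Reading one logical record costs a network round trip per node.
    interface ChattyApi {
        String[] taskIds(String projectId);   // one round trip
        String[] entryIds(String taskId);     // one round trip per task
        double hours(String entryId);         // one round trip per entry
    }

    class TimesheetTotal {
        // Totalling the hours for a project costs 1 + T + (T * E) network
        // calls for T tasks with E entries each.  A coarse-grained API would
        // return the whole timesheet in a single call.
        static double total(ChattyApi api, String projectId) {
            double sum = 0;
            for (String task : api.taskIds(projectId))
                for (String entry : api.entryIds(task))
                    sum += api.hours(entry);
            return sum;
        }
    }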

Thursday dawned earlier than expected and some of the teams were starting to struggle, including me.  I had more than half the day devoted to meetings that I couldn’t avoid.  Worse, there were no good blocks of time to get in the zone.  I was experiencing first-hand the difficulty with context-switching that my developers go through every day.  Indeed, I only got about two hours of productive time during the day, and came back in the evening.  When I left at 2 AM, I wasn’t the last to leave, and I suspect there were more working from home.

Friday morning flew by, and some of the organizational items that I’d left until the last minute became minor crises – mental note for next year!  However, I managed to get a partial demo working, which meant that at least I wouldn’t embarrass myself in the afternoon.

Suddenly it was noon, and a mountain of pizza was being delivered to our largest meeting room, which attracted the whole team very effectively.  Everyone grabbed some pizza and we called into the conference bridge for the handful of remote workers.  The afternoon would be long.

Atlassian limits their demos to three minutes.  We didn’t limit the demos this year, but next year we will.  A couple of people chose to show documents or presentations that they’d worked on, which I feel is counter to the spirit of the event.  We won’t accept those next year either.

One of the things I’d left until the last minute was figuring out exactly how we would finagle our way into the development VLAN from the conference room.  The challenges of seeing demos on various developer machines while simultaneously using join.me or gotomeeting ate up too much time.  So next year we’ll do a little practice in the week before, and we’ll get two computers going so we don’t have to wait for each demo to set up.  Well, lessons learned.

I hoped for team engagement, skills development and demonstration, and we got those in spades.  I thought we might perhaps get a product idea or two, but I was completely blown away by the number of projects that resulted in something almost usable in our products.  We got way more value out of this initiative than I expected, and I fully expect several projects to graduate into our products after a little refinement.

If you’ve thought about Fedex Days for your organization, I heartily recommend finding a quiet time of the year and going for it.

Most Requirements Aren’t

November 27, 2011

To my ultimate embarrassment, we’re still largely a waterfall development organization.  So I read a lot of requirements definitions, and I’ve concluded that there are three types:

1. Requirements that are actually necessary for the product’s success.  There is probably a relationship between these and the minimum viable product, but that is another post.

2. Requirements that are desired by someone, but not actually necessary.

3. Prescriptive requirements that define how something should be done rather than what should be done.


Now everyone knows (with the apparent exception of our business analysts) that the third type has no business being stated as a requirement.  Personally, I knew this early on from having it beaten into me by someone who had had it beaten into him in the early days of his career. It was part of the oral tradition of software development.  Maybe it had something to do with job protection.

It turns out that it actually has nothing to do with job security, and there are at least two good reasons why prescriptive requirements are dangerous: one for each of the major audience groups — developers and testers.  For developers, a prescriptive requirement says “You must implement it this way.”  Even if the developer can see a better way, they must still implement the lame design that has been defined and signed off.  Moreover, the prescribed design is difficult to iterate, and we all know that your first idea is rarely the best one.

The best design comes from working through all the scenarios.  We used to think we could do this on paper, but the reality is there is always an unanticipated scenario that turns the design on its head. Test Driven Development and Refactoring are as much an acknowledgement of this reality as they are of changing requirements.  What’s the developer to do if they still have to satisfy the original design requirement?  That’s right, they hack it, and we take on technical debt.

For testers, prescriptive requirements are impossible to test.  How can you tell how something was built without looking deep inside and verifying that it works that way?  At best, they can have the developer place an inspection point in the process somewhere to verify that certain steps comply with the requirements, but why should they have to?

Last year, we completed a project with a strong prescriptive requirement: files received in one region would be combined into a single file and sent to another region, where they would be burst and fed into a processing system.  Not only did the requirements document state that this would happen, it also defined the format of the combined text file!  The intent was to satisfy a non-functional requirement that 100% of the received files be fed into the processing system.  There was a huge review team for this requirements document, most of whom could not have understood the implications of one design choice or another with regard to this file, and the developers were forced to take on an inefficient design for the sake of the non-functional requirement.  The testers were able to verify that the stated requirement was met by inspecting the file; interestingly, they never did verify the actual non-functional requirement, because it was never stated.  Altogether, the amount of time wasted by the business, analysts and testers reviewing what should have been implementation details is truly staggering.
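
For the curious, the general shape of such a combine-and-burst scheme is sketched below.  The marker format here is invented for illustration; the real format was, of course, spelled out in the requirements document.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Illustrative sketch of a combine-and-burst scheme; the marker format
    // is invented, not the one our requirements document prescribed.
    class CombineAndBurst {
        static final String MARKER = ">>>FILE ";

        // Concatenate many files into one, prefixing each with its name.
        static void combine(List<Path> inputs, Path out) throws IOException {
            try (var w = Files.newBufferedWriter(out)) {
                for (Path in : inputs) {
                    w.write(MARKER + in.getFileName() + System.lineSeparator());
                    String text = Files.readString(in);
                    w.write(text);
                    if (!text.endsWith("\n")) w.write(System.lineSeparator());
                }
            }
        }

        // Split the combined file back into its parts in a target directory.
        static void burst(Path combined, Path targetDir) throws IOException {
            String name = null;
            StringBuilder body = null;
            for (String line : Files.readAllLines(combined)) {
                if (line.startsWith(MARKER)) {
                    if (name != null) Files.writeString(targetDir.resolve(name), body.toString());
                    name = line.substring(MARKER.length());
                    body = new StringBuilder();
                } else if (body != null) {
                    body.append(line).append(System.lineSeparator());
                }
            }
            if (name != null) Files.writeString(targetDir.resolve(name), body.toString());
        }
    }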

Fortunately, while the developers had to create this huge text file, they had tools on the AS400 that make text file manipulation a doddle.  Unfortunately, we were already planning to shut down the AS400.  The result?  We may need to rewrite this component that combines the files (depending on the order in which a few things fall).  If we do indeed rewrite it, we will have incurred 100% interest on the technical debt in one year.  Awesome.

Technical Debt and Interest

August 9, 2011

Since installing Sonar over a year ago, we’ve been working to reduce our technical debt.  In some of our applications, which have been around for nigh on a decade, we have accumulated huge amounts of technical debt.  I don’t hold much faith in the numbers produced by Sonar in absolute terms, but it is encouraging to see the numbers go down little by little.

Our product management team seems to have grabbed onto the notion of technical debt.  Being from a financial institution, they even get the notion that bad code isn’t so much a debt as an un-hedged call option, but they also recognize that it’s much easier to explain (and say) “technical debt” than “technical unhedged call option.”  They get this idea, and like it, but the natural question for them to ask is, “How much interest should we expect to pay if we take on this much technical debt?”

In the real world, debt upon which we pay no interest is like free money: you could take that loan and invest it in a sure-win investment, and repay your debt later, pocketing whatever growth you were able to get from the investment.  It’s the same with code: technical debt on which you pay no interest was probably incurred to get the code out faster, leaving budget and time for other money-making features.

How do we calculate interest, then?  The interest is a measure of how much longer it takes to maintain the code than it would take if the code were ideal.  If the debt itself, the principal as it were, corresponds to the amount of time it would take to rectify the bad code, then the interest is only loosely related to the principal.  And that is why product management’s question is difficult to answer.

Probably the easiest technical debt and interest to understand is that from duplicate code.  The principal for duplicate code is the time it would take to extract a method and replace both duplicates with a call to the method.  The interest is the time it takes to determine that duplicate code exists and replicate and test the fix in both places.  The tough part is determining that the duplicate code exists, and this may not happen until testing or even production.  Of course, if we never have to change the duplicate code, then there is no effort for fixing it, and so, in that case, the interest is zero.
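
Here’s what paying down that particular principal looks like (a toy example of my own): the principal is the few minutes it takes to extract the method; the interest is every future fix that must be found, made and tested twice.

    // Toy example: duplicate-code debt and its repayment.
    class Totals {
        // Before: the same rounding logic pasted into two methods.  Any change
        // to the rounding rule must be found, fixed and tested in both places;
        // that recurring cost is the interest.
        double invoiceTotal(double amount, double taxRate) {
            return Math.round(amount * (1 + taxRate) * 100.0) / 100.0;
        }
        double quoteTotal(double amount, double taxRate) {
            return Math.round(amount * (1 + taxRate) * 100.0) / 100.0;
        }
    }

    class TotalsRefactored {
        // After: extract a method, paying off the principal.  A future change
        // is now made, and tested, in exactly one place.
        double grossWithTax(double amount, double taxRate) {
            return Math.round(amount * (1 + taxRate) * 100.0) / 100.0;
        }
        double invoiceTotal(double amount, double taxRate) {
            return grossWithTax(amount, taxRate);
        }
        double quoteTotal(double amount, double taxRate) {
            return grossWithTax(amount, taxRate);
        }
    }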

So, I propose that the technical interest is something like

Technical Interest = Cost of Maintaining Bad Code * Probability that Maintenance is Required

You quickly realize then that it’s not enough to talk about the total debt in the system; indeed, it’s useless to talk about the total debt as some of it is a zero-interest, no down-payment type of loan.  What is much more interesting is to talk about the total interest payments being made on the system, and for that, you really need to decompose the source code into modules and analyze which modules incur the most change.

It’s also useful to look at the different types of debt and decide which of them are incurring the most interest.  Duplicate code in a quickly changing codebase, for example, is probably incurring more interest than even an empty catch block in the same codebase.  However, they both take about the same amount of time to fix.  Which should you fix first?  Because the interest on technical debt compounds, you should always pay off the high-interest loan first.
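
To put some toy numbers on that (my own back-of-the-envelope sketch, not anything Sonar computes): three debt items with identical fix costs can carry wildly different interest, and sorting by expected interest gives the payoff order.

    import java.util.Comparator;
    import java.util.List;

    // Back-of-the-envelope model of the formula above.  All numbers invented.
    record DebtItem(String name, double fixCostHours, double maintCostHours, double changeProb) {
        // Technical Interest = cost of maintaining the bad code
        //                      * probability that maintenance is required
        double expectedInterest() { return maintCostHours * changeProb; }
    }

    class PayoffOrder {
        public static void main(String[] args) {
            List<DebtItem> debts = List.of(
                new DebtItem("duplicate code, hot module", 4, 6, 0.9),
                new DebtItem("empty catch block, hot module", 4, 8, 0.1),
                new DebtItem("duplicate code, frozen module", 4, 6, 0.0));
            // Identical fix costs, very different interest:
            // pay off the high-interest loan first.
            debts.stream()
                 .sorted(Comparator.comparingDouble(DebtItem::expectedInterest).reversed())
                 .forEach(d -> System.out.printf("%-32s interest = %.1f h/yr%n",
                                                 d.name(), d.expectedInterest()));
        }
    }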

Magazines on Prezi?

January 15, 2011

This week Prezi announced their iPad app.  If you’re not familiar with Prezi, go check out their innovative approach to presentations.  I’ve used it for a couple of presentations so far, and I have to say I love it.  My audience, of course, didn’t know what was going on until they asked a question and I quickly zoomed out and in again to find the part of my Prezi that spoke to their question.  Try doing that in PowerPoint, and you’ll find yourself fumbling.

Last year, Wired produced the first decent magazine for the iPad.  This largely fulfilled the vision proposed by BERG earlier last year and of course it’s mighty sweet.

But frankly, it’s still the same experience of paging through a document, except now you get to do it by swiping, and there are a few bells and whistles.  It’s not quite a reimagining of the reading experience.  That’s where I think Prezi could come in with their new app.  What if, instead of swiping through a document, you zoomed and panned across a map?  You could explore interwoven topics, zoom in to understand detail and zoom out to see the big picture.

Maybe that’s the plan.  Prezi has a few things to add before they’re competitive with Adobe, but I’m looking forward to the real future of magazines.

Joshua Bloch on API Design

December 11, 2010

I was looking for some direction for what people have found works well for API and SPI documentation, when I happened across this great Google Tech Talk on API Design by Joshua Bloch.  Within a couple of minutes I’d started to take notes – which is challenging when you’re trying to eat soup.   It was that good.

To save you (who is likely me, as nobody else reads this blog!) from watching the whole 60 minutes again, here are the main things I took away.

The first two were the principles that Joshua wanted to ensure everyone took with them:

  • The API should be as small as possible, but no smaller.  “When in doubt leave it out.”
  • Don’t make the client do anything the module could do.  Otherwise clients end up writing boilerplate code, and boilerplate is full of errors.  (A tiny example follows this list.)
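
Here is that tiny illustration of the second principle (my example, not one from the talk): if the module exposes only raw pieces, every client re-assembles them, and some client will inevitably get it wrong.

    import java.time.Duration;
    import java.time.Instant;

    // My example, not Joshua's: don't make the client do work the module could do.
    class Job {
        private final Instant started = Instant.now();
        private Instant finished;

        void finish() { finished = Instant.now(); }

        // If the API exposed only these two accessors, every caller would
        // recompute the elapsed time, and some would get the order or the
        // units wrong.
        Instant started()  { return started; }
        Instant finished() { return finished; }

        // Better: do it once, correctly, inside the module.
        Duration elapsed() { return Duration.between(started, finished); }
    }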

I like those two principles, as well as the others he highlighted (names matter, strive for symmetry, document religiously, design for performance without warping the API, coexist peacefully with the platform, minimize mutability, design and document for inheritance or prohibit it, keep the Principle of Least Astonishment, fail early, fail at compile time, provide programmatic access to data available in strings, use consistent parameter ordering, avoid long parameter lists, avoid returning a value that requires exceptional processing).  However, what I found really interesting were some of the ideas he suggested for the approach to API design.

  • Start with a one-page version of the specification.  The idea here is that as the specification gets larger and more fully fleshed out, it gets harder to change, and you want to be able to take on feedback from your clients to improve the design as much as possible.  I think it would have a couple of fabulous side effects as well, which make me wonder if you shouldn’t strive for a single-page specification all the time.  First, a single page constrains the size of the API, ensuring that the module does one thing and does it well.  Second, keeping a specification to a single page might force the designer to really concentrate on getting the naming right, so that the behavior of the API is apparent without so much documentation.
  • Try coding to the spec before implementing it.  There is nothing like using a system to expose its usability, and for APIs, usage is about code.  As with the single-page specification, this idea has the effect of catching problems while they are easy to fix.  I like the way this principle could tie in nicely with test-driven development, as well: if you wrote all the tests against a stubbed version of the interface, they would all fail at first, and that would be okay; you’d come away with an excellent sense of how to use the interface.  (There’s a sketch of this after the list.)
  • For service provider interfaces, write three implementations before publishing.  For us, service provider interfaces are in fact far more important than application programming interfaces, so I wish I’d thought of this myself.  That you need to build something three times before it is reusable is one of those well-known tenets of programming, but I’d never connected it with service provider interfaces before.  Now it seems to make perfect sense.  If you write a single example implementation of the SPI, you will design an SPI that presupposes that implementation; two is better, and three is really good.  There are diminishing marginal returns after three.
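
Here’s the promised sketch of coding to the spec (the interface and names are invented for illustration): write client code, in this case a JUnit test, against a stub before any real implementation exists.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // Invented interface for illustration: a first cut of a proposed API.
    interface RateLimiter {
        /** Returns true if the caller may proceed, false if throttled. */
        boolean tryAcquire(String clientId);
    }

    class RateLimiterSpecTest {
        // A do-nothing stub is enough to feel out the API's usability.
        // It's fine that this test fails until a real implementation exists;
        // the point is to discover awkward names and signatures early.
        private final RateLimiter limiter = clientId -> false;

        @Test
        void firstCallFromAClientIsAllowed() {
            assertTrue(limiter.tryAcquire("client-42"));
        }
    }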

Well, as I say, it was a great talk, well worth watching, and the points I took out of it are well worth remembering.


The New Way to Scale

March 13, 2010

Scalability is one of those poorly understood concepts in the computer world, and it is understood even worse in the business world.  Briefly, it is the ability of a system to grow its capacity by adding more resources.  A desirable property is that growing the capacity by a unit should never require a single huge investment.  If each additional unit of capacity requires a constant amount of additional resources, we say the system scales linearly: if one web server handles a hundred users, ten identical servers should handle a thousand.

Most companies today do not scale

Think about car assembly lines: they represent a huge infrastructure investment of hundreds of millions of dollars, optimized to produce a single model of car at a particular rate of completion.  The line is most efficient when working 24×7, producing a steady stream of new cars.  If the demand does not meet that capacity, then the margin per unit is less than planned, and possibly the car is not viable.  The company’s response is to shutter the plant and throw everyone out of work.

On the other hand, if the demand is greater than the planned capacity, there is no way for the company to meet that demand without taking the risk that a second plant may never run at full capacity, and the product will not be viable.  This risk is especially strong for your typical fad product – the wildly successful Tickle Me Elmos whose demand outstrips supply for a few short weeks.  Because the manufacturing takes place on the other side of the planet, in a factory of fixed capacity, there is no way to get more Elmos on a ship before Christmas, let alone in stores.  They arrive, instead, after their moment, and we get an Elmo glut in January.

Actually, I have no idea if Elmo production failed to scale, but certainly the weeks it takes to ship from China mean that this type of manufacturing fails to scale quickly.

When we scale computer systems, we talk about scaling out or scaling up.  To simplify, scaling up means you buy a bigger computer; if your single-processor machine no longer serves, you get a four-processor machine, and so on.  The problem with this strategy is that the first computer is now surplus, and each step up requires a bigger investment than the last.  Ultimately, you hit a brick wall: nobody can supply enough processors in a single frame for your needs (that number is crazy high today, but CPU is only one resource, and other resources hit the wall earlier).

Scaling out is generally favored over scaling up.  When we scale out a system, we start with one low-powered machine, and when demand outstrips its capacity, we add a second identical machine, then a third and so on.   Theoretically, this can go on forever, although practically, you hit design limits on a resource that only scales up; database servers, for example, are often designed to scale up because, well, it’s hard to come up with a database architecture that scales out.
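
In code, the scale-out mindset looks something like this toy sketch (mine, and grossly simplified): capacity grows one identical node at a time, and no single step requires a big investment.

    import java.util.ArrayList;
    import java.util.List;

    // Toy illustration of scaling out: identical nodes behind a dispatcher.
    // Each increment of capacity is one more cheap, identical machine.
    class ScaleOutPool {
        private final List<String> nodes = new ArrayList<>();
        private final int perNodeCapacity;
        private int next = 0;

        ScaleOutPool(int perNodeCapacity) { this.perNodeCapacity = perNodeCapacity; }

        void addNode(String name) { nodes.add(name); }  // the incremental investment

        int totalCapacity() { return nodes.size() * perNodeCapacity; }  // linear growth

        // Naive round-robin dispatch; assumes at least one node has been added.
        String route() { return nodes.get(next++ % nodes.size()); }
    }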

How can we take the idea of scaling out to manufacturing, and how does the small batch platform enable it?

Suppose you are a small-batch manufacturer, creating Elmo dolls and selling them into your community.  Suddenly they become wildly popular.  Your neighbours start complaining about the lineup out the door of your small shop; you’re working all night to craft more Elmos on your single set of machines, and they are snapped up each day before noon.  Customers camp out the night before to ensure being first in line for Elmo in the morning and your neighbours complain even more.

You need a way to scale.  Fortunately, your manufacturing platform consists of flexible machines – CNC mills and lathes, 3D printers, laser cutters, and the like – so production is easy to replicate.  You could set up a bigger factory, but there’s no room in your small shop, and building one would take time.

What you need to do is find a partner with a similar capability and license them to produce and sell your Elmo.  They do this in their own community, getting their own lineup of customers out the door, and pay you a license fee for your design.  Both of you are happy.

Not only are you happy, but your customers are happier.  They no longer have to line up outside your small shop to get an Elmo, but can visit your partner whose shop is closer to them.  The planet benefits from this proximity.

What’s more, you and your partners are ramping Elmo production up and down as demand increases and ultimately wanes.  This means there is no post-yuletide Elmo glut, and again the planet is happier.

Scaling production up by scaling out, and then scaling back down again, is one of the ways that new companies will be more efficient than old-style manufacturers.  And that is one of the reasons why new-style companies are going to eat old-style companies for lunch.