Early error detection is paramount

Is early error detection, while developing software, really all that important?

This question has probably been asked and answered a thousand times over in the last 50 years.  Let me try to rehash the argument in my own words.

Catch the error while the cement is wet

Construction, of the brick and mortar variety, sometimes provides an apt analogy for software development, so I am going to give that a whirl.

See if you can correlate the story I lay out below to the various stages in a software development task – DEV, QA, Bug Fixing, and the Aftermath.

Say you decide to build a house. You create some kind of design, and completely construct your house.

After the construction is all finished, and only after this, you call in an inspector to see if the construction is up to code. The inspector finds that your electrical wiring is the wrong gauge.  You must change it.

The wiring is inside the walls.  You have to tear into the walls to get to the wiring.

You have already spent a lot of money and time on the construction. You want to move in already. The extra expense is a burden on the pocketbook, and the mind. You are not at your patient or enthusiastic best.

The builder had other projects scheduled. He wants to be done with your house yesterday. He is no longer giving his best attention to your problems.

Some of the folks that worked on this house are needed on the county commissioner’s lake side cottage. The builder brings in some temporary help to make the fixes. These are snot-nosed college kids, who don’t know many of the small, but consequential technical decisions that went into your house’s construction.  They are going to trip over these and make mistakes. Worse, these kids know they will never see you and your house after this summer.

After the wiring is changed, you notice that a couple of windows do not close well. The lights in the stairwell flicker randomly, but noticeably.  The re-painting of the walls in your guest room is not the right shade.  By this time you have no other place to live, so you suck it up, and move in.

After you move in you notice that your water heater does not work well with the new wiring.  You have to replace the water heater.   More aggravation; more time wasted; more expense; send in the plumber.

There is worse. While monkeying with the walls and the wiring, a construction worker accidentally rammed a 100 pound sander into a load bearing beam. It now has a crack in it. No one notices.

Let’s see how this correlates to software development.

DEV

Say you decide to build a house. You create some kind of design, and completely construct your house.

Listen, pilgrim. I just write code. I don’t test.

QA

After the construction is all finished, and only after this, you call in an inspector to see if the construction is up to code. The inspector finds that your electrical wiring is the wrong gauge. You must change it.

This happens all the time in enterprise software development. Developers do not test their code effectively. Infrastructure, which enables developers to adequately test the code they write, often does not exist. Everyone, including management, is happy to leave serious testing till after all of the code is turned in.

The bug fixing

The wiring is inside the walls.  You have to tear into the walls to get to the wiring.

The bug is buried somewhere deep in a few thousand lines of code that you blithely turned in. You go digging, make a change some place, with little knowledge of everything else that might now be affected by your change. It is hard to know; there is too much code, code that you don’t even remember exists. The bug fix is a risk.

You have already spent a lot of money and time on the construction. You want to move in already. The extra expense is a burden on the pocketbook, and the mind. You are not at your patient or enthusiastic best.

The builder had other projects scheduled.  He wants to be done with your house yesterday. He is no longer giving his best attention to your problems.

I’ve seen this in just about every large project I have been part of. Developers use all of a sprint to write code and turn it in. Testing of this code happens in the next sprint, when both the stakeholders and the developers are also assigned to the development tasks scheduled for this second sprint. Nobody is able to bring their best selves to the bug fixing.

Some of the folks that worked on this house are needed on the county commissioner’s lake side cottage. The builder brings in some temporary help to make the fixes. These are snot-nosed college kids, who don’t know many of the small, but consequential technical decisions that went into your house’s construction. This is going to trip them up, and they will make mistakes. Worse, these kids know they will never see you or your house after this summer.

You designed and wrote the original code. However, bugs are assigned to someone else, who knows little of the business requirements, the design decisions that went into the solution, and the contours of the code base you created.

Sometimes this someone else is a consultant. And we know consultants can be a mixed blessing, don’t we?

The new and not so improved aftermath

After the wiring is changed, you notice that a couple of windows do not close well. The lights in the stairwell flicker randomly, but noticeably.  The re-painting of the walls in your guest room is not the right shade.  By this time you have no other place to live, so you suck it up, and move in.

A bug fix can fundamentally improve the solution you constructed. Or it may just be a jerry-rigged whatchamacallit that Rube Goldberg would look down his nose at. Often it is the latter, which gets you past today’s problem, and sows the seeds for several others.

But you have no choice. The show must go on.

After you move in you notice that your water heater does not work well with the new wiring.  You have to replace the water heater.   More aggravation; more time wasted; more expense; send in the plumber.

There is worse. While monkeying with the walls and the wiring, a construction worker accidentally rammed a 100 pound sander into a load bearing beam. It now has a crack in it. No one notices.

Like I mentioned earlier, you often have no idea what damage you did while making your bug fix.

Lesson Learned

You want to catch the wiring issue in DEV:

  • Before a whole bunch of stuff is built around it
  • Before you build a whole bunch of stuff that depends on the error
  • While the construction crew is focused exclusively on this task
  • When the folks who made the error are available to rectify it
  • When you have the least risk if you make another mistake

Knowing a tool vs knowing software development

Give an enterprise developer a top-notch table saw, and a 500-dollar power drill.

Give that developer all the training he wants on those tools.

Ask the developer to build a chair.

His chair will come out with 4 legs of different lengths, cracks in the seat, and a couple of nails sticking out.

The developer knows how to use his fancy tools, but he does not know how to build a chair.

That is the difference between knowing how to use a tool, and knowing how to do software development.

They are two different bodies of knowledge, two different sets of skills.

By the way, that chair has business value.  If you must sit, and that chair is all you have, you will sit on that chair, carefully, and with a few choice curses.  I am willing to bet that this is how business folks view most enterprise software that they are saddled with.

Bloody-minded software development

I’ve been preoccupied lately with the familiar notion that ‘action’ has its moment. How much planning did the chicken do, before crossing the road?

I have seen two kinds of software development outfits.

One was all chaotic action, with apparently little method, yet it always managed to deliver something. Something went into production. Something of some value was up and running. The plan, if one can be said to have existed, was often brutal but effective, like clearing a minefield by having your platoon walk across it. Quality was an alien concept, little more than a pretty thought.

The other development shop is all talk, with little to show for it.  Good people, talented, even competent individuals, who collectively can’t seem to program their way out of a paper bag.  They have worked two, three years on something, nothing of which is in production.  The waste is heart-breaking. Some thing, some essential bloody-mindedness, is missing.

If I were a business person, I would have to pick the former team every time.

Delivery is an essential, like the food you put in front of a starving man. Quality provides long-term value and requires constant application – it is putting healthy food in front of a starving man, and continuing to feed him healthy food even after he stops starving.


Who verifies the blueprint?

You have a business problem (sometimes called a ‘business requirement’). Someone devises a solution. You create a blueprint (sometimes called a ‘specification’) of the solution. Then you construct the solution that the blueprint specifies. This workflow suggests that there are at least two things to verify.

  • Does my construction adhere to the blueprint?
  • Is the blueprint correct in the first place?

Here is an example.

Business Requirement

To determine the monies that we may have to refund a customer (say an auto-insurance holder), perform the following calculation.

The monies that the customer owes at the moment
minus
the monies that the customer has already paid us

If the customer has paid us more than she owes at the moment, the customer is due a refund.

However, we cannot count all of the payments that we have received from the customer.  There are rules that tell us which payments must be ignored while calculating refunds.  Here is one of them.

The ‘My Dog Died’ rule

Customers that are dis-enrolled because they failed to pay their premiums can ask for, and receive, a sort of ‘grace period’, usually 2 to 3 months, in which they can catch up (or perhaps even pay ahead). If they hit all the payment targets within this period, they are re-instated. Let’s call this the ‘My Dog Died, So Have Pity On Me’ rule.

Specification of the solution – The Blueprint

So how will we satisfy the business requirement?  What are we going to build?

The brain trust (the business analyst, the architect, the DBA, the resident loudmouth) goes off into its huddle, and produces these instructions for the construction crew (a developer and a tester).

  • Using already known methods, determine if the customer is dis-enrolled because of failure to pay premiums.
  • Determine if the customer was granted a ‘My Dog Died‘ grace period, and if so how long that period is for.  In particular, look for the following attributes.
    • An attribute called ‘GriefStricken’. Its ‘Effective Date’ is the start of the ‘My Dog Died’ period.
    • An attribute called ‘GriefStrickenExpiration’. Its ‘Effective Date’ is the end of the ‘My Dog Died’ period.
  • When calculating refunds, ignore all payments received during the ‘My Dog Died’ period.
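To make the blueprint concrete, here is a minimal sketch of the calculation it implies. Every name in it – RefundCalculator, Payment, GracePeriod – is invented for illustration; the real system’s types would differ.

import java.math.BigDecimal;
import java.time.LocalDate;
import java.util.List;

class RefundCalculator {

    record Payment(LocalDate received, BigDecimal amount) {}

    // Stand-in for the period bounded by the 'GriefStricken' and
    // 'GriefStrickenExpiration' effective dates.
    record GracePeriod(LocalDate start, LocalDate end) {
        boolean contains(LocalDate day) {
            return !day.isBefore(start) && !day.isAfter(end);
        }
    }

    // Counted payments minus the monies owed; a positive result means the
    // customer is due a refund. Payments received during the 'My Dog Died'
    // period are ignored, per the rule above.
    BigDecimal refundDue(BigDecimal amountOwed, List<Payment> payments,
                         GracePeriod myDogDiedPeriod) {
        BigDecimal counted = payments.stream()
            .filter(p -> myDogDiedPeriod == null
                         || !myDogDiedPeriod.contains(p.received()))
            .map(Payment::amount)
            .reduce(BigDecimal.ZERO, BigDecimal::add);
        return counted.subtract(amountOwed);
    }
}

Notice that the sketch simply trusts whatever grace period it is handed. Hold that thought.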

Verification

Does construction match blueprint

As I mentioned earlier, in my environment the construction crew consists of a developer, and a tester.

The developer writes computer code that implements the blueprint.  The developer and the tester verify that the computer code does indeed do what the blueprint lays out.

They discover bugs – mismatches between the construction and the blueprint.  The developer fixes the bugs – removes the mismatches.  At some reasonable point, the construction crew turns in the solution.

But.

Who verifies the blueprint?

Yea, you guessed it – the blueprint was wrong.

It turns out that the ‘My Dog Died’ period is stipulated to start on the day that the customer is dis-enrolled.

What we thought was the start date, the ‘EffectiveDate’ of the ‘GriefStricken’ attribute, is only the day on which the customer was approved for the ‘My Dog Died’ grace process, which is often several days after the dis-enrollment.

There was no way for the developer to know this.  The tester did not know this either.  The construction crew only knows what is in the specification of the solution.

Yet, this is a significant bug.

The Enterprise Programmer Blues

Others, like Bob Martin, have made the point that coding is, in fact, writing. So I was not surprised when I found that the venerable writing guide, “The Elements of Style“, which has been around for about 100 years now, had something to say about computer programming.

Before beginning to compose something, gauge the nature and extent of the enterprise and work from a suitable design.  Design informs the simplest structure, whether of brick and steel or of prose. You raise a pup tent from one sort of vision, a cathedral from another.  This does not mean you must sit with a blueprint always in front of you, merely that you had best anticipate what you are getting into.  To compose a laundry list, you can work directly from a pile of soiled garments, ticking them off one by one. But to write a biography, you will need at least a rough scheme; you cannot plunge in blindly and start ticking off fact after fact about your subject, lest you miss the forest for the trees and there be no end to your labors.

I write computer code in the enterprise. There have been no cathedrals in my past, not even a solid hut. The only promise that the big bad enterprise makes to me is spaghetti, or, with poetry now, ‘soiled garments’. How many clumsy hands have roughed you up, you poor code?

Ever wonder what ‘analysis’ is, which is merely the door that leads to ‘design’?  I haven’t heard a more melancholy answer than this, “… lest you miss the forest for the trees and there be no end to your labors“.   I’ve labored, Lord, how I’ve labored.   

In enterprise programming, very little is complex.  Dissect a bug, and 9 times out of 10, you will find, to borrow the Captain’s words from Cool Hand Luke, “a failure to communicate”.   Bugs have made me laugh. They have made me want to pull my hair out in frustration, and angry enough to challenge the miscreant to a duel. But I have never felt like crying, till I saw “16. Be Clear”.

Muddiness is not merely a disturber of prose, it is also a destroyer of life, of hope: death on the highway caused by a badly worded sign, heartbreak among lovers caused by a misplaced phrase in a well-intentioned letter, anguish of a traveller expecting to be met at a railroad station and not being met because of a slipshod telegram.  Think of the tragedies that are rooted in ambiguity, and be clear! When you say something, make sure you have said it.  The chances of your having said it are only fair.

Lovely.

Christmas, when broke

Say your purchasing power disappears.  Or say they pass a law that forbids store bought Christmas gifts.  Would you be able to fill that space under the tree?  I couldn’t.  It seems that I cannot make, or give anything, without burning cash.

Perhaps I can give people “time”.   Stick an IOU under the tree, saying something like, “Mom, I promise to spend fifteen minutes every day of the next year, cleaning the bathroom”. That is a useful gift, but rather boring.  You want gifts with a little whimsy.

I could cook.  I recently learned to bake a potato; a thrilling business.  I could wake up at 3:00 AM on Christmas morning, go out and dig fresh potatoes out of the hard winter ground, stick them in the fireplace, take them out just before the family wakes, wrap them individually in brightly colored leaves I saved from the fall … this is crazy.

The simple truth is that, if I were broke, I would not know how to participate in Christmas.

However, there may be another angle to this story. Stipulate that no material goods must exchange hands. Have all money changers shut down shop for the season. What will remain of Christmas? Only Christ, I think.

A Christmas that is strictly a religious event presents a different kind of challenge to folks like me (agnostic to my dying day), and to multicultural, citizen-of-the-world types like my roommate/cousin, who is arguably more religious than I am (you should see him at the temple), yet is enthusiastically considering a Charlie Brown Christmas tree this year.

Interestingly, I am more comfortable with this prospect. I don’t have to belong to your club. If it is important to you, it is important to me, because, well, you are important to me. I’ll wish you Merry Christmas, and happily help you celebrate Christmas. I have some practice with this kind of thing. Why, only a few days ago, I schlepped all the way to New Jersey for a cousin’s baby shower. Every cultural element central to a baby shower is foreign to me – motherhood, fatherhood, childbirth. What business do I have at a baby shower? Yet, I show up, lapping up the welcome that people give me, eating the glorious food, staying out of trouble, and greasing the wheels in any way I can. And it works. People keep inviting me back. I can do exactly that at Christmas, or Eid, or Hanukkah, for that matter.

Are you packing lunch?

This sammich is turning out to have staying power.

Start with whole wheat bread, from Costco.

On one slice, first lather on a generous glop of plain hummus, from Costco.

Then, cut cherry tomatoes (from Costco) in half and artfully place the halves, flat face down, all across the hummus coat.

Next, slice cold red peppers into thin strips (must be red. I mean, if you are going to splurge on a Mustang, it must be yellow. Same reason). These go on, and around, the tomato halves.

A teaspoon of extra virgin Spanish olive oil, from a barrel in the Strip district, sprinkled on the veggies.

Top this with crushed red pepper.

Did I mention that the peppers came from Costco?

Almost there.

Cover this edifice with exactly two slices of cheese. I used Swiss, and cheddar. Swiss is good, because it is hard, and gives you an element of crunch. Crunch is important. Close it off with another slice of bread.

The home stretch.

Wrap the whole thing in a crisp sheet of wax paper. Into a sandwich bag, and the fridge, for a few hours.

Eat it cold.

Wait. Forgot two things.

Throw in some fresh cilantro. And the cheese, wax paper, and sandwich bags came from Costco.

You probably guessed by now: I have a thing for Costco. Their stuff gives me an amazing warm and fuzzy – unfailingly decent quality, and with extra warranty. I wish I was from Costco.

Lose the debugger

My ideal development team would not use step-through debuggers.  If I am responsible for mentoring newbie programmers, my first rule would be – no step-through debuggers.  My ideal IDE would include all the editing, analysis, navigation, and refactoring features, that modern incarnations like Eclipse and IntelliJ have, but without the step-through debugger.

Does this make any sense at all?

What is a debugger used for?  It is used to understand what a piece of code is doing.  The code might be doing something wrong, aka a bug.   The debugger can help us understand how a bug came to be.  If there were no debugger, what can you do?

Well, read the code.


Less Code

If reading code were the only way to understand the code, wouldn’t you write less code? You will. This acts as a disincentive to proliferation by copy/paste, and an incentive to learn to ‘not repeat yourself’ (the DRY principle).
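Here is a contrived before/after sketch of what I mean; the Customer type and the normalize helper are made up.

class DryExample {

    record Customer(String name, String city) {}

    // Before: the same null-check/trim/uppercase dance, pasted everywhere.
    String displayNameCopyPaste(Customer c) {
        return c.name() == null ? "" : c.name().trim().toUpperCase();
    }
    String displayCityCopyPaste(Customer c) {
        return c.city() == null ? "" : c.city().trim().toUpperCase();
    }

    // After: the dance is stated once, and read once.
    String displayName(Customer c) { return normalize(c.name()); }
    String displayCity(Customer c) { return normalize(c.city()); }

    private static String normalize(String s) {
        return s == null ? "" : s.trim().toUpperCase();
    }
}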


Intelligible Code

If reading code were the only way to understand the code, wouldn’t you write code that is easy to understand? You will learn the difference between the code, and the intent of the code. You look at code, and ask yourself, WTF, why was this code written? That.

Is there any need for code to be a puzzle?  Here is an example.

protected <I, O> O mapIO(Class<O> clo, I in) {
   try {
       Class<?> cli = in.getClass();
       O out = clo.newInstance();

       for (Method mo : clo.getMethods()) {
           if ((mo.getName().startsWith("set")) &&
               (mo.getParameterTypes().length == 1)) {
               Class<?> pr = mo.getParameterTypes()[0];
               Method mi = null;
               Object ob = null;
               if ((BaseList.class.isAssignableFrom(pr)) ||
                   (Collection.class.isAssignableFrom(pr))) {
                   mi = null;
               } else {
                   try {
                       mi = cli.getMethod("get" + mo.getName().substring(3));
                   } catch (NoSuchMethodException e) {
                       try {
                           mi = cli.getMethod("is" + mo.getName().substring(3));
                       } catch (NoSuchMethodException e2) {
                       }
                   }
               }
               if (mi != null && pr.isAssignableFrom(mi.getReturnType())) {
                   ob = mi.invoke(in);
               } else {
                   try {
                       ob = pr.getConstructor().newInstance();
                   } catch (NoSuchMethodException e) {
                       ob = null;
                   }
               }
               mo.invoke(out, ob);
           }
       }
       return out;
   }
   catch (Exception e) {
       ......
   }
}

Any idea what the above code does? Right. I looked at it and my eyes started to swim. It is, in fact, a decent method. It is short, and once you decipher it, you see that it does one simple thing. The rub, of course, is having to decipher it. I had to spend some time digging into it, doing a little archaeology as it were, to discover the intent of the code. So what does this code do anyway?

Given an input object, and the class of the output, instantiate, and 
initialize an output object, in the following manner.  

Scalar properties, which exist in the input object, are copied over. 

All other properties - properties that do not exist in the input object, 
and vector properties (collections), are initialized with the 
default no-arg constructor.  

That's it.  

This is the 'intent' of the code.   This is what the code is supposed to 
accomplish.  This is why the code exists.

Now, why can’t the code just say what it means?  Something like this.

protected <I, O> O prepareOutput(Class<O> outputClass, I input) {
   try {
       List<Field> propertiesToBeCopied = 
                     getMatchingScalarProperties(outputClass, input);

       List<Field> propertiesToBeInitialized = 
                     getRestOfTheProperties(outputClass, propertiesToBeCopied);

       O output = outputClass.newInstance();
       copyProperties(output, input, propertiesToBeCopied);
       initializeProperties(output, propertiesToBeInitialized);

       return output;
   }
   catch (Exception e) {
       ......
   }
}

This alternative code is definitely less efficient than the original version. On the other hand, what this code is about is fairly obvious. Even if it turns out that I must optimize this code, I will have more confidence in that attempt, because I start with a better view of what the code is supposed to accomplish.

Wait, I see how we can make this more efficient, without sacrificing the clarity we are heading towards.

protected <I, O> O prepareOutput(Class<O> outputClass, I input) {
   try {
       O output = outputClass.newInstance();
       for (Field field : outputClass.getDeclaredFields()) {
           if (isMatchingScalarProperty(output, input, field)) {
               copyProperty(output, input, field);
           } else {
               initializeProperty(output, field);
           }
       }

       return output;
   } catch (Exception e) {
      ......
   }
}

As a friend of mine says, whaddyathink?  Notice, this version looks a lot like the English description of the intent of the code.  I just translated English to Java.   Give me code like this, and I don’t need a step-through debugger.

This is also a good example of the oldest truth in design (or writing, for that matter) – you almost never get it right the first time.

Also, see the point Martin Fowler makes about code that requires comments. It is at the end of the ‘Bad Smells in Code‘ chapter of his refactoring book.


What you see is what you get

Bob Martin, in his book, “Clean Code: A handbook of Agile Software Craftsmanship“, quotes Ward Cunningham’s notion of clean code – “You know you are working on clean code when each routine you read turns out to be pretty much what you expected“.

Lose the step-through debugger, and you get this.

Haven’t you had to deal with a method that had some simple name like getDriversLicense, but went on to do everything from the groceries to changing your baby’s diaper, and in some obscure corner, almost as an afterthought, retrieved your driver’s license? If the method, getDriversLicense, did just that and nothing else, you could skip reading the content of that method.

The more you are forced to read code, the more you will write methods that do one small thing, just the thing that the method’s signature suggests.


Developer tested

Of course, that cleanly written getDriversLicense method could have bugs. How do you increase your confidence in the getDriversLicense method? As you read the code, you read a call to getDriversLicense, and say, okay, great, I know that works, and move on. You don’t want to have to also read that method’s definition.

You know the answer.  How do you produce code that folks can implicitly trust?  You test the daylights out of the code that you deliver.   Automated developer tests.
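For instance, here is a minimal JUnit 5 sketch of where ‘testing the daylights out’ of that method might start. DriverService and License are hypothetical stand-ins, included only so the example is self-contained.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

class DriverServiceTest {

    // Minimal stand-ins so the sketch compiles on its own.
    record License(String driverId, String state) {}

    static class DriverService {
        private final Map<String, License> onFile = new HashMap<>();
        void fileLicense(String id, License license) { onFile.put(id, license); }
        License getDriversLicense(String id) { return onFile.get(id); }
    }

    @Test
    void returnsTheLicenseOnFile() {
        DriverService service = new DriverService();
        service.fileLicense("D123", new License("D123", "PA"));

        assertEquals("PA", service.getDriversLicense("D123").state());
    }

    @Test
    void doesNotInventALicense() {
        DriverService service = new DriverService();
        assertNull(service.getDriversLicense("no-such-driver"));
    }
}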

Lose the debugger, and you will learn to hate the lack of developer testing.


Log your way out of trouble

Regardless of how clearly your code is written, there will be times when you will want hard evidence of what the code is doing to the data. In the absence of a step-through debugger, you will necessarily have to rely on logging. Log inputs, outputs, and execution paths, and you have a trace of the code’s work.

Any enterprise system worth its salt must have good tactical logging anyway. Clear, configurable logs are useful for system maintenance, and for business monitoring.
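Here is a small sketch of the kind of tactical logging I mean, using SLF4J. The RefundService and its method are invented for illustration.

import java.math.BigDecimal;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RefundService {

    private static final Logger log = LoggerFactory.getLogger(RefundService.class);

    BigDecimal calculateRefund(String customerId, BigDecimal owed, BigDecimal paid) {
        // Inputs: what did this calculation start with?
        log.debug("calculateRefund: customer={}, owed={}, paid={}",
                  customerId, owed, paid);

        BigDecimal refund = paid.subtract(owed).max(BigDecimal.ZERO);

        // Outputs and execution path: what was decided, and why?
        if (refund.signum() > 0) {
            log.info("Refund of {} due to customer {}", refund, customerId);
        } else {
            log.debug("No refund due to customer {}", customerId);
        }
        return refund;
    }
}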

Lose the debugger, and you will be forced to nail down your application’s logging.


And so,

Do any of these alternatives to step-through debugging sound like a bad thing? No. Taken at face value, each of these alternatives adds value that the step-through debugger does not, and together they add a lot.

Think about it another way.

Why do you need the step-through debugger? Nine times out of ten, you need it to negotiate bad code. If you are starting from scratch, if you do not have to deal with legacy code, stay away from the debugger. This will force you to learn to write cleaner code.

Reading code makes you feel the pain caused by poor code.  Using a step-through debugger helps you turn a blind eye to poor code.  At its worst, the step-through debugger enables poor code.


A benchmark?

Say I am building a new software team, my own outfit even. I almost think that the missing debugger can separate the folks I want to rely on from the folks I am sort of forced to rely on. At a minimum, I want developers who can learn to be productive without the step-through debugger. If you cannot live without that crutch, hmm, well … I don’t know.

Do ‘meaningful use’ standards include pushing lab results to patients?

I went in for some medical tests early yesterday morning.  It has been over a day since then, and there are several questions I wish I had answers for.

  • My insurance changed recently.  Has the change registered correctly at the hospital?  Will there be any mishaps with payments?
  • I left my urine sample in a little cupboard. The cupboard already had another sample, which had not been picked up yet. Did someone pick the samples up all right? Have the tests been performed yet, and are the results in hand?
  • Have the results been sent to the physician who ordered the tests?
  • Will the lab send me the results? I would like an electronic copy of the results. They are my tests, dammit.

Events in the workflow

Each of these questions is about an event in the workflow that my tests set in motion. Notifications about the disposition of these events would set my mind at ease. I understand some folks might not want this much information. Well, make these notifications optional; let me choose which ones I want to receive.

When a business process is carrying out a customer’s business, some events in the process will be of interest to the customer. Are you able to let the customer choose the events they want to know about, and how they would like to be notified? I would not mind a tweet. Others might like an email, or a text message. It is not hard to see that this need is a natural part of any enterprise system that is carrying out a customer’s business. This is ‘user experience’, which to my mind is often synonymous with ‘customer service’.
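Here is a minimal sketch of one way to model this; the event and channel names are invented.

import java.util.EnumMap;
import java.util.Map;
import java.util.Optional;

class NotificationPreferences {

    enum WorkflowEvent {
        SAMPLE_RECEIVED, TEST_COMPLETED,
        RESULTS_SENT_TO_PHYSICIAN, RESULTS_AVAILABLE_TO_PATIENT
    }

    enum Channel { EMAIL, TEXT_MESSAGE, TWEET }

    // Each customer subscribes to just the events she cares about,
    // each on the channel she prefers.
    private final Map<WorkflowEvent, Channel> subscriptions =
        new EnumMap<>(WorkflowEvent.class);

    void subscribe(WorkflowEvent event, Channel channel) {
        subscriptions.put(event, channel);
    }

    // The workflow asks: does this customer want to hear about this event?
    Optional<Channel> channelFor(WorkflowEvent event) {
        return Optional.ofNullable(subscriptions.get(event));
    }
}

The workflow engine consults channelFor at each event, and notifies, or stays quiet, accordingly.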

It occurs to me that either the lab where the tests were performed, or the physician that ordered the tests could keep me informed.  Of course, no joy from either party at the moment.   Some day ….

I dug through the EHR certification requirements for Stage 2 of ‘meaningful use’, and found only a reference to facilities of the ‘pull’ variety.   The EHR must allow MU data (lab results included) to be viewed, downloaded, and transmitted.

   § 170.314(e)(1):  View, Download, and Transmit to 3rd Party

However, these documents are hard to read, and I might have missed something.

The computer programmer in me is a little bit happy, I think. As everyone keeps saying, a lot of work remains to be done.

I haven’t met a physician yet who likes EMR software

I only have anecdotal evidence, but every physician I talk to hates the EMR software he has to deal with.

Ye old Enterprise IT

One particularly tech savvy young resident in an area hospital says, “… too many clicks, too many clicks“. These are folks that live on their iPhones and iPads every spare moment they have.

As he did book-keeping on his laptop, I peeked over his shoulder at the EMR program he was using. Also, a couple of months ago, I had occasion to spend a few days at Johns Hopkins, helping take care of a family member, and I took every chance to watch the staff at work at their monitors, which now seem to be stashed into every available corner (each patient’s room had a desktop, monitor, keyboard, and mouse). I was a little taken aback to see that they were all using Windows desktop applications. Seen through eyes drunk on modern touch-screen mobile devices, these apps look old-fashioned – poor, tired windows, boxes, lists, and buttons, huddled together in an unappealing jumble.

My doctor friend said his EMR was cumbersome to use, and it took too long to enter all the data that he is prompted for.  He seemed to unconsciously separate information that is vital to patient care, from information that is just “for billing“. Often, he enters only the patient care information that he thinks is necessary, and ignores what he called, “fluff“.

The word “fluff” struck a chord.  Design for mobile first.  That forces you to identify the “fluff“, and drop it.

Essentially, what I saw was classic Enterprise IT interfaces. They serve some bare business purpose, with little thought to ease of use. Users (doctors and nurses), in the midst of their high-stress workdays, just deal with it, because, well, they have to.

There were more tell-tale signs of Enterprise IT.

"Yea, they improve things, but everybody hates the changes. You 
manage to learn one thing, and then they make you learn something 
new all over again."
"They never ask the doctors.  They try to keep us away from what 
they are doing.   They build something and show it to us, and it 
is not great, but then it is too late to change anything."

Impenetrable domain knowledge

The doctor is looking at lab results. He sees that haemoglobin is low. In that situation he is taught to then look at past iron levels and vitamin levels, which are results of other tests. In the interface he showed me, the iron levels and vitamin levels were hard to find. The lab results were a long Excel-like table, and he had to scroll far and wide to find them. They were not close to the haemoglobin levels. The interface was not smart enough to offer the iron levels and vitamin levels when it detected that the haemoglobin was low. The doctor said he sometimes surrenders to fatigue and irritation and simply orders the tests for iron levels and vitamin levels again. Of course that is duplicated effort for someone, not to mention a wasted expense.

The business analyst who modeled the diagnostic processes obviously did not know how medical personnel are expected to react to low haemoglobin. The UI designer did not know that a relationship exists between low haemoglobin and iron and vitamin levels. The interface they built does not reflect that knowledge. The EMR software did not make the physician’s job easier. In fact, it made the whole process more inefficient and wasteful.
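Encoded, the missing rule is almost embarrassingly small. Everything in this sketch – names, thresholds, relationships – is invented for illustration, and filling in that table correctly is precisely the knowledge the analysts did not have.

import java.util.List;
import java.util.Map;

class RelatedResultsRule {

    record LabResult(String test, double value) {}

    // This table is exactly the knowledge that only the physicians have.
    // It must come from them, one rule at a time.
    private static final Map<String, List<String>> RELATED_WHEN_LOW =
        Map.of("haemoglobin", List.of("iron", "vitamin B12", "folate"));

    // Illustrative threshold only, not medical advice.
    private static final Map<String, Double> LOW_THRESHOLD =
        Map.of("haemoglobin", 12.0);

    // When a result is low, surface its clinically related results,
    // instead of making the doctor scroll far and wide for them.
    List<String> testsToSurface(LabResult result) {
        Double low = LOW_THRESHOLD.get(result.test());
        if (low != null && result.value() < low) {
            return RELATED_WHEN_LOW.getOrDefault(result.test(), List.of());
        }
        return List.of();
    }
}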

There must be so many other little use-cases like this. I imagine the patient care domain is vast, varied, and complex. A doctor spends years acquiring all that training and knowledge. How can you expect a business analyst or UI designer to absorb all that information? Even in the best of circumstances, there are so many ways for domain knowledge to get lost in translation, from business user through business analyst, to system designer, and finally to the developer. I imagine that this exercise is even more error-prone in a complex and information-heavy field like patient care.

Are we barking up entirely the wrong tree?  Is it a fool’s errand to try to model the patient care domain in order to produce a structured interface that makes the doctor’s job easier?


A simple-minded EMR

I wonder if something like this would be a viable EMR system?

A universal key

You must be able to uniquely identify a patient, a human being.   In other words, you need a universal key for their records, which you can apply at any health-care organization they are visiting.  Say, fingerprints.  Would that work?

A simple collection of documents

Medical records themselves are just a collection of documents. They can be anything at all – plain text, HTML, Word, Excel, PDF, audio, photos, video, etc. Each provider simply creates whatever records make them happy, in any format at all.

Each document is characterized by very simple, non-medical meta-data.

  • Who created them?
  • When were they created?
  • etc.

The documents are all stored together against that universal key.   You have the patient’s fingerprint, you have her records.
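Here is a sketch of that minimal store, with invented names. The point is how little structure it needs.

import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MedicalRecordStore {

    // A document is opaque content plus simple, non-medical meta-data.
    record Document(String createdBy, Instant createdAt,
                    String format,     // "text", "PDF", "audio", ...
                    byte[] content) {}

    // The universal key (say, a fingerprint hash) maps to the patient's
    // whole bag of documents.
    private final Map<String, List<Document>> recordsByKey = new HashMap<>();

    void add(String universalKey, Document doc) {
        recordsByKey.computeIfAbsent(universalKey, k -> new ArrayList<>())
                    .add(doc);
    }

    // You have the patient's fingerprint, you have her records.
    List<Document> recordsFor(String universalKey) {
        return recordsByKey.getOrDefault(universalKey, List.of());
    }
}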

Searchable

You need the ability to search through a patient’s medical records – the collection of heterogeneous documents. You must be able to return results ranked by relevance.
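Even the search can start simple. Here is a toy sketch – a real system would use a proper search engine, but relevance ranking can begin as plain term counting.

import java.util.Comparator;
import java.util.List;

class RecordSearch {

    // Relevance = how many query terms appear in the document's text.
    static long score(String documentText, List<String> queryTerms) {
        String lower = documentText.toLowerCase();
        return queryTerms.stream()
                         .filter(t -> lower.contains(t.toLowerCase()))
                         .count();
    }

    // Return matching documents, most relevant first.
    static List<String> search(List<String> documents, List<String> queryTerms) {
        return documents.stream()
            .filter(d -> score(d, queryTerms) > 0)
            .sorted(Comparator.comparingLong(
                        (String d) -> score(d, queryTerms)).reversed())
            .toList();
    }
}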

Transferable

You must have the ability to simply transfer the records of a particular person between providers.  And even to the patient herself.   This is a simple transfer, because there is little structure to speak of.   It could be as simple as an email with attachments.

Start Minimal

That’s it. There is your minimal, but possibly complete, EMR. In capability, it probably matches what folks are able to do with paper records, with the added sugar that the records are native citizens of the digital world.

The system is simple enough that it can be quickly adopted by organizations.  This is what every institution must be able to do, off the blocks.  No complex analysis effort.   No errors that are introduced simply by the act of creating the new system.

And build on it

Once the system is on-line, slowly build on it.

  • Improve identity tracking, if necessary.
  • Improve entry, generation, and visualization of data.   As we saw above, this means applying knowledge that only physicians have.   Business analysis must come directly from the physicians.   Work on one specialty at a time.  Work on one disease at a time.  Or something like that.
  • Improve search. Which is really ‘data analysis’, aka ‘analytics’, aka ‘big data’.
  • Improve data storage, and data transfer.

And so on.


Many Whys

So why are EMR systems not as simple as the one described above?

Are there considerations that I still have not learned about?

Why is there so much structure, which is hard to get right, and much of it unintuitive to physicians? Is this related to ‘accountability’ and ‘billing’?

I mentioned HL7 standards, and SNOMED taxonomies, to a couple of young doctors – a resident, and a fellow. They had never heard of them. This is the medical knowledge that software engineers are basing EMR software on, and the doctors have little knowledge of it? I was about to sign up for an expensive week-long seminar on HL7. Is that a waste? What is going on?

I went to a meet-up of the local Health 2.0 chapter.  Folks spoke with enthusiasm about many things, but EMR systems did not come up at all.

There seem to be a lot of startups doing healthcare related work in Pittsburgh. However, everybody I met is working on devices and solutions for use by individuals to get control of their personal health. No one said anything about making a doctor’s day-to-day work easier, and more effective.

There is something interesting going on here. Are enterprise concerns, like EMR, simply uncool? Or are they considered a done deal? Is it too late, and well-nigh impossible, to enter the field now and improve on matters?