Are you packing lunch?

This sammich is turning out to have staying power.

Start with whole wheat bread, from Costco.

On one slice, first lather on a generous glop of plain hummus, from Costco.

Then, cut cherry tomatoes (from Costco) in half and artfully place the halves, flat face down, all across the hummus coat.

Next, slice into thin strips, cold, red (must be red. I mean, if you are going to splurge on a Mustang, it must be yellow. Same reason), peppers. These go on, and around the tomato halves.

A teaspoon of extra virgin Spanish olive oil, from a barrel in the Strip district, sprinkled on the veggies.

Top this with crushed red pepper.

Did I mention that the peppers came from Costco?

Almost there.

Cover this edifice with exactly two slices of cheese. I used Swiss, and cheddar. Swiss is good, because it is hard, and gives you an element of crunch. Crunch is important. Close it off with another slice of bread.

The home stretch.

Wrap the whole thing in a crisp sheet of wax paper. Into a sandwich bag, and the fridge, for a few hours.

Eat it cold.

Wait. Forgot two things.

Throw in some fresh cilantro. And the cheese, wax paper, and sandwich bags came from Costco.

You probably guessed by now, I have a thing for Costco.  Their stuff gives me an amazing warm and fuzzy – unfailingly decent quality, and with extra warranty.  I wish I was from Costco.

Lose the debugger

My ideal development team would not use step-through debuggers.  If I am responsible for mentoring newbie programmers, my first rule would be – no step-through debuggers.  My ideal IDE would include all the editing, analysis, navigation, and refactoring features that modern incarnations like Eclipse and IntelliJ have, but without the step-through debugger.

Does this make any sense at all?

What is a debugger used for?  It is used to understand what a piece of code is doing.  The code might be doing something wrong, aka a bug.  The debugger can help us understand how that bug came to be.  If there were no debugger, what would you do?

Well, read the code.

 

Less Code

If reading code were the only way to understand the code, wouldn’t you write less code?  You will.  This will act as a disincentive to proliferation by copy/paste.  This will be an incentive to learn to ‘not repeat yourself‘ (the DRY principle).

 

Intelligible Code

If reading code were the only way to understand the code, wouldn’t you write code that is easy to understand?  You will learn the difference between the code, and the intent of the code.  You look at code, and ask yourself, WTF, why was this code written?  That.

Is there any need for code to be a puzzle?  Here is an example.

protected <I, O> O mapIO(Class<O> clo, I in) {
   try {
       Class<?> cli = in.getClass();
       O out = clo.newInstance();

       for (Method mo : clo.getMethods()) {
           if ((mo.getName().startsWith("set")) && 
               (mo.getParameterTypes().length == 1)) {
               Class<?> pr = mo.getParameterTypes()[0];
               Method mi = null;
               Object ob = null;
               if ((BaseList.class.isAssignableFrom(pr)) || 
                   (Collection.class.isAssignableFrom(pr))) {
                   mi = null;
               } else {
                     try {
                         mi = cli.getMethod("get" + mo.getName().substring(3));
                     } catch (NoSuchMethodException e) {
                         try {
                             mi = cli.getMethod("is" + mo.getName().substring(3));
                         } catch (NoSuchMethodException e2) {
                             mi = null;
                         }
                     }
                }
                if (mi != null && pr.isAssignableFrom(mi.getReturnType())) {
                   ob = mi.invoke(in);  
               } else {
                   try {
                       ob = pr.getConstructor().newInstance();
                   } catch (NoSuchMethodException e)  {
                       ob = null;
                   }
               }
               mo.invoke(out, ob); 
           }
       }
       return out;
   }
   catch (Exception e) {
       ......
   }
}

Any idea what the above code does?  Right.  I looked at it and my eyes started to swim.  It is, in fact, a decent method.  It is short, and once you decipher it, you see that it does one simple thing.  The rub, of course, is having to decipher it.  I had to spend some time digging into it, doing a little archaeology as it were, to discover the intent of the code.  So what does this code do anyway?

Given an input object, and the class of the output, instantiate, and 
initialize an output object, in the following manner.  

Scalar properties, which exist in the input object, are copied over. 

All other properties - properties that do not exist in the input object, 
and vector properties (collections), are initialized with the 
default no-arg constructor.  

That's it.  

This is the 'intent' of the code.   This is what the code is supposed to 
accomplish.  This is why the code exists.

Now, why can’t the code just say what it means?  Something like this.

protected <I, O> O prepareOutput(Class<O> outputClass, I input) {
   try {
       List<Field> propertiesToBeCopied = 
                     getMatchingScalarProperties(outputClass, input);

       List<Field> propertiesToBeInitialized = 
                     getRestOfTheProperties(outputClass, propertiesToBeCopied);

       O output = outputClass.newInstance();
       copyProperties(output, input, propertiesToBeCopied);
       initializeProperties(output, propertiesToBeInitialized);

       return output;
   }
   catch (Exception e) {
       ......
   }
}

This alternative code is definitely less efficient than the original version.  On the other hand, what this code is about is fairly obvious.  Even if it turns out that I must optimize this code, I will have more confidence in that attempt, because I start with a better view of what the code is supposed to accomplish.

Wait, I see how we can make this more efficient, without sacrificing the clarity we are heading towards.

protected <I, O> O prepareOutput(Class<O> outputClass, I input) {
   try {

       O output = outputClass.newInstance();
       for (Field field : outputClass.getDeclaredFields()) {
           if (isMatchingScalarProperty(output, input, field)) {
               copyProperty(output, input, field);
           } else {
               initializeProperty(output, field);
           }
       }

       return output;
   } catch (Exception e) { 
      ...... 
   }

}

As a friend of mine says, whaddyathink?  Notice, this version looks a lot like the English description of the intent of the code.  I just translated English to Java.   Give me code like this, and I don’t need a step-through debugger.
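
By the way, the helper methods in that last version – isMatchingScalarProperty, copyProperty, initializeProperty – are names I made up; they do not come from any library.  Here is one possible shape for a couple of them, a sketch rather than a finished implementation.

// A property is a 'matching scalar' if the input object has a field with the
// same name and a compatible type, and the type is not a collection.
private boolean isMatchingScalarProperty(Object output, Object input, Field field) {
    if (Collection.class.isAssignableFrom(field.getType())) {
        return false;                                 // vector property, not scalar
    }
    try {
        Field inputField = input.getClass().getDeclaredField(field.getName());
        return field.getType().isAssignableFrom(inputField.getType());
    } catch (NoSuchFieldException e) {
        return false;                                 // the input has no such property
    }
}

// Copy the value of one field from the input object to the output object.
private void copyProperty(Object output, Object input, Field field) throws Exception {
    Field inputField = input.getClass().getDeclaredField(field.getName());
    inputField.setAccessible(true);
    field.setAccessible(true);
    field.set(output, inputField.get(input));
}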

This is also a good example of the oldest truth in design (or writing, for that matter) – you almost never get it right the first time.

Also, see the point Martin Fowler makes about code that requires comments.  It is at the end of the ‘Bad Smells in Code‘ chapter of his refactoring book.

 

What you see is what you get

Bob Martin, in his book, “Clean Code: A handbook of Agile Software Craftsmanship“, quotes Ward Cunningham’s notion of clean code – “You know you are working on clean code when each routine you read turns out to be pretty much what you expected“.

You lose the step-through debugger, you get this.

Haven’t you had to deal with a method that had some simple name like getDriversLicense, but went on to do everything from the groceries to changing your baby’s diaper, and in some obscure corner, almost as an afterthought, it retrieved your driver’s license?  If the method, getDriversLicense, did just that and nothing else, you could skip reading the content of that method.

The more you are forced to read code, the more you will write methods that do one small thing, just the thing that the method’s signature suggests.

 

Developer tested

Of course, that cleanly written getDriversLicense method could have bugs.  How do you increase your confidence in the getDriversLicense method?  As you read the code, you read a call to getDriversLicense, and say, okay, great, I know that works, and move on.  You don’t want to have to also read that method’s definition.

You know the answer.  How do you produce code that folks can implicitly trust?  You test the daylights out of the code that you deliver.   Automated developer tests.
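
To make that concrete, here is a minimal sketch of such a developer test, using JUnit.  Person and DriversLicense are hypothetical stand-ins, defined inline only so the example is self-contained.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A sketch of an automated developer test with JUnit.  Person and
// DriversLicense are made-up classes, included here just for illustration.
public class PersonTest {

    static class DriversLicense {
        final String number;
        DriversLicense(String number) { this.number = number; }
    }

    static class Person {
        private DriversLicense license;
        void setDriversLicense(DriversLicense license) { this.license = license; }
        DriversLicense getDriversLicense() { return license; }
    }

    @Test
    public void getDriversLicenseReturnsJustTheLicense() {
        Person person = new Person();
        person.setDriversLicense(new DriversLicense("PA-1234567"));

        assertEquals("PA-1234567", person.getDriversLicense().number);
    }
}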

Lose the debugger, and you will learn to hate the lack of developer testing.

 

Log your way out of trouble

Regardless of how clearly your code is written, there will be times when you will want hard evidence of what the code is doing to the data.   In the absence of a step-through debugger, you will necessarily have to rely on logging. You can understand what your code is doing by logging inputs, outputs, and execution paths, which trace the code’s work.
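
For example, with a logging library like SLF4J, that kind of tracing might look like the sketch below.  The service, its method, and the log messages are all hypothetical.

import java.math.BigDecimal;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// A hypothetical service, instrumented so that inputs, the execution path
// taken, and the output all show up in the log.
public class PaymentService {

    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    public BigDecimal applyPayment(String policyNumber, BigDecimal payment, BigDecimal balance) {
        log.debug("applyPayment: policy={}, payment={}, balance={}", policyNumber, payment, balance);

        if (payment.signum() <= 0) {
            log.warn("applyPayment: ignoring non-positive payment {} on policy {}", payment, policyNumber);
            return balance;
        }

        BigDecimal newBalance = balance.subtract(payment);
        log.debug("applyPayment: policy={}, new balance={}", policyNumber, newBalance);
        return newBalance;
    }
}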

Any enterprise system worth its salt must have good tactical logging anyway.  Clear, configurable logs are useful for system maintenance, and business monitoring.

Lose the debugger, and you will be forced to nail down your application’s logging.

 

And so,

Do any of these alternatives to step-through debugging sound like a bad thing?  No.  Taken at face value, each of these alternatives, and in fact all of them together, add a lot of value, which the step-through debugger does not.

Think about it another way.

Why do you need the step-through debugger?  9 times out of 10, you need it to negotiate bad code.  If you are starting from scratch, if you do not have to deal with legacy code, stay away from the debugger.  This will force you to learn to write cleaner code.

Reading code makes you feel the pain caused by poor code.  Using a step-through debugger helps you turn a blind eye to poor code.  At its worst, the step-through debugger enables poor code.

 

A benchmark?

Say I am building a new software team, my own outfit even.   I almost think that the missing debugger can separate folks that I want to rely on, from folks that I am sort of forced to rely on.  At the minimum, I want developers that can learn to be productive without the step-through debugger.  If you cannot live without that crutch,  hmm, well, ….. I don’t know.

Do ‘meaningful use’ standards include pushing lab results to patients?

I went in for some medical tests early yesterday morning.  It has been over a day since then, and there are several questions I wish I had answers for.

  • My insurance changed recently.  Has the change registered correctly at the hospital?  Will there be any mishaps with payments?
  • I left my urine sample in a little cupboard.  The cupboard already had another sample, which had not been picked up yet.   Did someone pick the samples up, all right?  Have the tests been performed yet, and are the results in hand?
  • Have the results been sent to the physician who ordered the tests?
  • Will the lab send me the results?  I would like an electronic copy of the results.   They are my tests dammit.

Events in the workflow

Each of these is an event in the workflow that was my tests.  Notifications about the disposition of these events would set my mind at ease.  I understand some folks might not want this much information. Well, make these notifications optional; let me choose which ones I want to receive.

When a business process is carrying out a customer’s business, some events in the process will be of interest to the customer.  Are you able to let the customer choose the events they want to know about, and how they would like to be notified?  I would not mind a tweet.  Others might like an email, or a text message.  It is not hard to see that this need is a natural part of any enterprise system that is carrying out a customer’s business.  This is ‘user experience‘, which to my mind is often synonymous with ‘customer service‘.

It occurs to me that either the lab where the tests were performed, or the physician that ordered the tests could keep me informed.  Of course, no joy from either party at the moment.   Some day ….

I dug through the EHR certification requirements for Stage 2 of ‘meaningful use’, and found only a reference to facilities of the ‘pull’ variety.   The EHR must allow MU data (lab results included) to be viewed, downloaded, and transmitted.

   § 170.314(e)(1):  View, Download, and Transmit to 3rd Party

However, these documents are hard to read, and I might have missed something.

The computer programmer in me is a little bit happy I think.  As everyone keeps saying, a lot of work remains to be done.

I haven’t met a physician yet, who likes EMR software

I only have anecdotal evidence, but every physician I talk to hates the EMR software he has to deal with.

Ye old Enterprise IT

One particularly tech savvy young resident in an area hospital, says, “… too many clicks, too many clicks“. These are folks that live in their iPhones and iPads every spare moment they have.

As he did book-keeping on his laptop, I peeked over his shoulder at the EMR program he was using.  Also, a couple of months ago, I had occasion to spend a few days at Johns Hopkins, helping take care of a family member, and I took every chance to watch the staff at work at their monitors, which now seem to be stashed into every available corner (each patient’s room had a desktop, monitor, keyboard, and mouse).  I was a little taken aback to see that they were all using Windows desktop applications.  Seen through eyes drunk on modern touch-screen mobile devices, these apps look old-fashioned – poor, tired windows, boxes, lists and buttons, huddled together in an unappealing jumble.

My doctor friend said his EMR was cumbersome to use, and it took too long to enter all the data that he is prompted for.  He seemed to unconsciously separate information that is vital to patient care, from information that is just “for billing“. Often, he enters only the patient care information that he thinks is necessary, and ignores what he called, “fluff“.

The word “fluff” struck a chord.  Design for mobile first.  That forces you to identify the “fluff“, and drop it.

Essentially, what I saw was classic Enterprise IT interfaces.  They serve some bare business purpose, with little thought to ease of use.  Users, doctors, and nurses, in the midst of their high-stress workdays, just deal with it, because, well, they have to.

There were more tell-tale signs of Enterprise IT.

"Yea, they improve things, but everybody hates the changes. You 
manage to learn one thing, and then they make you learn something 
new all over again."
"They never ask the doctors.  They try to keep us away from what 
they are doing.   They build something and show it to us, and it 
is not great, but then it is too late to change anything."

Impenetrable domain knowledge

The doctor is looking at lab results.  He sees that haemoglobin is low.   In that situation he is taught to then look at past iron levels, and vitamin levels, which are results of other tests.   In the interface he showed me, the iron levels, and vitamin levels were hard to find.  The lab results were a long Excel like table, and he had to scroll far and wide to find them.  They were not close to the haemoglobin levels.  The interface was not smart enough to offer the iron levels and vitamin levels when it detects that the haemoglobin is low. The doctor said he sometimes surrenders to fatigue and irritation and simply orders the tests for iron levels and vitamin levels again.   Of course that is duplicated effort for someone, not to mention a wasted expense.

The business analyst who modeled the diagnostic processes obviously did not know how medical personnel are expected to react to low haemoglobin.    The UI designer did not know that a relationship exists between low haemoglobin, and iron, and vitamin levels.   The interface they built does not reflect that knowledge.  The EMR software did not make the physician’s job easier.  In fact, it made the whole process more inefficient, and wasteful.

There must be so many other little use-cases like this.  I imagine the patient care domain is vast, varied, and complex.   A doctor spends years acquiring all that training, and knowledge.   How can you expect a business analyst, or UI designer to absorb all that information?   Even in the best of circumstances, there are so many ways for domain knowledge to get lost in the translation, from business user through business analyst, to system designer, and finally to the developer.  I imagine that this exercise is even more error-prone in a complex and information-heavy field like patient care.

Are we barking up entirely the wrong tree?  Is it a fool’s errand to try to model the patient care domain in order to produce a structured interface that makes the doctor’s job easier?

 

A simple-minded EMR

I wonder if something like this would be a viable EMR system?

A universal key

You must be able to uniquely identify a patient, a human being.   In other words, you need a universal key for their records, which you can apply at any health-care organization they are visiting.  Say, fingerprints.  Would that work?

A simple collection of documents

Medical records themselves are just a collection of documents.  They can be anything at all – plain text, HTML, WORD, EXCEL, PDF, audio, photos, video, etc.  Each provider simply creates whatever records make them happy, in any format at all.

Each document is characterized by very simple, non-medical meta-data.

  • Who created it?
  • When was it created?
  • etc.

The documents are all stored together against that universal key.   You have the patient’s fingerprint, you have her records.

Searchable

You need the ability to search through a patient’s medical records – the collection of heterogeneous documents. You must be able to return results ranked by relevance.

Transferable

You must have the ability to simply transfer the records of a particular person between providers.  And even to the patient herself.   This is a simple transfer, because there is little structure to speak of.   It could be as simple as an email with attachments.

Start Minimal

That’s it.   There is your minimal, but possibly complete, EMR.  In capability, it probably matches what folks are able to do with paper records, with added sugar due to the fact that the records are native citizens of the digital world.

The system is simple enough that it can be quickly adopted by organizations.  This is what every institution must be able to do, off the blocks.  No complex analysis effort.   No errors that are introduced simply by the act of creating the new system.

And build on it

Once the system is on-line, slowly build on it.

  • Improve identity tracking, if necessary.
  • Improve entry, generation, and visualization of data.   As we saw above, this means applying knowledge that only physicians have.   Business analysis must come directly from the physicians.   Work on one specialty at a time.  Work on one disease at a time.  Or something like that.
  • Improve search.  Which is really ‘data analysis‘ aka ‘analytics‘ aka ‘big data‘.
  • Improve data storage, and data transfer.

And so on.

 

Many Whys

So why are EMR systems not as simple as the one described above?

Are there considerations that I still have not learned about?

Why is there so much structure, which is hard to get right, and much of it un-intuitive to physicians?   Are these related to ‘accountability‘, and ‘billing‘?

I mentioned HL7 standards, and SNOMED taxonomies to a couple of young doctors – a resident, and a fellow.  They had never heard of them.  This is medical knowledge that software engineers are basing EMR software on, and doctors have little knowledge of them?  I was about to sign up for an expensive week-long seminar on HL7.   Is that a waste?   What is going on?

I went to a meet-up of the local Health 2.0 chapter.  Folks spoke with enthusiasm about many things, but EMR systems did not come up at all.

There seem to be a lot of startups doing healthcare related work in Pittsburgh.  However, everybody I met is working on devices, and solutions for use by individuals to get control of their personal health.  No one said anything about making a doctor’s day to day work easier, and more effective.

There is something interesting going on here.   Are enterprise concerns, like EMR, simply  un-cool?  Or are they considered a done deal?   Is it too late, and well-nigh impossible, to enter the field now, and improve on matters?


What use is knowledge that I cannot name?

It turns out that I have done information architecture for some time now, but I never knew it.

A young acquaintance is learning HTML, CSS and such.  I gave him Steve Krug’s classic handbook, Don’t Make Me Think.  My young friend banged through the book in a day, and his cup running over, started going on about this, that, and the other, including at one point, ‘tabbed pages‘.  After some confusion, I realized he was referring to a web page whose content was organized with tabs,

A web page organized with tabs

 

rather than the tabs that most modern browsers offer.

Tabbed browser windows

Unaccountably, the ‘tabbed pages’ triggered a thought that had not occurred to me before – the ‘tabbed page’ represents at least three different kinds of knowledge.

The users’ needs

Tabs in a web page exist for a reason.  They serve a purpose.   Someone devised them to solve a problem.  What is that problem?

Shoe store

A user is at a shopping site.  Say, a shoe store.

The store sells stuff that you can classify in categories that are familiar to, and expected by the user – Men, Women, Children, Casual, Formal, Outdoor, etc.   Each category of shoes has more items than you can fit in the real estate available on a single web page.

The shopper must be able to peruse any category that she is interested in.   Further, regardless of where she is in the site, the shopper must be able to switch to any other category of shoes.

Farm insurance policy

You own a large farm.  You have an insurance policy for the farm, which includes many individual coverages.  You have had the policy at the same insurer for several years.   You want to log in to your insurance company’s web site, and see all available information on your farm policy.

An insurance policy is characterized by many different kinds of information.

  • Demographic information about the policy’s owner – name, address, age, marital and employment status, etc.
  • The various coverages included in the policy – Livestock, limits of the coverage, deductible associated with the coverage; outdoor buildings, limits of the coverage, deductible associated with this coverage; coverage on machinery; coverage against crop failure; and it can go on and on.
  • Historical data – the policy as it existed in past years (the industry refers to that as ‘terms’)
  • Billing history
  • Documents associated with the policy – signed agreements, photos, letters that went between customer and insurer, etc.

You cannot fit all of this information into a single screen.   You must clearly indicate what types of information are available for a policy.  Finally you must allow the user to pull up and view any type of information, at any point in her interaction with the web site.

Someone must know

Someone must know the data you are presenting.   What is this data’s place, and purpose in the world as we know it?  Can we break the data down into coherent parts?  How might the parts be related to each other?  How do you refer to the various slices, and components of the data?

Someone must know the needs of the user vis-a-vis that data.   How does the user understand the data?  What is her mental map of the data?  How does she categorize the data?  What names and labels does she use to refer to the data?  How does she think the various pieces of the data are related to each to other?  What exactly is the user interested in viewing?   What processing might she want to trigger on the data?

What do you call this knowledge?

I always thought of this knowledge as ‘business analysis‘. I suspect that most folks I have worked with, none of them trained in the magic arts of user interface design, would also think of this as ‘business analysis‘.

However, I am learning lately that this particular sliver of the business might be known as ‘information architecture‘.  I am not sure. Does it matter what it is called?

 

Presentation solutions

Now you understand the data that you have to present.   You also understand the user’s expectations regarding the data.   Can you devise ways of presenting the data such that the user’s needs are served?

You do not know how to write computer programs.  You know color, shape, image, typography, and layout.  You know how people react to these visual elements.   You can communicate with people using just those tools.   You know how to convey information, suggest actions, and create a virtual environment on the computer screen, which makes a user feel safe, competent, and perhaps even happy.

You are able to use the expertise I described above and design many different ways to satisfy the needs documented by the ‘information architecture‘.  Of these various alternatives, ‘tabbed pages‘ are just one.  Here are some others.

 

Drop-down for choosing between categories

 

 

An index, and a tree of links

 

A tag cloud

 

You also probably know how to present these alternatives to users, and test their use of them to determine what works best.

Once one of these alternatives, perhaps even tabbed pages, is chosen, you ask computer programmers to construct the interface.

What do you call this knowledge?

So what do people call this expertise?   I hear several terms, all of which seem related.

  • Graphic design
  • Visual design
  • Interaction design
  • Human computer interaction
  • Anything else?

Do they all refer to the same questions, and answers?  Is there an overlap?   Does it matter?

Is it possible to be good at one, and not the other?  Can you be a good graphic designer, but a poor visual designer?  Could you be an expert on human computer interaction without being a good visual designer?  Or could you be a good visual designer but know little about interaction design?  The last does seem plausible.  Mostly, all of this seems a little bit nuts; another case of words getting in the way of meaning.

In layman’s (that would be me) terms, this expertise simply seems to be what falls in between knowledge of the business – data, processes, and users, on the one hand, and the ability to construct the interface, on the other.   These folks design an interface in response to the business knowledge that the ‘business analysts’, or ‘information architects’ provide, and the ‘programmers’ (the construction workers of the digital world) tell the designers what they are able to slap together.

Constructing a solution

Finally, we have the field of knowledge that I am able to grasp effortlessly.   Given a blueprint of the interface, someone has to make it flesh.

You use languages and technologies like HTML, CSS, Javascript.   You can also choose some server-side poison like ASP, JSP, XUL, and so on.   The more you are cognizant of the other two kinds of knowledge, the more effective your construction will be.  However, you often know little other than the construction tools.

What is this knowledge called?

I think this is what many people call ‘web development‘.  Often, a web developer is someone with knowledge only of ‘construction‘.

 

Why does any of this matter?

If you are starting out, and getting a kick out of creating web-sites, are you able to figure out which of the three types of work is giving you the high?

Just as important, are you able to tell which of the three areas you have a facility for?

  1. Do you like the rigorous analysis, and the emphasis on precision in language, which seems the central characteristic of ‘information architecture‘?
  2. Do you have a gift for communicating visually, without which you can’t do well at the ‘presentation design‘ part of the work?
  3. Or do you have a gift for nuts and bolts detail, and are able to methodically, and relentlessly concentrate on a job till it is done?  This is what construction requires – stamina.

The question is just as relevant for someone like me, who has been programming for a while, and has lately developed an interest in something that I know is not exactly programming, but I don’t know what to call it.  The best thing I can think of is – you know, that thing that Apple does so well.   Design?  User experience design?  Usability?  What?  I have never thought of myself as a creative person.  What?

For the record, I think my strengths might be (1), and (3).   I enjoy, and obsess over (2), but I don’t have a gift for it.  I think.   As the man said, “it is a puzzlement”.

User experience for the techies

What do I think of when considering ‘user experience‘ for the technical folk?   What makes it easier to do their jobs?  What helps them do their jobs better?

Techies include the following sorts of resources.

  • Programmers
  • Testers
  • Dev Ops
  • Program Operators

As always, common factors that I listed in the post, User experience in an enterprise system, apply to these resources as well.

Programmers can see everything

Interestingly, it appears that programmers occupy a special place in the enterprise.   Everything, systems and data, that any corner of the enterprise sees and uses, must be accessible to some programmer or the other.  Who builds these systems?  Programmers.  Who do you call when anything goes wrong?  Programmers.   Nothing can be hidden from the programmers.

This means that the user experience considerations that apply for everybody else in the enterprise, which I have written about in other posts, apply to programmers as well.

One thought occurs to me.   Doesn’t this kind of all-pervasive access raise privacy issues?  Not to mention security concerns?  I wonder how enterprises deal with this.

 

A Domain Specific Language (DSL)

Often, for some reason or the other, a programmer is asked to perform a business task.  The enterprise system typically supplies business users a graphical interface for this work.  Programmers can use that interface too.  However, programmers have technical skills that typical business users may not have. Further, programmers have responsibilities other than performing business tasks, which means they are looking to save as much time as possible.

Hence, if it is possible to provide programmers an alternate interface, which is more powerful, and more performant, even if more technically complex, it would be a good thing.

In some of my previous work, programmers did considerable customer support work. Often they would have to perform some activity that meant wading through pages and pages of UI, in order to make one small change, or press one button.  I heard them lamenting the lack of a more ‘expert’ kind of interface. After all, they arguably had more technical expertise than the rank and file business users.  We could have used a scripting solution. A couple of lines of a well designed DSL (domain specific language), would have put them all in a good mood, not to mention saved a lot of time, and energy.

Say that you have to change the type of roof, on the 3rd barn of the farm that is insured by farm policy, FU-237EKS. After the change, the policy must be assigned to an underwriter for review.  In the UI, the change happens on the 8th screen of the farm policy. The assignment to the underwriter is a further 3 screens down.  Rather than schlep through all that UI, some folks I knew would have liked to execute something like this script.

FU-237EKS.barns[3].roof = shingle
underwriters['nate silver'].reviews += FU-237EKS
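
For what it is worth, here is a minimal sketch of the kind of object model a script like that could be translated into.  Every class and name below is made up for this post; the point is only that a couple of lines of script can stand in for a dozen screens.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A made-up object model that a small scripting DSL could drive.
public class PolicyScriptSketch {

    enum RoofType { SHINGLE, METAL, THATCH }

    static class Barn {
        RoofType roof;
    }

    static class Policy {
        final String number;
        final List<Barn> barns = new ArrayList<>();
        Policy(String number) { this.number = number; }
        Barn barn(int index) { return barns.get(index); }
    }

    static class Underwriter {
        final List<Policy> reviews = new ArrayList<>();
        void addReview(Policy policy) { reviews.add(policy); }
    }

    public static void main(String[] args) {
        Map<String, Policy> policies = new HashMap<>();
        Map<String, Underwriter> underwriters = new HashMap<>();

        Policy policy = new Policy("FU-237EKS");
        for (int i = 0; i <= 3; i++) {
            policy.barns.add(new Barn());
        }
        policies.put(policy.number, policy);
        underwriters.put("nate silver", new Underwriter());

        // FU-237EKS.barns[3].roof = shingle
        policies.get("FU-237EKS").barn(3).roof = RoofType.SHINGLE;

        // underwriters['nate silver'].reviews += FU-237EKS
        underwriters.get("nate silver").addReview(policies.get("FU-237EKS"));
    }
}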

However, keep in mind that it ought to be possible to design the graphical user interface so that it provides the same kind of power, and efficiency. The recent, industry-wide emphasis on usability is all about this kind of improvement.  The point here is that alternatives to GUIs (graphical user interfaces) exist, which might more naturally fit programmers’ sensibilities.

I also wonder if there might be certain kinds of business functions that are hard to represent in a GUI.  Some complex, but one-off, workflow, which has to be created on the fly, for instance.  This is a question to explore.

 

Some items related to support

  • You must be able to change the logging level that is in effect without stopping the application (a small sketch follows this list).
  • You must be able to add muscle to the system, without stopping anything.  Bring more servers on line, add more threads to a running server, and so on.
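
On the first point, here is a small sketch of what changing the logging level at runtime can look like, using java.util.logging.  In a real system this would sit behind some admin command or endpoint (hypothetical), not in a main method.

import java.util.logging.Level;
import java.util.logging.Logger;

// A sketch of turning logging up or down for one subsystem without a restart.
public class LogLevelAdmin {

    public static void setLevel(String loggerName, String levelName) {
        Level level = Level.parse(levelName);            // e.g. "FINE", "INFO", "WARNING"
        Logger.getLogger(loggerName).setLevel(level);
    }

    public static void main(String[] args) {
        setLevel("com.example.billing", "FINE");         // hypothetical logger name
    }
}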

 

Software Configuration Management

Essentially, all of configuration management must be automated.

Check out, and check in of code; build, test, assembly, and release of an application, must be completely scriptable.  You should be able to do all of this at a click of a single button, or the issue of a single command at the command line.

In some of my previous work, even though the code base was all in one source control system, we used to have to explicitly issue about 60 check out commands. We never automated the check out. So to create a new workspace, we would do a lot of manual work – 60 clicks of the mouse, 60 commands at the command line.

Releasing a new version of the application was a multi-step, manual process, which would require some Dev Ops person to be up at ungodly hours.

Some releases, especially emergency patches, were horrendously complex. Some poor schnook, sitting in India, used to have to painstakingly undo changes to several parts of the code base, make the release, and then restore all those changes.

Needless to say, this was error-prone. Disasters, big and small, begging to happen.  This should never be the case.

Software configuration management (sometimes referred to as build management, release management), must never be a burden to the developer.   Folks that specialize in this work (Dev Ops), must hide the complexity by automating it all away.   If you are using tools that do not lend themselves to this kind of automation, well, you are using the tools that developers cannot love.   This is infrastructure, which is meant to remove some of the drudgery from a developer’s working life.   So let us have useful, and reliable, infrastructure.

 

Testing

One of the most critical lessons of the Agile philosophy is the recognition that developers must also test.   The Agile world asks, how does a developer know he is done with a task?  The Agile world answers, he proves it with tests.  When all of his tests run successfully, the developer is done with his work.

Each developer must be able to test her work independently of other programmers.    This means that each developer must have separate sandboxes for code, and data.

Developers have their own sandboxes for code by virtue of using a 
source control system.  However, in my previous work, I often 
encountered resistance to setting up independent sandboxes for 
the data.  I never really understood this.  Why couldn't each 
developer have their own copy of the system database for instance?
Isn't this sort of thing quite inexpensive these days?

We must automate the generation, and load, of test data.

In one of my previous jobs, many tests required testers to create an
insurance policy.  These were manually entered, a laborious process,
which took significant time.   

As you can imagine, testing was not as rigorous as it could have been,
because it was just too hard to setup the data required for the test.
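
One common remedy is a test data builder, so that a test can conjure up a policy in one line instead of someone keying it in through the UI.  A minimal sketch, with every class and property name made up:

import java.util.ArrayList;
import java.util.List;

// A sketch of automated test data: a builder that replaces manually entering
// a policy through the UI before every test.  All names are hypothetical.
public class TestPolicyBuilder {

    // Minimal stand-in for the real policy class, just for this sketch.
    public static class Policy {
        public final String number;
        public final String state;
        public final List<String> coverages = new ArrayList<>();
        Policy(String number, String state) { this.number = number; this.state = state; }
    }

    private String number = "TEST-0001";
    private String state = "PA";
    private final List<String> coverages = new ArrayList<>();

    public TestPolicyBuilder number(String number) { this.number = number; return this; }
    public TestPolicyBuilder state(String state)   { this.state = state; return this; }
    public TestPolicyBuilder coverage(String name) { coverages.add(name); return this; }

    public Policy build() {
        Policy policy = new Policy(number, state);
        policy.coverages.addAll(coverages);
        return policy;
    }

    // Usage in a test:
    //   Policy policy = new TestPolicyBuilder().state("MO").coverage("Livestock").build();
}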

The system’s user interfaces must support automation.   This is necessary not only for functional testing, but more importantly, for load testing.   You have to be able to simulate many users banging on the UI.   For this to be done with any kind of rigor, you have to be able to drive the UI with scripts.  Keep that in mind when you construct the UI.
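
For instance, with a browser automation library like Selenium WebDriver, driving the UI from a script might look like the sketch below.  The URL and element ids are hypothetical.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// A sketch of a script that drives a web UI; the same script can be run by
// many threads at once to simulate load.
public class LoginUiScript {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");                  // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            System.out.println("Landed on: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}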

Tests must run continuously.  If you have ‘continuous integration‘ going, you have this in place.  Continuous integration is a feature of software configuration management.  Every time a change is checked into the source control system, you must automatically kick off a build of the whole system, which, by definition, includes tests.  This allows you to find errors sooner rather than later.  Continuous integration is possible only if all of your software configuration management is automated.

Finally, a replica of the production environment must be available to the developers.   Often, you run into errors that only seem to happen in production.   Give developers an environment that is identical to production, where they can test, and debug problems, without messing with the production environment itself.  Without this, you are asking developers to be brilliant, which seems like a high risk strategy.

 

Techies other than developers

So how about testers, dev ops personnel, and program operators?   These folks perform functions that have been covered above, and in earlier posts.

The section on testing applies to folks that are exclusively black box testers.  The section on configuration management applies to dev ops.  Earlier posts on graceful processes apply to program operators.

 

User experience for managers

There are at least two types of managers in an enterprise, right?

I think of them as ‘business managers‘ and ‘systems managers‘.

How are they different, in so far as user experience is concerned?

Business Architecture vs. System Architecture

We can approach this question using the same yardstick that I used in an earlier post, ‘User experience for business employees‘.  Business managers are expected to know the business, and not necessarily any one enterprise computer system that helps run the business.  A business manager should be able to move between companies that are in the same business, but may use different enterprise systems.  Within the same company, the enterprise systems may change as technologies evolve, but the business might stay largely the same.  Changing enterprise systems ought not to be the business manager’s concern.  So who is responsible for the enterprise system that helps run the business?  Enter the systems manager.

The business manager knows the goals of the business.  She is familiar with the various functions, capabilities and resources that collaborate to achieve the goals of the business.  Does that definition sound vaguely familiar?  It should, because that is the general definition of an architecture.   A business manager is cognizant of the business architecture.   A business architecture is separate from, and independent of what we could call the system architecture – individual computer based systems each with its own capabilities and responsibilities, interacting in well defined ways to implement the business architecture (aka the goals of the business).

For instance, a simplistic insurance company may be organized around these components – sales, underwriting, billing, and claims.   A business manager knows the responsibilities of each of these business components, and how these components collaborate with each other to produce outcomes that the insurance company wants.

However, the billing department might run its business on the backs of four different computer systems – a billing app that manages transactional billing data, a document management system that manages documents coming out of the billing app (bills, delinquency notices, etc.), a high volume print manager, and a messaging system that helps the billing department collaborate with the other business components – claims, and underwriting, and sales.  This is the system architecture that implements the billing component.   The billing manager’s focus stays with the business component as a whole, while some IT manager must know and keep control over the computer systems that help run billing.

 

User experience for a business manager

Truth be told, what a business manager requires will change from business to business.  I have little knowledge of any business, so there is specificity that I am not going to be able to provide.

However, thinking about this at a general architectural level, and applying anecdotal experience gained from working in a few enterprises, I believe we can come up with a list of what a business manager might find useful.

 

See the work flowing through the business architecture

The business manager will need to be able to witness, and evaluate the business that is flowing through the business architecture.   Two sorts of  views will be useful.

  • A snapshot of the state of things at a certain moment.  Now, two hours ago, closing time yesterday, etc. This should allow creation of a real time tracker of the business.  Any hotspots, bottlenecks?
  • Aggregate data.  The business that was done in some duration – all day today, the week so far, in the last 6 months, etc.

Similar questions will need to be answered for parts of the architecture.  Say just one component, like claims in an insurance company.

  • Snapshot.  How many claims does each claims adjuster have outstanding at the moment?  What is each claims rep doing – in the office, out in the field, etc.  
  • Aggregate data.  How many claims were paid, and how many rejected in the last 15 days?   How much money was paid out and by whom yesterday?  etc.

 

Presentation

The information described above must be available in two forms.

  • Old fashioned kind – reports, tables, graphs and charts.
  • Newer kind – interactive visualizations, realtime dashboards, and maps.

 

Alerts

The manager must be able to setup alerts, and notifications, on arbitrary events of interest.   These alerts should be available on devices, and social platforms of the manager’s choice.

 

User experience for an IT manager

The user experience requirements for the business manager apply to IT managers as well, with one difference.  The IT manager wants information on how the system architecture is performing.

Consider the example of the billing component described earlier.   While the business manager is interested in how the billing component as a whole is performing, the IT manager will want to keep track of how things are going with the four computer systems that run billing – the billing app, the document management system, the print manager, and the messaging system.

  • A realtime visual representation of the work running through the billing workflow.  This must include the number of and the type of various billing transactions, the documents going to the document management system, what the print queues are doing, etc.  This will show me hot spots.
  • How many of the Missouri auto policy billings were finished today?
  • How much did we pay out as agents’ commissions last month?
  • Map delinquencies by region, but I don’t want a spreadsheet.  I want it represented in shades of color on a physical map of the country.
  • And so on.

 

How to get there

As with any user experience problem, you have to start with accurate knowledge of the business.   In any environment, we have to know the business architecture well, in order to satisfy business managers’ requirements.

Reporting requirements for system architecture demand that each computer system in the architecture must expose snapshot, and aggregate descriptions of the work that is going through the system.

Finally, we are going to have to pick up expertise in data visualization beyond filling spreadsheets.   There seem to be many tools out there for the client, which can be supported in the backend by either the JVM, or node.js.

 


User experience for business employees

Business employees

Who am I referring to exactly?  After all, even the IT developer who builds and maintains the enterprise system is an employee of the business.  In fact, I mean folks that are not IT employees.  I am referring to people whose primary knowledge is the business of the enterprise, and not computer related skills.

For instance, she is a certified underwriter of farm policies.  She is not expected to know SQL.  She is not expected to be able to setup CRON jobs, or put together a Lucene query, or even an advanced Google query.  She does not tune the Oracle database where her policies reside.

What does it mean for such employees to have good user experience?

 

User experience

Common characteristics

At the outset, the common characteristics, as described in these posts, apply to business folks too.

Besides these, there is a perspective that applies to business folks specifically, I believe.

No training

Business resources should really only need training, experience, and expertise in the business, and not in whatever enterprise-wide computer system is in place.

For instance, an underwriter should be able to come into a new insurance company knowing no more than how to use a keyboard, mouse, and perhaps a touch screen interface, and, without formal training, very quickly learn to be productive with the existing enterprise system.

Anything the business resource has to learn, she must be able to learn painlessly, by just using the system.

The interfaces that the business user encounters must be a clear, natural, and seamless representation of her knowledge of the business.   The system should guide the user down paths that are instantly familiar, and obviously correct.

The enterprise system must leave little room for mistakes.   Even when the mistakes happen, they must be caught early, and there must be little or no cost.  This is essential to facilitate experimentation, and self-learning.

Transparent

You know that the user experience is good when the enterprise system recedes into the background.

The computer system must not register in the user’s mind as an obstacle, as a challenge, or as anything at all that is above and beyond her knowledge of the business.

Granted, regardless of how convoluted a system is, once a user learns it, the system will recede into the background.  That is almost unfortunate, because that is how a lot of clunky systems come to be.

However, say you have to make a change to the system.  Or say you want to replace the system.   How much resistance do you encounter?  If the business complains about having to learn a whole new system all over again, your user experience is suspect.

To put it another way, your system’s interfaces must not add any cognitive burden, beyond that which the business expertise itself requires.

How to get there

The fundamental business

Good solution design begins with effective business analysis.  Before diving into solutions, business analysis must first describe the essential business problem.  Volere, a Requirements and Business Analysis consultancy, has a good definition of such analysis, which it calls ‘systemic thinking’.  To paraphrase Volere, you want an understanding of the essence of the business, without being prejudiced by any solutions, whether digital, or the old-fashioned kinds.

In a lot of my past work, the product of business analysis included 
someone's notion of a solution too, typically a user interface 
designed by folks with knowledge of the business, and good intentions, 
but not much else.  

Folks with the expertise (in user experience design) to translate the 
essential business rules, processes, and outcomes, into a transparent 
computer solution, never got a chance to understand the business at all.  
Result, more often than not: avoidable errors, unnecessary iterations, 
and ultimately an interface that was not useless, but that was less 
effective than it could have been.

These were common refrains - "They will get used to it", "This is a 
documentation issue", "This is a training issue", etc.  All telltale signs 
of user experience that has room for improvement.

Human interaction design and construction

There are folks with the expertise to take the description of the essence of the business that systemic business analysis produces, and design a system with the characteristics described above.

Much of this expertise has been codified, as guidelines, patterns, and frameworks, which a competent generalist can learn as necessary.

Here is a list of resources that must serve as our guides.

Further, design is an iterative process, which will include the following sort of cycle.

  • The human interaction designer comes up with a design.
  • The design is implemented as some kind of prototype.
  • Users use and evaluate the prototype.
  • Tweak, enhance, start over, until everyone arrives at a satisfactory destination.

As this suggests, besides the design expertise, you have to be able to repeatedly construct, deploy, review, and change these solutions quickly.

Construction skills include the following.

  • Create plain wireframes with a tool like Balsamiq.
  • Create color- and image-laden HTML and CSS mockups with tools like Dreamweaver, Photoshop, etc.
  • Create live prototypes with RAD frameworks like Ruby on Rails, or Django (Python), or Grails, or Play with Scala etc.  In particular, my personal interest is in the Java eco-system (Grails, Play), and a pure Javascript solution (for instance, Bootstrap.js, and BackBone.js at the client, and node.js in the backend).

You have to have the infrastructure and the skills for continuous integration, and continuous release.

Part of the review capabilities must include the ability to run usability tests.

Finally, there will be times when interfaces will have to change deep into the construction of the system.  Your engineering must be such that the interface can change quickly, without adversely affecting the backend.  You never want to say to the client – “It is too late to make that UI change.  You should have told us this earlier.”

As a generalist, what must I know regarding ‘transactions’?

I believe, a generalist, or a team of generalists, must offer these skills, related to the implementation of ‘business transactions‘.

The failure of a business transaction must still leave the system in a safe, and valid state. As a ‘generalist’ I must either know how to achieve that goal, or know where to quickly find a solution.

Platforms

As always, I want to be able to solve this problem in two platforms – the Java eco-system, and Node.js.

 

Short-lived business transactions

Most of us are familiar with ACID transaction support in relational databases.  Relying on this support is only recommended for short-lived business transactions.

Database transactions are typically implemented by locking database tables for a particular user, which forces all others to wait till the locks are released. This necessarily slows the system down, among other complexities. Hence the recommendation that ACID transactions be very short-lived.

Here are some examples of short-lived business transactions.

  • Change the address of a customer. Typically you have already gathered the new address, and you simply have to update a few tables with the new data.
  • Apply a payment against a policy. Again, the whole transaction is typically an update to a few tables.

In the Java eco-system, this sort of short-lived transaction, when applied against a single database, is implemented with the JDBC API. We must be able to do that, using Java, Scala, and Groovy.
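
As a reminder of what that looks like, here is a bare-bones sketch of a short-lived transaction over plain JDBC.  The connection URL, and the table and column names, are made up.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// A sketch of a short-lived ACID transaction with plain JDBC: apply a payment
// against a policy by updating two tables inside one database transaction.
public class ApplyPayment {

    public static void applyPayment(String policyNumber, BigDecimal amount) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/insurance", "app", "secret")) {   // hypothetical database
            conn.setAutoCommit(false);                                    // start the transaction
            try (PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO payments (policy_number, amount) VALUES (?, ?)");
                 PreparedStatement update = conn.prepareStatement(
                     "UPDATE policies SET balance = balance - ? WHERE policy_number = ?")) {

                insert.setString(1, policyNumber);
                insert.setBigDecimal(2, amount);
                insert.executeUpdate();

                update.setBigDecimal(1, amount);
                update.setString(2, policyNumber);
                update.executeUpdate();

                conn.commit();                                            // both updates, or neither
            } catch (SQLException e) {
                conn.rollback();                                          // leave the data untouched
                throw e;
            }
        }
    }
}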

I must be able to talk to relational databases, and manage ACID transactions against a single database, using node.js.

 

Long-lived business transactions

However, often, there are business transactions that are long-lived, and must behave gracefully.

Here is one example of a long-lived transaction – migrating old insurance policies from one system to another.

This is historical data, often several years worth, and it can be voluminous. It contains many different parts, like contacts, coverages, changes made to the policy over the years, documents, etc. Accepting all of the policy into a new system can take a significant time. We used to run into policies that took 20 seconds to process completely. Things will go wrong, and when they do, you would like to cleanly roll back all of the incoming data. However, you cannot keep an ACID database transaction open for 20 seconds, or 10, or even 5. That locks up database tables, which in turn will severely diminish your ability to handle load.

Here is another example – business workflows that extend over several days.

They are initiated, passed around to several folks, and then eventually completed. If for some reason this business process ends in some kind of rejection, or failure, you may want to discard, or perhaps archive data that this workflow created.

Consider that new accident information is received for an automobile policy. Maybe some documents are uploaded. Some premium adjustments are made. Some bills are generated. Underwriting, and billing managers review, and sign off. This process may take several days to finish. Say after doing a lot of work, you discover all this work was done on the wrong policy – perhaps an ex-husband’s. How do you roll back this work, and data that has been accumulating over several days? Surely not with a database transaction.

So how do you ensure that such operations are well behaved? If necessary, how do you ensure that these long-lived business transactions exhibit ACID properties?

As a ‘generalist’ I must know standard approaches, and solutions to this problem. Further, I must be able to implement these solutions in the Java eco-system, and in node.js.
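
One standard approach is compensation (the ‘saga’ idea) – as each step of the long-lived transaction completes, record an action that can undo it, and if a later step fails, run the recorded actions in reverse order.  Here is a minimal sketch, with all the steps made up.

import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch of compensation: each completed step registers an undo
// action, and on failure the undo actions run in reverse order.
public class LongRunningMigration {

    private final Deque<Runnable> compensations = new ArrayDeque<>();

    public void step(String name, Runnable action, Runnable undo) {
        System.out.println("running step: " + name);
        action.run();
        compensations.push(undo);                     // remember how to undo this step
    }

    public void rollback() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();                // undo in reverse order
        }
    }

    public static void main(String[] args) {
        LongRunningMigration migration = new LongRunningMigration();
        try {
            migration.step("copy contacts",
                           () -> System.out.println("contacts copied"),
                           () -> System.out.println("contacts removed"));
            migration.step("copy coverages",
                           () -> System.out.println("coverages copied"),
                           () -> System.out.println("coverages removed"));
            // ... further steps; if any of them throws, the catch below undoes
            // everything that already completed.
        } catch (RuntimeException e) {
            migration.rollback();
            throw e;
        }
    }
}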

 

Single database vs. multiple

In a large, heterogeneous environment, you are often working with several databases. Perhaps all documents are in some legacy SQL Server DB, and day to day transactional data are in a fast MySQL DB.

How do you implement ACID when a business transaction, even a short-lived one, works with data that is distributed across more than one database?

In the Java world, there is the JTA API, which supports so-called ‘distributed transactions’. But very few people seem to use this. As a ‘generalist’ I must know what the alternative is.

Similarly, I must know how this problem can be handled in node.js.

 

Polyglot persistence

This is just a special case of the multiple database scenario. The data repository can be anything at all – relational DB, NoSQL DB, text based index, messaging end point, flat disk file, etc.

How do you implement ACID when the business transaction works with many different kinds of data repositories?

For instance, say you are recording an auto accident. Photos of the car might go into a noSQL database, like MongoDB. A description of the accident is saved to Oracle, and this information is also parsed and pushed into Lucene. Finally, a notification is dropped into a queue that a claims adjuster is watching. If this transaction dies for some reason, you want to rollback the changes you made to each of these very disparate data repositories.

First Contact, Australia

It is winter in Australia now.

You know, winter, as in, cold. Well all right, it is just chilly. The temperature swings between the 30s and the 50s. Yesterday I think it crossed 60. That doesn’t sound too bad, does it? Think again.

None of the dwellings I have been in, have central heating. That changes everything. I can handle low temperatures outside the house. In fact, I love winter in the U.S. However, apparently, inside the house, I need a steady, balmy, 70 degrees.

My sister’s apartment, a very nicely appointed 2 bedroom, which costs a pretty packet of Australian dollars, does not have central heating. A beach cottage that we rented for the weekend, which was even nicer, and plenty expensive, did not have central heating. My nephews tell me that friends of ours who live in detached houses, do not have central heating. It seems to be the norm here.

Australians, damn their hides, are hardy types. Restaurants in the city have outdoor seating, which is almost always taken. It is a chilly, blustery, 55, and there are folks on the patio with a sandwich and fries. I am sorry, but that is nuts.

I find myself going to bed early, because I know that within a few minutes, my bed is going to be warm. In the mornings, I am getting out of bed late, for exactly the same reason. A hot shower has never seemed so good; once I am in, I never want to leave. During the day, I can’t bring myself to do any kind of work. Every activity requires braving the chill. Everything you touch is cold. Everywhere you put your foot down is chilly. The cold is seeping through my clothes, past my skin, into my bones, and has touched my mind. I am one of the wretched in a Dickens novel, on the verge of consumption, and will fade away in the next 10 pages. Or, maybe I am just living inside of a refrigerator.

My last stop was India, where it is summer. All my clothes were meant for India. Nothing I have is helping. I have taken to wearing two of everything – two tee-shirts, a tee-shirt and a shirt. I am going to bed fully clothed. Needless to say, I am running through my clothes fast. And that leads to another problem.

The house does not have a dryer. Most folks drip dry their clothes. But how do you do that in the winter, with temperatures in the 50s? My sister tells me, “You better do your laundry soon. The clothes take 2 days to dry”. Of course they do.

I have had to buy myself some warm clothing. Do you know what passes for common winter-wear here? Hoodies. I don’t know whether to laugh or cry. My sister gave me a bright blue hoodie that was lying around the house. Where is the gun, I am thinking. I am not wearing a hoodie unless it comes with a gun, and a hip holster. Yes sir, that gun is going to be unconcealed.

Ah well, this is probably just my usual bout of culture shock. First contact always seems to be a little rough for me.

In any event, word to the wise – if you are visiting India, and Australia in June, and July, remember, you have to pack for two seasons.