Building on a recent post that compared the cost of negative experiences with the value of positive ones in the domain of experience memory, I wanted to consider this phenomenon’s implications for the cost of inconsistency. Recent research into the human psyche has shown that both the persistence and the magnitude of negative memories can be on the order of 1.5x that of positive ones.
Furthermore, a colleague recently told me about additional research on how we tend to amplify negative memories. Take, for instance, a bad experience with an insurance agent over a claim. Often we start to pile additional events onto the initial one and treat them as connected: “My claim was slow to process, which inhibited my ability to get around; consequently I was late for a critical meeting, which caused me to miss an opportunity at work.” This snowball effect is very common with negative events and quite rare with positive ones.
So what exactly is inconsistency? Nothing more than the normal variability that goes along with almost any experience. I would argue that rare is the organization that skews heavily toward the positive in the experiences it delivers. Most organizations get it right and wrong in at least some mix, and the randomness extends to magnitude as well: the events are both small and large. Put it all together, and as humans we are bombarded with good and bad, small and large, while our brains file it all away in the “experience memory” we will later draw on when considering that company and its product or service.
I decided to run a little experiment to determine what exactly inconsistency could cost, given the assumptions above. I happened to have data from a Fortune 50 company on how frequently it delivered good and bad experiences and with what magnitude. I fed the data into a simple simulation model that applied the assumptions about the persistence of bad experiences versus good, and the results looked something like this:
The graph above shows the accumulation of bad memories in red and good ones in green over a 60-day period. The model takes into account the persistence of both bad and good memories, plus some amplification of bad memories through the “snowball effect” described earlier.
What becomes clear from the model is that bad memories quickly begin to overtake good ones (red over green) even when good memories occur with the same frequency as bad ones. Even more surprising, when good experiences happen with twice the frequency of bad ones, there are still times when our cumulative memory of experiences skews to the negative, as shown by the red overtaking the green at the midpoint of the simulation below.
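To make the dynamics concrete, here is a minimal sketch of the kind of memory-accumulation simulation described above. It is not the actual model or the Fortune 50 data: the event probabilities, decay rates, 1.5x negative-persistence factor, and snowball multiplier are all illustrative assumptions.

```python
import random

DAYS = 60
NEG_FACTOR = 1.5   # assumption: negative memories persist/weigh ~1.5x positive ones
DECAY = 0.97       # assumption: daily retention of a positive memory's weight
SNOWBALL = 1.2     # assumption: amplification when bad events pile onto recent ones

def simulate(p_good, p_bad, seed=42):
    """Accumulate weighted 'experience memory' over DAYS days."""
    rng = random.Random(seed)
    good_total, bad_total = 0.0, 0.0
    recent_bad = 0                      # bad events still "fresh" enough to snowball
    good_curve, bad_curve = [], []
    for _ in range(DAYS):
        # existing memories fade; negative ones fade more slowly
        good_total *= DECAY
        bad_total *= DECAY ** (1 / NEG_FACTOR)
        if rng.random() < p_good:
            good_total += 1.0           # a positive event of unit magnitude
        if rng.random() < p_bad:
            recent_bad += 1
            # snowball: each bad event close to earlier ones weighs more
            bad_total += NEG_FACTOR * (SNOWBALL ** min(recent_bad - 1, 3))
        else:
            recent_bad = max(recent_bad - 1, 0)
        good_curve.append(good_total)
        bad_curve.append(bad_total)
    return good_curve, bad_curve

# Equal frequency of good and bad events: the bad (red) curve pulls ahead quickly.
good, bad = simulate(p_good=0.3, p_bad=0.3)
print(f"equal frequency -> final good {good[-1]:.1f}, bad {bad[-1]:.1f}")

# Good events twice as frequent: check whether bad still exceeds good on some days.
good2, bad2 = simulate(p_good=0.6, p_bad=0.3)
overtakes = [d for d, (g, b) in enumerate(zip(good2, bad2)) if b > g]
print(f"2:1 frequency   -> days where bad > good: {overtakes[:10]}")
```

Even in this toy version, the asymmetry in persistence and the snowball term do most of the work: the raw counts of good and bad events matter less than how heavily each one lingers.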
So what is the cost of inconsistency? One way to put it: in order to maintain a positive overall impression all of the time, inconsistency forces us to be at least twice as good as we are bad. That is a lot of cost in anybody’s book.
Comments
Those of us floundering in the sea of telecom have ample experience with inconsistent service delivery. Few, if any, carriers have the ability to own a network end-to-end, which inevitably leads to a myriad of hurdles and potential “bad” experiences.
The dilemma we face revolves around the definition of good and bad. If a provider delivers services on time, with accurate billing and competitive rates, that does not necessarily translate into a single “good” experience; it’s simply an expectation of performance. However, a single failure along those lines will most likely fit the “bad” profile and have a lasting impact on the customer’s perception. In a nutshell, the big-picture experience/relationship may be “good”, but individual “bad” elements have a greater impact due to the sheer number of potential hurdles.
There’s a saying that one “ah, s**t” erases a thousand atta-boys. While that ratio is somewhat skewed, it gets the point across: service providers must take steps to accentuate positive experiences, and minimize negative ones. Sounds obvious, but sometimes falters in practice.
Working in the hospitality industry, we are constantly measuring the “guest experience.” Unfortunately, the majority of the data that arrives is the negative sample; the positive sample seldom, if ever, comments or returns the surveys that hotel brands e-mail a week after the stay.
We still collected the data and used statistical process control (W.E. Deming’s “Red Bead Experiment”) to establish upper and lower control limits on the errors in the guest-experience process, and then determined whether the process was “out of control”. Any effort to bring the process into control, or to tighten the control limits, requires ownership investment. Couple statistical process control with the predictive data from experience memory, and one could almost do a discounted cash flow analysis to justify investment in iterative process improvements.
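For what it’s worth, here is a minimal sketch of the control-limit step this comment describes, assuming guest-experience errors are tracked as a count per week and standard 3-sigma c-chart limits are used; the weekly error counts below are made-up figures, not hotel data.

```python
import statistics

# Hypothetical weekly counts of guest-experience errors (illustrative only).
errors_per_week = [4, 7, 3, 6, 5, 9, 4, 6, 8, 5, 7, 6]

c_bar = statistics.mean(errors_per_week)   # average error count per week
sigma = c_bar ** 0.5                       # c-chart: sigma = sqrt(c_bar)
ucl = c_bar + 3 * sigma                    # upper control limit
lcl = max(c_bar - 3 * sigma, 0)            # lower control limit (cannot be negative)

print(f"c-bar = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for week, count in enumerate(errors_per_week, start=1):
    status = "OUT OF CONTROL" if (count > ucl or count < lcl) else "in control"
    print(f"week {week:2d}: {count} errors -> {status}")
```

Points outside the limits signal special-cause problems worth chasing individually; shrinking the limits themselves is the process-improvement investment the comment refers to, and the one a discounted cash flow analysis would have to justify.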