There are great expectations for the valuable applications that artificial intelligence (AI) could have in attracting, acquiring, and growing customers.

We should be excited: the potential for recognizing customers’ true interest in real time and then immediately engaging them through relevant experiences is extremely compelling.

However, we should be realistic about what we can expect from AI. Intelligence will not magically assemble itself—at least not in the near future. Insights will not jump out of data and present themselves along with the appropriate course of revenue-generating action.

Instead of relieving our obligation to create intelligence, AI demands more from us.

The demand for critical thinkers to form context and direct actions will only increase. We must still select data, train models, synthesize information, factor in context, and then think some more to create meaning and application.

We still first have to solve for X.

That may be a controversial perspective, but I’m happy to defend it. First, some background.

Thinking vs. Doing

In 1980, philosopher John Searle took a highly provocative position on AI. His Chinese Room Argument was based on the analogy of an English-only speaker locked in a room. That English speaker was passed Chinese writing and instructed to translate it into English.

Absent any key for transforming the symbols into English sentences, the translation was impossible. However, when the English speaker was given a Chinese-to-English dictionary, the translation became workable.

While the translation itself was successful, the English speaker contributed no context, no understanding—s/he effectively added nothing to the exercise except the act of completing it.

Searle’s Chinese Room Argument served to bolster the distinction between “strong AI” and “weak AI.” With strong AI, “the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states.” In contrast, weak AI follows a series of defined calculations to reach a pre-defined conclusion, for example: A + 2B = X.
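As a toy illustration of weak AI in that sense (the function name here is mine, invented for illustration), the entire “intelligence” is a calculation a human already defined; the machine only executes it:

```python
# Toy illustration of "weak AI": the system merely evaluates a rule a
# human already chose -- here the article's example, A + 2B = X.

def solve_for_x(a: float, b: float) -> float:
    """Apply the predefined calculation A + 2B = X."""
    return a + 2 * b

# The machine computes quickly, but a person selected the formula.
print(solve_for_x(3, 4))  # 11
```

The computer is fast, but the formula itself was a human contribution.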

Back to the original point: with both strong and weak AI, there is a source of predefined input. A set of data has been selected and analyzed, and an original conclusion has been reached.

A trained expert and critical thinker first solved for X, and then defined the artificial intelligence to automate recognition and the resulting action.

In some cases, machine learning (ML) can be used to refine results. A friend offered an example from voice recognition technology, a form of AI. After an algorithm was designed to recognize a set of responses, ML was used to introduce greater learning: recognizing different accents, response variations, and so on. The result improved recognition over time from 90% to over 97%.

However, humans first defined and developed the algorithm that satisfied 90% of the cases.
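To make that pattern concrete, here is a hypothetical sketch (the names and data are invented, not my friend’s actual system): a human-defined recognizer covers the common cases, and variants observed later are folded in to extend coverage:

```python
# Hypothetical sketch: a human-defined recognizer handles the common
# cases; "learning" from observed variants (accents, phrasings) then
# extends its coverage. All names and data are invented.

BASE_RESPONSES = {"yes", "no", "repeat"}  # the human-designed algorithm

def recognize(utterance, learned_variants):
    """Return the recognized response, or None if unrecognized."""
    word = utterance.strip().lower()
    if word in BASE_RESPONSES:            # original, expert-defined coverage
        return word
    return learned_variants.get(word)     # refinement learned from data

# "Training": map observed variants to the known responses.
learned = {}
for variant, meaning in [("yeah", "yes"), ("nope", "no"), ("again", "repeat")]:
    learned[variant] = meaning

print(recognize("Yeah", learned))  # yes -- recognized only after refinement
```

The refinement step improves coverage, but only over the response set a person defined first.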

Interesting examples taken from commonly referenced current (weak) AI applications further illustrate the point. Let’s review:

AI Detects the Failing Elevator

In an ad, a repair person appears because an elevator is projected to fail within days. The company avoids elevator downtime, and employees aren’t stuck at the moment of failure.

However, someone—a thinking human—first had to recognize the predictors of pending failure and then program them. Perhaps call response time increases. Perhaps critical equipment must be replaced within a known timeframe or failure is certain. Perhaps the elevator has already failed, and the system captured the anomaly.

All these scenarios are knowable. A conclusion was previously reached that any or all of a series of events occurred prior to elevator failure. And then those events were programmed so that AI could recognize the signal of pending failure, and then act as programmed to alert the repair service.
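As a hypothetical sketch of that programming (the signals and thresholds are invented for illustration), the “AI” amounts to checks a human expert encoded in advance:

```python
# Hypothetical sketch of human-programmed elevator-failure predictors.
# Thresholds and signal names are invented; in a real system they would
# come from prior expert analysis of past failures.

def pending_failure(call_response_ms, part_age_days, part_max_age_days,
                    anomaly_detected):
    """Return the list of known failure precursors that are present."""
    reasons = []
    if call_response_ms > 5000:             # call response time has degraded
        reasons.append("slow response")
    if part_age_days >= part_max_age_days:  # critical part past its lifetime
        reasons.append("part overdue")
    if anomaly_detected:                    # pattern seen before a past failure
        reasons.append("known anomaly")
    return reasons

alerts = pending_failure(6200, 300, 365, False)
print(alerts)  # ['slow response'] -> dispatch the repair service
```

The system recognizes the signal and acts, but every rule it applies was first reasoned out by a person.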

AI Speeds Diagnosis

There are numerous stories about AI being used to accelerate disease diagnosis. Whether diagnosing lung cancer or predicting heart failure, there is great application underway that ideally will improve health outcomes.

However, the basis for all these AI diagnoses was the work of physicians. Trained, practiced experts first considered the body of evidence that ultimately resulted in an initial diagnosis. And then the hypothesis was researched, tested, refined or discarded, documented, published following peer review, and perhaps accepted and widely practiced. And maybe the cycle began again.

Once a clear set of inputs and a conclusion was reached—and only then—could those inputs and outputs be programmed for AI to automate.

Critical, expert thinkers were involved in identifying those disease states. AI then automates recognizing them.

Watson Wins Jeopardy!

IBM’s supercomputer was able to sort through recorded facts to provide answers in the form of a question faster than past Jeopardy! champions.

Well, of course it did. The facts were known and stored in Watson. Watson’s accomplishment was speed: reaching answers someone had previously programmed.

All of these examples serve to illustrate an important application of AI: quickly recognizing a programmed state (failing elevator, certain illness), and prescribing the pre-defined action.

In all cases, an expert had previously identified the answers, first solving for X.

Understanding Complex Creatures

This reality must inform our expectations for the role AI will play in understanding customers and their needs and behavior. Human-defined intelligence must come first. Turning that understanding into AI, including a prescribed course of action, necessarily follows.

In fact, the demands for critical human thinkers to create intelligence that is the input for AI are high now and will only increase.

Human beings are complex creatures. Behavior is not always linear, or logical, or consistent. As if we aren’t vexing enough, we change, reverse, and restart. Adding to that complexity, what may have one meaning for one group may have a different meaning, or none at all, for others. We are blasted elusive creatures.

To improve our understanding of customer behavior, and how that understanding can be exploited to engage and increase revenue, we must continue to interact with data to create intelligence.

We still must select relevant data. As the volume increases, experts must identify the input necessary to create the intended intelligence. And we have to train models to generate analytic insights.

There is simply no escaping the reliance on expert humans to create input to AI.

We Must be Accountable

Increasingly there are technology solutions that speed analytic results and support faster, hopefully better, decisions.

Those decisions, however, remain a human responsibility. And they should.

We cannot abdicate the responsibility of decisions that affect how we interact with customers.

As an example, Target developed a model that identified newly pregnant women. The retailer began marketing pregnancy- and baby-related products based on that data-driven recognition that preceded notification by the customer.

The practice made the news when a parent objected to Target marketing pregnancy-related products to his teenage daughter. It turned out that the daughter was indeed pregnant, and the retailer’s communication forced the daughter’s disclosure to her family.

Clearly, no one at the company considered the ethics of marketing pregnancy-related products to a minor. Truthfully, I don’t think a woman of any age would appreciate such an intrusion.

The larger point is that someone at the company was responsible for the poor decision. There must have been an accountable throat to choke, as there should be.

Alejandro Eliaschev, a strategist with expertise in advanced technologies, beautifully summarized: “Even with AI capable of making decisions—and hopefully the right ones—we are far from taking such responsibility [away from] an individual and passing it to a system”.

We Must Think

This is the single most important factor: the critical-thinking expert. Without expertise and consideration, advancements and insights are not conceived.

As an example, I recently attended a presentation by a data scientist who was reviewing a model created for a professional outdoor sports team.

The presenter revealed his findings: attendance increases on sunny days.

Also, he found—wait for it: attendance increases for games against key rivals.

I can appreciate that these findings were the result of concerted effort. But neither finding was a great, or even new, insight.

While this data scientist had the data and the analytic skills, he was clearly not interacting with the results. No thought was given to whether the findings were significant, or to how they might be applied. And therefore the findings, frankly, added nothing.

The outcome was akin to Searle’s Chinese Room Argument: the only success was completing the exercise.

* * *

The demand on active, critical-thinking experts will only increase. Data is available to us in overwhelming volumes and from a myriad of sources. As the Internet of Things (IoT) expands, so will data. Relatively speaking, Big Data is coming from a garden hose now. Data from a fire hose is certainly our future.

We must set correct expectations and prepare for the marvelous, valuable customer intelligence we can create by cultivating the skills and expertise that will empower us to distinguish signal from noise in customer behavior.

This post is not intended as a summary of AI advancement; there are plenty of other reports and predictions. Nor do I intend to capture all the rebuttals and defenses of Searle’s position.

What is intended is a dose of reality: the great expectations for AI must be tempered with an appreciation for the demands for active, expert thinkers. If we are to realize all the benefits of automating recognition of customer needs, and then automating the resulting experience, we first must have expert humans to cultivate intelligence.

 

Would you like to discuss customer acquisition, growth, or retention challenges? Or are you struggling to scale your business? Let’s talk! Set up a 30-minute phone conversation with Marina.

Great thanks to Alejandro Eliaschev for his thoughtful feedback on this post.

Photo credit: Sebastien Gabriel