
The phrase “human in the loop” is rapidly becoming a corporate mantra for adopting artificial intelligence. AI is often seen as an augmenting technology that works best alongside human workers, acting as a co-pilot.

This perspective marks a significant shift from the traditional vision of full automation, which has long driven the adoption of new technologies in business. Consider the introduction of automated financial market-making in the 1990s, which upended established practice by making human market-makers largely redundant and enabling new ways of transacting globally.

But which vision—augmentation or full automation—suits an AI-powered economy better?

Answering this question is crucial because each approach leads to vastly different economic outcomes, shaping value creation and competitive advantage now and in the near future. When organizations commit to augmentation, the technology is designed around human workers, so productivity and performance gains are capped by what augmented humans can achieve.

Herbert Simon’s “The Sciences of the Artificial” illustrates these limitations. He describes the U.S. State Department’s switch from teletypes to line printers in the 1960s, intended to improve message handling during crises. The new printers delivered messages faster, but humans still had to read and act on them, so the bottleneck remained. The augmentation paradigm sacrifices many of the economic benefits of automation, such as greater standardization, security, speed, and precision.

Given our limitations as slow, serial information processors, the gap between powerful machines and even augmented humans is widening in the AI age. Likewise, the gap between the economic potential of workflow-level automation and task-level augmentation is expanding. Understanding where full automation might be feasible tomorrow is essential for today’s investments, especially with nascent technologies like generative AI. By assessing well-known obstacles to full automation, we propose investment criteria to help leaders navigate the uncertainty and promise defining the dawn of GenAI.

The factory floor of cognition

Imagine a bank committed to a “human in the loop” model for AI-augmented lending. This bank would need to design risk assessment algorithms that human workers can interpret, constraining credit approval volume and speed to workers’ throughput. In contrast, Alibaba’s MyBank, launched in 2015, has no loan officers or human risk analysts. MyBank’s AI risk models use over 100,000 variables, enabling loan approvals in minutes with a competitive default rate (1.94%) at less than one percent of its peers’ processing costs. This model is possible only because it removes the human cognitive bottleneck, allowing the lending decision process to be fully automated.

In physical operations, the best automation efforts, similar to MyBank’s model, fully automate complex processes. Japanese manufacturer Fanuc uses robots in lights-out factories to make new robots, requiring human intervention only for routine maintenance and issue resolution. Today, the system operates unsupervised for 30 days at a time and produces 11,000 robots per month. Similarly, China’s Tianjin port, the world’s seventh-largest, partnered with Huawei in 2021 to launch an automated terminal. Powered by an AI “brain,” the port sets and adjusts schedules and remotely operates 76 self-driving vehicles to manage container traffic from more than 200 countries, with a human intervention rate below 0.1% versus an industry standard of 4%-5%.

These successful automation instances are no longer outliers, sharing a common pattern: Machines perform best when interacting with other machines. For machines, humans are too idiosyncratic and erratic, making them confounding partners. As Karl Marx noted, “An organized system of machines… is the most developed form of production by machinery.” It’s easier to engineer a fully automated warehouse (as Amazon is doing) than to design robots that operate effectively and safely alongside human workers prone to unexpected actions. Tianjin port couldn’t achieve the same performance if half its fleet were human-operated.

AI extends this logic to more human actions, making much cognitive activity (so-called knowledge work) equivalent to the old factory floor. The performance of LLMs, for example, shows that natural language, despite its subtleties, is sufficiently systematic and pattern-based to be reproducible and amenable to automation. AI can augment a human workforce, but operating alone, it can also drive deeper automation efforts.

Limits of automation as a guide to investment

Returning to the conundrum business leaders face: The latest AI technologies are impressive, but beyond ubiquitous GenAI-powered chatbots and copilots, where should they invest next? Leaders can make better decisions by distinguishing applications that lead to full automation from those that primarily augment human workers. The best way to do this is to focus on the known roadblocks to full automation; where those roadblocks persist, an investment will yield only the relatively modest returns of augmenting technology.

Integration constraint: interfaces. Automation is more difficult when a process relies on interfacing with many different systems. The London Stock Exchange’s Taurus project, started in 1983 and called off in 1993 with an estimated £75 million in losses, exemplifies this. Taurus aimed to automate London’s paper-based stock trading system but failed in part because redesigns allowed human registrars to keep playing a middleman role, manually interfacing with the stock exchange’s system.

This interface problem may be somewhat alleviated as autonomous agents gain the ability to directly access, control, and execute changes on external systems (e.g., as GenAI-powered chatbots become able to directly issue refunds, rebook flights, and so on). That technology isn’t here yet, and until it is, interfaces remain a constraint on automating end-to-end workflows relying on legacy systems.

Engineering constraint: systematicity. The less structured a process, the harder it is to automate, as a lack of systematicity requires more management of exceptions, which is challenging to engineer.

It’s crucial not to mistake complexity for a lack of systematicity. Complex processes involving natural language, such as live interactions with customers, have been effectively systematized by LLMs. Global supply chain operations, by contrast, remain difficult to automate fully because of their exposure to unpredictable shocks like violent conflict, regulatory change, or climate events. That unpredictability resists systematization and will require human judgment for the foreseeable future.

Economic constraint: uniqueness. Even if an activity is stable and systematic from an engineering perspective, it may not be repeatable enough for economically feasible automation. This often applies to “one-off” activities, like construction, which require adapting general blueprints to specific terrains. Automating such unique tasks may be more onerous than having humans do it. Foxconn, for example, found that using robots to produce many consumer electronics often doesn’t pay off due to short production cycles and rapidly changing specifications.

Executives must grapple with these constraints, as they will endure in some form despite technological advances. This is why human labor remains essential to every business and why augmentation strategies still matter despite efforts to achieve full automation. Businesses must understand which workflows are amenable to full automation and develop a technology strategy that distinguishes the “augmentable” from the “automatable.” Where automation is feasible, the technology should not be designed around human workers; only then can AI’s competitive advantage be fully realized.
