Is AI sentient? No, but it’s rapidly getting better



Emerson Sklar, senior director of AI/ML at Applause, explains how your organization can improve its artificial intelligence.


The media had a field day when a Google engineer recently claimed that the company’s artificial intelligence technology had become “sentient.” For every article joking about Skynet and HAL 9000, there was another assuming it must be true and questioning the ethics of it all.

Missing in most of the coverage was any recognition of how far and fast this technology has advanced and how broadly it impacts our lives on a daily basis, in ways both large and small.

It was only ten years ago, on June 26, 2012, that The New York Times wrote about Google's deep learning breakthrough, in which researchers essentially taught a computer to train itself with enormous amounts of data. The article was headlined "How Many Computers to Identify a Cat? 16,000." Here we are today, with everything from restaurant recommendations to the early diagnosis of diseases being driven by AI and machine learning.

The fact is that companies like Google, Microsoft, Amazon and many others have invested billions in AI technology. Some of the world’s smartest engineers across hundreds of companies are working on new applications every day.


There’s still room for improvement. People don’t want an AI experience that is less functional than human interaction or other existing software solutions. They want less Matrix and more AI-powered experiences that are easy to use and work flawlessly whenever they’re needed. How do we get there?

How to make AI more effective

Start with the data

Before embarking on any kind of AI project, it’s important to understand the sheer amount of data needed to keep an AI application up to date. AI applications that use machine learning are “trained” and often require many thousands of examples to successfully return correct results under real-world usage. The way users interact with the technology changes over time, so teams must keep retraining and validating their algorithms with more and more data to stay accurate and aligned.

Even the biggest companies struggle with scaling data curation. Most companies vastly underestimate how long it takes to roll out a successful AI application — development might take about the same time as a traditional app, but far more time is required to train, test and validate the product.
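To make that retrain-and-validate loop concrete, here is a minimal sketch in Python using scikit-learn. The logistic-regression model, the 80/20 split and the 0.90 accuracy bar are all illustrative assumptions, not anything the article prescribes; a real pipeline would swap in its own model, metrics and thresholds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.90  # assumed quality bar; tune for your application


def retrain_and_validate(X_old, y_old, X_new, y_new):
    """Fold fresh real-world examples into the training set, retrain,
    and gate deployment on a held-out validation split."""
    X = np.vstack([X_old, X_new])
    y = np.concatenate([y_old, y_new])
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_val, model.predict(X_val))
    # Only promote the new model if it clears the bar; otherwise keep
    # serving the previous one and collect more (and more diverse) data.
    return model if accuracy >= MIN_ACCURACY else None
```

The point of the gate is the one the article makes: training is not a one-time step, and each new batch of real-world data has to be validated before it earns its way into production.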

Add more (and more diverse) data

When it comes to data, whatever you have, it’s likely not enough. More data means more learning for the algorithm. Early or smaller sample sizes make it difficult to identify trends and make accurate correlations. However, in-house teams of developers, data scientists and QA specialists cannot provide a sufficiently diverse sample of age ranges, genders and backgrounds to train the systems. They’re simply not representative of the wider population, and despite their best intentions, this lack of diversity introduces inherent biases into the underlying algorithms.

The best way to avoid this problem is to leverage communities that represent the diversity of your real users to ensure the quality, quantity and diversity of training data. This is a crucial step in eliminating bias and provides the AI/ML product with the capacity to continuously learn and improve.
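One simple way to keep that diversity measurable is to audit the training set against the population you intend to serve. The sketch below, in Python, flags underrepresented groups for a single demographic attribute; the attribute name, target shares and tolerance are hypothetical placeholders, not figures from the article.

```python
from collections import Counter

# Assumed target-population shares for one demographic attribute.
# These numbers are illustrative only.
TARGET_SHARES = {"18-29": 0.25, "30-44": 0.30, "45-64": 0.30, "65+": 0.15}


def audit_representation(samples, attribute="age_band", tolerance=0.05):
    """Flag groups whose share of the training data falls more than
    `tolerance` below their share of the target population."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    gaps = {}
    for group, target in TARGET_SHARES.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if actual < target - tolerance:
            gaps[group] = {"target": target, "actual": round(actual, 3)}
    return gaps  # an empty dict means no group is badly underrepresented
```

An audit like this won’t eliminate bias on its own, but it turns “our data isn’t diverse enough” from a hunch into a number you can act on before training.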

Use humans to test AI

Yes, you read that correctly. Collecting and processing data can be automated, but a machine cannot effectively and thoroughly validate another AI system at this point — only real people can determine what works well, where the glitches are and where the process breaks down.

Digital experiences can be tested by tapping into a global community of digital experts who represent a target customer group, addressing customer needs and identifying bugs, biases and potential flaws. This crowdtesting approach helps organizations make sure the AI applications they roll out aren’t doing more harm than good.
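As a rough illustration of how that human feedback might be aggregated, here is a small Python sketch. The TestCase structure and the 0.8 approval bar are hypothetical; real crowdtesting platforms have their own collection and reporting formats.

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    prompt: str        # the input shown to the AI system
    model_output: str  # what the system produced
    # True = acceptable, as judged by a human tester
    verdicts: list = field(default_factory=list)


def flag_failures(cases, min_approval=0.8):
    """Return (prompt, approval rate) for every output that human testers
    approved less often than `min_approval` -- the candidates for
    retraining or redesign."""
    failures = []
    for case in cases:
        if not case.verdicts:
            continue  # no human feedback collected yet
        approval = sum(case.verdicts) / len(case.verdicts)
        if approval < min_approval:
            failures.append((case.prompt, round(approval, 2)))
    return failures
```

Automation can collect and tally these verdicts, but as the article argues, the verdicts themselves still have to come from real people.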

What will a “good” AI experience look like?

When people speak of sentient AI, they often mean systems that enable conversations in a natural free-form way that is not limited to one specific use case. Many AI interactions that consumers have right now are slow and frustrating, and worst of all, ultimately require consumers to speak to a human support agent, completely defeating the purpose of using the AI in the first place.

In the future, a good experience, regardless of application, will be hyper-personalized and track seamlessly across different devices and locations. As AI technology evolves to become more and more “real,” the companies that offer good experiences will be those that remember that real users need to stay front and center of any conversation about user experience.

Emerson Sklar, Senior Director of AI/ML, Applause

Emerson Sklar is the senior director of AI/ML at Applause. With over a decade of experience designing, delivering and optimizing high-quality, robust solutions to challenging customer problems, Emerson has helped countless companies improve quality across every phase of the software development lifecycle through a human-centric, community-driven approach to testing. Emerson previously worked for Borland, the Army Intelligence and Security Command, and the Army Research Lab.


