Responsible AI has a burnout problem


“The loss of just one person has massive ramifications across entire organizations,” Mitchell says, because the expertise someone has accumulated is extremely hard to replace. In late 2020, Google fired its ethical-AI co-lead Timnit Gebru, and a few months later it fired Mitchell as well. Several other members of its responsible-AI team left within the space of a few months.

Gupta says this kind of brain drain poses a “severe risk” to progress in AI ethics and makes it harder for companies to follow through on their responsible-AI programs.

Last year, Google announced it was doubling its research staff devoted to AI ethics, but it has not commented on its progress since. The company told MIT Technology Review it offers training on mental-health resilience, has a peer-to-peer mental-health support initiative, and gives employees access to digital tools to help with mindfulness. It can also connect them with mental-health providers virtually. It did not respond to questions about Mitchell’s time at the company. 

Meta said it has invested in benefits like a program that gives employees and their families access to 25 free therapy sessions each year. And Twitter said it offers employee counseling and coaching sessions and burnout prevention training. The company also has a peer-support program focused on mental health. None of the companies said they offered support tailored specifically for AI ethics.

As the demand for AI compliance and risk management grows, tech executives need to ensure that they’re investing enough in responsible-AI programs, says Gupta. 

Change starts from the very top. “Executives need to speak with their dollars, their time, their resources, that they’re allocating to this,” he says. Otherwise, people working on ethical AI “are set up for failure.” 

Successful responsible-AI teams need enough tools, resources, and people to work on problems, but they also need agency, connections across the organization, and the power to enact the changes they’re being asked to make, Gupta adds.

A lot of mental-health resources at tech companies center on time management and work-life balance, but more support is needed for people who work on emotionally and psychologically jarring topics, Chowdhury says. Mental-health resources specifically for people working on responsible tech would also help, she adds. 

“There hasn’t been a recognition of the effects of working on this kind of thing, and definitely no support or encouragement for detaching yourself from it,” Mitchell says.

“The only mechanism that big tech companies have to handle the reality of this is to ignore the reality of it.”
