The Chinese surveillance state proves that the idea of privacy is more “malleable” than you’d expect


“They probably saved millions of lives by using those technologies,” he says, “and the result is that [it] sold [the necessity of] state surveillance to a lot of Chinese people.”

Does “good” surveillance tech exist?

Once someone (or some entity) starts using surveillance tech, the slope downward is extremely slippery: no matter how noble the motive for developing and deploying it, the tech can always be turned to more malicious purposes. For Chin and Lin, China shows how the “good” and “bad” uses of surveillance tech are always intertwined.

They report extensively on how a surveillance system in Hangzhou, the city that’s home to Alibaba, Hikvision, Dahua, and many other tech companies, was built on the benevolent premise of improving city management. Here, with a dense network of cameras on the street and a cloud-based “city brain” processing data and issuing orders, the “smart city” system is being used to monitor disasters and enable quick emergency responses. In one notable example, the authors talk to a man who accompanied his mother to the hospital in an ambulance in 2019 after she nearly drowned. The city was able to control the traffic lights along the ambulance’s route, reducing the time it took to reach the hospital. It’s hard to argue this isn’t a good use of the technology.

But at the same time, these “smart city” technologies have become almost indistinguishable from “safe city” technologies, which aim to strengthen police forces and track down alleged criminals. The surveillance company Hikvision, which partly powers the lifesaving system in Hangzhou, is the same one that facilitated the mass incarceration of Muslim minorities in Xinjiang.

China is far from the only country where police are leaning on a growing number of cameras. Chin and Lin highlight how police in New York City have used and abused cameras to build a facial recognition database and identify suspects, sometimes with legally questionable tactics. (MIT Technology Review also reported earlier this year on how the police in Minnesota built a database to surveil protesters and journalists.)

Chin argues that given this track record, the tech itself can no longer be considered neutral. “Certain technologies by their nature lend themselves to harmful uses. Particularly with AI applied to surveillance, they lend themselves to authoritarian outcomes,” he says. And just like nuclear researchers, for instance, scientists and engineers in these areas should be more careful about the technology’s potential harm.

It’s still possible to disrupt the global supply chain of surveillance tech

There is a sense of pessimism when talking about how surveillance tech will advance in China, because its invasive deployment has become so widespread that it’s hard to imagine the country reversing course.

But that doesn’t mean people should give up. One key way to intervene, Chin and Lin argue, is to cut off the global supply chain of surveillance tech (a network MIT Technology Review wrote about just last month).
