Wednesday, February 22, 2023

"Stop Dave... I'm afraid" - The latest on AI and crime prevention

AI-driven shape-recognition tech in the Smart City
- photo by Creative Commons courtesy of QueSera4710


by Gregory Saville

With apologies to the deactivation scene of the HAL 9000 computer in 2001: A Space Odyssey, the title above came to mind when I recently read a paper on AI. What is the latest in AI? Our work in SafeGrowth and CPTED often places us face-to-face with suggestions for artificial intelligence as an answer to crime.

AI has appeared in this blog before, in the form of my keynote address to the ICA annual conference in Sweden on AI, Smart Cities, and CPTED. I have blogged on this topic a number of times, including posts on AI in law enforcement and AI and CCTV.

I recently read a provocative new research study on AI and Smart Cities. In the paper Understanding citizen perceptions of AI in the smart city, Finnish computer science researcher Anu Lehtiö and her colleagues examine public perceptions of AI in relation to Smart City urban planning. The Smart City movement is gaining momentum and it is intricately tied to AI.


RESEARCH FINDINGS

Here are some responses they uncovered:

I don’t need to worry about AI. I trust those who are in control of it. I’ve already seen digitization in other aspects of life. It isn’t so bad, maybe even good!

Some people are happy to let things happen and rely on "those who know better" to handle problems. This is an understandable position, given the speed of change today and the overwhelming amount of information flooding our screens. However, given the catastrophes of other unfettered technologies (nuclear power, for example), this is a poorly thought-out and risky strategy.


Mapping the connectivity of the worldwide internet
- photo courtesy of Creative Commons, By The Opte Project 
  

I’ll avoid AI if I think there is a danger. 

It is difficult to anticipate all the potential dangers of AI. I presented some in my keynote address (see link above). But because AI arrives with so many advantages for us, especially in the Smart City, we tend to look the other way and ignore unanticipated consequences. In truth, there are many ways AI might go awry, and sci-fi writers have offered some terrifying possibilities. Consider The Terminator, The Matrix, or the beautiful Ava in Ex Machina!

I don’t like “them” monitoring me. 

In this era of digitization, it is hard to imagine any modern society without extensive digital record-keeping. We are already thoroughly embedded in one database or another. There are motor vehicle databases, government databases, health services, pension, and social assistance databases, not to mention thousands of corporate databases that record our purchases, both online and in person.

The fact is that we are already monitored. Further, with AI there is no "them". Artificial intelligence exists wherever there is electricity, a processor, and some AI programming. The public, it seems, is blissfully unaware of the risks posed by AI.


IT WILL BE NORMAL IN 50 YEARS

The most concerning findings:

The interviewees saw no reason to fret over something that would become a natural part of citizens' lives, in time. The younger generation would deem current worries as unintelligible, having grown up in a society where the use of AI was the norm (female, 25). On the other hand, the interviewees presumed that it would still take a long time ("another 50 years" female, 44, personal assistant) before AI was mature enough to operate on a level that was notable enough to bring about significant, concrete changes in people's everyday lives.

In other words, don’t fret about AI because it will become normal and it’ll take 50 years before it matters. Talk about clueless bliss.

This is known as the AI effect - the tendency to dismiss each advance in AI as not "real" intelligence. If you have read anything on AI and its exponential growth, you will agree that waiting around for AI to wake up is cavalier and dangerous. We need to be much more diligent.

Ultimately, it seems the researchers too are alarmed. They recommend a form of AI called human-centered AI, a design approach grounded in human ethics and the importance of personal privacy. Not quite Asimov's Three Laws of Robotics, but close enough for now.

