Four ways criminals could use AI to target more victims
The UK Prime Minister Rishi Sunak has even set up a summit to discuss AI safety.
- And we’re already seeing the early-stage adoption of AI by criminals.
- Observing how criminals have adapted to, and adopted, technological advances in the past can provide some clues as to how they might use AI.
1. A better phishing hook
- Phishing is a numbers game: an estimated 3.4 billion spam emails are sent every day.
- Think about all those spam phishing emails and texts that are badly written and easily detected.
- AI text-generation tools could help criminals sound far more believable when contacting potential victims.
2. Automated interactions
- One of the early uses for AI tools was to automate interactions between customers and services over text, chat messages and the phone.
- Criminals can use the same tools to create automated interactions with large numbers of potential victims, at a scale that would be impossible if the work were carried out by humans alone.
3. Deepfakes
- Deepfake technology, which uses AI to generate convincing fake video and audio of a real person, is becoming increasingly accessible.
- So is the data to train it, which can be gathered from videos on social media, for example.
- A deepfake of you could be used to interact with your friends and family, convincing them to hand over information about you to criminals.
4. Brute forcing
- This is where many combinations of characters and symbols are tried in turn to see if they match your password.
- Brute forcing is resource intensive, but it’s easier if you know something about the person.
- AI tools could scan a target’s social media posts for clues such as hobbies, pet names and important dates. All of this information would go into building a profile, making it easier to guess passwords and PINs.
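As an illustration of why personal details matter, here is a minimal sketch of a profile-guided guess list. The words, suffixes and example password are all invented for the demonstration; real attacks use far larger wordlists and variations:

```python
import hashlib

def candidate_passwords(profile_words):
    """Generate password guesses from personal details (pet names,
    place names, etc.) combined with a few common suffixes."""
    suffixes = ["", "1", "123", "!", "2023"]
    for word in profile_words:
        for variant in (word, word.capitalize()):
            for suffix in suffixes:
                yield variant + suffix

def brute_force(target_hash, profile_words):
    """Try each candidate until its SHA-256 hash matches the target."""
    for guess in candidate_passwords(profile_words):
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None  # no candidate matched

# Hypothetical example: a password built from a pet's name plus a year.
profile = ["rex", "fluffy", "leeds"]
secret = hashlib.sha256(b"Fluffy2023").hexdigest()
print(brute_force(secret, profile))  # -> Fluffy2023
```

A profile of a few words cuts the search from billions of random combinations to a handful of targeted guesses, which is exactly the advantage an AI-built profile would hand an attacker.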
Healthy scepticism
- But as with any new technology, society needs to adapt to and understand it.
- Although we take smartphones for granted now, society had to adjust to having them in our lives.
- We should develop our own approaches to AI, maintaining a healthy sense of scepticism.