Robot burglars that enter homes through cat flaps pose new threat to homeowners

Artificial Intelligence presents a series of opportunities for criminals according to researchers at UCL

Robot burglars are a new threat according to researchers

For many homeowners the traditional way to protect against burglars has been to ensure doors and windows are locked and perhaps even leave a light on.

But such gestures might prove futile in years to come after scientists warned that the next generation of home invaders could be robots programmed to gain entry through cat flaps or even letter boxes.

It is feared that small autonomous robots using artificial intelligence (AI) are being developed that could breach traditional security safeguards.

Delivered through small openings such as cat flaps, they could scan a person’s home to locate keys, allowing a human burglar to enter.

Alternatively, scientists believe more advanced machines could use AI to search a property for valuables or cash, using cameras to scan and assess different rooms.

The robots could also be used to simply determine whether anybody is at home, relaying the information to a human operative who could then break in if the coast is clear.

The frightening prospect is just one area where scientists and police believe AI could be used by criminals to exploit people in the future.

A study, published by researchers at UCL, identified a range of criminal opportunities that technological advances could create.

While the use of so-called ‘burglar bots’ is regarded as a low-harm, low-reward crime, scientists and crime fighters are extremely concerned about the use of ‘deepfake’ videos and images, which could be used to exploit and blackmail unsuspecting victims.

Using sophisticated AI software, criminals can generate convincing impersonations of people, which could be used to persuade victims to part with money or passwords.

AI offers criminals new opportunities Credit: Getty Images

Police fear unscrupulous criminal gangs could generate a video of someone from material freely available online and use it to persuade the victim’s elderly parents to send them money.

Another sinister application might be to create fake videos of public figures speaking about controversial issues to manipulate support.

The researchers also highlighted the potential risks posed by the rollout of driverless cars, which they warned could be used by extremists to carry out terror attacks.

Senior author, Professor Lewis Griffin from UCL’s computer science department, said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. 

"To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Dr Matthew Caldwell, from UCL, who also helped author the report, added: “People now conduct large parts of their lives online and their online activity can make and break reputations. 

“Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

Professor Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL, which funded the study, said: “We live in an ever changing world which creates new opportunities – good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur.”
