Joshua Harrison is one of the researchers who conducted the experiment at Durham University in the U.K. They ran their experiment on a popular laptop, a 16-inch MacBook Pro, using an AI model that listened to keystrokes and learned the sound of each letter.
What they found, Harrison said, was quite surprising.
“The model we created and trained classified the keystrokes that we had recorded with 95% accuracy when recorded on a phone near a laptop, so a similar distance you might have to someone at a coffee shop,” Harrison said.
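The pipeline the researchers describe — recording individual key presses, learning an acoustic signature for each letter, then classifying new presses — can be sketched in miniature. This is an illustrative toy only: the actual study used a deep-learning classifier on real recordings, while the sketch below uses synthetic audio, FFT magnitude spectra, and nearest-centroid matching, and every name and parameter in it is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 16_000
KEYS = list("abcde")
# Toy assumption: each key rings at a distinct frequency (real keystrokes differ
# in subtler spectral ways, which is why the study needed a trained model).
BASE_FREQ = {k: 400 + 150 * i for i, k in enumerate(KEYS)}

def synth_press(key):
    # Stand-in for a recorded keystroke: a damped tone unique to the key, plus noise.
    t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * BASE_FREQ[key] * t) * np.exp(-40 * t)
    return tone + 0.05 * rng.standard_normal(t.size)

def features(wave):
    # Normalized magnitude spectrum as a crude acoustic fingerprint.
    spec = np.abs(np.fft.rfft(wave))
    return spec / np.linalg.norm(spec)

# "Training": average the fingerprint of several presses per key.
centroids = {
    k: np.mean([features(synth_press(k)) for _ in range(20)], axis=0)
    for k in KEYS
}

def classify(wave):
    # Predict the key whose average fingerprint is closest to this press.
    f = features(wave)
    return min(KEYS, key=lambda k: np.linalg.norm(f - centroids[k]))

# Evaluate on fresh synthetic presses.
tests = [(k, synth_press(k)) for k in KEYS for _ in range(20)]
acc = float(np.mean([classify(w) == k for k, w in tests]))
print(f"accuracy: {acc:.2f}")
```

With cleanly separated tones the toy classifier scores near-perfectly; the notable result in the study is that a far more sophisticated model achieved 95% on real keyboard audio captured by a nearby phone.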
Researchers caution that it’s not cause for alarm yet, but say the risk should be considered when discussing the dangers of AI.
“I think there’s a lot of ways that people already get around an attack like this without them even thinking about it. For example, probably opening your computer today, you probably used your fingerprint to open it. You didn’t type a password. When you open your phone, it’s probably a similar thing with facial recognition,” Harrison said.
The release of the report comes after tech leaders and senators met behind closed doors this week about AI’s rapid rise. Congress and the White House have been pressing tech companies to create safeguards for the technology.
“I think having these (safety) principles in place, for example with respect to deepfakes or otherwise altered or doctored videos, it’s wise to get those requirements in place now rather than to wait until there’s some real problem that comes up,” said former U.S. Secretary of Homeland Security Michael Chertoff.