AI, machine learning will revolutionize fraud detection: Duquette
By Patricia MacInnis, AdvocateDaily.com Senior Editor
Though artificial intelligence (AI) and machine learning technologies are still in their infancy, both hold great promise for businesses in the battle against cybercrime and fraud, says Ryan Duquette, principal of Oakville-based Hexigent Consulting.
“Currently, the cost of these tools is somewhat prohibitive, so adoption of these technologies is primarily by large organizations, like financial institutions and insurance companies,” he tells AdvocateDaily.com. “With time, we expect the cost to scale down to meet the needs of smaller companies.”
It has quickly become standard for insurance companies to use AI to identify inconsistencies and unusual patterns, whether they’re looking for staged car accidents orchestrated by sophisticated fraud rings or individuals who exaggerate the value of their damaged property, reports Fast Company.
Duquette says that because AI and machine learning allow organizations to compare larger data sets than ever before, fraud detection becomes both more accurate and much faster.
“Using AI, for example, you can quickly compare large data sets of log files or photos much more efficiently than a human can,” he says. “You can set the parameters to flag average behaviour versus something that’s out of the ordinary.”
When it comes to protecting corporate networks, cybersecurity firms like Hexigent employ AI tools that “scrape the internet and Dark Web,” looking for any traces that their data has been compromised, Duquette says, adding Hexigent is frequently called in to use these technologies to help companies targeted by hackers. Without such tools, he says it often takes an organization more than 200 days to become aware it has been hacked.
“That is an incredible amount of time that someone is in their environment and possibly stealing information,” he says.
Duquette says these AI-enabled tools can also be employed to look at users’ behaviours within an organization, identifying anyone whose online activity steps outside the norm.
“If someone is looking at documents on a network or shared drive that they don’t usually access, that activity will be flagged,” he says.
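The kind of flagging Duquette describes can be illustrated with a minimal sketch. All names and data here are hypothetical: a baseline of the folders each user normally accesses, checked against a day's access log.

```python
# Minimal sketch (hypothetical users, folders and log entries): flag
# network-drive accesses that fall outside a user's established pattern.

# Baseline: folders each user has historically accessed.
baseline = {
    "alice": {"/finance/reports", "/finance/budgets"},
    "bob": {"/engineering/specs"},
}

# Today's access log: (user, folder) pairs.
todays_accesses = [
    ("alice", "/finance/reports"),   # normal for alice
    ("bob", "/hr/salaries"),         # outside bob's usual pattern
]

def flag_unusual(accesses, baseline):
    """Return accesses to folders the user does not normally touch."""
    return [(user, folder) for user, folder in accesses
            if folder not in baseline.get(user, set())]

for user, folder in flag_unusual(todays_accesses, baseline):
    print(f"FLAG: {user} accessed {folder}")
```

A production tool would build the baseline automatically from historical logs and score deviations rather than using a hard set-membership test, but the principle is the same: model what is normal, then flag departures from it.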
Wired magazine reports that recent AI developments have led to smarter autonomous security systems that are capable of learning on their own.
“With the right AI software, computers can now keep up with big data that cybersecurity systems produce,” the magazine reports. “AI algorithms are very good at identifying outliers from normal patterns.
“Instead of looking for matches with specific signatures, a tactic that new age attacks have rendered useless, AI blends with cyber by first making a baseline of what is normal. From there, deep dives into abnormal events can be made to detect attacks.”
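The baseline-then-outlier approach Wired describes can be sketched in a few lines. The numbers below are illustrative, not real telemetry: normal operation establishes a statistical baseline, and events far from it are flagged for a deeper look, with no attack signature required.

```python
# Minimal sketch (illustrative numbers): build a baseline of "normal" from
# historical event counts, then flag counts that deviate sharply from it,
# instead of matching known attack signatures.
from statistics import mean, stdev

# Hypothetical daily login-failure counts observed during normal operation.
history = [12, 15, 11, 14, 13, 12, 16, 14]

mu, sigma = mean(history), stdev(history)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from baseline."""
    return abs(count - mu) > threshold * sigma

print(is_anomalous(13))   # a typical day → False
print(is_anomalous(90))   # a sudden spike worth a deep dive → True
```

Real systems replace this single z-score with learned models over many signals, but the structure is the one the article describes: first model normal, then investigate the abnormal.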
Duquette says that as AI and machine learning become more entrenched, the password will become obsolete, replaced by log-in methods like facial or fingerprint scans. Biometric authentication, a security process that verifies a person’s unique biological characteristics through retina scans, iris recognition or other forms of identification, is already becoming common, he says.
“Passwords are going to die,” Duquette says. “We have to get past the idea of just using one method to authenticate ourselves.”
Many of these new programs rely on artificial intelligence and machine learning, he says, allowing users to better understand the data on these systems while identifying any weaknesses and security flaws.
“The software tools we use are constantly evolving, pointing out things we would not have been able to detect otherwise,” Duquette says. “From a cybersecurity point of view, AI allows us to quickly process mountains of information, to find patterns and events outside the norm.”