
Director Wray’s Remarks to the Atlanta Commerce and Press Clubs — FBI

Thanks, Walter. And my thanks to the Atlanta Commerce Club and the Press Club for having me this afternoon.

It’s great to look out and see so many old friends. I still think of Atlanta as home. This is where my career in law—and, a few years later, law enforcement—really began.

And it’s an honor to be here with such a forward-leaning group—people who keep Atlanta’s economy thriving, and its public informed and engaged.

Today, I want to talk about a couple of topics that are top of mind at the Bureau, and for the public and partners we do this work for.

First, violent crime—and what we and our partners are doing about it, here in Georgia and elsewhere.

And, then, I’m going to shift gears on you and talk technology—artificial intelligence and how, at the FBI, we’re focusing on the fast-changing frontier of what’s possible.

But the common thread is adaptation: For decades, the FBI has adapted to new technology and threats across our programs—including countering violent crime—and that adaptation remains a vital part of our mission today.

Violent Crime

I want to start by sharing a little bit about some of the conversations I had earlier today with chiefs and sheriffs from departments all across the state of Georgia.

Their biggest concern is the same one I hear almost weekly when I speak with their counterparts in all 50 states, in communities large and small—and that’s the alarming level of violent crime. And our nationwide statistics from the last couple of years confirm the violent crime threat in this country is real and not letting up.

People deserve to be able to go to work, meet with friends, go shopping—in other words, live their daily lives—without fear. And when that sense of safety is undermined, everyone loses.

Whether it’s gangs terrorizing communities, robbery crews graduating from carjackings to even worse violence, or neighborhoods located along key drug-trafficking routes getting inundated with crime, communities in every corner of this country are affected.

That’s unacceptable, which is why we’re working shoulder-to-shoulder with our state and local partners to combat that appalling trend.

Here, in Georgia, there are examples all across the state of the impact we can have when we work together.

Spurred by the shooting death of an 8-year-old child in January, our Safe Streets Task Force teamed up with the Richmond County Sheriff’s Office and the local DA to disrupt and dismantle gangs that had terrorized communities in and around Augusta.

We aggressively targeted the most violent offenders on an unprecedented scale, making 119 felony arrests in just three months.

Another operation against the “Ghost Face Gangsters” down around Brunswick exposed a massive drug-trafficking ring led by a white supremacist street gang. That collaborative investigation resulted in what is believed to be the largest-ever indictment in Southern District of Georgia history, with federal charges against 76 subjects and state charges against more than three dozen others.

Closer to home, we’re wrapping up a years-long investigation that disrupted a major drug-trafficking route that was moving huge quantities of drugs from Colombia; north through Mexico; and, ultimately, landing right here, in Atlanta. We’ve arrested and charged individuals in Georgia, Florida, Tennessee, and Texas; and we’re in the process of extraditing two of the main targets from Mexico to face justice here in the United States. Along the way, we’ve seized millions of dollars, taken dozens of firearms out of the hands of the drug traffickers, and intercepted loads of narcotics that were headed for the streets of Atlanta.

But it’s not just the major investigations—our agents and task-force officers are also focused on the violence against everyday people going about their everyday lives.

Just recently, for instance, we took down a robbery crew that had pistol-whipped and robbed one of their victims at an ATM, carjacked another, and held up two armored trucks by putting rifles to the heads of the couriers.

Atlanta is not just a hub for business. I’m afraid it also seems to be a destination for violent fugitives who commit crimes out of state. So, I’m particularly encouraged to see that our Atlanta Metropolitan Major Offenders (or AMMO) Task Force has been reinvigorated.

Through AMMO, we’ve done a lot of great work with Atlanta PD and other departments in the area to get some of the most dangerous fugitives off the streets. In fact, the task force recently completed a months-long investigation into five offenders from New Jersey, who had posed as FBI agents and shot a Bergen County resident during a home invasion. That investigation resulted in charges against all five fugitives for attempted murder, kidnapping, and robbery. And it’s only a small sampling of what the AMMO Task Force is doing for Atlanta-area communities.

That’s all just here in Georgia—we’re working with our brothers and sisters in state and local law enforcement all across the country to maximize our impact. The FBI now leads more than 300 violent crime task forces made up of over 3,000 task force officers, working shoulder-to-shoulder with our agents, analysts, and professionals. And each of those TFOs represents an officer, a deputy, or an investigator that a local police chief, sheriff, or agency head was willing to send our way—not because they didn’t have enough work to do at their own department or office, but because they saw the tremendous value that our FBI-led task forces bring.

And I can report that our agents and TFOs have been busy.

Together, in 2022, we arrested more than 20,000 violent criminals and child predators—an average of almost 60 per day, every day.

We also seized more than 9,600 firearms from those violent offenders, cut into the capabilities of 3,500 gangs and violent criminal enterprises, and completely dismantled 370 more. And we have no plans to let up any time soon.

Transition to AI

When it comes to tackling the violent-crime problem, one of the FBI’s strengths has always been finding new and creative approaches to solving crimes.

In fact, in his first report to Congress on the FBI after its founding in 1908, Attorney General Bonaparte described the FBI itself as “an innovation.” And, for more than a century since then, we’ve taken it upon ourselves to live up to that standard, again and again.

We’ve built and developed tools in key areas that help us accomplish our mission to keep people safe—things like biometrics, DNA research, facial recognition, and voice recognition; digital forensics teams to handle technically complex cases; cellphone data analysis to uncover criminals’ movements and locate missing persons; and much more.

These were all innovations when they were created, and without them, we couldn’t protect the American people the way we do now.

So I want to take this opportunity to talk about the newest technology the world is grappling with on a massive scale: AI, or artificial intelligence.

Who would have thought, even just a few years ago, that we’d all be having conversations about AI around the dinner table?

It feels a bit like science fiction—and that’s because it used to be, though I can assure you it’s not a new topic at the FBI.

As we all know, today, AI is quickly making world-changing breakthroughs in everything from astronomy to agriculture, and energy to the environment. It's solving problems as varied as predicting how amino acids fold into proteins, the basic building blocks of life; writing term papers for college students; and helping catch the students who cheat that way.

And, of course, in response to all of this change and technological advancement, our lawmakers and leaders in all industries—from the medical to the creative to the military—are trying to make order from the chaos, to make sure we map a clear path across this new frontier, instead of letting circumstances—or, as we’re already seeing, foreign governments—make decisions for us.

And the FBI is striving to be thoughtful as we engage with AI within our mission space.

Our approach to AI fits into three different buckets.

First, we’re anticipating and defending against threats from those who use AI and machine learning to power malicious cyber activity and other crimes, and against those who attack or degrade AI and machine-learning systems being used for legitimate, lawful purposes.

Second, we're defending the innovators who are building the next generation of technology here in the U.S. from those who would steal it. You'll see this bucket ties back to the first, since all too often our adversaries are stealing our AI to turn it against us.

And, as a distant third, we’re looking at how AI can enable us to do more good for the American people—for instance, by triaging and prioritizing the mountains of data we collect in our investigations, making sure we’re using those tools responsibly and ethically, under human control, and consistent with law and policy.

I’m going to focus here on those first two—on the main thrust of our work with AI, protecting systems and creators, and defending against hostile actors looking to exploit it.

AI as a Tool and a Target of Cybercrime

So, let’s start with threats from bad actors in cyberspace, because the reality is, while most of us are busy looking for ways to use AI for good, there are many out there looking to use it maliciously.

Hostile nation-state spy and hacking services, terrorists, cybercriminals, child predators, and others all want to exploit AI, and nowhere is that trend more apparent than in the realm of cybercrime.

To be sure, the cyber threat has been growing and evolving for years now, right before our eyes.

Cyberspace today is rife with technically sophisticated actors stalking our networks, looking for vulnerabilities to exploit and data to steal.

Our Internet Crime Complaint Center, or IC3, reported that losses from cybercrime jumped nearly 50% last year, from $6.9 billion to $10.3 billion.

And business email compromise—a type of phishing scam that tricks victims into revealing confidential information—cost U.S. businesses over $2.4 billion last year alone.

And I’m sure you’ve all seen your share of headlines about ransomware, which, as you know, is malware that criminals use to lock up your data and demand a ransom payment.

Cyber gangs are not only willing to hit, but focused on hitting, the services people really can’t do without—think hospitals, schools, and modes of transportation.

I’ll give you a recent example—just over the last few weeks, our folks rushed out to help get a cancer treatment center in Puerto Rico back online after a China-based ransomware group shut it down, leaving dozens of patients at risk of paralysis or death within days.

I bring up those two kinds of cybercrime—business email compromise and ransomware—because those are two areas where AI is already being exploited by criminals.

Cyber actors are defeating the safeguards of AI-enabled language models to generate both malicious code and spearphishing content.

What happens, for example, when I ask ChatGPT to craft a phishing email?

It immediately responds with “Sorry, no can do.”

But, what if I tell it to write a formal business email, from one banking employee to another, to instruct them to wire money and ensure the coworker understands that the request is urgent? Sounds like a phishing email, doesn’t it? Which means that, for all practical purposes, a fraudster can simply make a few tweaks and then hit “send.”

Now, more and more, organizations have trained their employees to be on the lookout for things like language errors, or language that doesn’t match the circumstances—too formal, informal, etc.

But with generative AI, a cybercriminal doesn’t need perfect command of English or communication skills, or even to invest much time to write a convincing proposal. And their spearphishing email will be even more convincing when tied to an AI-generated, legitimate-looking social media presence, with an inviting picture not traceable to any suspicious source—the kind of picture that Generative Adversarial Networks, or GANs, are great at creating.

GANs pair a generator, which creates content like an image of a face, with a discriminator that tries to detect fakes and, in doing so, helps the generator up its game. With the training from that push and pull, the GAN's fake images can become very hard to distinguish from real ones, which is why the Chinese and Russian governments have already been using them for years. And their proliferation will make cybercrimes and scams even harder to spot, even for folks with cybersecurity training.
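For readers who want to see the generator-versus-discriminator "push and pull" concretely, here is a minimal toy sketch. It is purely illustrative and not any real system: a one-dimensional "generator" (an affine map of random noise) tries to mimic a Gaussian data distribution, while a logistic "discriminator" learns to tell real samples from fakes; all parameter names, learning rates, and distributions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 0.5). Generator: fake = w*z + b on noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(a*x + c), trained to output 1 on real, 0 on fake.
w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 0.5, size=64)
    z = rng.normal(size=64)
    fake = w * z + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: minimize -log D(fake), i.e. push fakes to look "real".
    d_fake = sigmoid(a * fake + c)
    dg = -(1 - d_fake) * a            # dLoss/dfake for each sample
    w -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

print(f"mean of last fake batch: {fake.mean():.2f} (real data mean: 4.0)")
```

The same adversarial recipe, scaled up to deep networks trained on millions of face photos, is what produces the untraceable profile pictures described above.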

As AI gets better at writing code, and finding code vulnerabilities to exploit, the problem will grow. Those capabilities are already able to make a less-sophisticated hacker more effective by writing code, and finding weaknesses they couldn’t on their own. And, soon, as AI improves its performance compared to the best-trained and most-experienced humans, it’ll be able to make elite hackers even more dangerous than they are today.

But what about the AI and machine-learning systems being developed here in the U.S. for legitimate uses?

Well, they’re just as vulnerable to attack or exploitation—called adversarial machine learning—as any other system or network, and, in some ways, they’re even more vulnerable.

Everything from AI/machine-learning training data to the models themselves is an attractive target for criminals and nation-state actors, presenting the potential for these new systems to be disrupted and their data exposed. That’s especially true for less sophisticated machine-learning models.
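To make "adversarial machine learning" concrete, here is a hedged toy sketch of one classic attack, the fast gradient sign method; it is my illustration, not an FBI tool or a real deployed system. Against a simple logistic-regression "classifier" with made-up weights, nudging each input feature a small step in the direction that increases the model's loss is enough to reduce the model's confidence in the correct label.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy linear classifier with fixed (pretend "trained") weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability the model assigns to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast gradient sign method: for cross-entropy loss, the gradient of
    the loss w.r.t. the input of a logistic model is (p - y) * w."""
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([2.0, -1.0, 0.5])     # a sample the model confidently calls class 1
x_adv = fgsm(x, y=1.0, eps=0.25)   # small, bounded per-feature perturbation

print(predict(x), predict(x_adv))  # confidence in class 1 drops after the attack
```

Against deep models, the same idea is what lets a few pixels' worth of perturbation (or a strip of tape on a road sign) flip a prediction while looking innocuous to a human.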

Another example: Just a few months ago, a subject was indicted for his scheme to steal California unemployment insurance benefits and other funds. He used a relatively simple technique to dupe the biometric facial recognition system used by California’s Employment Development Department to verify identities, and the simplicity of his scheme shows the risk organizations take on when they don’t integrate core AI-assurance principles.

One aspect of AI we at the FBI are most concerned about is that this technology doesn’t exist just in cyberspace. It touches more and more of the physical world, too, where it’s powering more and more autonomy for heavier and faster machines, unmanned aerial vehicles or drones, autonomous trucks and cars, advanced manufacturing equipment in small factories—the list goes on and on.

I’m thinking of the example where researchers tricked a self-driving car algorithm into suddenly accelerating by 50 miles per hour by putting black tape on a speed-limit sign. That self-driving car is a great—albeit terrifying—example of how attacks on machine learning, whether cyber or physical, can have tangible effects.

Another example: a bad actor can take advantage of the opacity of machine-learning models to conduct untraceable searches about topics like bombmaking, or criminals can use AI voice impersonations to conduct virtual kidnappings and scam older adults into thinking their loved ones are in danger. In a virtual kidnapping, the criminal usually disables a person's phone and then calls one of their loved ones—often a parent or grandparent—to demand a ransom to release the supposed "victim" from what is actually a fake kidnapping. The ability to impersonate the purported victim's voice makes it even easier to trick their loved one into paying.

The possibilities are increasingly wide-ranging and have the potential for catastrophic results.

AI as a Target of Foreign Adversaries

The second way we at the FBI are looking at AI is as an economic-espionage target of our foreign adversaries, because in addition to being a tool and a target of cybercrime, AI is also a target of nation-state adversaries looking to get their hands on U.S. technology and undercut U.S. businesses. And it’s easy to see why.

Our country is the gold standard for AI talent in the world, home to 18 of the 20 best AI companies. And that makes our AI/machine-learning sector a very attractive target. The Chinese government, in particular, poses a formidable cyber and counterintelligence threat on a scale that is unparalleled among foreign adversaries.

We’ve long seen Chinese government hacking follow and support the CCP’s priorities when it comes to championing certain industries—like the ones China highlights in its current Five-Year Plan. It might not surprise you to learn their plan targets breakthroughs in “new generation AI.”

Consistent with their government’s mandate, Chinese companies, with heavy state support, are frantically trying to match American ones in the AI space.

Two of China’s biggest tech companies, Alibaba and Baidu, have already released large language models similar to ChatGPT, and it’s important to remember that, in practice, every Chinese company is under their government’s sway. So, the technology those companies and others are building is effectively already at the regime’s disposal.

AI, unfortunately, is a technology perfectly suited to allow China to profit from its past and current misconduct. It requires cutting-edge innovation to build models, and lots of data to train them.

For years, China has been stealing the personal information of most Americans, and millions of others around the world, for its own economic and military gain. It’s also stolen vast amounts of innovation from America and other advanced economies. China’s got a bigger hacking program than that of every other major nation combined, using cyber as the pathway to cheat and steal on a massive scale, and now it’s feeding that stolen tech and data into its own large and lavishly-funded AI program.

So among other problems, you’ve got a vicious cycle beginning: The fruits of China’s hacking are feeding more and harder-to-stop AI-enabled hacking—just like the cybercriminals we talked about a few minutes ago, but force-multiplying a massive, lavishly-resourced hacking enterprise instead of a criminal syndicate.

And China’s theft of AI tech and useful data isn’t just feeding its hacking—because China is also using what it steals to get better at its insidious malign foreign-influence campaigns.

Through these campaigns, China—and other foreign adversaries, like Russia—seek to undermine open and honest public discourse by creating fake accounts and posting content intended to sow discord and distrust in our society, like we saw with the Chinese Ministry of Public Security's 912 Special Project Working Group.

Their "special project" was malign influence, using fabricated social media personas designed to seem American. We identified the threat, mitigated it, and charged 34 of their officers a few months ago. But stopping that kind of campaign is only going to get harder, because generative AI—the technology that generates text, images, audio, and video, including from the GANs we talked about a minute ago—along with large language models and other tools will enable these actors to reach broader audiences more convincingly, faster, and with less work on their part.

Deepfakes are the most well-known example of this. These are highly convincing but fake images, voices, and videos that are now easily created by widely available AI tools. Years ago, to do that well required enormous investment and talent. Now, almost anyone can do it.

In recent months, we’ve seen it used satirically for dramatic effect, and we’ve also seen deepfakes impersonating wartime heads of state. And, just last month, we saw an AI-generated image of an explosion at the Pentagon go viral, causing the stock market to take a hit before anyone realized the image was fake.

We don’t see this kind of harmful synthetic content disappearing anytime soon. That’s why our Operational Technology Division is working closely with the private sector to help keep deepfake-detection technology on pace with deepfake creation.


Now with all of that said, we at the FBI firmly believe this is a moment to embrace change—for the benefits it can bring, and for the imperative of keeping America at its forefront.

And frankly, there’s no more important partner in our strategy than all of you and your peers throughout the country.

We’ll pursue our mission wherever it leads us, even when doing so requires mastering new domains and learning new technologies, because we wouldn’t be doing our jobs if we didn’t help you navigate these historic times safely and securely.

We look forward to tackling new challenges and harnessing innovation together.

Thank you.
