
AI Bots Now Outnumber Humans Online And It’s Getting Worse


The internet has officially entered a new phase: bots now rule the web. According to the latest report by Imperva, automated bots accounted for 51% of all web traffic in 2024—surpassing human users. What’s even more alarming is that the majority of this traffic is hostile. Bad bots make up 37% of all activity online, outpacing the 14% of good bots that help keep systems running smoothly.

Behind this shift is a familiar culprit: artificial intelligence. AI is not only boosting the volume of bot activity but also raising its sophistication. Cybercriminals are using AI to streamline attacks, making them cheaper to launch, harder to detect, and nearly impossible to stop once they’re in motion.

Tim Chang, head of application security at Thales (which acquired Imperva in 2023), warns that we’re only seeing the beginning. Simple bot attacks are currently exploding in number because AI makes them easy to produce—even for attackers with little technical skill. As these criminals become more experienced with AI tools, the nature of bot attacks is set to become far more advanced, evasive, and relentless.

A Surge in Sophisticated API Attacks

Imperva’s annual Bad Bot Report paints a clear picture of where this trend is heading. A massive 44% of advanced bot attacks now target APIs, the connective tissue of modern applications. Attackers are exploiting API vulnerabilities through a combination of weak rate-limiting, poor authentication, and simple misconfigurations.
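The weak rate-limiting the report calls out is the easiest of the three gaps to illustrate. The sketch below is a minimal per-client token-bucket limiter of the kind an API gateway might apply; it is an illustrative example, not code from Imperva or any specific product, and the client identifier and parameters are assumptions.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens/sec up to `capacity`.

    Without a check like this, a bot can hammer an endpoint as fast as it
    can send requests; with it, bursts beyond `capacity` are rejected.
    """
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # tokens left per client
        self.last = defaultdict(time.monotonic)       # last-seen time per client

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False

# A burst of 20 rapid requests from one (hypothetical) client IP:
# only the first `capacity` get through, the rest are throttled.
bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow("203.0.113.7") for _ in range(20)]
```

In the burst above, `sum(results)` is 5: the bucket drains after five requests and the refill rate is far too slow to keep up with a tight loop.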

The most common types of API bot attacks include data scraping (31%), payment fraud (26%), account takeovers (12%), and scalping (11%). Each of these relies on automated access to sensitive endpoints—something AI-powered bots are increasingly adept at navigating.

Account takeover attacks alone have surged 40% in the last year. These bots not only brute-force passwords but also use AI to learn and bypass new defenses, adjusting their tactics in real time.
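A first line of defense against the brute-forcing described above is simply counting failed logins per account over a sliding window. The sketch below is a minimal, illustrative monitor under assumed thresholds; real account-takeover defenses also track IPs, devices, and behavioral signals.

```python
import time
from collections import defaultdict, deque

class FailedLoginMonitor:
    """Flags an account once failed logins exceed `threshold` within `window` seconds."""
    def __init__(self, threshold: int = 5, window: float = 60.0):
        self.threshold = threshold
        self.window = window
        self.failures = defaultdict(deque)  # account -> timestamps of failures

    def record_failure(self, account: str, now: float = None) -> bool:
        """Record one failed attempt; return True if the account should be locked."""
        now = time.monotonic() if now is None else now
        q = self.failures[account]
        q.append(now)
        # Drop attempts that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

# Five failed attempts in five seconds against a hypothetical account:
# the fifth one crosses the threshold and triggers a lock.
monitor = FailedLoginMonitor(threshold=5, window=60.0)
hits = [monitor.record_failure("alice@example.com", now=t) for t in range(5)]
```

The limitation is exactly what the article warns about: bots that adapt in real time spread attempts across many accounts and IPs ("low and slow" credential stuffing), staying under any single per-account threshold.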

Malicious AI Bots: Disguised and Dangerous

Among the AI-powered bot networks causing the most disruption are ByteSpider Bot, AppleBot, Claude Bot, and ChatGPT User Bot. ByteSpider alone is responsible for more than half of all AI-assisted bot attacks. Its success is due to clever disguise—posing as the legitimate ByteDance web crawler that scrapes content for training TikTok’s LLMs.

This type of impersonation highlights a growing problem: many security teams hesitate to block web crawlers outright for fear of harming search rankings or cutting off useful bots. Cybercriminals exploit this hesitation, camouflaging their malicious bots to sneak past defenses designed to whitelist friendly traffic.
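One standard countermeasure to crawler impersonation is a reverse-then-forward DNS check: resolve the client IP to a hostname, confirm the hostname belongs to the crawler's domain, then resolve that hostname back and confirm it matches the original IP. A spoofed User-Agent string fails this test because the attacker does not control the crawler's DNS. The sketch below is illustrative; the IPs, hostnames, and domain suffix in the stubs are assumptions, not verified crawler data.

```python
import socket

def is_genuine_crawler(ip, allowed_suffixes,
                       reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                       forward=lambda host: socket.gethostbyname(host)):
    """Reverse-then-forward DNS check for a self-identified crawler.

    The IP's PTR record must end in an expected domain, and that hostname
    must resolve back to the same IP. Resolver functions are injectable
    so the logic can be exercised without live DNS.
    """
    try:
        host = reverse(ip)
        if not host.endswith(allowed_suffixes):
            return False
        return forward(host) == ip
    except (OSError, KeyError):
        return False

# Stub resolvers so the example runs offline (all values hypothetical).
fake_ptr = {"66.249.66.1": "crawl-66-249-66-1.googlebot.com",
            "198.51.100.9": "bot.evil.example"}
fake_a = {"crawl-66-249-66-1.googlebot.com": "66.249.66.1"}

genuine = is_genuine_crawler("66.249.66.1", (".googlebot.com",),
                             reverse=fake_ptr.__getitem__,
                             forward=fake_a.__getitem__)
spoofed = is_genuine_crawler("198.51.100.9", (".googlebot.com",),
                             reverse=fake_ptr.__getitem__,
                             forward=fake_a.__getitem__)
```

Here `genuine` is `True` and `spoofed` is `False`: the second client claims to be a crawler, but its PTR record points outside the allowed domain. Checks like this let teams admit verified crawlers without blanket-allowlisting anything that merely claims to be one.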

AI Is Changing the Game for Bot Operators

The rise of AI has given bot operators a serious edge. These actors no longer need deep technical knowledge to execute high-impact attacks. With AI, they can automate code generation, test evasion strategies, and refine their operations with precision. Essentially, AI transforms them into “zero-knowledge threat actors” capable of launching devastating campaigns without extensive experience.

Imperva’s data shows just how massive this trend has become: in 2024 alone, it blocked 13 trillion bot requests. Every day, around 2 million AI-enhanced bot attacks are detected—ranging from basic credential stuffing to highly advanced, polymorphic attacks that mutate to avoid detection.

As Tim Chang puts it, the future will bring a new wave of smarter, stealthier bots. They’ll evolve faster than our defenses unless organizations take action now.
