AI Poses Threat to Security Tech Distinguishing Humans from Chatbots
CAPTCHA technology faces a fundamental crisis. Having stood at the forefront of web security for more than 15 years, it is losing its effectiveness as artificial intelligence (AI) advances, forcing the industry to find new ways to discern human 'intent' and 'behavior.'
Recently, during a university class, some students hit a roadblock while installing a particular program: they were stopped at the user registration stage. Despite clearly being human, a quiz asking "Are you human?" classified them as robots. The incident was laughed off at the time, but it prompts a recurring question: how much longer will reCAPTCHA or Turnstile remain meaningful? We are now watching AI agents, acting on user instructions, face tests that demand they prove they are not AI. This is no longer just a comedic scenario but reality, and more importantly, these agents are now passing the tests and gaining entry.
Signs of technological and philosophical collapse were already visible in September 2024. In a paper titled 'Breaking reCAPTCHAv2,' researchers at ETH Zurich reported bypassing Google's reCAPTCHA v2 with a 100% success rate using the YOLO image object detection model, far surpassing the 68-71% success rates of prior research. More notably, they found that reCAPTCHA v2 relies more heavily on cookies and browser history than on the image recognition test itself: while users believe they are solving a puzzle, they are largely undergoing a browser reputation check. A year later, in October 2025, bot detection research firm Roundtable released test results showing that general-purpose AI agents such as Claude Sonnet 4.5 (60%), Gemini 2.5 Pro (56%), and GPT-5 (28%) solved reCAPTCHA v2 challenges at significant rates, demonstrating that even AI tools not specifically trained for CAPTCHA bypass can circumvent the technology.
Meanwhile, a mature market already exists in which APIs capable of solving most CAPTCHAs, including reCAPTCHA, Turnstile, and hCaptcha, are openly traded for a few hundred won per 1,000 images. One major research paper bluntly assessed reCAPTCHA v2 as having "immense costs and security close to zero."
A problem more fundamental than broken CAPTCHAs is that the binary question 'bot or human?' no longer explains the landscape of web traffic in 2026. According to an annual internet report released in December 2025, automated (non-AI) bots accounted for 47.9% of total HTML request traffic, AI bots for 4.2%, and human-generated traffic for only 43.5%. On certain days, non-AI bot traffic alone exceeded human traffic. The volume of AI 'user behavior' crawling, in which agents search the web in real time to answer user queries, grew 15-fold over the course of 2025, and traffic from Large Language Model (LLM) crawlers also rose rapidly. All of this signals a surge of new actors that are difficult to classify as conventional 'bots.'
Within this complex traffic mix, the 'bot/human' classification fails to distinguish between useful bots such as search engine crawlers, malicious AI scrapers stealing brand content, legitimate users, and humans attacking with leaked information. The view is gaining traction among experts that web gatekeepers must now ask, 'What is the intent of this traffic, and is its behavior compatible with my service?' rather than 'Is it a bot or a human?'
In response to these demands, technical directions such as the Privacy Pass protocol are being proposed. Rather than requiring a user's identity, the protocol cryptographically attests that the user has no record of past problematic behavior. Furthermore, major companies like Google and OpenAI are signaling their intent not to hide by adopting 'web bot authentication,' attaching cryptographic signatures to their crawlers' requests using the HTTP message signatures standard (RFC 9421).
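To illustrate the mechanism, the sketch below builds an RFC 9421-style signature base over a request's method, authority, and path, and emits the `Signature-Input` and `Signature` headers. For brevity it uses the symmetric `hmac-sha256` algorithm defined in the RFC; real crawler authentication uses asymmetric signatures (e.g., Ed25519) with public keys published by the crawler's operator, so any site can verify without a shared secret. The key ID and covered components here are illustrative, not any vendor's actual configuration.

```python
import base64
import hashlib
import hmac


def signature_base(covered: dict, params: str) -> str:
    """Build an RFC 9421 signature base: one line per covered
    component, followed by the "@signature-params" line."""
    lines = [f'"{name}": {value}' for name, value in covered.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)


def sign_request(key: bytes, keyid: str, method: str,
                 authority: str, path: str, created: int) -> dict:
    """Return Signature-Input / Signature headers for a request,
    covering the @method, @authority, and @path derived components."""
    covered = {"@method": method, "@authority": authority, "@path": path}
    names = " ".join(f'"{n}"' for n in covered)
    params = f'({names});created={created};keyid="{keyid}";alg="hmac-sha256"'
    mac = hmac.new(key, signature_base(covered, params).encode(), hashlib.sha256)
    return {
        "Signature-Input": f"sig1={params}",
        "Signature": f"sig1=:{base64.b64encode(mac.digest()).decode()}:",
    }


def verify_request(key: bytes, headers: dict, method: str,
                   authority: str, path: str) -> bool:
    """Recompute the signature base from the received signature
    parameters and compare digests in constant time."""
    params = headers["Signature-Input"].partition("=")[2]
    covered = {"@method": method, "@authority": authority, "@path": path}
    expected = hmac.new(key, signature_base(covered, params).encode(),
                        hashlib.sha256).digest()
    received = base64.b64decode(headers["Signature"].split(":")[1])
    return hmac.compare_digest(expected, received)
```

A receiving site would verify such headers against the crawler's published key before deciding how to treat the request, which is exactly the shift from "prove you are human" to "prove who you are and what you do."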
This goes beyond a change in security technology: it shakes the foundation of web business models premised on users entering anonymously and leaving after viewing ads, and it demands a redesign of revenue structures. If web security technologies continue to be neutralized, companies will likely face two choices: require logins for all content, or sell data in bulk to AI providers. Either scenario signifies the end of the open web. Korean companies must assess whether their bot management policies remain at the 'IP blacklist + CAPTCHA' level, and how asymmetrically their content is being consumed by AI crawlers. Some analyses indicate that, as of the second half of 2025, major AI platforms' crawl-to-referral ratios ranged from 25,000:1 to 100,000:1, that is, tens of thousands of pages crawled for every visitor referred back.