"Even conservatively, I believe that in four years AI will be able to handle most human jobs. As AI takes over human tasks, new forms of information leaks and security incidents are inevitable."
Dan Karpathi, head of the AI center and vice president of Check Point Software Technologies (hereafter Check Point), said this in an interview with ChosunBiz at the Westin Josun Hotel in Sogong-dong, Seoul, on Aug. 27. He explained, "Over the past six years, the length of tasks AI can perform in place of humans has doubled every six months, and we are seeing a significant rise in the level of cybersecurity threats caused by AI."
Check Point, an Israel-based global cybersecurity company, has been a powerhouse in the security industry since its founding in 1993. It currently handles security for more than 7,000 corporations and institutions across 88 countries and employs more than 3,500 security experts. The company was the first to develop firewall software, and it is now hard to find a global company that does not use Check Point firewalls.
Karpathi, who has about 25 years of experience in the cybersecurity industry, said AI developments over the past few years have completely changed the security landscape. He said, "Now AI with systems similar to the human brain is emerging. In the past, AI attempted to replace tasks performed by humans, but in the era of AI agents, AI will take responsibility for and carry out all those tasks itself."
However, the more humans share sensitive data with AI, the more new forms of incidents and attacks emerge. For example, with MCP (Model Context Protocol) connecting AI to services like Gmail, Calendar and YouTube, AI can act on its own and cause security problems such as data leaks.
Karpathi, who joined Check Point in 2019, played a key role in building the company's AI center. He emphasized that Check Point's AI strategy is "controllable security." To that end, Check Point prioritizes "preventive blocking" rather than "post-incident response."
He said, "Through GenAI Protect, we provide means for corporations to monitor in real time how they use AI," and added, "We set up a kind of guardrail to distinguish what is allowable and what must never be allowed during AI implementation, and we even have features to block disallowed actions when necessary." He went on to say, "ThreatCloud AI responds in real time based on threat information collected globally. For example, if an AI agent is attacked in Seoul, the same attack occurring on another continent can be blocked in just two seconds."
Karpathi emphasized that to create a safe AI environment, ultimately a hierarchical system must be established with humans at the top managing and supervising. He predicted, "AI will take on direct management and supervisory roles beyond simple code writing. If so, at the lowest level of future work many AIs will be deployed like 'employees,' and above them will be 'AI managers' who oversee them."
He continued, "Because the scale is so large, it will be difficult for people to directly supervise all AIs. That's why another level of 'AI managers' who supervise those AI managers will emerge, with a human ultimately sitting above them all."
Karpathi made clear his stance on global AI regulation: "regulation does not hinder AI innovation." He said, "We have already experienced explosive AI innovation over the past two to three years in a mostly unregulated environment, and now we can clearly see that AI poses not just potential risks but actual ones." He added, "Now regulation is necessary, and it is time to comply with it. Regulation will be the foundation for safe and sustainable innovation rather than a barrier to innovation."