I’m in Buenos Aires right now at the DeFi Security Summit, where I joined a panel on AI and Web3 security alongside @blocksec (EigenLabs), @ChanniGreenwall (Olympix), @jack__sanford (Sherlock), and @nicowaisman (Xbow). What not long ago felt like a “distant future” is now being discussed as a very concrete roadmap for the coming years: both defensive and offensive AI-powered technologies are going to advance rapidly, especially in vulnerability discovery. Everything auditors do today manually and through a patchwork of tools is gradually being bundled into more powerful and accessible automated stacks.

It’s important to look at reality soberly: the hope that we can reliably “fence off” models from undesirable use is illusory. Any sufficiently capable model is, by definition, dual-use. Providers will add restrictions, filters, and policies, but none of that is a fundamental barrier. Anyone with motivation and resources will spin up a self-hosted model, assemble their own agentic stack, and use the same technologies without worrying about ToS. You can’t design security under the assumption that attackers won’t have access to these tools.

The economics of attacks are also far from straightforward. In the short term, attacks will get cheaper: more automation, more “wide-area bombardment,” more exhaustive exploration of states and configurations without humans in the loop. But over the long run, as defensive practices and tools catch up, successful attacks will become more expensive: coverage will improve, trivial bugs will disappear, and effective breaches will require serious infrastructure, preparation, and expertise. This will shift the balance toward fewer incidents, but the ones that do happen will be far more complex and costly.

My main takeaway: we’ll have to revisit the entire security lifecycle, not just “cosmetically improve” audits. How we describe and understand risk profiles, how threat models evolve with AI in the picture, how we structure development, reviews, testing, deployment, on-chain monitoring, incident response, and post-mortems: all of this will need to be rethought. Traditional audits will remain a key piece, but they can no longer be the sole center of gravity. The reality is that AI amplifies defenders and attackers alike, and Web3 security will have to adapt its entire operating model to this new arms race.