Decentralized finance teams have been withdrawing from public Discord channels after a surge of phishing, impersonation and social-engineering attacks made the platform a persistent security liability.
The shift reflects a reassessment of how protocols provide user support and preserve trust. Senior figures from multiple projects described public Discord servers as low-signal, high-risk environments that no longer justify the exposure to targeted fraud.
Industry reporting puts the immediate cost into sharp relief: in May 2025, crypto users and DeFi protocols lost about $302 million to hacks and scams tied in part to social-engineering campaigns on community platforms.
Why Discord became untenable for DeFi communities

Sources from within the ecosystem described a recurring pattern: scammers monitor public channels for users seeking help, impersonate staff or influencers, then move the conversation into direct messages to complete the phishing attempt. Morpho co‑founder Merlin Egalite said users were being phished “even while seeking legitimate help, despite heavy monitoring and safeguards.”

Another protocol leader, 0xngmi, the pseudonymous founder of DefiLlama, pointed to an operational blind spot: even when moderators issue instant bans, attackers can still DM users, widening the support attack surface. Richard Rodairos, a talent partner at Dragonfly, called public Discord servers “one of the lowest signal surfaces in the space,” arguing that their vulnerabilities justify removing them from critical support paths.

The platform’s design, optimised for engagement rather than verified interactions, compounded the problem. Attackers used cloned links, fake wallet applications and impostor support accounts to harvest seed phrases and credentials, in many cases draining wallets completely. The phenomenon was not limited to Discord; similar scams were reported on other community chat apps earlier in 2025.
Teams responded by locking or limiting public channels and by shifting to controlled support workflows. Several projects moved toward ticketed help desks, asynchronous documentation and tools that create verifiable interactions. Morpho tested Intercom-style ticketing and AI-assisted responses, and some protocols set servers to read-only or required stricter access controls to reduce impersonation vectors.
The aim was pragmatic: reduce the number of points where an attacker can intercept a distressed user and convert that exposure into theft. Protocols emphasised clearer documentation and proprietary support channels that can authenticate staff identity, instead of relying on open chat moderation alone.
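The idea of a channel that can authenticate staff identity can be sketched in a few lines. The snippet below is purely illustrative and assumes nothing about any specific protocol's tooling: it uses an HMAC tag over each official support message so a client-side helper can flag impostors. A real deployment would use public-key signatures (so users never hold a secret), but a keyed hash keeps the sketch self-contained.

```python
import hmac
import hashlib

# Hypothetical illustration only: the key name, functions, and message
# formats below are assumptions, not any protocol's real support API.
SUPPORT_KEY = b"example-shared-secret"  # in practice, never hard-coded

def tag_message(message: str, key: bytes = SUPPORT_KEY) -> str:
    """Return a hex HMAC-SHA256 tag for an official support message."""
    return hmac.new(key, message.encode("utf-8"), hashlib.sha256).hexdigest()

def is_official(message: str, tag: str, key: bytes = SUPPORT_KEY) -> bool:
    """Constant-time check that a message carries a valid staff tag."""
    return hmac.compare_digest(tag_message(message, key), tag)

official = "Please open a ticket at our help desk; staff never DM first."
tag = tag_message(official)

assert is_official(official, tag)          # genuine staff message verifies
assert not is_official("DM me your seed phrase to recover funds", tag)
```

The design point matches the article's argument: verification moves from human moderation (spotting impostors after the fact) to a mechanical check that an impostor cannot pass without the team's key.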
Investors and security teams will now watch whether ticketed support systems, stricter access controls and AI-assisted moderation measurably reduce social‑engineering losses. The effectiveness of those changes will be tested by subsequent reporting on scam-related losses and by whether community trust recovers as protocols move conversations off public servers.
