A parent recently discovered how vulnerable young users can be on Discord when automated support systems fail to respond to urgent security threats. Brady Frey's 13-year-old daughter had her account compromised after clicking a malicious link from someone impersonating Discord support staff. The attacker then attempted to extort banking information from the teenager and began targeting her friends list with similar social engineering tactics.
The incident exposed significant gaps in Discord's crisis response procedures. When Frey attempted to alert the platform to the security breach involving a minor, he encountered automated chatbots and support representatives who repeatedly closed his tickets with generic responses. The support team suggested he report the issue directly through the app, which was impossible since his daughter had been locked out of her account. Over eight days, Frey submitted multiple requests emphasizing that the situation involved a minor and posed ongoing risks to other young users on the platform, yet his pleas went unaddressed.
The case highlights a broader concern about age verification on social media platforms. Frey's daughter had created her account at age 12, technically below Discord's minimum age requirement of 13. Like many young users, she simply falsified her age during signup, a practice regulators have documented as commonplace across social platforms. This false age designation may have further complicated support efforts, as Discord's systems initially treated her as an adult user.
What made the situation particularly alarming was the absence of a formal process for parents to intervene when a minor's account is compromised. Frey was unable to help his daughter because the platform's support infrastructure didn't account for parental involvement in security emergencies affecting minors. Discord ultimately removed the attacker from the account only after external intervention, raising questions about the platform's commitment to protecting underage users and their peer networks from active threats.
The incident underscores the need for major communication platforms to establish clearer protocols for handling security emergencies involving minors, particularly when parents or guardians must advocate on their behalf.