Swedish AI‑coding startup Lovable has sparked a heated debate after a user on X claimed a massive data exposure: allegedly, every project created before November 2025 was viewable by anyone with a free account.
What happened?
On Monday, an X user named Impulsive posted that they could see other users' code, AI chat histories, and even customer data. The post listed employees from Nvidia, Microsoft, Uber and Spotify as affected.
Lovable first denied a breach, saying public projects are meant to be visible. A follow‑up statement admitted that a backend permission change in February had unintentionally re‑enabled chat access on public projects.
Company response
Lovable said it quickly reverted the change and that public visibility has been off by default on all plans since December. The company thanked the researchers who uncovered the issue.
Why it matters to developers
Security specialists say the incident is a textbook case of weak defaults and poor threat modeling.
- Design flaw, not hack: Data was exposed because of a settings error, not because an attacker broke in.
- Vibe coding danger: Tools that auto‑share code can leak sensitive info if defaults stay open.
- Impact on enterprises: Companies using AI‑assisted coding risk exposing proprietary logic, credentials, or client data.
Expert opinions
Tom Van de Wiele, founder of Hacker Minded, called the episode “another example of lacking secure defaults.” He warned that relying on users to know what’s public will inevitably fail.
Jake Moore, global cybersecurity advisor at ESET, said the event points to a bigger problem: “It isn’t a traditional breach, but it’s not harmless either.” He added that companies that argue semantics over impact usually skipped security from day one.
Broader AI security trend
Lovable’s slip follows two recent AI‑related leaks: Anthropic exposed nearly 2,000 files in March, and Vercel reported a third‑party tool compromise that gave unauthorized access to internal systems.
These incidents underline a growing consensus: AI‑assisted coding tools must embed strong privacy controls, not bolt them on later.
What developers can do now
- Review project visibility settings regularly.
- Use separate accounts for sensitive code.
- Back up code outside of AI platforms.
- Stay informed about each tool’s default sharing policies.
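One practical complement to the checklist above is scanning code for credentials before it ever reaches an AI platform. The sketch below is a minimal, illustrative example using only Python's standard library; the pattern names and regexes are assumptions covering a few common secret formats, not an exhaustive scanner (dedicated tools such as gitleaks or truffleHog go much further).

```python
import re

# Assumed illustrative patterns for a few common credential formats.
# A real deployment should use a dedicated secret scanner instead.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for likely secrets in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

if __name__ == "__main__":
    sample = 'API_KEY = "sk_live_0123456789abcdef0123"'
    for name, value in find_secrets(sample):
        print(f"{name}: {value}")
```

Running a check like this in a pre-commit hook catches the most obvious leaks before sharing defaults, public or not, ever come into play.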
As AI coding gains traction, the industry must shift from “cool feature” to “secure by design.”