Massive Data Leak From AI-Generated Apps

Cybersecurity firm RedAccess uncovered almost 380,000 publicly reachable assets tied to AI coding platforms such as Replit, Lovable, Base44 and Netlify. About 5,000 of those apps lacked basic security settings.

Sensitive Info Left Unprotected

Nearly 40% of the exposed apps stored data that should stay private. The leaks included:

  • Medical records and hospital schedules
  • Financial statements and shipping logs
  • Internal business strategies and chatbot logs with customer names

Many of the apps allowed anyone with a link to view the data without logging in.

Phishing Sites Ride the Trend

Researchers also found fake sites mimicking brands like Bank of America, FedEx, McDonald’s and Trader Joe’s. The sites were built with the same AI tools, showing how “vibe coding” can be misused.

Why This Happened

AI coding tools let users create software by typing simple prompts. The speed is appealing, but most creators lack cybersecurity training; they often skip privacy settings or assume the defaults are safe.

Industry Response

Platform leaders pushed back. Replit’s CEO Amjad Masad said users can switch apps from public to private at any time. Lovable and Base44 said security is the user’s responsibility.

What Experts Recommend

Security professionals urge companies to treat AI-built tools like any other software:

  • Audit AI-generated apps before launch and at regular intervals afterward
  • Enforce strict access controls and authentication
  • Provide training on privacy settings for developers

Without these steps, businesses risk exposing confidential data and facing compliance penalties.
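The second recommendation, strict access controls, usually means denying by default: a request succeeds only when permission is explicitly granted. A minimal sketch in Python, with hypothetical names rather than any real framework's API:

```python
from functools import wraps

# Hypothetical permission table: which users may read which resources.
PERMISSIONS = {("alice", "medical-records"): True}

class AccessDenied(Exception):
    pass

def require_access(resource: str):
    """Decorator that denies by default: a handler runs only when the
    (user, resource) pair is explicitly allowed in PERMISSIONS."""
    def decorator(handler):
        @wraps(handler)
        def guarded(user: str, *args, **kwargs):
            if not PERMISSIONS.get((user, resource), False):
                raise AccessDenied(f"{user} may not read {resource}")
            return handler(user, *args, **kwargs)
        return guarded
    return decorator

@require_access("medical-records")
def view_records(user: str) -> str:
    # Runs only for users with an explicit grant on "medical-records".
    return f"records for {user}"
```

The key design choice is that an absent entry means "deny", so a forgotten configuration line fails closed rather than open, the opposite of the public-by-default behavior the researchers found.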

Looking Ahead

As AI coding becomes standard in workplaces, the need for robust oversight will only grow. Companies that act now can avoid costly data breaches and keep customer trust intact.