fixmyvibe.codes

How We Fixed a £50K App Built Entirely with Cursor

6 min read By FixMyVibe Team
case-study cursor code-rescue

In early 2025, a founder came to us with a problem. He’d built his SaaS — a project management tool for creative agencies — entirely with Cursor over about six months. The app was impressive: clean UI, solid feature set, well-thought-out product. He’d raised seed funding off the back of a strong demo and onboarded his first 50 paying customers.

Then it started breaking.

(Note: This case study is a composite of real engagements. Identifying details have been anonymised.)

The Situation

At 50 users, the cracks were small and manageable. At 200 users, the support tickets were coming in daily. At 400 users, the founder was spending more time on fires than on product development.

The symptoms were varied:

  • Users occasionally getting 500 errors on the dashboard with no explanation
  • “Invite team member” flow sometimes silently failing
  • Performance degrading throughout the day, recovering overnight (database restart)
  • Two users reporting they could see each other’s project data
  • Auth sessions timing out randomly, logging users out mid-workflow

The last two were the most alarming. Intermittent authorisation failures and unexpected data exposure are serious production problems. The founder had patched them with Cursor, which fixed the specific reported cases but left the underlying vulnerability.

The Assessment (Days 1-2)

We ran a full codebase assessment before writing a single line of code. What we found:

47 unhandled error paths. The app used a consistent async/await pattern throughout, but almost no try/catch blocks outside the authentication flow. Database queries, external API calls, file operations — all assumed success.

No rate limiting. Every API endpoint was fully open. The invite flow, the password reset flow, the content generation endpoint (which called OpenAI at ~$0.02 per request) — all callable at unlimited speed. A motivated attacker could have cost the founder thousands in API bills in minutes.

12 missing database indexes. The schema was logical and well-designed, but the high-read columns weren’t indexed. The queries that ran constantly — filtering projects by workspace, loading user activity feeds — were doing full table scans at 10,000+ rows.

Auth token expiry not set. JWTs were being issued with no expiresIn option, making them valid indefinitely. The user who “randomly” got logged out was experiencing a different bug; users who stayed logged in were using tokens that would never expire, including any that might have been compromised.
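Issuing tokens with an expiry is a one-line change (with the widely used jsonwebtoken library, it's the `expiresIn` option passed to `jwt.sign`). The validation side can be sketched in isolation; the payload shape and function name below are ours, not the app's:

```typescript
// Minimal sketch: checking a JWT-style `exp` claim (seconds since epoch).
// Hypothetical names -- not from the client's codebase.
interface TokenPayload {
  sub: string;
  exp?: number; // expiry claim; absent on the broken, never-expiring tokens
}

function isTokenValid(payload: TokenPayload, nowMs: number = Date.now()): boolean {
  if (payload.exp === undefined) {
    // No expiry claim means "valid forever" -- the bug described above.
    // Rejecting these forces legacy tokens through a re-login.
    return false;
  }
  return payload.exp * 1000 > nowMs;
}
```

Treating a missing `exp` as invalid (rather than valid) is the key design choice: it retires every pre-fix token, including any that may already have leaked.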

The data exposure root cause. The most serious issue: a single middleware function was responsible for validating that a user could access a given resource. It was correctly applied on most routes, but three routes added late in development were missing it entirely. Those three routes handled project export, team member listing, and activity history. Any authenticated user could call them with any workspaceId in the URL.
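The check itself is simple; what failed was applying it on every route. A minimal sketch of the membership test (the `Membership` shape and `canAccessWorkspace` name are hypothetical, not taken from the client's codebase):

```typescript
// Hypothetical sketch of the workspace-authorisation check the three
// routes were missing.
type Membership = { userId: string; workspaceId: string };

function canAccessWorkspace(
  memberships: Membership[],
  userId: string,
  workspaceId: string
): boolean {
  // A user may only touch workspaces they are a member of. Any route that
  // skips this check will serve data for whatever workspaceId the caller
  // puts in the URL -- exactly the bug above.
  return memberships.some(
    (m) => m.userId === userId && m.workspaceId === workspaceId
  );
}
```

In an Express-style app this would live in one middleware applied at the router level, so that a newly added route cannot accidentally opt out.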

The Fix Plan

We presented the findings with a clear priority tier:

P0 (fix immediately, before going further): The three unauthorised routes. Two hours of work. Deployed same day.

P1 (fix in week 1): Rate limiting on all API endpoints, auth token expiry, the most critical error handling paths (dashboard, invite flow), the worst-performing database queries.

P2 (fix in week 2): Remaining error handling, the other 8 database indexes, input validation on all public-facing forms, cleanup of console.log statements leaking user data.

The founder agreed with the prioritisation. We started immediately.

Two Weeks of Focused Work

Week 1 was about eliminating the risks and fixing the visible user-facing bugs.

We added rate limiting using a token bucket approach: strict on auth endpoints (5 attempts per 15 minutes), generous on normal operations (60 requests per minute per user). The invite flow was debugged: an empty catch block was silently swallowing an email delivery error. Adding proper error propagation and user-facing feedback fixed it. Auth tokens were set to a 24-hour expiry with refresh token rotation.
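A token bucket keeps a per-key counter that refills at a steady rate and caps at a burst size. A minimal in-memory sketch (a production deployment would share this state via something like Redis; the class and parameter names are ours):

```typescript
// Token bucket sketch: each client key gets one bucket.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,    // burst size, e.g. 5 for auth endpoints
    private refillPerMs: number, // tokens regained per millisecond
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request may proceed, false if it should get a 429.
  tryConsume(now: number = Date.now()): boolean {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

For the limits above: auth endpoints get capacity 5 with a refill of 5 tokens per 15 minutes; normal operations get capacity 60 refilling at 60 per minute.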

The three unauthorised routes got proper authorisation middleware. We also did a systematic pass of every route to verify middleware was applied consistently — and found two more routes missing it that hadn’t yet been discovered or exploited.

Week 2 was about performance and resilience.

The 12 database indexes reduced query times on the worst offenders from 4-8 seconds to under 200ms. The activity feed query that was running every 30 seconds per active user dropped from 3.2 seconds to 87ms. This was the primary cause of the daytime slowdown: as users opened dashboards throughout the day, the aggregate query load compounded until it was saturating the database.
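The winning pattern was indexing exactly the columns the hot queries filter and sort on. A hypothetical migration in the same shape (the real table and column names differ):

```sql
-- Hypothetical names: index the workspace filter and the activity feed's
-- user + recency sort, the two query shapes called out above.
CREATE INDEX idx_projects_workspace_id
    ON projects (workspace_id);

CREATE INDEX idx_activity_user_created_at
    ON activity_events (user_id, created_at DESC);
```

The composite index matters: an index on `user_id` alone would still leave the database sorting the feed by `created_at` on every request.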

Error handling was added systematically across all async operations. Not just try/catch — but meaningful error responses that users could act on, and proper logging so we could see what was failing in production.
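The pattern can be sketched as a small wrapper: every async operation resolves to either its value or a logged, user-actionable error, instead of becoming an unhandled rejection. (The `Result` shape and names are ours, simplified from Express-style handlers):

```typescript
// Sketch of the systematic error-handling pattern: callers always get a
// Result they can act on, and the real error is logged for operators.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

async function withErrorHandling<T>(
  op: () => Promise<T>,
  userMessage: string
): Promise<Result<T>> {
  try {
    return { ok: true, value: await op() };
  } catch (err) {
    // Log the underlying failure for production debugging...
    console.error("operation failed:", err);
    // ...and return a message the user can actually act on.
    return { ok: false, error: userMessage };
  }
}
```

The point is the contract, not the wrapper: a dashboard query that fails returns "Couldn't load your projects, please retry" rather than a bare 500, and the real stack trace still reaches the logs.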

The Results

Three weeks after the engagement ended, with 600 users (growth had continued during the fix work):

  • Uptime: From 94.2% average over the prior 60 days to 99.9% over the following 30 days
  • Average page load time: 3.1 seconds → 0.9 seconds (driven mostly by the database indexes)
  • Support tickets: Down 78% week-over-week
  • Security incidents: Zero in 90-day follow-up period

The founder’s comment at the end of the engagement: “I spent six months building this with Cursor and two weeks with you fixing it. I wish I’d found you at month four.”

What This Demonstrates

This isn’t an unusual case. It’s actually fairly representative of what we see:

  1. The code is good. The app had real value, sensible architecture, and clear founder intent. It didn’t need a rewrite — it needed a focused fix pass.

  2. The issues are systematic but small. 47 unhandled errors sounds like a lot. In practice, they all followed the same pattern — add error handling here, here, and here. The fix rate was about 8-10 issues per day.

  3. The critical issues are concentrated. The data exposure bug that was actually dangerous was three misconfigured routes — a few hours of targeted work once identified.

  4. The performance fixes are disproportionately impactful. 12 indexes. 2.2 seconds shaved off the average load time. Those 12 lines of migration SQL had more user impact than any feature shipped in the previous month.

The hardest part wasn’t the technical work. It was getting the founder to stop patching with Cursor and let us do a proper systematic fix. Every founder we work with has the same instinct: “Just fix the specific thing that’s broken.” Systematic improvement requires resisting that instinct long enough to understand the whole picture.


Ready for your own transformation? Start with a free code assessment — we’ll identify the specific issues, prioritise them by impact, and give you a fixed price for the work.