When AI Met Community: Inside the Gapstars Hackathon

On a bright June afternoon, we brought together Gapstars engineers from across partner teams, internal squads, and locations for a high-energy AI Hackathon. This wasn’t a multi-day competition or a sprint for prizes. It was a focused one-day buildathon to explore how far we could push co-creation with AI and what we’d learn in the process.

The goal? Use emerging tools to build community-focused apps fast. The tool of choice? A low-code platform powered by an LLM that enables teams to go from structured prompts to working prototypes in hours.


The Setup

We preloaded the challenge: ten app ideas designed to improve the Gapstars employee experience, from car park and shuttle reservation systems to a CV builder that could help young engineers land their first job. Each team chose one concept to bring to life.

Teams were free to form across partner and internal squads, and the only rules were to collaborate, experiment with AI, and aim for a functional MVP by the end of the day.


What They Built

Hot Seat Booking – Team “No Humans Allowed”

“It looked like an airplane booking system… we’re proud of that.”

This team created a smooth seat reservation system for shared office space, complete with filters like window seats, floor layouts, and even a vision for an AI-powered seat recommender based on personal preferences.

What went well:

  • Clean, interactive UI
  • Clear UX flow and logical architecture

Where they struggled:

  • AI didn’t grasp responsive layout tweaks
  • Animations required several retries
  • Platform credits burned fast trying to fix spacing issues


Their frontend experience made a real difference: knowing how a React app should behave helped them prompt smarter and fix faster.


StarPlanner – Team “FusionFocus”

“Very appealing UX… but we ran into connection issues.”

StarPlanner tackled internal event RSVPs with a clean design and a persistent backend planned on Supabase. But just as they hit their stride, the AI tool went down for more than 20 minutes, a good old-fashioned reminder that yes, even AI tools crash.


What We Learned

Across all teams, and even in our own prompt logs, a few clear patterns emerged: not just about AI, but about the way our engineers think.


Key Prompting Strategies That Worked


1. Step-by-Step Prompting > Big-Bang Instructions

Teams that prompted in modular stages got better results. Instead of asking the AI to do everything at once, they broke it down:


Create a table layout for shuttle booking with time slots
→ Add login using email only
→ Connect to Supabase project ‘shuttle-prod’ and fetch bookings
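If you drive the builder from a script rather than a chat window, the same staging can be made explicit in code. Here is a minimal TypeScript sketch, assuming a hypothetical sendPrompt() helper standing in for whichever platform API you actually use:

  // Staged prompting: one concern per prompt, checked before moving on.
  // sendPrompt() is a hypothetical stand-in for your platform's API call.
  async function buildShuttleApp(sendPrompt: (prompt: string) => Promise<string>) {
    // Stage 1: layout only, so the visual shell can be reviewed first.
    await sendPrompt("Create a table layout for shuttle booking with time slots");
    // Stage 2: authentication, added once the layout looks right.
    await sendPrompt("Add login using email only");
    // Stage 3: data wiring last, naming the real project to keep the AI grounded.
    await sendPrompt("Connect to Supabase project 'shuttle-prod' and fetch bookings");
  }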


2. Contextual Framing Improved Results

Prompts with real design and use case context worked significantly better than generic instructions.


❌ “Add filters”
✅ “Add filters for seat location (window/aisle), floor level, and availability on hover”
✅ “Use Gapstars brand colors with a modern card-based layout”


This made outputs more usable, branded, and aligned with real-world expectations.
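One way to make that framing repeatable is to attach shared design context to every instruction before it goes out. A small TypeScript sketch; the helper and the context string are illustrative, not our actual brand spec:

  // Hypothetical helper: every instruction carries the shared design context,
  // so nothing goes out as a bare "Add filters".
  const designContext = "Use Gapstars brand colors with a modern card-based layout.";

  function withContext(instruction: string): string {
    return `${instruction}. ${designContext}`;
  }

  // A bare instruction becomes a specific, context-rich prompt:
  const prompt = withContext(
    "Add filters for seat location (window/aisle), floor level, and availability on hover"
  );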


3. Using Real Naming Conventions Helped the AI Stay Grounded

Prompts that referenced real Supabase tables, project names, and component labels reduced AI hallucinations.


✅ “Use Supabase project ‘office-zen-booking’ and table ‘massages’ to retrieve availability by date”


When prompts used vague table names like test1 or demoTable, the AI often fell back on invented logic or broken code.
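For comparison, here is roughly what well-grounded output looks like when the prompt names the real table, sketched with the standard supabase-js client. The credentials are placeholders, and the ‘date’ column is an assumption:

  import { createClient } from "@supabase/supabase-js";

  // Placeholders: substitute your real project URL and anon key.
  const supabase = createClient("https://YOUR-PROJECT.supabase.co", "YOUR_ANON_KEY");

  // Fetch availability from the real 'massages' table named in the prompt,
  // assuming it has a 'date' column. Vague names like 'test1' are exactly
  // where invented logic creeps in.
  async function getAvailability(date: string) {
    const { data, error } = await supabase
      .from("massages")
      .select("*")
      .eq("date", date);
    if (error) throw error;
    return data;
  }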


4. Tone & Clarity Mattered

Prompts structured like Jira tickets or Figma specs produced better results than chatty or ambiguous commands.


❌ “Can you make this prettier and modern-looking?”
✅ “Apply a minimal design with rounded corners, Gapstars Orange as primary CTA, and condensed spacing on mobile.”


Clear language = cleaner code.
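As an illustration, a spec-style prompt like the one above maps almost one-to-one onto the component you hope to get back. A React/TypeScript sketch, with a placeholder hex standing in for Gapstars Orange:

  import React from "react";

  // Every requirement in the spec maps to a concrete property:
  // rounded corners, primary CTA color, condensed spacing.
  export function PrimaryCta({ label }: { label: string }) {
    return (
      <button
        style={{
          backgroundColor: "#F97316", // placeholder, not the real Gapstars Orange
          borderRadius: "8px",        // "rounded corners"
          padding: "8px 14px",        // "condensed spacing on mobile"
          color: "#ffffff",
          border: "none",
        }}
      >
        {label}
      </button>
    );
  }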


5. Versioning Prompts Saved Time

Several teams labeled their iterations:


“Try V2 of homepage with fewer filters”
“Revert to V3 and reduce card padding”


This “prompt hygiene” helped with tracking changes, reusing effective blocks, and avoiding repetitive errors, especially once a project grew past ten steps.
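Nothing fancy is needed to adopt this: even a tiny log of labeled iterations does the job. A hypothetical TypeScript sketch of that prompt hygiene:

  // Hypothetical prompt log: label every iteration so "revert to V3"
  // points at something concrete instead of a spot in the chat history.
  interface PromptVersion {
    version: number;
    prompt: string;
  }

  const homepageLog: PromptVersion[] = [];

  function record(prompt: string): PromptVersion {
    const entry: PromptVersion = { version: homepageLog.length + 1, prompt };
    homepageLog.push(entry);
    return entry;
  }

  function revertTo(version: number): PromptVersion | undefined {
    // Returns the labeled prompt to build the next attempt on.
    return homepageLog.find((entry) => entry.version === version);
  }

  record("Homepage with fewer filters");
  record("Homepage with reduced card padding");
  revertTo(1); // pick V1 back up before trying the next tweak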


A Moment with Our Partners

Adding to the day’s energy, we were thrilled to host Harver, one of our longest-standing partners, during the Hackathon. With product and engineering leaders joining in, it was a chance for them to see how Gapstars engineers collaborate, build, and troubleshoot in real time.

It was less about the outputs and more about watching engineering thinking under pressure, something you can’t really teach or fake.


The Takeaway

This hackathon wasn’t about building perfect apps. It was about testing ideas quickly, prompting clearly, and building creatively under real-world constraints.

Yes, AI tools helped. But the real advantage came from the people behind the prompts:

  • The ones who scoped well.
  • The ones who debugged fast.
  • The ones who iterated with intent.


Because at the end of the day, the quality of the build still depends on one thing: not how good the AI is, but how good you are at telling it what to do. And if this hackathon was anything to go by, our teams are learning to do just that.

We’re not just building teams. We’re building teams that innovate with intent, adapt with speed, and scale with purpose.

Here to help

Reach out to us, and let’s explore how we can build your dreams with the right people, expertise, and solutions.