My notes from HackMIT ’24

What was the HackMIT experience?

  • About 120 teams of 3-5 undergraduate students competed for 24 hours to build projects. This year, the tracks were Sustainability, Education, Interactive Media, and Healthcare.
  • A number of companies (Fetch.ai, Modal, Convex, Terra API, Clerk, InterSystems, Suno, Akamai Technologies, …) set up booths and assisted hackers with documentation and infrastructure credits for their platforms. These companies offered their own prizes in addition to the main HackMIT prizes.
  • Hackers either built an independent project or built on top of the platforms above, assembling a business plan and a working demo.
  • Mentors were available to assist with planning and with technical issues. The companies themselves provided a lot of technical help.
  • The hackers themselves came from many colleges besides MIT. I met some brilliant young hackers from places like Purdue, Carnegie Mellon, …
  • Some of the visiting teams booked an apartment for the day, and worked overnight. Some stayed up the whole night to finish things up!
  • Teams powered through technical difficulties and pivoted as needed. The most successful projects needed both a working demo and a business plan.

What projects did I see?

  • An FPGA-based BF compiler (if you don’t know what the BF language is… find out on your own!)
  • A diffusion model implemented from scratch in PyTorch
  • GenAI parsing classic poetry, identifying related stanzas, and combining the work of different poets
  • A bike-mapping app that lets you mark impassable/unbikeable streets. That was a pretty impressive implementation!
  • An automated email parser that handles documents and fills forms for you. Could be used, for example, by landlords who need to handle paperwork from renters.
  • A music app connected to the Apple Watch, playing music faster when you run faster
  • A NeRF algorithm for CT (CAT) scans
  • An AI chat assistant with the ability to order food, book appointments, and reserve flights
  • Videos transcribed and used to generate songs. This could be used to create customized songs for social media posts or for marketing campaigns.
  • A physical therapy assistant app that detects the angle of arm motion
  • A state-wide wildfire monitor that uses satellite maps to track vegetation and a CV model to estimate fire risk, with interfaces for fire departments, government agencies, and homeowners
  • A video app to detect and translate American Sign Language
  • A nicotine-modulation app using AI

The full list of projects is here.

What tech did the hackers use?

Many apps used React on the front end and FastAPI on the back end, connected to API services like Fetch.ai, Modal, Terra API, InterSystems, … Pipelines of varying complexity were implemented. Pretty impressive work for 24 hours!
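As a rough illustration of that pattern (not any particular team's code), here is a minimal sketch of the stack: a FastAPI backend exposing one endpoint that a React front end would call, which in turn forwards the request to a hosted API service. The service URL, payload shape, and endpoint name are hypothetical placeholders.

```python
# Minimal sketch of the common hackathon stack: a FastAPI backend that a
# React front end calls, which forwards work to an external API service.
# The service URL and payload fields below are hypothetical placeholders.
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

@app.post("/api/answer")
async def answer(query: Query):
    # Forward the request to a hosted model/service (placeholder URL).
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "https://example-ai-service.invalid/run",
            json={"prompt": query.text},
        )
    # Hand the service's result back to the front end.
    return {"result": resp.json()}
```

A React front end would then POST to /api/answer with fetch; most teams chained a couple of such calls into a pipeline.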

While a small number of projects were developed with lower-level tools (direct PyTorch, lidar drivers, VHDL for FPGA programming, …), most successful projects were developed quickly using higher-level tools.

About three quarters of the hackers used GenAI, and about a quarter used computer vision. A few used diffusion models, NeRF, and fancier algorithms.

There were no robotics or self-driving projects, and only a very small number tackled financial apps.

All hackers used MacBooks; I did not notice any Linux laptops. VS Code was the editor of choice.

How were the projects assessed?

Projects were rated on Innovation (30%), Technical Complexity (30%), Impact (30%), and Learning & Collaboration (10%).

  • How novel or unique was the idea? Were there similar projects at other hackathons? This actually made a difference when the top four projects were selected.
  • Did the demo actually work end to end?
  • Was there a functional user interface? Did the team think about user experience?
  • Was the project technically impressive?
  • Does the project solve a significant problem? Can it be further extended to solve a real problem?
  • Did all team members collaborate? For a single-person team, did the hacker do something difficult and learn something new?
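To make the weighting concrete, here is a hypothetical example of how per-category ratings combine under those weights; the 1-10 scale and the individual scores are made up for illustration, not the actual scale the rating app used.

```python
# Hypothetical example of combining one judge's per-category ratings
# with the published weights; the 1-10 scores below are made up.
weights = {
    "Innovation": 0.30,
    "Technical Complexity": 0.30,
    "Impact": 0.30,
    "Learning & Collaboration": 0.10,
}

ratings = {
    "Innovation": 8,
    "Technical Complexity": 9,
    "Impact": 7,
    "Learning & Collaboration": 10,
}

total = sum(weights[c] * ratings[c] for c in weights)
print(f"Weighted score: {total:.1f} / 10")  # -> 8.2 / 10
```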

How did the judging rounds work?

  • A first round of judges went table to table and spent 10 minutes with each project, hearing the presentations, asking questions, making suggestions… and rating the project through a rating app in each category (Innovation, Technical Complexity, Impact, Learning & Collaboration).
  • The winners of the first round were judged again, in a second round, by judges going table to table.
  • This resulted in 12 top teams being selected for the last round. These 12 presented again, but this time in front of the Panel Judges.

Each Panel Judge got to see presentations from half of the 12 top teams and to ask questions. Each presentation took no more than 8 minutes, with 2 minutes of questions from the judges. The process moved pretty quickly.

At the end, the judges got together and briefly discussed their favorite teams. Since no single judge had seen more than half of the teams, each had to briefly share their impressions so that all judges could vote for the winner.

How were the winners selected?

Ultimately, the top four teams were selected by vote. Then, after more discussion, the top winner was picked, followed by the winner in each track. The winners happen to be the teams listed at the end of my bullet list above!

I have to say, all teams were very impressive, and the difference between the winners and the almost-winners in the last round was pretty small. Technically, all teams were super sharp. They were quick to take advantage of the available tools and APIs, and to orchestrate them into impressive working products.
