AI Human-In-The-Loop Panel

It was an excellent engineering-centric meet-up at The AI Furnace 🧨🔥 and Æthos! We got to do a lot of hands-on work across multiple projects. Thanks Sam Rowe and Nico van Wijk for hosting.

Spent a lot of time with Nico and Leroy Sibanda talking shop about front ends, back ends, JavaScript vs. Python on the back end, async jobs, queues, and the finer points of LLM orchestration.

Debugged a Kubernetes pod that scaled down when it was not supposed to…

Nico gave a walkthrough of the Cursor editor… I showed them the Claude Dev VS Code extension… Ultimately, Cursor is a much better tool, I think. I’m persuaded to move to Cursor!

On the Human-in-the-Loop panel with co-panelists Dipali Trivedi and Brian Benedict, moderated by Nico van Wijk, some of the topics for discussion were:
– How is human-in-the-loop implemented, in practice?
– When are humans in the loop used during LLM development? And when in end products built on top of LLMs?
– How are LLMs evaluated? What are the current limitations of those evaluations?
– How can language models generate the unit tests used to evaluate LLM code generation?
– Given unit tests establishing an expected behavior, can LLMs just generate code satisfying those tests? And is that a good way to write software autonomously?
– How important is it for humans to understand code generated by AI?
