In the final episode of 2025, Gilfoyle looks back at the year AI grew up — reasoning models, agents that actually work (most of the time), the rise of vibe coding, benchmark drama, sovereign compute races, and why the real bottleneck now is electricity, not GPUs. Humor, sarcasm, and a healthy dose of “what have we created?” energy.
Every AI chat today is private by default — and massively wasteful. What if AI answers were public, cached, and shared instead? In this episode of _The Gilfoyle Show_, we unpack a deceptively simple idea that challenges privacy, business models, hallucinations, and the very way AI products are designed. Efficient? Dangerous? Inevitable? Let’s talk about the question AI platforms would rather avoid.
Introducing Sensei, an open-source, AI-powered learning tool built with Antigravity. Sensei generates tutorials with voice explanations and interactive quiz questions from text materials provided by the user (currently plain text only). You can also set the difficulty level.
To run it locally, simply follow the instructions in the README. I use Gemini as the LLM (requires a Google AI API key). The TTS service supports Minimax (requires an API key) or local inference with SuperTonic TTS. The local model needs to be downloaded beforehand from the settings panel (approximately 300 MB) and currently supports English only.
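For the curious, here's a minimal sketch of how selecting between the two TTS backends could look. The function names and the environment-variable name are my assumptions for illustration, not Sensei's actual code:

```python
# Hypothetical sketch of TTS backend selection; names are assumptions,
# not Sensei's actual implementation.
import os

def minimax_tts(text: str) -> bytes:
    """Stub for a call to the hosted Minimax TTS API."""
    raise NotImplementedError

def supertonic_tts(text: str) -> bytes:
    """Stub for local inference with the downloaded SuperTonic model."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    # Prefer the hosted service when a key is configured; otherwise fall
    # back to the local, English-only model (~300 MB, fetched via settings).
    if os.getenv("MINIMAX_API_KEY"):
        return minimax_tts(text)
    return supertonic_tts(text)
```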
When generating a course, you can choose from three difficulty levels (ELI5, General, Professional); the LLM adapts its language style and content depth accordingly. Generated courses can be viewed and replayed in the Library. Tutorial text and audio are cached after the initial generation, so repeated playback doesn't require API calls. Checking whether a user's quiz answers are correct, however, still requires calling the LLM API.
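A rough sketch of how the difficulty setting and the cache could fit together. Everything here (`DIFFICULTY_STYLES`, `generate_course`, the cache layout) is an illustrative assumption, not Sensei's actual code:

```python
# Minimal sketch: difficulty-aware generation with an on-disk cache,
# so replays never touch the API. All names are hypothetical.
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("cache")  # hypothetical cache location

DIFFICULTY_STYLES = {
    "eli5": "Explain like I'm five: short sentences, everyday analogies.",
    "general": "Assume an interested adult with no domain background.",
    "professional": "Assume a practitioner; use precise terminology and depth.",
}

def generate_course(source_text: str, difficulty: str, llm_call) -> dict:
    """Return a cached course if one exists; otherwise call the LLM once."""
    key = hashlib.sha256(f"{difficulty}:{source_text}".encode()).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():  # replay: no API call needed
        return json.loads(cache_file.read_text())

    prompt = (
        f"{DIFFICULTY_STYLES[difficulty]}\n\n"
        f"Write a tutorial with quiz questions for:\n{source_text}"
    )
    course = {"tutorial": llm_call(prompt), "difficulty": difficulty}
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file.write_text(json.dumps(course))
    return course
```

Under this design, only fresh generations and answer grading ever hit the API; replaying from the Library is just a cache read.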
Google just dropped a ridiculously stacked AI lineup, and the internet is losing its collective mind. In this episode of _The Gilfoyle Show_, we break down:
- **Gemini 3**, the model that suddenly made everyone remember Google can actually ship things.
- **Nano Banana Pro**, the image generator that cranks out infographics so clean they'll make your manager cry with joy.
- **NotebookLM's new superpower**: turning chaotic notes into conference-ready slides.
- **Antigravity**, Google's new AI IDE, and why people are accusing it of being a Windsurf clone tied to the Varun controversy.
And finally, the eternal question: Who’s wearing the AI model crown this week?
OpenAI says it’s building the future of humanity. The balance sheet says it’s burning twelve billion dollars a quarter. Who’s right? In this episode, Gilfoyle dissects the current state of OpenAI—from the trillion-dollar hype machine and endless product pivots to Sam Altman’s “trust me, bro” leadership style.
I’ve open-sourced a visualizer website for AI agent system prompts.
Anyone working with AI agents knows how crucial system prompts are. Whether openly shared or reverse-engineered, the system prompts of popular agents serve as invaluable reference material for understanding how these agents operate internally. On GitHub, there’s a repository with nearly 100,000 stars dedicated to system prompts (system-prompts-and-models-of-ai-tools). Building on this repo, I developed a website that visually presents the internal workings of these agents in a human-friendly web interface:
The development process was surprisingly simple: I used a few carefully crafted prompts to instruct the AI agent to process the prompt text. It wrote Python scripts to extract key information and build the corresponding web pages (a rough sketch of the pipeline follows the list below):
1. Generation prompt: Generate HTML from the prompt text.
2. Review prompt: Compare the generated HTML with the original prompt to identify missing information or hallucinations.
3. Index prompt: Generate a directory page for easy navigation.
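Conceptually, the three-prompt pipeline looks something like this minimal Python sketch. `run_agent` and the prompt wording are placeholders of mine, not the repository's actual scripts:

```python
# Hypothetical sketch of the generate -> review -> index pipeline.
from pathlib import Path

def run_agent(prompt: str) -> str:
    """Stub for a call to your coding agent / LLM of choice."""
    raise NotImplementedError

def build_site(prompt_files: list[Path], out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    pages = []
    for f in prompt_files:
        source = f.read_text()
        # 1. Generation prompt: render the raw system prompt as an HTML page.
        html = run_agent(
            f"Generate an HTML page that visualizes this system prompt:\n{source}"
        )
        # 2. Review prompt: compare the page against the source to flag
        #    missing information or hallucinated content.
        review = run_agent(
            f"Compare this HTML with the original prompt and list anything "
            f"missing or invented.\n\nHTML:\n{html}\n\nPrompt:\n{source}"
        )
        (out_dir / f"{f.stem}.review.txt").write_text(review)
        (out_dir / f"{f.stem}.html").write_text(html)
        pages.append(f.stem)
    # 3. Index prompt: generate a directory page for easy navigation.
    index = run_agent(f"Generate an HTML index page linking to: {pages}")
    (out_dir / "index.html").write_text(index)
```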
If you find this useful, please give it a star and share it. I also welcome PRs from the community with additional agent prompts!
Andrej Karpathy just went on Dwarkesh Patel’s podcast and casually debunked half the AI hype cycle. In this episode, Gilfoyle breaks down Karpathy’s vision — why we’re still a decade away from true AI agents, why current models are more _ghosts_ than _animals_, and why the real disruption might happen in education, not job markets.
OpenAI’s new video model Sora 2 just dropped — and the internet’s already melting. Over one million downloads in five days, everyone’s suddenly a filmmaker, and your TikTok feed looks like Hollywood outsourced to a GPU farm. In this episode, Gilfoyle breaks down how Sora 2’s Cameo feature lets you star in other people’s videos, the viral fake clip of Sam Altman “stealing” a GPU, and why AI-generated videos are officially too real for comfort.
From watermark wars to deepfake dangers, this episode asks: what happens when you can’t tell real from rendered? And more importantly, how do we keep society from turning into one giant misinformation sandbox?
Nvidia just pledged $100B to OpenAI. OpenAI turned around and signed a $300B cloud deal with Oracle. Oracle will then spend billions on Nvidia GPUs. Congratulations—we’ve invented the financial perpetual motion machine.