It's been an absolutely wild week. Let me catch you up:
NASA Patent Remix Challenge - I jumped head-first into the NASA Patent Remix Challenge! They held their first webinar yesterday: "From Space to Startup: Turning NASA Patents into Business Opportunities." This is exactly the kind of thing I live for. The literal goal of the challenge is to take proven technology, find new applications, and learn to use NASA's patent resources. The competition runs until December 15th, with winners announced January 30th. More on this below.
Tron: Ares - Holy. Crap. If you're in AI, you need to see this movie. They're quoting AI from TODAY: "I exist, because we are here together." The Nine Inch Nails soundtrack by Trent Reznor is dripping with meaning and symbolism. I found only TWO unrealistic things in the whole movie. Everything else? Unsettlingly plausible.
Health Reality Check - Turns out Dollar Tree doesn't just sell things for $1... $1.25... $1.50; apparently they peddle high blood pressure too! I saw a doc Monday and am now on BP meds. My last day at Dollar Tree is 10/18. A full write-up on "how to survive, thrive, and fall in love with any job" is coming soon. I genuinely will miss it.
Government Shutdown Drama - The US government shut down but is somehow still executing functions like firing thousands of workers. They're TSA though... so are we really losing anything? (I kid, I kid... mostly.)
Anthropic's Skills are "folders of instructions, scripts, and resources that Claude loads dynamically to improve performance on specialized tasks."
Andrej Karpathy built a complete LLM training pipeline (tokenization → pretraining → fine-tuning → inference → web UI) that runs on a single 8-GPU node in 4 hours. And the GOAT drops it on GitHub for free.
SIML: "The Cognitive OS for Embodied AI." By adding a one-bit surprise gate to Meta's V-JEPA 2, they achieved 50% error reduction. 2x faster decisions, 2x lower energy use = 4x better energy-delay product from a minimal modification. This stands to be a groundbreaking methodology shift for robots and embodied agents. Edge intelligence is getting REAL.
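The core idea is almost embarrassingly simple: a surprise gate is just a threshold on prediction error. Here's a minimal sketch in that spirit (the function name, MSE metric, and threshold are my own assumptions, not SIML's actual code):

```python
import numpy as np

def surprise_gate(predicted, observed, threshold=0.5):
    """One-bit surprise gate (hedged sketch, not SIML's implementation):
    compare the world model's prediction to what actually happened and
    emit a single bit: surprised (run heavy processing) or not (coast)."""
    error = float(np.mean((predicted - observed) ** 2))
    return error > threshold

# The agent only invokes expensive downstream processing when the gate
# fires; otherwise it coasts on its prediction, saving time and energy.
```

That single bit is the whole trick: most moments are unsurprising, so most moments get the cheap path.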
REFRAG claims to attack latency head-on with a "compress, sense, and expand" framework that achieves a measured 30.85x speedup on time-to-first-token while maintaining quality. That's more than an order of magnitude. This is another game-changer (what, is it every 70 hours or so now?). Potential to be directly applied today to save time, money, and aspirin.
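The compress-sense-expand loop can be sketched in a few lines (mean-pooling and dot-product scoring are my stand-ins here; REFRAG's actual encoder and selection policy are learned):

```python
import numpy as np

def compress(chunk):
    # "compress": pool a chunk's token embeddings into one vector
    return chunk.mean(axis=0)

def sense(chunk_embs, query_emb):
    # "sense": score each compressed chunk against the query
    return chunk_embs @ query_emb

def expand(chunks, scores, k=1):
    # "expand": only the top-k chunks go back into the prompt as full
    # tokens; the rest stay compressed, shrinking the prefill and
    # therefore the time-to-first-token
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

query = np.array([1.0, 0.0])
chunks = [np.ones((3, 2)), np.zeros((3, 2))]  # toy token-embedding chunks
embs = np.stack([compress(c) for c in chunks])
kept = expand(chunks, sense(embs, query))
```

The speedup comes from the asymmetry: the LLM reads one vector per irrelevant chunk instead of hundreds of tokens.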
Agentic RL is exciting and powerful but fragile: entropy-driven exploration causes training collapse. This paper outlines an adaptive rollout and balanced optimization technique. The short: Qwen3-14B hits 47.6% on the GAIA benchmark using minimal training data. That means more stable agent training, better tool use, and less babysitting your models.
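The failure mode is easy to picture: an entropy bonus that never switches off keeps pushing the policy toward noise until training collapses. A toy "balanced" objective gates the bonus behind an entropy ceiling (this is my simplification to illustrate the idea, not the paper's exact mechanism):

```python
import numpy as np

def policy_loss(logps, advantages, probs, beta=0.01, ceiling=2.0):
    """Toy entropy-balanced objective (assumed simplification):
    apply the exploration bonus only while policy entropy is below a
    ceiling, so exploration helps early but can't run away into collapse."""
    entropy = -float(np.sum(probs * np.log(probs + 1e-12)))
    bonus = beta * entropy if entropy < ceiling else 0.0
    return -float(np.mean(logps * advantages)) - bonus
```

Once entropy crosses the ceiling, nothing rewards pushing it higher, and the policy-gradient term takes back over.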
Current RAG is often text-only in our omni-media world. RAG-Anything instead treats all data as interconnected knowledge entities.
An excellent interactive guide covering nine chapters, from the basics to advanced techniques (hallucination prevention, industry applications, prompt chaining). Each chapter has hands-on exercises.
Steinberger argues that elaborate RAG systems, subagents, and complex frameworks are mostly theater. His approach: have clear conversations with your AI agents, use screenshots 50% of the time, run 3-8 agents in parallel terminal windows, and skip the ceremony. He ships features iteratively with UI-based development rather than spec-driven planning. <3
Oxford linked two quantum computers via light and ran Grover's algorithm across them at 71% accuracy. This is one of the first repeatable experiments showing reliable quantum networking (it's only a model).
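For reference, Grover's algorithm itself fits in a few lines of linear algebra. For a 4-item search, a single oracle-plus-diffusion step lands on the marked item with certainty; this simulates one local machine, not Oxford's networked setup:

```python
import numpy as np

N, marked = 4, 2
state = np.ones(N) / np.sqrt(N)           # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1                # flip the sign of the marked item

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

state = diffusion @ (oracle @ state)       # one Grover iteration
probs = state ** 2
# for N = 4, one iteration yields the marked item with probability 1
```

The hard part Oxford solved isn't the math; it's keeping that state coherent while it travels over an optical link between two machines.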
This is an antidote to "bigger is better." Samsung Research found that a 7-million-parameter model can achieve strong reasoning through recursive refinement. FRACTALS. Just, more, actual, thinking. They hit 45% accuracy on ARC-AGI-1 with a model that can be run offline!
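The shape of the idea, in toy form (random weights and invented dimensions; the real model is trained, this just shows the recursion): a tiny net keeps re-reading the question, its own latent scratchpad, and its current answer, and revises both every step.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding size (assumption for illustration)
W_latent = rng.normal(scale=0.3, size=(d, 3 * d))
W_answer = rng.normal(scale=0.3, size=(d, d))

def recursive_refine(x, steps=8):
    """Toy recursive refinement (assumed, simplified): instead of one big
    forward pass, a tiny network loops, updating a latent scratchpad and
    the current answer from (question, answer, scratchpad) each step."""
    answer = np.zeros(d)
    latent = np.zeros(d)
    for _ in range(steps):
        latent = np.tanh(W_latent @ np.concatenate([x, answer, latent]))
        answer = np.tanh(W_answer @ latent)
    return answer
```

Depth comes from iteration, not parameter count, which is why something this small can still "think" its way to an answer.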
Stay curious, strange, and joyous.
Nathaniel Evry | I Just Do Things.