All Stories

Upcoming Talk: When AI Gets Hyperfast - Rethinking Design for 1000 tokens/s

I’ll be speaking at AI Tinkerers Raleigh about what happens when AI inference becomes fast enough to fundamentally change how we think about application design.

Talk: Apple's On-Device VLM: The Future of Multimodal AI

When we think about the future of artificial intelligence, we often imagine systems that can think like humans—not just processing text, but understanding images, video, and even the physical properties...

Cerebras OS - OpenRouter Hackathon Project

When AI becomes fast enough, computing itself changes. Cerebras OS is a hackathon project exploring what happens when inference speed crosses a threshold—when AI stops being something you wait for...

Apple Intelligence features for any iPhone

Apple prides itself on safety and privacy, but if you’re willing to sacrifice some of that, and you have an older iPhone, you can use the Shortcuts app to create...

Apple Vision Pro Music Viz App

Currently a WIP, but using audio processing APIs and a particle emitter SDK, I'm working on a music visualizer app that lets you be completely immersed in a 3D field of dancing particles....

Why you may not have to worry about superintelligent AI as much: Entropy

The Simple Reason You Don’t Have to Worry About Super Intelligent AI: Entropy

LLM Visualization: How embedding space creates intelligence

Using Python, I created this visualization to help explain how LLMs capture and synthesize information. LLMs take the encoding of language and deconstruct it into concepts.

Where the puck is going: An AI Roadmap

I created this AI roadmap last year for our internal team, and we’re somewhere around line 6.5 as of May 2024 (current video multimodal models are just looking at periodic...

LessWrong: ChatGPT is our Wright Brothers Moment

There are a lot of discussions, often derisive, claiming that many AI apps are “just” ChatGPT wrappers. I wrote a short article that was promoted to the front page of LessWrong about...