
Showing posts from December, 2025

Xmas present: K&D Sessions MP3

This year my Xmas gift is the MP3 version of a seminal album of the '90s which is impossible to find on streaming services: Kruder & Dorfmeister's The K&D Sessions™. It does sound great!

My setup for running open models

Mostly out of curiosity and a desire to learn, I've tried running open models locally on both LM Studio and Ollama, but I quickly realized the limitations intrinsic to my hardware (just a high-spec'd laptop). Curious to try AWS Bedrock, I eventually settled on the following setup: litellm exposing Bedrock models (Qwen, atm) locally through an OpenAI-compatible API (yes, it's a mouthful).

This works great for any tool that can be configured to use an OpenAI-compatible API, like Quill meetings. Getting VS Code to work with this setup was more challenging, as it required VS Code Insiders (the bleeding edge, AFAIU), and even then VS Code tends to forget settings or apply them inconsistently. For example, it always uses Copilot for inline code actions. llm required some tweaking too, in particular the setting suggested in this comment.

I am very impressed with litellm, which provides accurate usage tracking per team or account. The potential for offering llm access on an interna...
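For reference, a minimal sketch of what a litellm proxy config for this setup can look like. The Bedrock model ID below is a placeholder, not necessarily the one I use; substitute whichever Qwen model is enabled in your AWS region:

```yaml
# config.yaml for the litellm proxy
model_list:
  - model_name: qwen                     # the name clients will request
    litellm_params:
      # placeholder Bedrock model ID -- use one enabled in your region
      model: bedrock/qwen.qwen3-32b-v1:0
      aws_region_name: us-east-1

# Start the proxy (serves an OpenAI-compatible API, on port 4000 by default):
#   litellm --config config.yaml
```

Any OpenAI-compatible client can then be pointed at http://localhost:4000 with `qwen` as the model name.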

Quote: Alan Kay

"Perspective is worth 80 IQ points"

Alan Kay's line "Perspective is worth 80 IQ points" isn't about literal intelligence. He's pointing out that the ability to shift viewpoint, reframe a problem, or see a system from a higher level often produces more insight than raw analytical horsepower. Many problems look hard only because they're being viewed from a narrow frame. Change the frame, and what looked complex becomes obvious or solvable.

Why perspective feels like "+80 IQ" — a few mechanisms:

- Reframing reduces complexity. Seeing the structure of a problem, rather than its surface detail, often collapses the difficulty. It mimics what we associate with "smartness."
- Most people get stuck in the default frame. They try to optimize inside an assumption instead of questioning it. Someone who steps outside can leapfrog them without being "smarter."
- Systems thinking detects leverage points. Understanding how components interact exposes shortcuts, invariants, and constraints th...

Notes on: How Video Games Inspire Great UX

My notes on: https://jenson.org/games/ which I found via: https://youtu.be/1fZTOjd_bOQ?si=kCGSE2uNczIJjiQ-

The Alan Kay quote is hard to understand until an insight from a user test "changed my perspective". First learning (on the surface; we go deeper and beyond it) pretty soon:

"Games have the ability to force situations, such as running into a canyon and having nowhere to go but up a ladder. Apps, on the other hand, usually have the opposite, offering a broad toolkit of choices."

Games, I thought, can exploit narrative to force situations, which makes their life easier. However, this does not mean that games have it easy; on the contrary, most games fail:

"You have to design a great game to get people to have the confidence that practicing is worthwhile."

And we start going deeper right away now:

"Raph convinced me to forgo any quick and easy 'cookbook of tricks' approach to this problem and go deeper and understand better how games are built, from the bottom up"

First bit of wisdom: M...

[Acquired] Google: the AI company (Part 1)

You can't say you understand today's AI landscape without listening to this massive (4 hours!) Acquired episode on Google, focusing on its AI roots. Over three episodes, Acquired has a little over 12 hours of podcast just on Google! Well worth it IMO for the greatest business in history.

Selected highlights:

[07:23] "basically every single person of note in AI worked at Google, with the one exception of Yann LeCun, who worked at Facebook"

This is truly mind-bending to think about, especially considering that Google is (at the moment) not the first name that comes to mind when we think about AI (LLMs) today. But the real kicker comes a few minutes in, when we learn that did you mean? (launched in 2001!!) and Google Translate (2006) were the first practical applications of language models to its search business, which made it exponentially more effective. About 25 years ago, Google was already running machine learning in production, at fantastic scale (about 15...

Using LLMs at Oxide

Once again, some supremely well-thought-out and useful content from Oxide: https://rfd.shared.oxide.computer/rfd/0576

This time it is about the use of LLMs within Oxide. Here are my main takeaways:

1. Start from values! A phenomenal example of how values can be so much more than the vanity checklist that most companies use them for.
2. Focus on the receiving end: why should I spend time reading something that the author did not think was worth spending the necessary time to write? Again, this goes back to their strongly writing-oriented culture and values.
3. Corollary of item 2: self-review AI-generated code before asking others to review it!