[Link] Building Large Language Models (LLMs)

Awesome video from Stanford CS that goes into the details of building LLMs and how they work. There is a really interesting explanation of the impact of tokenizers on an LLM's ability to "interpret" code such as Python, which relies on whitespace for its structure.
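To get an intuition for why tokenization matters for whitespace-heavy code, here is a toy sketch (not the lecture's actual tokenizer, and much simpler than a real BPE vocabulary): a tokenizer that treats every character as a token makes indented Python very expensive, while one that merges runs of spaces into single tokens, as modern vocabularies effectively do for common indentation widths, produces a far shorter and more structure-preserving sequence.

```python
import re

# A small indented Python snippet to tokenize.
code = (
    "def f(x):\n"
    "    if x:\n"
    "        return x\n"
)

def char_level_tokens(text):
    """Naive tokenizer: every character is its own token."""
    return list(text)

def whitespace_merging_tokens(text):
    """Tokenizer that keeps runs of spaces as single tokens,
    so an 8-space indent is one token instead of eight."""
    return re.findall(r" +|\S+|\n", text)

print(len(char_level_tokens(code)))          # one token per character
print(len(whitespace_merging_tokens(code)))  # far fewer tokens
```

The second tokenizer also hands the model indentation as a single unit, which makes the block structure of the code easier to pick up than a stream of individual space characters.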

Highly recommended if you want to understand the details of how LLMs work and what the tricky parts are.
