-
The Dangers of Imparting Emotional Language and Intentional Uncertainty in LLM Training
A look at a revealing portion of Anthropic’s leaked “Soul” document and how the company’s training philosophy poses challenges for safety.
-
Building a Budget LLM Inference Box in Late 2025
A few years back, I wrote about one of my “high-end consumer” LLM inference workstation builds. Today, we’ll explore the opposite end of the spectrum: an LLM inference workstation for only US$1,200 using budget components.
-
An Introduction to “Guardrail” Classifier-Trained LLMs
A practical demonstration of using a secondary, classifier-trained LLM as an external guardrail. This pipeline checks both user inputs and model outputs for unsafe content, adding a flexible safety layer beyond basic refusal training.
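As a rough sketch of the pattern that post describes (not its actual code), the snippet below assumes a Hugging Face text-classification pipeline as the secondary guardrail model; the model path, the “unsafe” label, and the guarded_generate helper are hypothetical placeholders.

```python
# Minimal guardrail sketch: a secondary classifier screens both the user
# prompt and the primary model's reply. Model path and label names are
# placeholders, not the post's actual pipeline.
from transformers import pipeline

# Hypothetical guardrail classifier; substitute whatever safety model you use.
guard = pipeline("text-classification", model="path/to/guardrail-classifier")


def is_unsafe(text: str, threshold: float = 0.5) -> bool:
    """Return True if the guardrail classifier flags the text as unsafe."""
    result = guard(text, truncation=True)[0]  # e.g. {"label": "unsafe", "score": 0.97}
    return result["label"].lower() == "unsafe" and result["score"] >= threshold


def guarded_generate(prompt: str, generate) -> str:
    """Check the user prompt, call the primary LLM, then check its output."""
    if is_unsafe(prompt):
        return "Request declined by input guardrail."
    reply = generate(prompt)  # the primary LLM call, supplied by the caller
    if is_unsafe(reply):
        return "Response withheld by output guardrail."
    return reply
```

Because the guardrail sits outside the primary model, it can be swapped or retuned without retraining the model it protects.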
-
Model Context Protocol (MCP): A Simple Introduction
This post introduces the Model Context Protocol (MCP) through a small, working example. It sets up a SQLite database, builds a Python server with FastMCP, and shows how to make its functions available as tools that an LLM like ChatGPT can call. This includes how to configure a connector and…
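To give a flavor of the kind of setup the post walks through (not its actual code), here is a minimal FastMCP server exposing a single SQLite-backed tool; it assumes the MCP Python SDK’s FastMCP class, and the demo.db path, notes table, and list_notes tool are illustrative placeholders.

```python
# Minimal sketch of an MCP server exposing a SQLite-backed tool.
# Paths, table, and tool names are hypothetical, not the post's schema.
import sqlite3

from mcp.server.fastmcp import FastMCP

DB_PATH = "demo.db"  # hypothetical database file

mcp = FastMCP("sqlite-demo")


def init_db() -> None:
    """Create a small example table so the tool has something to query."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )
        conn.execute("INSERT INTO notes (body) VALUES ('hello from sqlite')")


@mcp.tool()
def list_notes() -> list[str]:
    """Return all note bodies from the SQLite database."""
    with sqlite3.connect(DB_PATH) as conn:
        rows = conn.execute("SELECT body FROM notes").fetchall()
    return [row[0] for row in rows]


if __name__ == "__main__":
    init_db()
    mcp.run()  # serves the tool over stdio by default
```

Once a client (such as ChatGPT via a connector, as the post describes) is pointed at this server, the decorated function appears as a callable tool.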
-
Precision and Confusion in AI Language
Originally posted 7/3/2025, updated 9/30/2025. What started out as a glossary for a small user group containing a mix of technical and philosophical thinkers is turning into a deeper dive, not just into confusion in language, but into how that confusion leads to conflation of concepts. Looking back at many…
-
The Illusion of Intelligence: Large Language Models vs Human Cognition
This post looks at the common confusion between how large language models behave and how human minds work. It walks through what LLMs actually do, why they aren’t thinking or understanding, and why that distinction matters when building, using, or talking about them.
-
Censorship, Bias, and Security in Large Language Models: A Critical Look at DeepSeek-R1
DeepSeek-R1 is a technical achievement, but one built under heavy constraints. This post examines how censorship, bias, and Chinese security laws shape what the model can say, how it handles sensitive topics, and what that means for anyone using it. It’s a clear-eyed, practical critique based on real testing, focused…
-
The Problem with Anthropomorphic Language in AI Research
This post explores concerns over potential misunderstandings when metaphorical and anthropomorphic language in AI research reaches the general public. Specifically, terms that reasonably serve as convenient shorthand among experts may create problematic misunderstandings when made easily accessible to the public.
-
Confirmation Bias, Dunning-Kruger, and LLM Echo Chambers
A measured look at the Dunning–Kruger effect around LLMs: how prompts and chat history steer answers, how “hallucinations” arise, and why fresh sessions, neutral wording, and verification help more than clever tricks.
-
Revisiting User-Induced Bias with OpenAI’s gpt-oss-20b
Back in April, I posted “Prompted Patterns: How AI Can Mirror and Amplify False Beliefs” to demonstrate how LLMs can inadvertently become echo chambers of misinformation through user-induced bias and confirmation bias. We revisit that post with the help of OpenAI’s gpt-oss-20b.




