LLMs Predict My Coffee
Recorded: March 22, 2026, 10 p.m.
Original:
Can LLMs predict the outcomes of physical experiments? Suppose I pour 8 oz (226.8 g) of boiling water into a ceramic coffee mug that weighs 1.25 lb (0.57 kg). The ambient air is still and 20 degrees Celsius. The cup starts at room temperature. Give me an equation for the temperature of the water in Celsius over time. The only free variable in the equation should be the number of seconds t since the water was poured. Focus on accuracy during the first 5 minutes.

Does that seem hard? I think it's hard. The relevant physical phenomena include at least: conduction of heat between the water, the mug, the air, and the table. And many details aren't specified in the prompt. Is the mug made of porcelain or stoneware? What is the mug's shape? What is the table made of? How humid is the air? How am I reducing the spatially varying water temperature to a single number?

(Technically, they gave equations as text. I'm plotting those equations.) Or, here's a zoomed-in view of the first five minutes:

[Plot: predicted water temperature, zoomed in on the first five minutes.]

The predictions were all OK, but none were great. Probably Claude 4.6 Opus did best, albeit after consuming $0.61 of tokens. (Insert joke about physical experiments / Department of Defense / money / coffee.)

Appendix: The equations

Here were the actual equations all of the models gave for T(t), the predicted temperature after t seconds. [Table of equations; rows: Kimi K2.5 (reasoning), Gemini 3.1 Pro, GPT 5.4, Claude 4.6 Opus (reasoning), Qwen3-235B, GLM-4.7 (reasoning).]

Interestingly, they were all based on one or two exponentially decaying terms. The way to read these is to think of exp(-t/b) as a function that starts out at one when t is zero, and gradually decreases. After b seconds, it has dropped to 1/e ≈ 0.368, and it continues dropping by factors of 0.368 every b seconds forever.
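To make that reading concrete, here is a minimal sketch of a generic two-term cooling curve of the form T(t) = T_air + a1*exp(-t/b1) + a2*exp(-t/b2). It is not any particular model's answer: the function name and all four amplitude and time-constant values are made-up placeholders, chosen only so the curve starts near boiling and decays toward the 20-degree ambient air from the prompt.

```python
import math

T_AIR = 20.0  # ambient air temperature in Celsius, from the prompt


def cooling_curve(t, a1=25.0, b1=90.0, a2=55.0, b2=2500.0):
    """Generic two-term exponential cooling curve (illustrative only).

    T(t) = T_AIR + a1*exp(-t/b1) + a2*exp(-t/b2)

    a1, b1: amplitude (C) and time constant (s) of the fast term,
            roughly the initial dump of heat into the cold mug.
    a2, b2: amplitude and time constant of the slow term,
            roughly the long tail of losses to the surrounding air.
    All four defaults are placeholder values, not fitted to anything.
    """
    return T_AIR + a1 * math.exp(-t / b1) + a2 * math.exp(-t / b2)


# After b1 = 90 s the fast term has shrunk to 1/e (about 36.8%) of its
# starting value, and it keeps shrinking by that factor every further 90 s.
for t in (0, 90, 180, 300):
    print(f"t = {t:>3d} s  ->  T = {cooling_curve(t):5.1f} C")
```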
Summarized:

This document, authored by Dynomight, presents an engaging exploration of the limitations of Large Language Models (LLMs) when applied to a seemingly straightforward physical experiment: the cooling of hot water in a ceramic mug. The core of the piece is a prediction task designed to test various LLMs, namely Kimi K2.5, Gemini 3.1 Pro, GPT 5.4, Claude 4.6 Opus, Qwen3-235B, and GLM-4.7, alongside some less successful attempts like DeepSeek and Grok. The author's methodology involved posing a specific question (predicting the temperature of the water in Celsius over time) and then comparing the models' outputs to data collected during a carefully controlled experiment.

The experiment itself recreated a common scenario: pouring boiling water into a ceramic mug. The conditions were precisely defined: 8 ounces (226.8 g) of boiling water, a 1.25 lb (0.57 kg) mug starting at room temperature, and still ambient air at 20 degrees Celsius. Measurements were taken at progressively longer intervals: every five seconds at first, then every 15, 30, and 60 seconds, and finally every 5 minutes. The author emphasizes the complexity of the underlying physics, highlighting processes like conduction, convection, evaporation, radiation, and surface tension, all of which contribute to the difficulty of predicting the system's behavior. The author also notes the "taste" required in answering the prompt, acknowledging that a single definitive "correct" answer is impractical given the sheer number of variables and unstated assumptions.

The equations generated by the LLMs reveal a consistent, albeit simplified, approach. All the models used exponential decay terms to represent the cooling process, with time constants (the "b" in exp(-t/b)) controlling the rate of decay. This suggests a foundational, if not especially nuanced, grasp of heat-transfer dynamics. The author identifies a "fast rate" and a "slow rate" within these models, likely reflecting the initial rapid heat loss to the cold mug and the subsequent, more gradual cooling as the water and mug approach thermal equilibrium with the surrounding air. The cost of generating these equations also offers insight into the computational resources demanded by the different models: Claude 4.6 Opus, which produced the most complex equation and consumed the most tokens, cost $0.61, prompting a wry observation about the expense of even relatively simple physical predictions.

The author evaluates the LLMs' performance critically. While the models provided reasonable predictions, especially Claude 4.6 Opus, they systematically underestimated the initial cooling rate and overestimated the eventual cooling rate: the measured temperature fell faster than predicted in the early stages and more slowly later on. This mismatch underscores the limits of purely mathematical models confronted with a system of so many interacting variables. Ultimately, the document serves as a valuable demonstration of the current state of LLM capabilities: they can mimic domain expertise, but they often struggle with the subtleties of physical reality.
It's a hands-on illustration that a vast amount of mathematical knowledge, seemingly readily available to LLMs, doesn't equate to practical understanding or the ability to accurately predict complex phenomena. The author's appendix, which lists the generated equations and explains how to read them, reflects a desire to educate and engage the reader, highlighting the underlying principles at play and furthering the reader's understanding of the system's complexity.
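As a rough illustration of the fast/slow split described above, the sketch below fits a generic two-term exponential model to timestamped temperature readings with scipy.optimize.curve_fit. The measurements here are invented placeholders (the post's actual data is not reproduced in this document), and the parameter names and initial guesses are likewise assumptions, not the author's.

```python
import numpy as np
from scipy.optimize import curve_fit

T_AIR = 20.0  # ambient air temperature in Celsius, from the prompt

# Hypothetical measurements: seconds since pouring and water temperature in C.
# These are made-up placeholder values, not the data from the post.
t_data = np.array([0, 30, 60, 120, 180, 300, 600, 1200, 1800], dtype=float)
T_data = np.array([96.0, 87.8, 82.9, 77.6, 74.7, 70.8, 63.0, 50.8, 42.1])


def two_exp(t, a1, b1, a2, b2):
    """Two-term exponential cooling: a fast mug-warming term plus a slow air-loss term."""
    return T_AIR + a1 * np.exp(-t / b1) + a2 * np.exp(-t / b2)


# Initial guesses: a few degrees lost quickly, the rest lost over tens of minutes.
p0 = [15.0, 60.0, 60.0, 2000.0]
(a1, b1, a2, b2), _ = curve_fit(two_exp, t_data, T_data, p0=p0, maxfev=10_000)

print(f"fast term: {a1:.1f} C, time constant {b1:.0f} s")
print(f"slow term: {a2:.1f} C, time constant {b2:.0f} s")
```

With data shaped like the post's, the fast time constant would capture the first minute or two of heat flowing into the cold mug, and the slow one the long tail toward room temperature.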