Discussion about this post

David Howell

Sure, everything in this article is true, but the whole premise is wrong. Throwing giant reams of data and numbers at an LLM and expecting sense from its next-token predictions is madness.

They are great at writing code, however, and that code can be used to do the analysis. Most of the AI services we have now, like Cursor, Claude Code, etc., have tool usage or MCP access, so they can do more than predict text. With access to extra tools and chain-of-thought reasoning, there is emergent behaviour beyond the "stochastic parrot" that looks and smells a lot like understanding.
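(To make that concrete, a tool-use loop is just ordinary code. Here's a rough sketch in plain Python with a stubbed model call and a made-up run_sql tool; this isn't any vendor's actual API, just the shape of the thing.)

```python
# Minimal sketch of a tool-use loop. call_model() is a stub standing in for an
# LLM API call, and run_sql is a hypothetical tool; real services wire these
# up to MCP servers behind the scenes.
import json

def run_sql(query: str) -> str:
    # Placeholder: a real tool would run the query against a database.
    return json.dumps([{"region": "EU", "revenue": 1200}])

TOOLS = {"run_sql": run_sql}

def call_model(messages):
    # Pretend the model decided it needs the SQL tool and returned a call spec.
    return {"tool": "run_sql",
            "args": {"query": "SELECT region, SUM(revenue) FROM sales GROUP BY region"}}

def agent_step(messages):
    reply = call_model(messages)
    if "tool" in reply:
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return messages

print(agent_step([{"role": "user", "content": "Revenue by region?"}]))
```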

With an MCP to Google Sheets, the LLM can write the cell formula or the Apps Script to prepare the data. With an MCP to a database, it can write the SQL; it can write the Python and pandas analysis, it can write the Jupyter notebook, or create the Streamlit or Observable or other interactive data-viz story for you. It won't ever do the data analysis directly, but all of this coordination is going to end up behind the scenes, so to the regular user it's going to look exactly like the LLM is doing it. Never mind that it's an API with some decision systems, routing to RAG, and a bunch of tools; the end user doesn't care.
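(The kind of thing it writes instead of doing the arithmetic itself looks roughly like this; the sales.csv file and column names are invented for the example.)

```python
# Sketch of the analysis code an LLM might generate rather than "doing the
# math" in its head. Input file and columns are hypothetical.
import pandas as pd

df = pd.read_csv("sales.csv")  # hypothetical input file
summary = (
    df.groupby("region", as_index=False)["revenue"]
      .sum()
      .sort_values("revenue", ascending=False)
)
summary["share"] = summary["revenue"] / summary["revenue"].sum()
print(summary.to_string(index=False))
```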

It's the same with how amazing the latest OpenAI models are with images. It's not because the transformer or diffusion model suddenly got so much better; it's because it has access to Photoshop-like tools: crop, frame, remove background, scale, rotate, colour, skew. These are all just regular software tools that, when orchestrated by the AI and strung together, are effectively one system. The fact that you know there's a transformer architecture and text prediction going on is beside the point. It's now one small cog in the machine. A pretty world-changing cog, but just one, and one of very many.
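(Strung together in code, those steps are as mundane as this Pillow sketch. Whatever OpenAI actually runs internally isn't public; the point is just that each operation is ordinary software, and the orchestration is what makes it feel like one system.)

```python
# Ordinary image edits chained in sequence; file name and parameters are
# made up for illustration.
from PIL import Image

img = Image.open("photo.jpg")         # hypothetical input
img = img.crop((100, 100, 900, 700))  # crop to a region of interest
img = img.rotate(3, expand=True)      # small rotation to straighten
img = img.resize((800, 600))          # scale to target dimensions
img = img.convert("L")                # colour adjustment (greyscale)
img.save("photo_edited.jpg")
```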

