LLMs are not reasoning models

LLMs are not reasoning models, and I'm getting tired of people saying otherwise.

Please keep in mind that this is my opinion, which may differ from yours. If it does (or even if it doesn't), please be respectful and constructive in your answer. Thanks.

It's practically everywhere now (after the announcement of o3): the claim that A.G.I. is here or very, very close, and that LLMs or more sophisticated architectures are able to fully reason and plan.

I use LLMs almost every day to accelerate the way I work (software development), and I can tell you, at least from my experience, that we're still very far from reasoning models or A.G.I.

And it's really frustrating for me to hear or read about people using those tools and basically saying they can do anything, even though those same people have practically no experience in algorithms or coding. This frustration isn't me being jealous; it comes down to one simple fact:
Just because code works doesn't mean you should use it.

People are software engineers for a reason. It's not because they can write code, or because they can copy and paste some lines from Stack Overflow; it's because they understand the overall architecture of what they're building, why it's done this way and not another, and for what purpose.

If you ask an LLM to do something, yes, it might be able to do it, but it may also produce a function that is O(n²) when O(n) would do, or code that won't scale in the long run (see the sketch after this paragraph).
You might answer that you could ask the LLM for the best solution, or the best possible solutions, to this specific question, and my answer to you would be: how do you know which one to use if you don't even know what it means? Are you just going to blindly trust the LLM, hoping the solution is the right one for you? And if you do use that proposed solution, how do you expect to debug it and make it evolve over time? If your project evolves and you start hiring, how do you explain your project to your new collaborator if even you don't know how it works?
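
To make the complexity point concrete, here's a minimal sketch (a made-up example of my own, not actual LLM output): two functions that both "work" by detecting duplicates in a list, but only one of them scales.

```python
# Hypothetical illustration of the complexity trap: both functions are
# correct, and a test suite would pass both, but they scale very differently.

def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): a set membership check is amortized constant time.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

If you don't know what O(n²) means, both answers look equally good, and that's exactly the problem.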

I really think it's hubris to believe that software engineers are going to vanish from one day to the next. Not because their work can't be automated, but because by the time A.I. gets a normal person to the level of a software engineer, that same software engineer will be worth a whole team, or even a small company.

Yes, you could meticulously tell the LLM exactly what you want, with details everywhere, and ask it for something simple. But first, it may not work even if your prompt is dead perfect, and second, even if it does: congratulations, you just did the work of a software engineer. When you know what you're doing, it takes less time to write the code for a small task yourself than to explain entirely what you want. The purpose of an LLM is not to do the job of thinking (for now); it's to do the job of doing.

Also, I say those models are not reasoning at all because, from my day-to-day job, I can clearly see that they're not generalizing from their training data, and they're practically unable to reason on real-world tasks. I'm not talking about benchmarks here, whether private or public, abstract or not; I'm talking about the real software that I work on.
For instance, not so long ago, I tried to create a function that deals with a singly linked list using the best Claude model (Sonnet New). Linked lists are something a computer science student learns at the very beginning (this is really basic stuff), and yet, it couldn't do it. I just tried with other models, and it's the same (I couldn't try with o1, though).
I'm not bashing these models just to say that they can or can't do something. I'm using this very specific example because it shows just how dumb they can be, and how little they actually reason.
Linked lists involve some kind of physical understanding of what you're doing: it basically means that you'll probably have to use a pen and paper (or a tablet) to get to the solution, i.e., you have to apply what you know to that very specific situation, a.k.a. reasoning. In my case, I was doing a singly linked list with a database, using 3 tables of that database, which is totally different from just doing a singly linked list in C or Python, plus there are some subtleties here and there (see the simplified sketch after this paragraph).
Anyway, it couldn't do it. Not by a tiny bit, but by a huge margin: it fucked up quite a lot. That's because it's not reasoning; it's just regurgitating stuff it has seen here and there in its training data, that's all.
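
To give you an idea of the flavor of the problem, here's a heavily simplified, single-table sketch of a singly linked list living in a database. This is not my actual schema (that one used 3 tables and had extra subtleties I won't reproduce here); it only illustrates the general idea that the "pointers" become foreign keys you have to maintain yourself.

```python
# Simplified sketch: a singly linked list stored in one SQLite table.
# NOT my actual schema (that used 3 tables); it just shows why this is
# different from a linked list in plain C or Python: the next "pointer"
# is a foreign key, and you repoint it with UPDATE statements.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE node (
        id      INTEGER PRIMARY KEY,
        value   TEXT NOT NULL,
        next_id INTEGER REFERENCES node(id)  -- NULL means tail of the list
    )
""")

def append(conn, value):
    """Insert a node at the tail: create it, then repoint the old tail.
    Assumes a single list in the table, for simplicity."""
    cur = conn.execute("INSERT INTO node (value, next_id) VALUES (?, NULL)",
                       (value,))
    new_id = cur.lastrowid
    # The old tail is the only other node with no successor.
    row = conn.execute("SELECT id FROM node WHERE next_id IS NULL AND id != ?",
                       (new_id,)).fetchone()
    if row is not None:
        conn.execute("UPDATE node SET next_id = ? WHERE id = ?",
                     (new_id, row[0]))
    return new_id

def traverse(conn, head_id):
    """Walk the list by following next_id foreign keys."""
    current = head_id
    while current is not None:
        value, current = conn.execute(
            "SELECT value, next_id FROM node WHERE id = ?",
            (current,)).fetchone()
        yield value

head = append(conn, "a")
append(conn, "b")
append(conn, "c")
print(list(traverse(conn, head)))  # ['a', 'b', 'c']
```

Even in this toy version, you have to picture the list in your head (or on paper) to get the repointing right; in my real case, with 3 tables involved, the models got it wrong by a huge margin.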

I know people will say: "Well, it may not be working right now, but in x months or years it will." Like I said earlier, it doesn't matter that it works if neither it nor you knows why.
When you go to the doctor, they might tell you that you have a cold or the flu. If you could have told me the same thing, does that make you a doctor too, or almost qualified to be one? It's nonsense, because as long as you don't know why you're saying what you're saying, your answer is almost worthless.

I'm not writing this post to piss on LLMs or similar architectures; I'm writing it as a reminder. In the end, LLMs are just tools, and tools do not replace people, they enhance them.
You might say that I'm delusional for thinking this way, but I'm sorry to tell you: until proven otherwise, you've been, to some extent, lied to by corporations and the media into thinking that A.G.I. is nearby.
The fact is, it's not the case, and no one really knows when we'll have thinking machines. Until then, let's stop pretending that those tools are magical, that they can do anything, that they can replace entire teams of engineers, designers, or writers. Instead, we should start thinking deeply about how to incorporate them into our workflows to enhance our day-to-day lives.

The future that we've been promised is, well, the future: it's definitely not here yet, and it's going to require far more architectural change than just test-time compute (I hate that term) to get there.

I thank you for reading!

Happy new year!