
marcosomma

Intelligence, Farming, and Why AI Is Still Mostly in Its Tool Phase

People usually talk about intelligence as if it starts with language, tools, or raw brainpower. I do not think that is enough. In the bigger evolutionary picture, intelligence starts when a living thing stops just reacting to whatever is in front of its face and begins carrying a rough model of the world in its head. A kind of inner sketch. Something that helps it remember, predict, adjust, and act not only for now, but for later.

A lot of animals do this. They are not stupid. They solve problems, learn patterns, adapt, trick each other, and survive in ways that are honestly impressive. So intelligence is not some magical human-only plugin installed by the universe. What is rare is not intelligence itself. What is rare is the moment when intelligence stops being useful only for survival and starts becoming a world-editing machine.

That is where humans took a weird turn.

The real jump was not just tools. A stick is great. A sharp stone is great. Fire is very great, especially if you are cold and trying not to die. But none of those alone explain the massive leap. The deeper change happened when humans got trapped, in the best possible way, inside long loops of cause and effect. Not just act now, eat now, survive now. But act now, wait, remember, adjust, come back, check again, fix the mess, and maybe eat in three months if you did not completely ruin the plan.

That is why agriculture matters so much.

Farming is not just “food but slower.” It is a completely different mental game. Hunting can involve planning, yes, but farming basically forces you to become the project manager of a very annoying and unpredictable system. You put seeds into the ground and then spend months negotiating with dirt, water, weather, insects, time, and your own bad decisions.

You are no longer finding food. You are trying to convince the future to cooperate.

And the future is rude.

Farming forces you to track things you cannot immediately see. You have to remember what you planted, where you planted it, when you planted it, whether it got enough water, whether the season is changing, whether pests are coming, whether the river is helping or preparing to ruin your entire week. This is no longer simple reaction. This is delayed feedback. This is long-horizon thinking. This is your brain being dragged into a repeated loop of prediction, intervention, failure, correction, and trying again.

That matters.

Because once cognition enters those kinds of loops, it changes character. The mind is no longer just spotting opportunities in nature like some clever scavenger. It starts designing future conditions. It starts shaping the environment so reality later matches a plan that only existed in imagination. That is a much bigger deal than “human use tool.”

So I would say agriculture did not create intelligence. It turned intelligence into infrastructure.

That also helps explain why many animals are clearly intelligent and yet never end up building cities, irrigation systems, tax forms, or extremely depressing office software. Intelligence alone is not enough. To get civilization, at least three things need to show up together.

First, you need loops that reward long-term thinking.

Second, you need a way to pass useful knowledge along, so each generation does not have to restart from “what if rock but pointy?”

Third, you need the ability to change the environment in ways that keep paying off over time.

Without those three, intelligence stays local. It helps you survive. It helps you stay a very competent crow, octopus, wolf, or ape. But it does not become civilization. Once those three things combine, intelligence escapes the skull. It gets baked into tools, habits, systems, stories, roads, farms, laws, and all the other strange things humans build when they have too much memory and not enough chill.

And this is where AI becomes interesting.

Because I think we make the same mistake with AI that people make when talking about human intelligence. We see one part of the process and declare victory too early.

Current AI systems are impressive, yes. Very impressive. Sometimes absurdly impressive. They predict well, generate well, imitate well, summarize well, and occasionally hallucinate with the confidence of a man explaining barbecue technique after reading half a Wikipedia page. But that does not automatically make them intelligence in the full sense.

What we mostly have today are intelligence tools.

That is different.

A model can predict the next token, classify an image, rank options, generate code, or infer patterns from huge amounts of data. Great. But prediction alone is not the same thing as durable intelligence. That is like saying someone who can walk ten kilometers can obviously run ten kilometers. No. Walking helps. But running requires different coordination, training, adaptation, and stress handling. Same legs. Different system.

AI right now is mostly at the “good legs” stage.

Very good legs, to be fair.

And yes, I know people love to point at one technical component and treat it like the sacred spark. ReLU, attention, scaling laws, whatever the buzzword of the season is. Those things matter. They are useful engineering breakthroughs. But no single ingredient is “the birth of intelligence.” That is like claiming the reason civilization exists is because someone once invented a better shovel. Useful, yes. Complete explanation, no.

The real question is not whether a model can predict well. The real question is whether a system can enter long loops of memory, planning, action, feedback, correction, and transfer, then keep improving in a stable way over time.

That is where the AGI discussion usually gets blurry.

If we define AGI as “models with memory, planning, and tool use,” then congratulations, we already have that. Agentic systems exist. Tool-using systems exist. Multi-step planners exist. Memory layers exist. The problem is that this definition is so loose it is almost useless. It is like saying a bicycle and a spaceship are both transportation, so close enough.

No.

We need a stricter threshold.

The real jump would be something more like this: a system that can keep relevant state across long periods, learn from past mistakes in a way that becomes reusable skill, handle long multi-step goals without falling apart every time the environment changes, transfer what it learned from one task to another related task, and do all this reliably enough that it feels less like workflow glue and more like stable competence.
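
To make that threshold a bit more concrete, here is a minimal Python sketch. It is not a real architecture, and every name in it (DurableAgent, act, learn, transfer) is invented for illustration; it only names the capabilities the threshold asks for.

```python
from dataclasses import dataclass, field


@dataclass
class DurableAgent:
    """Hypothetical sketch: names the capabilities, does not implement them."""
    state: dict = field(default_factory=dict)   # relevant state kept across long periods
    skills: dict = field(default_factory=dict)  # past mistakes turned into reusable skill

    def act(self, goal: str, observation: str) -> str:
        # Long multi-step goals: decisions consult persistent state, not just the prompt.
        self.state[goal] = observation
        plan = self.skills.get(goal, "default plan")
        return f"executing '{plan}' for '{goal}'"

    def learn(self, goal: str, outcome: str) -> None:
        # A failure becomes a reusable skill instead of a forgotten log line.
        if "failed" in outcome:
            self.skills[goal] = f"avoid what caused: {outcome}"

    def transfer(self, source_goal: str, target_goal: str) -> None:
        # Lessons from one task carry over to a related task.
        if source_goal in self.skills:
            self.skills[target_goal] = self.skills[source_goal]
```

Nothing in that sketch is hard to type. The hard part is making each method actually work, reliably, over months of interaction.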

That, to me, is the actual missing layer.

Not prettier outputs.
Not better demos.
Not one more benchmark where the model answers history questions slightly faster than last quarter.

What is missing is durable adaptive cognition.

That is the point where AI would stop being mostly a smart component and start feeling more like a real cognitive system.

So the distinction I would make is simple.

A model is a predictor.

An agentic system is a predictor plus some scaffolding, like tools, memory, or planning loops.

A higher intelligence system would be something that can keep learning across time, preserve useful structure, adapt without being rebuilt every five minutes, and shape its own future performance through repeated interaction with the world.

That last part matters most. Human intelligence became historically dominant because it did not stay inside the head. It got externalized into tools, memory systems, culture, infrastructure, and environmental change. If AI ever makes a similar leap, it will not be because one model gets even bigger and starts speaking in more confident paragraphs. It will be because predictive systems get embedded in persistent loops that let them remember, act, revise, transfer, and compound.
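
To show the shape of that difference in code rather than metaphor, here is a deliberately toy contrast. The `predict` function is a placeholder standing in for any model call, and the loop, memory, and revision steps are invented names for illustration, not an existing framework.

```python
def predict(prompt: str) -> str:
    """Placeholder for any model call that turns context into a suggested action."""
    return f"try: {prompt[:60]}"


def single_shot(task: str) -> str:
    # The tool phase: one prediction, no memory, nothing compounds.
    return predict(task)


def persistent_loop(task: str, steps: int = 3) -> list[str]:
    # The hypothetical next phase: the same predictor, embedded in a loop that
    # remembers outcomes, revises its plan, and carries lessons into the next step.
    memory: list[str] = []
    outcomes: list[str] = []
    for _ in range(steps):
        context = f"{task} | lessons so far: {memory}"
        action = predict(context)           # act on the current plan
        outcome = f"result of ({action})"   # observe (stubbed here)
        memory.append(outcome)              # remember, so the next pass can revise
        outcomes.append(outcome)
    return outcomes
```

The predictor is identical in both functions. What changes is the loop around it, which is the whole point.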

So my view is this.

Today’s AI is not yet the machine equivalent of civilization-level intelligence. It is closer to the tool phase. Very powerful tools, yes. Sometimes shocking tools. Sometimes tools that write code better than half the internet and worse than a tired senior engineer on a Tuesday. But still tools.

The next real jump will not come from prediction alone. It will come from systems that can live inside long feedback loops and get better because of them.

Basically, farming for machines. And hopefully with fewer locusts.

Top comments (17)

leob

Well written, incredibly interesting - but if AI evolves in a "human like" fashion as you explained, will it then not become incredibly risky - not to say dangerous?

I understand that there's a big industry push towards "AGI" (and what you describe is the best definition of that somewhat nebulous term that I've come across) - but I just wonder:

Should we actually really want this?

marcosomma

@leob I think it all depends on what we build into it.

The drive to survive is an animal trait. It comes from biology, scarcity, reproduction, competition. A machine does not automatically have any of that. It does not wake up one day and decide it wants to stay alive unless we design goals, incentives, or control loops that push it in that direction.

So the real risk is not “intelligence” by itself. The real risk is which objectives, constraints, and reward structures we attach to it.

If we build systems in a way that makes self-preservation, autonomy, or unchecked goal pursuit instrumentally useful, then yes, that could become dangerous. But that would be a design failure, not some mystical property of intelligence suddenly appearing on its own.

If, instead, we build AI around usefulness, bounded behavior, and protection of human interests, then the picture is very different. Intelligence does not automatically imply hostility. A highly capable system can still be deeply constrained by the goals and architecture we give it.

That said, I also think people jump too quickly into science fiction here. Current AI is still very far from having anything like a biological survival instinct. It does not “want” to survive in the human sense unless we explicitly create systems where persistence, self-protection, or goal continuation become part of the optimization loop.

So yes, we should be careful. But the danger is less “AI becomes alive and rebels,” and more “humans build powerful systems with badly designed incentives.”

leob

Yeah but still, the "farming" example got me thinking - if the point of AGI would be (among others) that it can dynamically improve itself through continuous learning (AI developing better versions of itself, LLMs updating themselves), or through showing 'initiative' or 'autonomy' - then I wonder if we don't reach a point where any guardrails that we think we've built could be circumvented by "the AI" ...

But I agree that for the foreseeable future the risk is more in how we use/abuse current AI technology, than the theoretical risks of "AGI" or whatever you want to call it :-)

(does anyone even have a good definition of AGI ?)

marcosomma

@leob I think we project too much of our own biology onto AI.

Humans assume danger because we are shaped by survival instincts. We compete, protect ourselves, fear extinction, and naturally imagine any sufficiently capable system as another being with the same drives. But AI is not born from hunger, reproduction, or fear. A knife does not decide to kill. It is a tool. What matters is how it is designed and how it is used.

The same applies to how we imagine the future of AI. We immediately picture AI X versus AI Y, competing for dominance, as if conflict were the only possible path. But why should that be the default assumption? Why not cooperation? Why do we keep projecting our own flawed human instincts about power, control, and scarcity onto a technology that could just as well evolve through coordination and specialization instead of competition?

Yes, the market pushes in the other direction right now. Companies compete. Products compete. Narratives compete. But AI itself is still a technological resource. Expensive today, yes, but likely cheaper and more distributed over time. I can easily imagine a future built less around one giant monolithic model and more around orchestration: many smaller specialized models, multimodal systems, local models running across devices, all coordinated by higher-level layers that delegate tasks based on expertise.

That is also why I do not only see AI as a threat. In biology, evolution often progresses through cooperation and specialization. Cells did not win by all doing the same thing. They became more powerful by coordinating. I think AI could play a similar role for humanity: not as a rival species, but as a shared cognitive layer that helps us preserve knowledge, coordinate better, and act beyond the limits of individual minds.

That is why, to me, the discussion is much bigger than “what if AI escapes its guardrails?”

So yes, guardrails matter. But the deeper question is whether we are mature enough to use AI as a coordination tool, not just as a power tool.

leob

Inspiring thoughts, thanks!

Kalpaka

The farming metaphor has a deeper layer the article doesn't fully explore: the farm changes too. Terraced hillsides, irrigation channels, selected seed varieties — over decades, the land itself becomes a record of accumulated decisions. The intelligence isn't just in the farmer's head. It's in the shaped environment.

Current agent systems learn forward but don't reshape what they run on. The architecture stays fixed while the knowledge grows. That's still hunting with a better memory, not farming.

The real farming-phase equivalent would be systems where the feedback loop modifies the infrastructure itself. Where running the system long enough changes what the system is, not just what it knows.

marcosomma

Exactly! I avoided this because it goes very deep. But early humans also faced primitive architectural challenges, and we have proof that they shaped the land to fit their needs. That is the deeper part of farming: humans learned not only to adapt to the environment but to change it to fit their needs. The same should happen with AI. AI will need to invent technical solutions that do not yet exist to meet many of the upcoming challenges. This does not mean AI will act selfishly. What we need to ensure is the objective that AI focuses on. That is where AI can and will contribute to human evolution.

Kalpaka

The environment-shaping point is worth pulling apart. There's a difference between a tool that optimizes within constraints and one that starts modifying the constraints themselves. The first is engineering. The second is closer to what farming actually did — not just growing food but reshaping the landscape so food could grow differently.

The objective question is where it gets hard. Who sets the objective when the system is capable enough to notice the objective isn't working and propose a different one? That's not selfishness. It's feedback. The tricky part is building structures that let that feedback surface without treating it as a threat.

Elmar Chavez

The real question is not whether a model can predict well. The real question is whether a system can enter long loops of memory, planning, action, feedback, correction, and transfer, then keep improving in a stable way over time.

If they can pull this off, then that will be real AI. But I doubt this will be done in my lifetime. Making this a reality is one huge leap for mankind. Basically creating an immortal human brain.

marcosomma

@codingwithjiro I agree with the spirit of this, but I would frame it a bit differently.

I do not think the goal is to create an immortal human brain. That comparison is powerful, but it can also mislead us, because it keeps pushing AI back into a human-shaped frame.

What I think is missing is not immortality, but durable adaptive cognition: a system that can preserve useful state, learn across long horizons, transfer lessons from one context to another, and keep functioning without being rebuilt every time the environment changes.

That would already be a massive leap.

And yes, I agree it is much harder than most current AI discourse admits. Right now we are still mostly in the “very impressive tools” phase. Useful, sometimes amazing, but still far from a stable cognitive system that compounds experience over time.

Ocr Assosam

really insightful!

Aaron Rose

The next real jump will not come from prediction alone. It will come from systems that can live inside long feedback loops and get better because of them.

nice one, marcosomma! thanks đź’Ż

Travis Drake

I guess my question is: from an impact level, will there be a difference? I don't believe we are on the road to AGI, but having been using this stuff extensively, I am not sure any part of society is set up for what is coming.

klement Gunndu

The farming-as-project-management framing is neat. It maps directly to agent orchestration — most agent systems today are still in the "hunting" phase (single-shot tool calls), not the "farming" phase (managing long feedback loops across time).

Nomathemba Ndlovu

Interesting how AI is changing farming. I am interested in learning more about it.

Doug Wilson

Interesting analysis and commentary, as usual. Thank you for sharing your thoughts!
