When change becomes revolution, the evolution of AI Agent x Crypto


Reprinted from panewslab

01/20/2025

A work of art is never completed, only abandoned.

Everyone is talking about AI Agents, but they are not all talking about the same thing. That gap is what separates the AI Agent we care about from the one the public imagines, and from the one AI practitioners mean.

A long time ago I wrote that Crypto is an illusion of AI. From then until now, the combination of Crypto and AI has been an unrequited love: AI practitioners rarely mention Web3 or blockchain, while Crypto practitioners are eager to embrace AI. But I am passionate about AI, and after seeing the marvel of tokenized AI Agent frameworks, I wonder whether we can really bring AI practitioners into our world.

AI is Crypto's agent. That is the best way to read this round of AI enthusiasm from a crypto perspective. Crypto's enthusiasm for AI differs from other industries': we especially hope to fold the issuance and operation of financial assets into it.

Agent evolution: an origin shaped by technical marketing

Tracing its roots, the AI Agent has at least three sources. OpenAI lists it as an important step toward AGI (Artificial General Intelligence), which turned the term into a buzzword beyond the purely technical sphere. In essence, however, the Agent is not a new concept, and even with AI layered on top it can hardly be called a revolutionary technological trend.

The first is the AI Agent as OpenAI sees it. Similar to L3 in the autonomous-driving classification, an AI Agent can be regarded as having certain advanced assisted-driving capabilities, but it cannot yet completely replace a human.

![When change becomes revolution, the evolution of AI Agent x Crypto](https://cdn-img.panewslab.com/yijian/2025/1/20/images/e86331a966219f2a791a78f23504bbd3.png)

Image description: the AGI stages in OpenAI's planning. Image source: https://www.bloomberg.com/

Second, as the name suggests, an AI Agent is an Agent with AI behind it. Agent mechanisms and models are nothing unusual in computing. Under OpenAI's planning, the Agent follows the dialogue form (ChatGPT) and the reasoning form (various Bots); the L3 stage is characterized by "carrying out certain behaviors autonomously", or, in the definition of Harrison Chase, founder of LangChain: "An AI agent is a system that uses an LLM to decide the control flow of an application."

Herein lies the subtlety. Before LLMs emerged, Agents mainly executed manually defined automated processes. To give just one example, when programmers design crawlers, they set a User-Agent header to imitate a real user, details such as the browser version and operating system. If an AI Agent is used to imitate human behavior in still finer detail, an AI Agent crawler framework appears, making the crawler "more human-like."
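To make the User-Agent point concrete, here is a minimal TypeScript sketch of a crawler-style request that sets the header; the URL and the header string are illustrative assumptions, not anything from the original article.

```typescript
// A minimal sketch (assumed, not from the article): a crawler request that
// imitates a real browser by setting the User-Agent header.
async function fetchLikeABrowser(url: string): Promise<string> {
  const response = await fetch(url, {
    headers: {
      // Pretend to be desktop Chrome on Windows; the string is illustrative.
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
        "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    },
  });
  return response.text();
}

// Usage: fetchLikeABrowser("https://example.com").then(console.log);
```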

In this kind of change, the AI Agent has to be combined with existing scenarios; there is almost no completely original ground. Even code completion and generation tools such as Cursor and GitHub Copilot build on prior ideas like LSP (Language Server Protocol). Examples of this lineage abound:

  • Apple: AppleScript (Script Editor) -- Alfred -- Siri -- Shortcuts -- Apple Intelligence
  • Terminal: Terminal (macOS) / PowerShell (Windows) -- iTerm2 -- Warp (AI-native)
  • Human-computer interaction: Web 1.0 (CLI, TCP/IP, Netscape browser) -- Web 2.0 (GUI, REST API, search engines, Google, super apps) -- Web 3.0 (AI Agent + dApp?)

To explain briefly: in the history of human-computer interaction, it was the combination of the GUI and the browser that truly let the public use computers without barriers, represented by Windows + IE, while the API became the standard for data abstraction and transmission behind the Internet. By the Web 2.0 era the browser had entered the Chrome era, and the shift to mobile changed how people use the Internet; apps from super platforms such as WeChat and Meta now cover every aspect of daily life.

Third, the concept of intent in the Crypto field is the precursor to the AI Agent boom, though this holds only within Crypto. From Bitcoin's limited scripts to Ethereum's smart contracts, the Agent concept itself has been widely used; cross-chain bridges, chain abstraction, and the move from EOA to AA wallets are all natural extensions of that line of thinking. So when the AI Agent "invaded" Crypto, it was no surprise that it headed straight for DeFi scenarios.

This is where the concept of the AI Agent gets muddled. In the Crypto context, what we actually want is an Agent that "automatically manages finances and automatically creates new memes." Under OpenAI's definition, however, such a risky scenario would only be truly realizable at L4/L5, while what the public is playing with are features like automatic code generation, one-click AI summaries, and ghostwriting. The two sides are not talking on the same level.

Now that we understand what we really want, let us focus on the organizational logic of the AI Agent and leave the technical details for later. After all, the point of the Agent concept is to remove technology as an obstacle to mass adoption, just as the browser was the Midas touch of the personal PC industry. So we will focus on two points: looking at the AI Agent from the perspective of human-computer interaction, and the difference and connection between the AI Agent and the LLM, which leads to the third part: what the combination of Crypto and the AI Agent ultimately leaves behind.

let AI_Agent = LLM+API;

Before chat-based human-computer interaction such as ChatGPT, humans interacted with computers mainly through the GUI (graphical user interface) and the CLI (command-line interface). GUI thinking kept spawning new concrete forms such as browsers and apps, while the combination of the CLI and the shell changed very little.


But this is only the "front-end" surface of human-computer interaction. As the Internet developed, the growth in the volume and variety of data produced more "back-end" interaction between data and data, and between apps. The two depend on each other; even a simple act of browsing a web page actually requires their collaboration.

If the interaction between people and browsers or apps is the user-facing portal, then the links and jumps between APIs are what keep the Internet actually running. This, in fact, is already part of the Agent idea: ordinary users can achieve their goals without ever understanding terms like command line or API.

The same is true for the LLM. Now users can go a step further and no longer even need to search. The whole process can be described in the following steps (a minimal sketch follows the list):

  1. The user opens a chat window;
  2. Users use natural language, that is, text or speech, to describe their needs;
  3. LLM parses it into streamlined operating steps;
  4. LLM returns its results to the user.
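As a toy illustration of these four steps, here is a hedged TypeScript sketch; `callLLM` is a placeholder standing in for any chat-completion API, not a real SDK call.

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// Placeholder model call; a real implementation would hit a chat-completion API.
async function callLLM(messages: ChatMessage[]): Promise<string> {
  const last = messages[messages.length - 1];
  return `(model reply to: ${last.content})`;
}

// Steps 1-4 from the list above, expressed as one function.
async function handleUserRequest(userInput: string): Promise<string> {
  // Steps 1-2: the user opens a chat window and states a need in natural language.
  const messages: ChatMessage[] = [{ role: "user", content: userInput }];
  // Step 3: the LLM parses the request and works out an answer.
  const answer = await callLLM(messages);
  // Step 4: the result goes back to the user.
  return answer;
}
```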

It is easy to see that the party most challenged in this process is Google: users no longer need to open a search engine, only one of the many GPT-style dialogue windows, and the traffic entry point is quietly shifting. For exactly this reason, some people believe that LLMs will end the era of search engines.

So what role does AI Agent play in this?

In a word, AI Agent is a specialization of LLM.

The current LLM is not AGI; that is, it is not yet the ideal L5 stage in OpenAI's plan, and its capabilities are quite limited. For example, feeding in too much information easily triggers hallucinations. One important reason lies in the feedback mechanism: if you repeatedly tell GPT that 1+1=3, then there is some probability that, when asked 1+1+1=? in the next interaction, it will answer 4.

That is because GPT's feedback at this point comes entirely from the individual user. If the model is not connected to the Internet, your input may well change how it behaves, leaving a dumbed-down GPT that only knows 1+1=3. If the model is allowed online, however, its feedback becomes far more diverse; after all, the vast majority of people on the Internet believe that 1+1=2.

Raising the difficulty further: if we must use an LLM locally, how can we avoid such problems?

A simple, crude approach is to run two LLMs at the same time and require them to cross-check each other whenever a question is answered, reducing the probability of error. If that is not enough, another approach is to have two users handle a single process: one responsible for asking, the other for refining the questions so the language is more standard and rational.
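The cross-checking idea can be sketched as follows; both models are passed in as plain functions, so no particular vendor API is assumed, and the "OK" protocol is an invented convention.

```typescript
// A hedged sketch of the cross-checking idea described above.
type AskModel = (question: string) => Promise<string>;

async function crossCheckedAnswer(
  question: string,
  modelA: AskModel,
  modelB: AskModel
): Promise<string> {
  const answerA = await modelA(question);

  // Ask the second model to verify the first model's answer.
  const verdict = await modelB(
    `Question: ${question}\nProposed answer: ${answerA}\n` +
      `Reply "OK" if the answer is correct, otherwise give a corrected answer.`
  );

  // If the second model objects, prefer its correction.
  return verdict.trim() === "OK" ? answerA : verdict;
}
```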

Of course, even the Internet cannot eliminate problems entirely. For example, if the LLM retrieves answers from a troll forum, the result may be even worse. Simply excluding such data shrinks the pool of usable data, so the existing data can instead be split and recombined, and new data can even be generated from old data, to make answers more reliable. This, in fact, is the plain-language way to understand RAG (Retrieval-Augmented Generation).
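Reduced to its skeleton, the retrieve-then-generate loop looks roughly like the sketch below; the keyword-overlap scoring is a deliberately naive stand-in for real vector retrieval, and `generate` is a placeholder for any model call.

```typescript
// Retrieve-then-generate, reduced to its skeleton.
interface Doc {
  id: string;
  text: string;
}

function retrieve(query: string, corpus: Doc[], topK = 3): Doc[] {
  // Naive keyword-overlap score instead of a real vector search.
  const terms = query.toLowerCase().split(/\s+/);
  return corpus
    .map((doc) => ({
      doc,
      score: terms.filter((t) => doc.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((scored) => scored.doc);
}

async function answerWithRAG(
  query: string,
  corpus: Doc[],
  generate: (prompt: string) => Promise<string>
): Promise<string> {
  // Ground the model's answer in the retrieved context.
  const context = retrieve(query, corpus).map((d) => d.text).join("\n");
  return generate(`Context:\n${context}\n\nQuestion: ${query}`);
}
```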

Humans and machines need to understand each other. When we let multiple LLMs understand and collaborate with one another, we are essentially touching the operating mode of the AI Agent: a person's agent calls other resources on their behalf, which can even include large models and other Agents.

From this we can grasp the connection between the LLM and the AI Agent. The LLM is a body of knowledge that humans converse with through a dialogue window; in practice, we find that certain specific task flows can be packaged into small programs, bots, and instruction sets, and these are what we define as Agents.

The AI Agent is still part of the LLM picture, but the two cannot be treated as the same thing. The AI Agent is invoked on top of the LLM, with particular emphasis on collaboration with external programs, the LLM itself, and other Agents; hence the feeling that AI Agent = LLM + API.
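To put that "LLM + API" feeling into code, here is a toy TypeScript agent that holds an LLM handle plus a set of callable tools (APIs) and routes between them; the class and every name in it are illustrative assumptions, not any real framework.

```typescript
// A toy rendering of "AI Agent = LLM + API".
type LLM = (prompt: string) => Promise<string>;
type Tool = (args: string) => Promise<string>;

class MiniAgent {
  constructor(private llm: LLM, private tools: Record<string, Tool>) {}

  async run(task: string): Promise<string> {
    // Ask the LLM which tool, if any, the task needs.
    const choice = await this.llm(
      `Task: ${task}\nAvailable tools: ${Object.keys(this.tools).join(", ")}\n` +
        `Answer with exactly one tool name, or "none".`
    );
    const tool = this.tools[choice.trim()];

    // Either call the chosen API or answer directly from the LLM.
    return tool ? tool(task) : this.llm(task);
  }
}
```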

Then, within the LLM workflow, instructions for the AI Agent can be added. Let's take calling X's API data as an example (a sketch follows the list):

  1. A human user opens a chat window;
  2. The user describes the need in natural language, that is, text or speech;
  3. The LLM parses it into an API-calling AI Agent task and hands conversational control to the Agent;
  4. The AI Agent asks the user for their X account and API credentials, then communicates with X online based on the user's description;
  5. The AI Agent returns the final results to the user.
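A hedged TypeScript sketch of those five steps is below; the endpoint `api.example-x.com`, the query parameter, and the credential handling are invented for illustration and are not the real X API surface.

```typescript
// The five steps above as a single function. All external details are placeholders.
type LLM = (prompt: string) => Promise<string>;

async function runXAgent(
  userRequest: string,            // Steps 1-2: the user's natural-language request
  llm: LLM,
  credentials: { apiKey: string } // Step 4: supplied by the user
): Promise<string> {
  // Step 3: the LLM turns the request into an API-style task.
  const query = await llm(
    `Turn this request into a short search query for X: ${userRequest}`
  );

  // Step 4 (continued): the agent talks to X on the user's behalf.
  // Hypothetical endpoint, used purely for illustration.
  const response = await fetch(
    `https://api.example-x.com/search?q=${encodeURIComponent(query)}`,
    { headers: { Authorization: `Bearer ${credentials.apiKey}` } }
  );
  const data = await response.text();

  // Step 5: summarize and return the final result to the user.
  return llm(`Summarize this for the user: ${data}`);
}
```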

Do you still remember the evolution of human-computer interaction? The browsers and APIs of Web 1.0 and Web 2.0 will still exist, but users can ignore them entirely and interact only with the AI Agent. API calls and the rest can be handled conversationally, and the API services can be of any type, including local data, network information, and external app data, as long as the other party exposes an interface and the user has permission to use it.

(Figure: a complete AI Agent usage flow)

A complete AI Agent usage flow is shown in the figure above. The LLM can be regarded as a part separate from the AI Agent, or the two can be seen as sub-links of a single process; however it is divided, it serves the user's needs.

Seen as a human-computer interaction process, it can even feel like the user talking to themselves: you only need to express your thoughts, and the AI/LLM/AI Agent will guess your needs again and again. Adding a feedback mechanism, and requiring the LLM to remember the current context, ensures that the AI Agent will not suddenly forget what it is doing.
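That "remember the context" requirement boils down to replaying the conversation history on every call; here is a minimal TypeScript sketch with the model call left abstract.

```typescript
// Keep an append-only message history and replay it on every model call,
// so earlier turns are not lost.
type Message = { role: "user" | "assistant"; content: string };

class ContextfulChat {
  private history: Message[] = [];

  constructor(private llm: (messages: Message[]) => Promise<string>) {}

  async say(userInput: string): Promise<string> {
    this.history.push({ role: "user", content: userInput });
    // The full history goes with each call, so the agent does not forget
    // what it was doing mid-task.
    const reply = await this.llm(this.history);
    this.history.push({ role: "assistant", content: reply });
    return reply;
  }
}
```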

In short, the AI Agent is a more personalized product, and that is its essential difference from traditional scripts and automation tools; like a personal butler, it tries to grasp the user's real needs. It must be pointed out, however, that this "personality" is still the result of probabilistic guessing: an L3-level AI Agent has neither human understanding nor human expressive ability, so connecting it to external APIs is fraught with danger.

After monetizing the AI framework

That the AI framework can be monetized is an important reason I remain interested in Crypto. In the traditional AI stack, the framework is not that important, at least not as important as data and computing power, and it is hard to monetize an AI product starting from its framework: most AI algorithms and model frameworks are open source, and what is truly closed is sensitive material such as data.

In essence, an AI framework or model is a container for, and combination of, a series of algorithms, like an iron pot for stewing a goose: the kind of goose and the control of the heat are what distinguish the flavor. The product for sale should be the goose, yet Web3 customers have appeared who buy the casket and return the pearl, taking the pot and abandoning the goose.

The reason is not complicated. Web3's AI products are basically built on other people's wisdom: they tweak existing AI frameworks, algorithms, and products into their own customized versions, and the technical principles behind the various Crypto AI frameworks do not differ much. Since they cannot be distinguished technically, the differentiation has to come from names, application scenarios, and so on. Minor adjustments to the AI framework itself thus become the backing for different tokens, producing the framework bubble of the Crypto AI Agent.

Since there is no heavy investment in training data and algorithms, differentiation by name becomes especially important. Even DeepSeek V3, cheap as it is, still cost plenty of PhD hair, GPUs, and electricity.

In a sense, this is also Web3's consistent recent style: the token issuance platform is worth more than the tokens themselves, as with Pump.Fun/Hyperliquid. Agents were supposed to be applications and assets, yet the Agent issuance framework has become the most popular product.

In fact, this is also a value-anchoring idea. Since the various Agents are indistinguishable, the Agent framework is the more stable layer and produces the value-siphoning effect of asset issuance. This is the current 1.0 version of the combination of Crypto and the AI Agent.

A 2.0 version is emerging, typified by the combination of DeFi and the AI Agent. The concept of DeFAI is of course market behavior stimulated by the hype, but if we take the following developments into account, we find that something has changed:

  • Morpho is challenging old lending products such as Aave;
  • Hyperliquid is replacing dYdX’s on-chain derivatives and even challenging Binance’s CEX listing effect;
  • Stablecoins are becoming a payment tool for off-chain scenarios.

It is against this backdrop of DeFi's own evolution that AI is improving DeFi's basic logic. If DeFi's biggest contribution so far was verifying the feasibility of smart contracts, then the AI Agent changes how DeFi is built: you no longer need to understand DeFi in order to create DeFi products, a deeper layer of empowerment than chain abstraction.

The era in which everyone is a programmer is coming. Complex computation can be outsourced to the LLM and APIs behind the AI Agent; individuals only need to focus on their own ideas, and natural language can be efficiently converted into programming logic.
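As a purely illustrative sketch of natural language turning into programming logic, the snippet below has an LLM map a plain-English request to a structured DeFi action; the `DeFiAction` schema and the prompt are assumptions, and a real system would validate the model's output before executing anything.

```typescript
// Illustrative only: natural language in, a structured DeFi action out.
interface DeFiAction {
  kind: "swap" | "lend" | "withdraw";
  asset: string;
  amount: number;
  target?: string;
}

async function parseIntent(
  request: string,
  llm: (prompt: string) => Promise<string>
): Promise<DeFiAction> {
  const raw = await llm(
    `Convert this request into JSON with fields kind, asset, amount, target:\n${request}`
  );
  // Trusting the model's JSON blindly would be unsafe in practice.
  return JSON.parse(raw) as DeFiAction;
}

// Example: parseIntent("swap 100 USDC for ETH", llm) might yield
// { kind: "swap", asset: "USDC", amount: 100, target: "ETH" }.
```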

Conclusion

This article names no specific Crypto AI Agent tokens or frameworks, because Cookie.Fun already does a good enough job: first the AI Agent information-aggregation and token-discovery platform, then the AI Agent frameworks, and finally the Agent-launched coins that come and go overnight. There is no value in continuing to list that information here.

Still, throughout this period of observation, the market lacks a real discussion of what the Crypto AI Agent actually points to. We cannot keep discussing the pointer; what changes in memory is the essence.

It is precisely this ability to keep turning all kinds of underlying things into assets that is the charm of Crypto.
