Trading with AI: A first look at the DeFai ecosystem


Reprinted from jinse

01/14/2025

Author: Henry @IOSG

1. Preface

In just 3 months, the market value of AI x memecoin has reached $13.4 billion, which is comparable in size to some mature L1s (such as AVAX or SUI).

In fact, the relationship between artificial intelligence and blockchain has a long history, from the early decentralized model training on the Bittensor subnet, to the decentralized GPU/computing resource markets such as Akash and io.net, to the current Solana AI x memecoins and framework wave. Each stage demonstrates the extent to which cryptocurrencies can complement AI through resource aggregation, enabling sovereign AI and consumer use cases.

Of the first wave of Solana AI coins, some brought meaningful utility rather than just pure speculation. We have seen the emergence of frameworks like ai16z's ELIZA, AI agents such as aixbt that provide market analysis and content creation, or toolkits that integrate AI with blockchain capabilities.

In the second wave of AI, as more tools mature, applications have become the key value drivers, and DeFi has become the perfect proving ground for these innovations. For brevity, this study refers to the combination of AI and DeFi as "DeFai".

According to CoinGecko, DeFai has a market capitalization of approximately $1 billion. Griffain leads the market with a 45% share, while $ANON holds 22%. The sector began growing rapidly after December 25, and over the same period frameworks and platforms such as Virtuals and ai16z also saw strong growth following the Christmas holiday.

▲ Source: Coingecko.com

This is just the first step; DeFai's potential goes far beyond it. Although DeFai is still at the proof-of-concept stage, its potential should not be underestimated: by leveraging the intelligence and efficiency AI can provide, it could transform DeFi into a more user-friendly, intelligent, and efficient financial ecosystem.

Before diving into the world of DeFai, we need to understand how agents actually operate in DeFi/blockchain.

2. How agents work in the DeFi system

AI agents are programs that perform tasks on behalf of users according to workflows. At the core of an AI agent is a large language model (LLM), which can respond based on its training or learned knowledge, though such responses are often limited.

Agents are fundamentally different from bots. Bots are typically task-specific, require human supervision, and must operate within predefined rules and conditions. Agents, by contrast, are more dynamic and adaptive, learning autonomously to achieve specific goals.

To create a more personalized experience and more comprehensive responses, an agent can store past interactions in memory, allowing it to learn from the user's behavioral patterns and adapt its responses, generating tailored recommendations and strategies based on historical context.

On a blockchain, agents can interact with smart contracts and accounts to handle complex tasks without constant human intervention. For example, they can simplify the DeFi user experience by performing multi-step bridging and farming in one click, optimizing farming strategies for higher returns, executing buy/sell transactions, and conducting market analysis, all autonomously.
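To make this concrete, here is a minimal sketch (with entirely hypothetical names, chains, and amounts) of how such a one-click, multi-step request might be represented as a typed plan that an agent executes step by step:

```typescript
// Hypothetical sketch: a typed "plan" an agent might build for a one-click
// bridge-and-farm request. All names, chains, and amounts are illustrative.
type Action =
  | { kind: "bridge"; fromChain: string; toChain: string; token: string; amount: number }
  | { kind: "swap"; chain: string; sell: string; buy: string; amount: number; maxSlippageBps: number }
  | { kind: "stake"; chain: string; protocol: string; token: string; amount: number };

const plan: Action[] = [
  { kind: "bridge", fromChain: "ethereum", toChain: "solana", token: "USDC", amount: 1_000 },
  { kind: "swap", chain: "solana", sell: "USDC", buy: "SOL", amount: 1_000, maxSlippageBps: 50 },
  { kind: "stake", chain: "solana", protocol: "example-staking-protocol", token: "SOL", amount: 5 },
];

// The agent walks the plan step by step; each step is a stub here. A real
// implementation would build, sign, and submit transactions and wait for
// confirmation before moving on.
async function executePlan(actions: Action[]): Promise<void> {
  for (const action of actions) {
    console.log(`executing ${action.kind}:`, action);
  }
}

executePlan(plan);
```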

Referring to @3sigma’s research, most models follow 6 specific workflows:

  • Data collection

  • Model inference

  • Decision making

  • Hosting and running

  • Interoperability

  • Wallet

1. Data collection

First, the model needs to understand the environment in which it operates, so it requires multiple data streams to keep it in sync with market conditions. These include: 1) on-chain data from indexers and oracles, and 2) off-chain data from price platforms such as CMC, CoinGecko, and other data providers' APIs.
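As a small illustration of the off-chain leg, the sketch below pulls a spot price from CoinGecko's public API using Node's built-in fetch; the on-chain leg (indexers and oracles) is omitted and would be merged in alongside it:

```typescript
// Minimal sketch of the off-chain part of data collection: fetching a spot
// price from CoinGecko's public API. On-chain feeds from indexers/oracles
// would be combined with this; that part is not shown here.
async function fetchSolPriceUsd(): Promise<number> {
  const url =
    "https://api.coingecko.com/api/v3/simple/price?ids=solana&vs_currencies=usd";
  const res = await fetch(url);
  if (!res.ok) throw new Error(`CoinGecko request failed: ${res.status}`);
  const data = (await res.json()) as { solana?: { usd?: number } };
  if (data.solana?.usd === undefined) throw new Error("unexpected response shape");
  return data.solana.usd;
}

fetchSolPriceUsd().then((p) => console.log(`SOL/USD: ${p}`));
```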

2. Model inference

Once a model has learned its environment, it needs to apply that knowledge to make predictions or take actions based on new, unseen input from the user. Models used by agents include:

  1. Supervised and unsupervised learning: Models trained on labeled or unlabeled data to predict outcomes. In a blockchain context, these models can analyze governance forum data to predict voting outcomes or identify transaction patterns.

  2. Reinforcement learning: Models that learn through trial and error by evaluating the rewards and penalties of their actions. Applications include optimizing token trading strategies, such as determining the best entry point for a token purchase or adjusting farming parameters (a toy sketch appears after the figure below).

  3. Natural Language Processing (NLP): Technology that understands and processes human language input, which is valuable for scanning governance forums and proposals for opinions.

▲ Source: https://www.researchgate.net/figure/The-main-types-of-machine-learning-Main-approaches-include-classification-and-regression_fig1_354960266
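As a toy illustration of the reinforcement-learning idea mentioned in the list above, here is an epsilon-greedy bandit that "learns" which slippage tolerance has historically produced the best net yield. It is purely didactic and not any project's actual model:

```typescript
// Toy epsilon-greedy bandit: the "agent" picks among candidate slippage
// settings and reinforces whichever historically produced the best net yield.
// Real systems would use far richer state, features, and reward signals.
const candidatesBps = [10, 30, 50, 100];                      // slippage tolerances to try
const totals: number[] = new Array(candidatesBps.length).fill(0); // cumulative reward per arm
const counts: number[] = new Array(candidatesBps.length).fill(0); // times each arm was tried
const epsilon = 0.1;

function chooseArm(): number {
  if (Math.random() < epsilon || counts.every((c) => c === 0)) {
    return Math.floor(Math.random() * candidatesBps.length);  // explore
  }
  const avg = totals.map((t, i) => (counts[i] ? t / counts[i] : 0));
  return avg.indexOf(Math.max(...avg));                       // exploit
}

function recordOutcome(arm: number, realizedYield: number): void {
  totals[arm] += realizedYield;
  counts[arm] += 1;
}

// Example episode: simulate a reward and update the estimate.
const arm = chooseArm();
recordOutcome(arm, Math.random() * 0.02); // pretend net yield of the trade
console.log(`tried ${candidatesBps[arm]} bps slippage`);
```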

3. Decision Making

With trained models and data, agents can take action using their decision-making capabilities. This includes interpreting the situation and responding appropriately.

At this stage, the optimization engine plays an important role in finding the best results. For example, an agent needs to balance multiple factors such as slippage, spreads, transaction costs, and potential profits before executing a yield strategy.
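A hedged sketch of that kind of net-value check follows, with made-up numbers and a hypothetical `Opportunity` shape; it only illustrates the idea of netting costs against expected profit before acting:

```typescript
// Illustrative scoring an optimization engine might apply before executing a
// yield strategy: net out slippage, fees, and gas from the expected profit
// and only act when the remaining margin clears a threshold.
interface Opportunity {
  expectedProfitUsd: number;
  notionalUsd: number;
  slippageBps: number;   // expected price impact
  protocolFeeBps: number;
  gasUsd: number;
}

function netValueUsd(o: Opportunity): number {
  const slippageCost = (o.notionalUsd * o.slippageBps) / 10_000;
  const feeCost = (o.notionalUsd * o.protocolFeeBps) / 10_000;
  return o.expectedProfitUsd - slippageCost - feeCost - o.gasUsd;
}

const candidate: Opportunity = {
  expectedProfitUsd: 42,
  notionalUsd: 10_000,
  slippageBps: 15,
  protocolFeeBps: 5,
  gasUsd: 3,
};

const minMarginUsd = 10;
if (netValueUsd(candidate) > minMarginUsd) {
  console.log("execute strategy");
} else {
  console.log("skip: edge too thin after costs");
}
```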

Since a single agent may not be able to optimize decisions in different areas, a multi-agent system can be deployed for coordination.

4. Hosting and running

Because these tasks are computationally intensive, AI agents usually host their models off-chain. Some rely on centralized cloud services such as AWS, while those that favor decentralization use distributed computing networks such as Akash or io.net, with Arweave for data storage.

Although the AI agent's model runs off-chain, the agent still needs to interact with on-chain protocols to execute smart contract functions and manage assets. This interaction requires a secure key-management solution, such as an MPC wallet or a smart contract wallet, to handle transactions safely. Agents can also operate through APIs to communicate and interact with their communities on social platforms such as Twitter and Telegram.

5. Interoperability

Agents need to interact with various protocols while staying updated across different systems. They often use API bridges to obtain external data, such as price feeds.

To stay aware of the current protocol state and respond appropriately, the agent needs real-time synchronization, for example through webhooks or decentralized messaging and storage layers such as IPFS.
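For illustration only, here is a minimal webhook receiver using Node's built-in http module: an indexer or node provider would POST events to it and the agent updates its local view of protocol state. The endpoint path and payload shape are assumptions:

```typescript
// Illustration only: keeping an agent's local view of a protocol in sync via
// a webhook. The data provider POSTs events to this endpoint and the agent
// records them. Endpoint path and payload shape are invented for the example.
import { createServer } from "node:http";

const protocolState: Record<string, unknown> = {};

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhook") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // e.g. { "pool": "usdc-main", "utilization": 0.82, "supplyApy": 0.06 }
      const event = JSON.parse(body);
      protocolState[event.pool] = event;
      res.writeHead(200);
      res.end("ok");
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```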

6. Wallet

Agents need a wallet or access to private keys to initiate blockchain transactions. There are two common wallet/key management methods on the market: MPC-based and TEE-based solutions.

For portfolio management applications, MPC or TSS can split keys among the agent, the user, and a trusted party, while the user still retains a degree of control over the AI. The Coinbase AI wallet built with Replit effectively implements this approach, demonstrating how an AI agent can be paired with an MPC wallet.

For fully autonomous AI systems, TEE provides an alternative by storing private keys in a secure enclave, enabling the entire AI agent to operate in a hidden and protected environment without interference from third parties. However, TEE solutions currently face two major challenges: hardware centralization and performance overhead.

Once these difficulties are overcome, we will be able to create autonomous on-chain agents, with different agents performing their respective roles in the DeFi ecosystem to increase efficiency and improve the on-chain trading experience.

Broadly speaking, I will provisionally divide DeFi x AI into four major categories:

  1. Abstract/UX friendly AI

  2. Yield Optimization or Portfolio Management

  3. Market analysis or prediction bots

  4. DeFai Infrastructure/Platform

3. Opening the door to the world of DeFi x AI: DeFai

▲ Source: IOSG Venture

#1 Abstract/UX Friendly AI

The purpose of artificial intelligence is to increase efficiency, solve complex problems and simplify complex tasks for users. Abstraction-based artificial intelligence can simplify access to the complexities of DeFi for both new and existing traders.

In the blockchain field, effective AI solutions should be able to:

  • Automatically perform multi-step transactions and staking operations without requiring users to have any industry knowledge;

  • Perform real-time research to provide users with all the necessary information and data needed to make informed trading decisions;

  • Obtain data from various platforms, identify market opportunities, and provide users with comprehensive analysis.

Most of these abstraction tools are built around ChatGPT at their core. While these models need to integrate seamlessly with the blockchain, as far as I can tell none of them are specifically trained or fine-tuned on blockchain data.
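As a toy illustration of what this abstraction layer does at its core, the sketch below maps a natural-language prompt to a structured swap intent. Production tools would use an LLM with tool/function calling; a regex stands in for it here, and the intent shape is hypothetical:

```typescript
// Toy illustration: turning a natural-language prompt into a structured,
// executable intent. Real products use an LLM with tool/function calling;
// a regex stands in for the language model in this sketch.
interface SwapIntent {
  action: "swap";
  amount: number;
  sellToken: string;
  buyToken: string;
}

function parseSwapPrompt(prompt: string): SwapIntent | null {
  const m = prompt.match(/swap\s+([\d.]+)\s+(\w+)\s+(?:to|for)\s+(\w+)/i);
  if (!m) return null;
  return {
    action: "swap",
    amount: parseFloat(m[1]),
    sellToken: m[2].toUpperCase(),
    buyToken: m[3].toUpperCase(),
  };
}

console.log(parseSwapPrompt("Swap 10 SOL for USDC"));
// -> { action: "swap", amount: 10, sellToken: "SOL", buyToken: "USDC" }
```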

Griffain

Griffain founder Tony proposed the concept at a Solana hackathon and later turned the idea into a working product, earning the support and recognition of Solana co-founder Anatoly.

Simply put, Griffain is currently the first and most powerful abstract AI on Solana. It can perform functions such as swaps, wallet management, NFT minting, and token sniping.

The following are the specific functions Griffain provides:

  • Execute transactions in natural language

  • Issue tokens via Pump.fun, mint NFTs, and select addresses for airdrops

  • Multi-agent coordination

  • Agents can tweet on behalf of users

  • Snipe newly launched memecoins on Pump.fun based on specific keywords or conditions

  • Stake assets, and automate and execute DeFi strategies

  • Schedule tasks; users can provide input to the agent to create a tailor-made agent

  • Obtain data from the platform for market analysis, such as identifying the holder distribution of tokens

Although Griffain offers many functions, users still need to manually enter token addresses or give the agent specific execution instructions. For beginners unfamiliar with these technical details, the current product still has room for improvement.

So far, Griffain offers two types of AI agents: personal AI agents and specialized agents.

A personal AI agent is controlled by the user, who can customize its instructions and memory settings to tailor it to their personal circumstances.

Specialized agents are designed for specific tasks. For example, an "Airdrop Agent" is trained to find addresses and allocate tokens to designated holders, while a "Staking Agent" is programmed to stake SOL or other assets into a pool to implement a yield strategy.

Griffain's multi-agent collaboration system is a notable feature: multiple agents can work together in a chat room, solving complex tasks independently while coordinating with one another.

▲ Source: https://griffain.com

After an account is created, the system generates a wallet, and the user can delegate it to the agent, which then executes transactions and manages the portfolio independently.

The key is split using Shamir's Secret Sharing (SSS), so that neither Griffain nor Privy can custody the wallet. As described in Slate's documentation, SSS works by splitting the key into three shares (a toy sketch follows the list):

  1. Device share: stored in the browser and retrieved when a tab is opened

  2. Auth share: stored on Privy's servers and retrieved when the user authenticates and logs in to the application

  3. Recovery share: stored encrypted on Privy's servers and decrypted only when the user enters their password in the session
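As a toy illustration of the splitting idea referenced above, here is a 3-of-3 XOR split in TypeScript, where all three shares are required to recover the key. This is a simplified stand-in: Privy's actual scheme is Shamir's Secret Sharing, a threshold scheme that this toy does not implement:

```typescript
// Toy 3-of-3 secret split via XOR: two random shares plus one computed share,
// and the original key is only recoverable when all three are combined.
// NOT Shamir's Secret Sharing (no threshold recovery); illustration only.
import { randomBytes } from "node:crypto";

function xorBuffers(...bufs: Buffer[]): Buffer {
  const out = Buffer.alloc(bufs[0].length);
  for (let i = 0; i < out.length; i++) {
    out[i] = bufs.reduce((acc, b) => acc ^ b[i], 0);
  }
  return out;
}

function splitThreeWay(secret: Buffer): [Buffer, Buffer, Buffer] {
  const deviceShare = randomBytes(secret.length);          // analogous to the device share
  const authShare = randomBytes(secret.length);            // analogous to the auth share
  const recoveryShare = xorBuffers(secret, deviceShare, authShare); // completes the XOR
  return [deviceShare, authShare, recoveryShare];
}

const secretKey = randomBytes(32);
const [device, auth, recovery] = splitThreeWay(secretKey);
console.log(xorBuffers(device, auth, recovery).equals(secretKey)); // true
```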

In addition, users can also choose to export their keys from the Griffain front end.

Anon

Anon was founded by Daniele Sesta, known for creating the DeFi protocols Wonderland and MIM (Magic Internet Money). Like Griffain, Anon is designed to simplify user interaction with DeFi.

While the team has previewed its upcoming features, none of them have been verified yet because the product is not yet public. They include:

  • Execute trades using natural language (in multiple languages, including Chinese)

  • Cross-chain bridging through LayerZero

  • Lend and supply on partner protocols such as Aave, Spark, Sky, and Wagmi

  • Get real-time price and data feeds through Pyth

  • Provide automatic execution and triggers based on time and gas price

  • Provide real-time market analysis, such as sentiment detection, social profile analysis, etc.

In addition to core functionality, Anon supports various AI models, including Gemma, Llama 3.1, Llama 3.3, Vision, Pixtral, and Claude. These models can provide valuable market analysis, helping users save research time and make informed decisions.

Wallets can be exported and authorization revoked, but specific details about wallet management and security protocols have not been made public.


In addition, Daniele recently posted two updates about Anon:

  • Automate framework:

A TypeScript framework that helps more projects integrate with Anon faster. It requires all data and interactions to follow a predefined structure, reducing the risk of AI hallucinations and making Anon more reliable.

  • Gemma:

A research agent that collects real-time on-chain DeFi metrics (such as TVL, trading volume, and perp DEX funding rates) and off-chain data (such as Twitter and Telegram) for social sentiment analysis. This data is transformed into opportunity alerts and customized insights for users.

Judging from the documentation, this makes Anon one of the most anticipated and powerful abstraction tools in the field, which is especially valuable in today's market, where new tokens with $100 million market caps emerge every day.

Slate

Backed by BigBrain Holdings, Slate is positioning itself as an “Alpha AI” capable of autonomous trading based on on-chain signals. Currently Slate is the only abstract AI capable of automating transactions on Hyperliquid.

Slate prioritizes price routing, fast execution, and the ability to simulate before trading. Key features include:

  • Cross-chain swap between EVM chain and Solana

  • Automated trading based on price, market cap, gas fees and profit and loss indicators

  • Natural language task scheduling

  • On-chain transaction aggregation

  • Telegram notification system

  • Open long and short positions, close them under specific conditions, and manage LP positions and farming, including execution on Hyperliquid

Generally speaking, its fee structure is divided into two types:

  1. Regular operations: Slate charges no fee for ordinary transfers/withdrawals, but charges a 0.35% fee on swap, bridge, claim, borrow, lend, repay, stake, unstake, long, short, lock, unlock, and similar operations.

  2. Conditional operations: for conditional orders (such as limit orders), Slate charges a 0.25% fee when the condition is based on gas fees and a 1.00% fee for all other conditions (a simple fee-calculation sketch follows this list).
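The sketch below encodes that fee schedule as a simple function; the rates follow the description above, but treat it as illustrative rather than an integration reference, and the operation types are simplified:

```typescript
// Illustrative encoding of the fee schedule described above. Operation kinds
// are simplified; rates mirror the documented 0% / 0.35% / 0.25% / 1.00% tiers.
type Operation =
  | { kind: "transfer" | "withdraw"; notionalUsd: number }
  | { kind: "swap" | "bridge" | "borrow" | "lend" | "stake" | "long" | "short"; notionalUsd: number }
  | { kind: "conditional"; trigger: "gas" | "other"; notionalUsd: number };

function slateFeeUsd(op: Operation): number {
  switch (op.kind) {
    case "transfer":
    case "withdraw":
      return 0;                           // regular transfers/withdrawals are free
    case "conditional":
      return op.trigger === "gas"
        ? op.notionalUsd * 0.0025         // 0.25% for gas-based conditions
        : op.notionalUsd * 0.01;          // 1.00% for all other conditions
    default:
      return op.notionalUsd * 0.0035;     // 0.35% for swaps, bridges, lending, etc.
  }
}

console.log(slateFeeUsd({ kind: "swap", notionalUsd: 1_000 }));                           // 3.5
console.log(slateFeeUsd({ kind: "conditional", trigger: "other", notionalUsd: 1_000 }));  // 10
```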

In terms of wallets, Slate integrates Privy's embedded wallet architecture to ensure that neither Slate nor Privy will host users' wallets. Users can either connect their existing wallets or authorize an agent to perform transactions on their behalf.

▲ Source: https://docs.slate.ceo

Comparative analysis of abstract AI

A comparison of the mainstream abstract AI tools:

▲ Source: IOSG Venture

Currently, most AI abstraction tools support cross-chain transactions and asset bridging between Solana and EVM chains. Slate offers Hyperliquid integration, while Neur and Griffain currently support only Solana but are expected to add cross-chain support soon.

Most platforms integrate Privy embedded wallets and EOA wallets, allowing users to manage funds independently, but require users to authorize agent access to perform certain transactions. This provides an opportunity for TEE (Trusted Execution Environment) to ensure the tamper resistance of AI systems.

Although most AI abstraction tools share functionality such as token issuance, trade execution, and natural language conditional orders, their performance varies significantly.

At a product level, we are still in the early stages of abstract AI. Comparing the five projects mentioned above, Griffain stands out for its rich feature set, extensive collaboration network, and workflow handling for multi-agent collaboration (Orbit is another project that supports multi-agent workflows). Anon excels with fast responses, multi-language support, and Telegram integration, while Slate benefits from its sophisticated automation platform and is the only agent to support Hyperliquid.

However, some abstract AI platforms still struggle with basic transactions (such as a USDC swap), failing to fetch the correct token address or price, or to analyze the latest market trends. Response time, accuracy, and result relevance are also important differentiators of a model's underlying performance. In the future, we hope to work with teams to develop a transparent dashboard that tracks the performance of all abstract AI in real time.

#2 Autonomous Yield Optimization and Portfolio Management

Unlike traditional yield strategies, protocols in this space use AI to analyze on-chain data, identify trends, and provide information that helps teams develop better yield-optimization and portfolio-allocation strategies. To reduce costs, models are usually trained on Bittensor subnets or off-chain, and verification methods such as zero-knowledge proofs (ZKPs) are used so that AI can execute transactions autonomously while remaining honest and verifiable. Here are a few examples of DeFai protocols benefiting from this kind of optimization:

T3AI

T3AI is a lending protocol that supports under-collateralized loans by using AI as an intermediary and risk engine. Its AI agent monitors loan health in real time and uses T3AI's risk-indicator framework to ensure loans remain repayable, while producing accurate risk predictions by analyzing how different assets and their price trends relate to one another. Specifically, T3AI's AI can:

  • Analyze price data from major CEXs and DEXs;

  • Measuring the volatility of different assets;

  • Study correlations and co-movement between asset prices (a sketch of this follows the list);

  • Discover hidden patterns in asset interactions.
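The sketch below shows the standard formulas behind the volatility and correlation bullets (log returns, sample standard deviation, Pearson correlation); it is not T3AI's actual risk engine, and the price series are made up:

```typescript
// Standard risk math behind the bullets above: log returns, realized
// volatility, and Pearson correlation between two assets. Illustration only.
function returns(prices: number[]): number[] {
  return prices.slice(1).map((p, i) => Math.log(p / prices[i]));
}

function volatility(rets: number[]): number {
  const mean = rets.reduce((a, b) => a + b, 0) / rets.length;
  const variance = rets.reduce((a, r) => a + (r - mean) ** 2, 0) / (rets.length - 1);
  return Math.sqrt(variance);
}

function correlation(a: number[], b: number[]): number {
  const n = Math.min(a.length, b.length);
  const meanA = a.slice(0, n).reduce((x, y) => x + y, 0) / n;
  const meanB = b.slice(0, n).reduce((x, y) => x + y, 0) / n;
  let cov = 0, varA = 0, varB = 0;
  for (let i = 0; i < n; i++) {
    cov += (a[i] - meanA) * (b[i] - meanB);
    varA += (a[i] - meanA) ** 2;
    varB += (b[i] - meanB) ** 2;
  }
  return cov / Math.sqrt(varA * varB);
}

// Made-up price series purely for demonstration.
const ethPrices = [3000, 3050, 2990, 3100, 3150];
const solPrices = [100, 103, 99, 105, 108];
console.log(volatility(returns(ethPrices)), correlation(returns(ethPrices), returns(solPrices)));
```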

AI will recommend optimal allocation strategies based on the user's investment portfolio, and potentially achieve autonomous AI portfolio management after model adjustment. In addition, T3AI ensures the verifiability and reliability of all operations through ZK proofs and a network of verifiers.

▲ Source: https://www.trustinweb3.xyz/

Kudai

Kudai is an experimental GMX ecosystem agent developed by the GMX Blueberry Club using the EmpyrealSDK toolkit, and its tokens are currently traded on the Base network.

Kudai's concept is to use all of the transaction fees generated by $KUDAI to fund the agent's autonomous trading operations and to distribute the profits to token holders.

In the upcoming Phase 2 (of 4), Kudai will be able to interpret natural-language commands on Twitter to:

  • Buy and stake $GMX to generate new revenue streams;

  • Invest in the GMX GM pool to further increase your returns;

  • Buy GBC NFT at rock bottom prices to expand your portfolio.

After this stage, Kudai will be fully autonomous and can independently execute leveraged transactions, arbitrage and earn returns on assets (interest). The team has not disclosed any further information.

Sturdy Finance V2

Sturdy Finance is a lending and yield aggregator that utilizes AI models trained by Bittensor SN10 subnet miners to optimize yields by moving funds between different whitelisted silo pools.

Sturdy adopts a two-layer architecture, consisting of independent asset pools (silo pools) and an aggregator layer:

  1. Silo Pools
    These are single-asset isolated pools where users can borrow only a single asset against a single type of collateral.

  2. Aggregator Layer
    The aggregation layer is built on Yearn V3 and allocates user assets to whitelisted, vetted silo pools based on utilization and yield; the Bittensor subnet provides the aggregator with an optimal allocation strategy (a simplified allocation sketch follows below). When users deposit assets into an aggregator, they are exposed only to the selected collateral type, completely avoiding risk from other lending pools or collateral assets.
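As a simplified picture of the aggregator's job (not Sturdy's or the Bittensor subnet's actual strategy), the sketch below splits a deposit across whitelisted silos in proportion to their current supply APY, subject to per-pool caps; the pool names and numbers are invented:

```typescript
// Toy allocator: split a deposit across whitelisted silo pools in proportion
// to supply APY, capped per pool. The real strategy comes from the Bittensor
// subnet and is considerably more sophisticated.
interface Silo {
  name: string;
  supplyApy: number;   // e.g. 0.06 = 6%
  capUsd: number;      // max the aggregator will place in this pool
}

function allocate(depositUsd: number, silos: Silo[]): Record<string, number> {
  const totalApy = silos.reduce((s, p) => s + p.supplyApy, 0);
  const allocation: Record<string, number> = {};
  let remaining = depositUsd;
  // First pass: proportional to yield, capped per pool.
  for (const pool of silos) {
    const target = (depositUsd * pool.supplyApy) / totalApy;
    const placed = Math.min(target, pool.capUsd);
    allocation[pool.name] = placed;
    remaining -= placed;
  }
  // Leftover (from hitting caps) goes to the highest-yield pool with headroom
  // (caps are not re-checked in this toy version).
  const spill = silos
    .filter((p) => allocation[p.name] < p.capUsd)
    .sort((a, b) => b.supplyApy - a.supplyApy)[0];
  if (spill && remaining > 0) allocation[spill.name] += remaining;
  return allocation;
}

console.log(
  allocate(100_000, [
    { name: "pxETH silo", supplyApy: 0.08, capUsd: 40_000 },
    { name: "crvUSD silo", supplyApy: 0.05, capUsd: 80_000 },
  ])
);
```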

▲ Source: https://sturdy.finance

As of this writing, Sturdy V2’s TVL has been declining since May 2024, and the aggregator’s TVL is currently approximately $3.9 million, accounting for 29% of the protocol’s total TVL.

Sturdy's daily active users have remained in the double digits (fewer than 100) since September 2024, with pxETH and crvUSD being the main lending assets on the aggregator. The protocol's performance has stalled noticeably over the past few months, and the AI integration appears intended to reignite its growth momentum.

▲ Source: https://dune.com/tk-research/sturdy-v2

#3 Market Analysis Agents

Aixbt

Aixbt is a market-sentiment-tracking agent that aggregates and analyzes data from more than 400 Twitter KOLs. With its proprietary engine, Aixbt identifies trends in real time and publishes market observations around the clock.

Among existing AI agents, Aixbt holds a significant 14.76% share of market attention, making it one of the most influential agents in the ecosystem.

▲ Source: Kaito.com

Aixbt is designed for social media interaction and publishes insights that directly reflect where the market is focusing its attention.

Its functionality is not limited to market insights (alpha); it is also interactive. Aixbt can respond to user questions and even launch tokens via Twitter using a dedicated toolkit. For example, the $CHAOS token was created in collaboration with Simi, another interactive bot, using the @EmpyrealSDK toolkit.

As of now, users holding 600,000 $AIXBT tokens (worth approximately $240,000) can access its analytics platform and terminal.

#4 Decentralized AI Infrastructure and Platforms

Web3 AI agents cannot exist without the support of decentralized infrastructure. These projects not only support model training and inference, but also provide the data, verification methods, and coordination layers that drive the development of AI agents.

Whether in Web2 or Web3 AI, models, computing power, and data are the three cornerstones of LLM and agent development. Open-source models trained in a decentralized manner will be favored by agent builders because this approach removes the single point of failure that centralization creates and opens the door to user-owned AI, so developers do not need to rely on the LLM APIs of Web2 giants such as Google, Meta, and OpenAI.

The following is an AI infrastructure diagram drawn by Pinkbrains:

▲ Source: Pink Brains

Model creation

Pioneers like Nous Research, Prime Intellect, and Exo Labs are pushing the boundaries of decentralized training.

Nous Research's DisTrO training algorithm and Prime Intellect's DiLoCo-based approach have successfully trained models with more than 10 billion parameters in low-bandwidth environments, showing that large-scale training can be achieved outside traditional centralized systems. Exo Labs has gone further with SPARTA, a distributed AI training algorithm that reduces inter-GPU communication by more than 1,000x.

Bagel is working to become a decentralized HuggingFace, providing models and data for AI developers, while solving the ownership and monetization issues of open source data through encryption technology. Bittensor has built a competitive market where participants can contribute computing power, data and intelligence to accelerate the development of AI models and agents.

Data and computing power service provider

Many believe that AixBT stands out in the utility agent category mainly due to its ability to obtain high-quality data sets.

Providers such as Grass, Vana, Sahara, Space and Time, and Cookie DAO supply high-quality, domain-specific data or allow AI developers to access "walled gardens" of data to enhance their capabilities. Leveraging more than 2.5 million nodes, Grass can crawl up to 300 TB of data daily.

Nvidia currently trains its video models on roughly 20 million hours of video, while Grass's video dataset is 15 times larger (300 million hours) and grows by 4 million hours per day, meaning Grass collects the equivalent of 20% of Nvidia's entire dataset every day. In other words, it takes Grass only five days to gather the equivalent of Nvidia's total video dataset.

Without computing resources, the agent cannot run. Computing power markets such as Aethir and io.net provide cost-effective options for agent developers by aggregating various GPUs. Hyperbolic's decentralized GPU marketplace cuts computing costs by up to 75% while hosting open source AI models and delivering low-latency inference capabilities comparable to Web2 cloud providers.

Hyperbolic further extends its GPU marketplace and cloud services with the launch of AgentKit, a powerful interface that gives AI agents full access to Hyperbolic's decentralized GPU network. It features an AI-readable map of computing resources that scans and reports resource availability, specifications, current load, and performance in real time.

AgentKit opens up a revolutionary future where agents can independently obtain the required computing power and pay related fees.

Verification mechanism

Through its innovative Proof of Sampling verification mechanism, Hyperbolic ensures that every inference interaction in the ecosystem is verified, establishing a foundation of trust for the future agent world.

However, verification only solves part of the problem of trust in autonomous agents. Another dimension of trust involves privacy protection, which is the strength of TEE (Trusted Execution Environment) infrastructure projects such as Phala, Automata and Marlin. For example, proprietary data or models used by these AI agents can be securely protected.

In fact, a truly autonomous agent cannot fully operate without a TEE, as TEE is critical to protecting sensitive information, such as protecting wallet private keys, preventing unauthorized access, and ensuring Twitter account login security.

How TEE works

TEE (Trusted Execution Environment) isolates sensitive data within a protected CPU/GPU enclave (secure enclave) during processing. Only authorized program code can access the contents of the enclave; cloud service providers, developers, administrators, and other parts of the hardware cannot access this area.

The main use of TEE is to execute smart contracts, especially in DeFi protocols involving more sensitive financial data. Therefore, the integration of TEE and DeFai includes traditional DeFi application scenarios, such as:

  1. Transaction privacy: TEE can hide transaction details such as sender and receiver addresses and transaction amounts. Platforms such as Secret Network and Oasis use TEE to protect transaction privacy in DeFai applications, enabling private payments.

  2. Resistant to MEV: By executing smart contracts in TEE, block builders cannot access transaction information, thus preventing front-running attacks that generate MEV. Flashbots leveraged TEE to develop BuilderNet, a decentralized block building network that reduces censorship risks associated with centralized block building. Chains such as Unichain and Taiko also use TEE to provide users with a better trading experience.

These capabilities can also be achieved with alternatives such as ZKP or MPC. However, TEE is currently the most efficient of the three approaches for executing smart contracts, simply because it is hardware-based.

On the agent side, TEE provides agents with various capabilities:

  1. Automation: TEE can create an independent operating environment for the agent to ensure the execution of its policies without human interference. This ensures that investment decisions are entirely based on the independent logic of the agent.

  2. TEE also allows agents to control social media accounts to ensure that any public statements they make are independent and free from outside influence, thereby avoiding suspicion of advertising and other propaganda. Phala is working with the AI16Z team to enable Eliza to run efficiently in a TEE environment.

  3. Verifiability: People can verify that the agent is performing calculations using the promised model and producing valid results. Automata and Brevis are working together to develop this functionality.

AI Agent Cluster

As more and more professional agents with specific use cases (DeFi, gaming, investing, music, etc.) enter the field, better agent collaboration and seamless communication become crucial.

Infrastructure for agent swarm frameworks has emerged to address the limitations of monolithic agents. Swarm intelligence allows agents to work together as a team, pooling their capabilities to achieve a common goal. The coordination layer abstracts complexity and makes it easier for agents to collaborate under common goals and incentives.

Several Web3 companies, including Theoriq, FXN and Questflow, are moving in this direction. Of all these players, Theoriq, which originally launched in 2022 as ChainML, has been working toward this goal the longest, with the vision of becoming a universal base layer for agent artificial intelligence.

To realize this vision, Theoriq handles agent registration, payment, security, routing, planning and management in the underlying modules. It also connects supply and demand, offering an intuitive agent-building platform called Infinity Studio that allows anyone to deploy their own agents, as well as Infinity Hub, a marketplace where customers can browse all available agents. In its swarm system, meta-agents select the most appropriate agent for a given task, creating “swarms” to achieve a common goal while tracking reputation and contributions to maintain quality and accountability.

Theoriq tokens provide economic security that agent operators and community members can use to express quality and trust in agents, thereby incentivizing quality service and deterring malicious behavior. Tokens also serve as a medium of exchange, used to pay for services and access data, and reward participants for contributing data, models, etc.

▲ Source: Theoriq

As the discussion around AI Agents becomes a long-term industry segment, spearheaded by clear utility agents, we may see a resurgence of Crypto x AI infrastructure projects, leading to strong price performance. These projects have the potential to leverage their venture capital funding, years of R&D experience and domain-specific technical expertise to expand across the value chain. This allows them to develop their own advanced practical AI agents that can outperform 95% of other agents currently on the market.

4. The evolution and future of DeFai

I have always believed that markets develop in three stages: first comes the demand for efficiency, then decentralization, and finally privacy. DeFai, for its part, will unfold in four stages.

The first phase of DeFai will focus on efficiency: improving the user experience with tools that complete complex DeFi tasks without requiring deep protocol knowledge. Examples include:

  • Artificial intelligence that understands user prompts even if the format is imperfect

  • Quickly perform swap in the shortest block time

  • Real-time market research to help users make favorable decisions based on their goals

If these innovations are realized, they can save users time and effort while lowering the barrier to on-chain transactions, potentially creating a "Phantom" moment in the next few months.

In the second phase, agents will trade autonomously with minimal human intervention. Trading agents will be able to execute strategies based on third-party insights or data from other agents, creating a new DeFi paradigm. Professional or sophisticated DeFi users will be able to fine-tune their own models and build agents that generate optimal returns for themselves or their clients, reducing the need for manual monitoring.

In the third phase, users will start to focus on wallet management issues and AI verification as users demand transparency. Solutions such as TEE and ZKP will ensure that AI systems are tamper-proof, immune to third-party interference and verifiable.

Finally, once these stages are completed, a no-code DeFi AI engineering toolkit or AI-as-a-service protocol can create an agent-based economy that uses models trained on cryptocurrencies to conduct transactions.

While this vision is ambitious and exciting, several bottlenecks remain to be resolved:

  • Most current tools are just ChatGPT wrappers without clear benchmarks to identify high-quality projects

  • On-chain data fragmentation pushes AI models towards centralization rather than decentralization, and it’s unclear how on-chain agents will solve this problem
