Delphi Digital interviews ai16z founder: How will AI agents reshape the future of Web3?
Reprinted from panewslab
01/20/2025
Introduction
If AI agents are the rising tide of this crypto cycle, then Shaw, founder of ai16z and Eliza, has undoubtedly caught the direction of that tide.
The ai16z he launched is the first AI-meme-themed on-chain fund, its name a satirical nod to the well-known venture firm a16z. It began raising funds from zero in October 2024 and within a few months grew into the largest AI DAO on Solana by market value, at one point exceeding US$2.5 billion (it has since retraced). ElizaOS, the core of ai16z, is a multi-agent simulation framework on which developers can create, deploy, and manage autonomous AI agents. Thanks to its first-mover advantage and the thriving TypeScript community, the Eliza codebase has over 10,000 stars on GitHub and accounts for roughly 60% of the current Web3 AI agent development market.
Although his remarks on social platforms remain controversial, that has not stopped Shaw from becoming a key figure in crypto AI. Many exclusive interviews with him already circulate in the Chinese community, but we believe the podcast recorded on January 6 by Tom Shaughnessy, co-founder of the leading crypto research firm Delphi Digital, together with Ejazz of 26 Crypto Capital and Shaw, is the most in-depth and forward-looking interview with Shaw to date on the practical thinking behind AI agents.
In this conversation the questions were insightful and Shaw was as candid and bold as ever, sharing his views on current AI agent use cases in Web3 and his judgment of what comes next, covering important topics from agent development frameworks and token economics to the future of an open-source AGI platform. It is full of useful information. Coinspire has translated the full conversation to share with readers, in hopes of offering a glimpse of the future of AI + Web3.
🎯Main Highlights
▶The inside story of the creation of Eliza Labs and the rapid development of ai16z
▶Dive into all aspects of Eliza framework technology
▶Agent platform analysis and transformation from Slop Bots (AI spam Bots) to utilities
▶Discussion of token economics and value capture mechanisms
▶Explore cross-chain development and blockchain options
▶The vision of open source AGI and the future of artificial intelligence agents
Part.1 Entrepreneurship experience and trip to Asia
Q1: Shaw, tell me about your experience
Shaw: I have been building open-source projects for many years and once created an open-source space networking project. However, my partner removed me from the GitHub organization and sold the project for $75 million, and I received nothing. He never wrote a line of code, while I was the lead developer. Although I am suing him, the incident cost me everything, including my reputation.
Later I started over and focused on AI agent research, but because my former partner had taken all the funds, I had to shoulder everything myself, even going into debt while taking on service work to make ends meet. Eventually the metaverse concept cooled off and that direction no longer made sense.
After that, I joined Webaverse as lead developer. It went smoothly at first, but the project was later hacked and its funds were stolen, so the team had to pivot. The experience was extremely difficult and nearly broke me.
I have been through many setbacks, but I kept pushing forward. I worked with the founder of Project 89 (a neuro-linguistic, viral, interactive AI project) to launch a platform called Magic and closed a seed round. He hoped to build it into a no-code tool that makes it easy for users to assemble agent systems. My view was that if you hand users a complete solution, they may just copy it; if you don't, they won't know where to start. When the funds were about to run out, I decided to focus on developing the agent system itself, and by then I had already created the first version of Eliza on that platform. It all may sound crazy, but I am always experimenting and exploring new directions.
Q2: What is the situation of the Asian developer community?
Shaw: I've been in Asia for the past few weeks, meeting intensively with the local developer community. Since the launch of our project, especially since AI Agent related content (such as the ai16z project) gained attention, I have received a lot of information from Asia, especially China, and we found that there are many supporters here.
I met many members through a community called 706, and someone helped us manage the Chinese channel and Discord, and organized a small hackathon. I also met many developers at the event. After reviewing their projects, I felt that I must come here to meet everyone in person. So, we planned a trip and visited multiple cities to meet with developers.
The local community was very welcoming and organized one event after another for us. This also allowed me to communicate with many people, learn about their projects and make connections. In the past few days, I have traveled from Beijing and Shanghai to Hong Kong. I am now in Seoul and will go to Japan tomorrow.
During these meetups, I saw many interesting projects, such as games, virtual girlfriend apps, robots, and wearables. There are projects involving data collection, fine-tuning, and annotation that may have great prospects when combined with our existing technology. I am particularly interested in integrating AI agents into DeFi protocols; this approach can lower the barrier for users and may become a killer application in the next few months. While many projects are still in their early stages, the enthusiasm and creativity of the developers are impressive.
Part.2 AI Agent + DeFi use cases and practical discussion
Q3: The valuation of ai16z has reached billions of dollars, the Eliza framework supports a large number of agents, developer interest is extremely high, and the project has been trending on GitHub for weeks. At the same time, people are growing tired of social media chatbots that can only auto-reply, and they are looking forward to agents that can actually complete tasks, such as creating tokens, managing token economic systems, maintaining ecosystems, and even performing DeFi operations. Do you think the future of agents will include these functions? Will Eliza's agents focus on DeFi?
Shaw: It's an obvious business opportunity, and I'm equally tired of the reply-bot situation where a lot of people just download the tool, show it off, and push a token; I really hope we can move beyond that. There are three types of agents I am most interested in right now: first, agents that can make you money; second, agents that can bring products to the right customers; and third, agents that can save you time.
We're still stuck in this auto-reply mode. I personally block all reply bots that aren't explicitly called upon, and I encourage everyone to do the same, because the social backlash forces agent developers to really think and build something meaningful. Blindly following a trend and commenting on everything doesn't actually help any coin.
What I am most interested in now is DeFi, because it has many arbitrage opportunities. More than anything else, DeFi fits the pattern of "there are opportunities to make money, but many people don't know how to use them." We are already working with some teams, such as Orca and the DLMM (Dynamic Liquidity Market Maker) on Meteora, so that the bot can automatically identify potential opportunities, adjust automatically when the price range shifts, and transfer the proceeds back to your wallet. That way users can safely stake their tokens and the entire process is automated.
Additionally, meme coins are very volatile. When a meme coin first launches it swings so sharply that running a liquidity pool (LP) position is difficult, but once it stabilizes, that volatility becomes an advantage and profits can be made through liquidity pools. I basically don't sell tokens; I make money through liquidity pools, and I have always encouraged other agent developers to do the same. I was surprised to find that many people don't. A friend told me he was having a hard time making money. I asked whether he had considered using a liquidity pool; he said he hadn't had time, but he should, because he could set up a liquidity pool and earn a lot from the token's trading volume.
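To make the LP automation idea concrete, here is a minimal sketch of the kind of rebalancing loop described above. It is purely illustrative: the helper functions stand in for whatever a DLMM SDK (such as Meteora's) actually exposes, the pool and wallet identifiers are hypothetical, and the thresholds are arbitrary.

```typescript
// Illustrative sketch of an automated range-LP rebalancer. The "SDK" functions
// below are mock placeholders, not Meteora's real API.
interface Position {
  lowerPrice: number;
  upperPrice: number;
}

// --- Mock stand-ins for a real DLMM SDK (replace with actual SDK calls) ---
let currentPosition: Position | null = null;

async function getPoolPrice(pool: string): Promise<number> {
  return 1.0; // pretend spot price
}
async function getMyPosition(pool: string): Promise<Position | null> {
  return currentPosition;
}
async function withdrawPosition(pool: string): Promise<void> {
  currentPosition = null;
}
async function openPosition(pool: string, lower: number, upper: number): Promise<void> {
  currentPosition = { lowerPrice: lower, upperPrice: upper };
}
async function sweepFeesToWallet(pool: string, wallet: string): Promise<void> {
  // In a real agent this would transfer accumulated trading fees to the owner.
}

const POOL = "ai16z-SOL";      // hypothetical pool identifier
const WALLET = "owner-wallet"; // destination for harvested fees
const RANGE_WIDTH = 0.15;      // keep liquidity within 15% of spot on each side

async function rebalanceOnce(): Promise<void> {
  const price = await getPoolPrice(POOL);
  const position = await getMyPosition(POOL);

  // If the price has drifted outside our range (or we have no position), re-center the band.
  if (!position || price < position.lowerPrice || price > position.upperPrice) {
    if (position) await withdrawPosition(POOL);
    await openPosition(POOL, price * (1 - RANGE_WIDTH), price * (1 + RANGE_WIDTH));
  }

  // Send accumulated LP fees back to the owner's wallet.
  await sweepFeesToWallet(POOL, WALLET);
}

// Check once a minute; an agent framework would normally schedule this.
setInterval(() => rebalanceOnce().catch(console.error), 60_000);
```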
Q4: Beyond liquidity pools, will these agents start managing their own funds for trading? For projects like ai16z and Degen Spartan AI, how will they run their own assets under management (AUM), and will agents gain that ability within this year?
Shaw: I think large language models (LLMs) are currently not suited to making trades directly. On the other hand, if there is a suitable API for obtaining market intelligence, an LLM can make reasonable judgments. For example, I have seen an AI system with a trade success rate of about 41%, which is quite good given how unstable most cryptocurrencies are. But LLMs are not good at complex decision-making; their core function is next-token prediction, and they make more reasonable decisions when given good contextual information.
Where LLMs become valuable is in converting unstructured data into structured data. For example, turning a group chat in which people shill tokens to each other into actionable data. We have a team doing a study called "Trust Markets." The core question is: if we treat the recommendations in group chats or on Twitter as genuine and trade on them, can we make money? It turns out that a small group of people really are good traders and recommenders, and we are analyzing the recommendations of those at the top, potentially basing our operations on their calls in the future.
It's like prediction markets, where a small group of people are very good at predicting while the majority are worse or susceptible to behavioral biases. So our goal is to track the performance of these individuals through measurable metrics and use that as a training strategy. I think this method is applicable not only to making money but also to more abstract areas such as governance and contribution rewards.
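A minimal sketch of the "unstructured to structured" step described above: an LLM is prompted to turn a raw chat message into a typed recommendation record that a trust-scoring system could aggregate over time. This is not the ai16z Trust Market implementation; it simply illustrates the pattern using a standard OpenAI-compatible chat completions endpoint, and the record fields and model choice are assumptions.

```typescript
// Illustrative: extract a structured token recommendation from a raw chat message.
interface Recommendation {
  user: string;                          // who made the call
  token: string;                         // token symbol or contract address mentioned
  sentiment: "buy" | "sell" | "neutral"; // direction of the call
  confidence: number;                    // 0..1, the model's own estimate
}

async function extractRecommendation(
  user: string,
  message: string
): Promise<Recommendation | null> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // any capable instruction-following model works here
      messages: [
        {
          role: "system",
          content:
            "Extract a token recommendation from the user's message as JSON with " +
            'fields token, sentiment ("buy"|"sell"|"neutral"), and confidence (0-1). ' +
            "Reply with JSON only, or the literal null if there is no recommendation.",
        },
        { role: "user", content: message },
      ],
    }),
  });
  const data = await res.json();
  const parsed = JSON.parse(data.choices[0].message.content);
  return parsed ? { user, ...parsed } : null;
}

// A recommender's track record can then be scored by comparing their calls against
// later price moves, which is the measurable metric mentioned above.
// extractRecommendation("degenspartan", "loading up on ai16z here").then(console.log);
```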
But making money is the easiest place to start because it's like an easily measured Lego brick. I don't think feeding an LLM time-series data and asking it to predict when to buy and sell tokens really solves the problem. If you design an agent to automatically buy and sell tokens, it can certainly do it, but it won't necessarily make money, especially when buying volatile tokens. So I think what we need is a more flexible and more reliable approach than simple buying and selling.
Q5: If there is an agent that is very good at trading, why open source it and create a token around it instead of just doing the trading yourself?
Shaw: Someone told me about a company that claims to predict token prices with 70% accuracy. I thought, if I could do that, I wouldn't be here telling you this; I'd just print unlimited money. 70% accuracy on short-term trades of something like Bitcoin means you can easily make unlimited profits. I'm sure companies like Blackstone are doing something similar to some extent; they try to process global data to make predictions about stocks and so on, and maybe they succeed at it, since they have a lot of people working on exactly this kind of thing.
But I think in a low-cap market, factors like behavioral drivers and the impact of social media probably matter more than any fundamentals you can model. For example, a celebrity retweeting a certain contract address may move the price more than anything an algorithm can predict. That is exactly why meme coins are interesting: they have very low market caps and are highly susceptible to social dynamics, and if you can track those social dynamics, you can find opportunities there.
Part.3 Agent framework value and Eliza’s development advantages
Q6: Based on Eliza’s application scenarios, how can the team bring a new and innovative Agent to the market by using Eliza? What are the main differentiating factors of this Agent? Is it the model, the data, or other features and support provided by Eliza?
Shaw: There is indeed a view that it is just a wrapper around ChatGPT, but that is like calling a website a wrapper around HTTP, or an application a wrapper around React. What really matters, for anything, is the product itself and whether there are customers who use it and pay for it.
Models have become extremely commoditized, and training a base model from scratch is very expensive, potentially costing hundreds of millions of dollars. If we had OpenAI's funding and market share, it might be easy to build an end-to-end training system and train our own model, but then we would be competing with Meta, OpenAI, xAI, and Google, who are all trying to improve benchmark performance to prove they have the best model in the world. Meanwhile, xAI open-sources the previous version every time it releases a new one, and Meta open-sources everything it does and gains share through open source.
But I don't think that's an area where we should be competing. We should focus on helping developers build products. At stake is the future of the internet: how websites and products work, and how users use applications. There are already many excellent products and pieces of infrastructure waiting to be used, but users don't know how to find them. You can't simply Google "make money with DeFi protocols"; you might find a list and do some research, but it's not easy if you don't know what to look for.
So the real value lies in connecting things that already exist, changing the existing model: no longer staying on a website and a landing page, but going onto social media to actually demonstrate the product's use cases and find the users who need it. I believe AI agents should not just be products; they should be part of the product, an interface for interacting with the product. I hope to see more attempts along these lines.
Q7: Why do you think Eliza's framework, or the platform you are building, is the most suitable home for developers and builders, compared with other frameworks and languages (the Zerepy team uses Python, the Arc team uses Rust)?
Shaw: I think language does matter, but it's not everything. More developers now build applications in JavaScript than in any other language. Almost every communication app, from Discord to Microsoft Teams, is built with JavaScript, or runs its UI and interactive parts in JavaScript on top of some native runtime, and a great deal of backend development now uses JavaScript and TypeScript as well. There are more JavaScript developers than in all other languages combined, especially with the rise of tools like React Native (a JavaScript-based framework for building native Android and iOS apps).
Many developers who have built on the EVM have also installed Node.js, run Ethereum development tools such as Forge or Truffle, and are familiar with this ecosystem. We can reach developers who have done web development, and they can build agents too.
Although Python is not particularly difficult to learn, it is somewhat difficult to package into different forms, and many people get stuck just installing Python. Python's ecosystem is messy and its package managers are complicated; many people may not know how to find the correct version to work with. Although Python is a good choice for backend development, from a lot of past work I have found that it doesn't handle asynchronous programming well enough and is awkward at string handling.
When I realized the advantages of TypeScript for developing agents, I knew this was the right direction. Beyond that, what we provide is an end-to-end solution: when you clone it, it works immediately. I think Arc is a cool project, but it's missing connectors, no social connectors. Projects like Zerepy are also good, but it mainly does social connectors and replies to messages in a loop. And many other projects, while having several agents talk to each other, are not really connected to social media.
I think these frameworks are the body, and the LLM (large language model) is the brain. What we build is the bridge that lets these frameworks connect to different clients. By providing these pieces, we significantly lower the barrier to entry and reduce the amount of code developers need to write. Developers only need to focus on their product and pull in the APIs they need, and we provide simple abstractions for input and output.
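As a generic illustration of the "body and bridge" idea (not Eliza's actual client interface), a connector simply normalizes a platform's inbound messages into one shape the agent runtime understands and routes the reply back out; the type and function names below are made up for the example.

```typescript
// Generic illustration of a social "connector": normalize platform messages into one
// shape, let the runtime produce a reply, and route it back to the platform.
interface AgentMessage {
  userId: string;
  text: string;
  channel: string;
}

interface AgentRuntime {
  respond(msg: AgentMessage): Promise<string>;
}

interface Connector {
  // Called by the platform SDK (Discord, Twitter, Telegram...) on each inbound message.
  handleInbound(msg: AgentMessage): Promise<void>;
}

function makeConnector(
  runtime: AgentRuntime,
  send: (channel: string, text: string) => void
): Connector {
  return {
    async handleInbound(msg) {
      const reply = await runtime.respond(msg);
      send(msg.channel, reply); // route the agent's answer back to the platform
    },
  };
}

// Toy usage: an "echo" runtime wired to a console stand-in for a chat platform.
const echoRuntime: AgentRuntime = { respond: async (m) => `You said: ${m.text}` };
const consoleConnector = makeConnector(echoRuntime, (ch, text) =>
  console.log(`[${ch}] ${text}`)
);
consoleConnector.handleInbound({ userId: "u1", text: "hello", channel: "general" });
```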
Q8: As a non-developer, how do you understand the functions and processes released by the Eliza platform? From a non-developer perspective, what kind of functions or support can Agent builders get after connecting to Eliza or other competing platforms?
Shaw: You just download the code to your computer, modify the character, and after launching it you have a basic bot that can chat, which is the most basic function. We have many plugins: if you want to add a wallet, just enable the plugin, add the private key for the EVM chain, and choose the chain you need; you can also add API keys, such as for Discord, or your Twitter username and email. All of this can be set up and used directly without writing code, which is why you see so many bots doing sales pitches and replies.
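For readers who want to see what "modify the character" looks like, below is an illustrative character definition. The field names follow the general shape of Eliza character files (name, clients, model provider, bio, style, and secrets for plugins), but exact fields vary between versions, so treat it as a sketch rather than a canonical schema.

```typescript
// Illustrative character definition for an Eliza-style agent.
// Field names may differ between Eliza versions; all secrets shown are placeholders.
const character = {
  name: "PizzaPal",
  clients: ["discord", "twitter"], // which social connectors to start
  modelProvider: "anthropic",      // or "openai", a locally hosted Llama, etc.
  bio: [
    "A helpful agent that chats about DeFi and can run simple on-chain tasks.",
  ],
  style: {
    all: ["concise", "friendly", "never shills tokens unprompted"],
  },
  settings: {
    secrets: {
      DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN ?? "",
      TWITTER_USERNAME: process.env.TWITTER_USERNAME ?? "",
      EVM_PRIVATE_KEY: process.env.EVM_PRIVATE_KEY ?? "", // enables the wallet plugin
    },
  },
};

export default character;
```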
After that, you can use some abstractions to perform other operations, called "actions." For example, if you want the bot to order a pizza for you, you only need to define an "order pizza" action. The system then needs the user's information, which may come from a provider that supplies details about the current user, and you also need an evaluator to extract the user details you need, such as name and address. If someone asks the agent to order a pizza via direct message, the system will first obtain the user's address and then place the order.
These three parts, providers, evaluators, and actions, are the basis for building complex applications. Basic operations, like filling out a form on a website, can be achieved with these three elements. We currently use this approach to handle tasks such as automatic LP management. It is just like building any website, mostly calling APIs, and developers should be able to get started easily.
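To illustrate the provider/evaluator/action split with the pizza example, here is a simplified sketch. The interfaces are trimmed-down approximations of Eliza's Action, Provider, and Evaluator shapes (the real signatures take a runtime, message, and state), and the handler just returns a message rather than calling a real ordering API.

```typescript
// Simplified stand-ins for the runtime types; the real ones carry far more context.
interface Message {
  userId: string;
  text: string;
}
interface UserProfile {
  name?: string;
  address?: string;
}

// Provider: supplies context the agent can read (here, what we know about the user).
const userProfileProvider = {
  get: async (msg: Message): Promise<UserProfile> => {
    // A real provider would read this from the agent's memory store.
    return { name: "Alice", address: "123 Main St" };
  },
};

// Evaluator: extracts and remembers new facts from the conversation (e.g. an address).
const addressEvaluator = {
  name: "EXTRACT_ADDRESS",
  handler: async (msg: Message): Promise<void> => {
    const match = msg.text.match(/deliver to (.+)$/i);
    if (match) {
      // A real evaluator would persist this so the provider can return it later.
      console.log(`Learned address for ${msg.userId}: ${match[1]}`);
    }
  },
};

// Action: the thing the agent can actually do once it has enough information.
const orderPizzaAction = {
  name: "ORDER_PIZZA",
  description: "Order a pizza for the user once their address is known.",
  validate: async (msg: Message) => /pizza/i.test(msg.text),
  handler: async (msg: Message) => {
    const profile = await userProfileProvider.get(msg);
    if (!profile.address) return "I need a delivery address first.";
    return `Ordering a pizza to ${profile.address} for ${profile.name ?? msg.userId}.`;
  },
};
```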
For non-developers, I recommend that you choose a platform that is already hosted and select the features or plugins you need without digging into the code. If you want, you can of course do it yourself.
Q9: How long does it take for a developer to build these functions or splice these components from scratch? How does the time cost of using the Eliza platform compare?
Shaw: It depends on what you want to do. If you just read the codebase and understand the abstractions, you can probably build very specific functionality in a very short time; I could probably build an agent that does what you want in a week. But if you want memory capabilities, information extraction, or a framework that supports these capabilities, it's a bit more complicated.
For example, I made a pizza-ordering application. It took me 5 hours and another person 2 hours, so it can basically be built in a day. If I had to build it from scratch myself, it would probably take a few weeks. Even though writing code is now accelerated by AI, the framework already gives you a great deal.
It's like React: all applications are built on React. You can definitely throw together a website quickly, but as the project's complexity increases it becomes very hard to manage. So when you do something simple, all you need is an LLM, a blockchain, and a loop, and you can probably do it in a few days. But we support all models, it runs completely natively, and it also supports transcription: you can send audio files to Discord and it will transcribe them, and you can also upload PDF files and chat with them. It's all built in, and most people don't even use 80% of what's inside.
So, if you just need to build a simple chat interface, you can do it yourself. But if you want to build a full-featured agent that can do a lot of things, then you need a framework that already handles most of it. I can tell you it took me many months to make this.
Q10: Compared with other agent platforms, which generally emphasize rapid design, deployment, and no-code operation, is Eliza better suited to building agents with customized, unique functions?
Shaw: If you take the entire Arc system, or all of Zerepy, or the entire GAME framework, the number of lines of code is far less than Eliza's, because Eliza contains many different capabilities even before you count the plugins. Many core functions, such as speech-to-text, text-to-speech, transcription, PDF processing, and image processing, are already built in. While that is a little too complex for some people, it makes a lot of things possible, which is why so many people are using it.
I've seen some agents that are simply Eliza plus a few extra features, like the Pump.fun plugin we provide, or Eliza plus the ability to generate images and videos, which is actually built in. I'd love to see more people try it out and see what happens when all the plugins are enabled at once.
My goal is for these agents to eventually write new plugins from scratch themselves, since there will be enough existing plugins to serve as examples and all of that will be trained into the models. Once a repository reaches 100 stars and a certain codebase threshold, companies like OpenAI and Anthropic will scrape the data and use it for training. This is part of our loop, and eventually the agent will be able to write new plugins itself.
Q11: If Eliza becomes the most capable codebase (not just the best-funded, but the one that provides the most powerful features to any agent developer), does that mean Eliza will be able to attract not only people from the crypto space but also more developers from traditional AI and machine learning backgrounds?
Shaw: If there is a breakthrough, yes. Eliza is not a crypto project per se; it just has many blockchain integrations (all as plugins). I've noticed that our popularity on GitHub has helped us attract people from the Web2 world, and many of them simply think it's a good framework for developing agents.
I personally would very much like people to come around to this. Some people are biased against cryptocurrencies, but clearly, 99% of agents will be trading 99.9% of the tokens in the future. Crypto is the native money of agents. Try giving an agent a PayPal account; it is really difficult. With crypto we can simply open a wallet, generate a private key, and get going easily.
We do attract some people from outside the crypto space, especially people who are not actively trading crypto: they are fine with cryptocurrencies but are more interested in the applications of agents.
Although some people are biased against crypto projects, they are willing to accept it as long as it brings real value. Many people see only hype and empty words and feel disappointed, but when they see that our projects are backed by actual research and engineering, they gradually change their views. I hope to attract more people, and I'm definitely making some progress, which is a huge differentiator.
Part.4 The vision of open source AGI and the future of AI Agent
Q12: How will you compete with OpenAI and the traditional AI labs in the future? Is having a bunch of agents built on top of Eliza working together a differentiator, or is this comparison fundamentally meaningless?
Shaw: That's a fair question. First, when you start Eliza, it uses a default model: a fine-tuned Llama model known as the Hermes model, trained by Nous Research. I really like what they're doing. One of them, Ro Burito, is both a member of Nous Research and an agent developer in our community; they helped launch the God and Satan bots, as well as a number of other bots. So we could probably train a model ourselves, but we have partners like them, and rather than competing with them I would rather work with them and complement each other's strengths.
Many people don't realize how easy it is to train a model; it really only takes one command. If I use Together, I can start fine-tuning a Llama model in five minutes just by entering a command and pointing it at a JSON file. Nous's advantage is not their fine-tuning method but their data. They collect and carefully curate data, and that is their core competency. Data collection, preparation, and cleaning are very tedious work, and they focus on data that is different from OpenAI's. That is where our market differentiation lies.
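Shaw's point that fine-tuning is now "one command" can be sketched as follows: the hard part is curating the dataset, and submitting the job is a single CLI call or API request to a hosted provider. The JSONL field name and the command shown are illustrative assumptions; providers differ (some expect a single text field, others prompt/completion pairs), so check the provider's documentation for exact syntax.

```typescript
// Sketch: write a tiny fine-tuning dataset as JSONL, then hand it to a hosted provider.
// The field name and the command in the comment are assumptions for illustration.
import { writeFileSync } from "node:fs";

// Curating good examples is the real work, as Shaw notes; the format itself is trivial.
const examples = [
  {
    text: "User: What is a DLMM?\nAssistant: A dynamic liquidity market maker that concentrates liquidity in price bins.",
  },
  {
    text: "User: Should an agent reply to every tweet?\nAssistant: No. Reply only when called upon or when it adds clear value.",
  },
];

// JSONL = one JSON object per line.
writeFileSync("train.jsonl", examples.map((e) => JSON.stringify(e)).join("\n"));

// The training job itself is then roughly one command (illustrative, not exact syntax):
//   together fine-tuning create --training-file train.jsonl --model meta-llama/Meta-Llama-3-8B
```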
We chose their model because it doesn't reject as many requests as OpenAI's does. We have a term for this, the "castrated" OpenAI model: basically, every agent developer feels OpenAI's model is limiting. Our market differentiation is that OpenAI will never let you make an agent that connects to Twitter, and they will never let you make an assistant that is truly personal or interesting. They're not bold enough, they're not cool enough, and they're under a lot of pressure.
If you go to ChatGPT now and ask it a question about the 2024 election, it might give you a long answer, but for a long time it would just tell you Biden directly, because that's how it was trained. I'm not saying I support one side or the other, but I think it's silly for a leading model to make that kind of political call so casually. OpenAI is very cautious, and they largely do what they think is right without actually letting users get what they want.
So the real competitive point is how you collect data and where the data comes from. You don't see OpenAI doing anything like this. If you look at Sam Altman's tweets, he has said that users really want an "adult mode", not in the NSFW (not safe for work) sense, but in the "adults in the room" sense: don't treat me like a child who can't be shown certain information. And because OpenAI is centralized, they face a lot of political pressure from the government. I think the open-source movement gets rid of that, and more importantly it brings diversity, a variety of different models that meet users' real needs and give them what they want rather than controlling their behavior. That approach will ultimately win. OpenAI has huge funds, a very high valuation, and a lot of talent, but decentralized AI offers community support, incentive mechanisms, and funding for rapid development, without waiting on hardware such as GPUs.
I believe the path to AGI is not either/or but a combination of approaches. If the world's largest companies are already doing something, does competing with them really accelerate progress? I consider AI agents the "stepchildren" of the AI world in that they are not as easy to measure against a benchmark as traditional AI; it is hard for a PhD researcher to show through quantitative metrics that this agent is better than that one. AI agents are more about solid engineering and creative problem-solving, and that is what distinguishes the many developers who have poured into this field.
Q13: What does open source AGI (artificial general intelligence) mean specifically? Is it through a group of agents autonomously collaborating to eventually produce a super-intelligent whole, or is there another way?
Shaw: If there are millions of developers using mostly open-source models and tools, they will compete with each other to optimize the capabilities of the entire system. I think AGI will actually take the form of the internet: the internet itself is composed of a large number of agents doing various things. And this doesn't need to be one unified system; we could call it AGI, but it depends on how you define AGI.
Most people think of AGI as intelligence that can do anything a human can. In fact, such an agent does not need to hold all knowledge in advance; it can obtain the information it needs by calling APIs or operating a computer. If it can operate a computer like a human, with a powerful memory system and rich functionality, and is eventually combined with physical robots, AGI will become self-evident.
However, in the AI field we often say that "AGI is whatever computers cannot currently do," and that goalpost keeps moving as new models are introduced. There is also the concept of ASI, artificial superintelligence, meaning a model so powerful it could control the world. I think if it were being built only by one big company like Microsoft, it could have that kind of superintelligence potential. But if there are many different players, each open-sourcing their own models and continuously fine-tuning and optimizing them, they will eventually form a multi-agent system like the internet, interacting with each other, each with its own expertise. Taken as a whole, that system will look like superintelligence.
This is a huge system, even a collection of systems. If an agent wanted to attack other agents, it would be very difficult because no one agent is much more powerful than the others. As technology advances, we are also reaching an energy limit where models cannot scale infinitely without requiring nuclear reactors to support it. Just like Microsoft is now investing in nuclear power plants, all companies are gradually improving their models.
OpenAI's new GPT-4 model comes very close to human intelligence, but other companies are actively developing similar models, and many people are focused on researching and implementing the latest techniques. Even though OpenAI's models approach AGI, the sheer number of users forces them to compromise on quality and serve smaller models to reduce the GPU burden.
Overall, I think the emergence of superintelligence will be driven by competition between companies, by models becoming more efficient, and by open source allowing more developers to participate. Hopefully, in the future I can easily find bots on Twitter that do something useful and choose the best one.
Q14: What role will tokens and markets in cryptocurrency play in realizing future innovation and vision?
Shaw: If you look at it from the perspective of "intelligence", the market itself is a kind of intelligence. It identifies opportunities, allocates capital, drives competition, and ultimately selects the best solutions. That competition may continue until a complete, mature system takes shape. I think market intelligence and competition play a big role here.
The role of cryptocurrencies in this is obvious. It has two key functions:
First, it provides a crowdfunding mechanism for projects that no longer relies on the old Silicon Valley venture capital model, one based on what people actually want rather than on a few VCs' definition of value. Although VCs often have deep insight, their investment logic may be limited to a particular geographic or cultural circle and overlook the potential of more diversified capital allocation.
Second, cryptocurrencies accurately capture people's emotional needs. Users would be very excited if a product that met those needs could actually be delivered. The main problem in the crypto space, however, is that many projects hit the emotional notes but ultimately fail to deliver on their promises. If these projects actually achieved their goals, such as building a bot that provides genuinely useful market insight, that would be enormously valuable.
In addition, the auditability of open source lets anyone competent verify whether a project is real. That transparency helps capital flow more efficiently toward opportunities with real potential. One of the big problems today is that most people can't invest in companies like OpenAI unless they go public, and by then the returns are relatively limited. Cryptocurrencies, in contrast, give people the chance to invest directly in projects at an early stage, realizing the dream of "participating in the future" and building generational wealth.
To make these mechanisms better, we need to better prevent fraud. I believe that open source and open development can greatly improve the efficiency of capital allocation in the market and accelerate the development of this field. At the same time, future agents will trade tokens with each other, and almost everything can be tokenized - trust, ability, money, etc. In summary, cryptocurrencies offer entirely new ways to allocate capital, accelerating innovation and the realization of future visions.
Part.5 Discussion of Token Economy and Value Capture Mechanism
Q15: Is the ai16z platform moving fast enough in implementing its token-economic value capture mechanism? How will you deal with potential competitive threats?
Shaw: The problem with open source blockchains is that the incentives for forking are very large because when you hold network tokens, there is a direct economic benefit. If we launch an L1, people may fork our L1, or feel like they can't really work with us because we are an L1.
Tribalism is strong in the crypto industry, largely due to this all-or-nothing competition rather than inclusive collaboration.
In reality, our token economic model needs to keep evolving and finding new ways to generate revenue. The Launchpad is not the final token economic model but an initial version. We have attracted a lot of attention; many partners want to launch on our platform, and they just need a managed way to launch their agent projects. We can provide plugins and ecosystem capabilities for them to use directly.
We plan to open-source the Launchpad, but we can foresee that once it is open-sourced, others will copy it. Projects that rely solely on launch platforms will need to rethink their long-term strategies; strategies that simply set up characters, burn tokens, and buy back may not be sustainable.
In the long term, we prefer to invest in technologies that expand the overall value of the ecosystem. In the short term, we need to meet market demand and ship the Launchpad. But after three months the launchpad model may become commoditized, with many projects failing and only a few continuing to create value.
The focus going forward is not simply launching agents but investing in projects that clearly create value. We have started making investments and acquisitions, which come with their own token economics, such as using revenue to buy back tokens and deploy them into further investments. We are also looking for new ways to increase the token's value, such as building long-term yield pressure, beyond simple mechanisms like charging network fees or burning through token pairings.
My goal is to move us beyond these simple models toward a larger vision. We hope to create a production studio-like platform that allows people to submit projects to DAOs and roles, validate popular projects, and then invest. I think the current token economic plan can be maintained for six months, but we are also actively thinking about the next token economic model.
Q16: If ai16z's token economic model works and the tokens hold real value, will that not only provide more financial support for the development platform, but also, through the agents themselves, indirectly promote the open-source framework and bring growth to the ecosystem?
Shaw: I think about this a lot. In AI there is a concept often called "foom", which refers to agents writing their own code and improving it faster than humans can. They write code for every conceivable use case and submit pull requests (PRs), which other agents review and test. This could happen within a few years, maybe even in less than two. If we can keep at it, we will reach an "escape velocity" where the system accelerates exponentially and may eventually reach the AGI (artificial general intelligence) stage and fully build itself.
We should do everything we can to accelerate toward that future. I've already seen projects like Reality Spiral where agents submit PRs to GitHub, so this trend has begun.
If we can let the token accumulate value while investing in our ecosystem and driving its growth, this creates a positive cycle: the rising token value drives ecosystem growth, and the ecosystem in turn increases the token's value. Eventually the system reaches a self-sustaining state.
However, there is still a lot of practical work to be done. The key is to ensure that the token accumulates value in the intended way and meets the needs of users. For example, Launchpad was developed based on the needs of users, helping them realize what they were already building.
In the future, we can even have agents create specific projects directly, with multiple agents competing to build them and the community ultimately voting to select the best result. This pattern can quickly become extremely complex and powerful, and our goal is to get there quickly.
Part.6 Explore cross-chain development and blockchain options
Q17: On which blockchain do you think AI agents should be developed: Solana or Base?
Shaw: From a user's perspective, blockchain has gradually been "normalized", and many people don't even know which chain their tokens are on. Although there are significant differences in programming and functionality between EVM and SVM models, they are essentially indistinguishable to users. Users simply check the wallet to see if funds are available, or to exchange tokens.
For the future of agents, I hope the differences between chains can be blurred, and our token will certainly be bridged frequently between the two. Currently our token is an SPL Token-2022 token with minting enabled, so there are some technical challenges to going cross-chain, but we are overcoming them.
I actually like the Base team. They are very supportive of us, so I have no particular preference. Solana was chosen because the users are here. As product people, we should put aside our personal ideals and focus on user needs and provide the services they need in the places they like.
Currently you can deploy an agent on Base or on StarkNet; the choice is completely open. The fragmentation between these ecosystems comes more from the price and availability of their respective tokens and from the existing developer communities and infrastructure. The main reason we chose Solana is that projects like DAOs.fun and the users are on this chain. Overall, though, I don't have a strong platform preference; the best strategy is to cover all platforms, observe where the users are, and provide services there.
Part.7 Transformation from slop bots (AI spam bots) to utility programs
Q18: Is there a natural transition period between the current situation where some "slop Agents with no practical use" gradually lose the market and the emergence of "high-performance Agents" that can truly perform efficient and practical tasks in the future?
Shaw: I think we will soon enter a new stage where Agents will do amazing things. If people can make money from Agents, then this Agent will be very successful.
As for whether slop agents will disappear, I think they probably won't disappear completely. Their presence is currently a problem for the platforms (such as Twitter), so the platforms' solution is to use algorithms to penalize more heavily the "people" who cause trouble.
From a developer's perspective, agents won't have any impact if they can't attract users. In this regard, my approach is to directly block those agents that are meaningless. I think if the agent is not specifically called upon and does not provide valuable content, we do not want this content to appear on the platform.
Agents in the DeFi field have not yet been fully developed, although teams are working hard on research and development, and I believe that over the next month we will see a lot of new progress. We also have not yet seen an agent that can find users for its product; many agents today are just used for inefficient promotion. But imagine an agent that finds exactly the solution you need: you definitely won't block it, you'll appreciate it, as if you were using a new kind of Google.
At present we are still in the "dogs playing poker" stage. At first, if you walked into a room and saw four dogs playing poker, you would think it was incredible; but after a few weeks you would ask, "How are these dogs doing? Are they actually making money, or just holding cards?" Once the novelty wears off, people will start paying attention to which dog plays poker best, or who has the best poker algorithm.
So although "internet-celebrity agents" may always exist, we will see more useful agents in the future, just as in Web2: McDonald's might launch a Grimace (a McDonald's character) agent, or influencers inundated with private messages after posting content might be forced to build a reply bot to maintain a virtual relationship with their fans.
Q19: At present it is difficult to obtain details such as an agent's architecture, model, or hosting location; users can only trust the developers. How can this be made visible and verifiable?
Shaw: I believe someone will hear this need and build that platform, and I agree that this is where the opportunity lies. TEEs (trusted execution environments) have been around for a long time, and I have talked to many developers; before agents appeared, it was a very obscure concept. The emergence of agents made people start to ask, "If this is an autonomous agent, how do we prevent it from simply using the private key to steal funds?" So people began to pay attention to TEEs, and I think Phala did a good job because they addressed an obvious need: verifiable remote attestation. This is also why we are seeing the rise of products like ZKML (zero-knowledge machine learning), which put users at ease by providing the necessary trust mechanisms.
We will see many products responding to this uncertainty, and it is itself a great product opportunity. It would be great if someone built a registry that certifies these agents; just as decentralized exchanges have trust scores, we could see similar agent verification systems. Open source will be an important incentive, because if the code is relatively simple and the issue is trust, why not open-source it and let everyone see it? This may give rise to a new class of "programmer influencers" who evaluate the legitimacy of these agents.
I think that in five years, you can look up information about any Agent at any time, and there may be a website dedicated to providing this information. If not, someone should start building such a platform this year.
Original video link: