DeAI contender OORT: breaking the bottleneck in AI development and motivating everyone to contribute data


Reprinted from chaincatcher

01/14/2025

Author: ChainCatcher

The AI sector has entered an era of explosive growth. The research firm Dealroom, in its "2024 AI Investment Report", projected that global AI investment would reach US$65 billion, accounting for one-fifth of all venture capital. Goldman Sachs Research estimates that global AI investment may approach US$200 billion in 2025.

Riding the AI boom, capital has poured into AI-related stocks. The A-share company Cambricon has soared more than 560% from its February low, pushing its market capitalization past the 250-billion-yuan mark, while US-listed Broadcom's market capitalization has topped one trillion US dollars, making it the eighth-largest listed company on the US stock market.

The combination of AI and Crypto is also booming. Around Nvidia's artificial intelligence conference, Bittensor (TAO) led the way with a market capitalization of over US$4.5 billion, while assets such as Render (RNDR) and Fetch.ai (FET) grew rapidly in value.

Following large language models, AI Agents have become the engine of this round of the AI market. GOAT's token, for example, rose more than 100x within 24 hours, and ACT gained nearly 20x in a single day. Their eye-catching performance ignited the Crypto world's enthusiasm for AI Agents.

However, there are hidden worries behind AI's rapid development. According to "AI Failures Will Proliferate in 2025: A Call for Decentralized Innovation", published in Forbes by OORT founder and CEO Dr. Max Li, the AI industry faces many issues, such as data privacy, ethical compliance, and the trust crisis caused by centralization, all of which increase the risk of AI failure; decentralized innovation has therefore become a top priority.

OORT has already built one of the largest decentralized cloud infrastructures in the world, with network nodes covering more than 100 countries and millions of dollars in revenue. It has launched the open-source Layer 1 Olympus protocol, whose consensus algorithm, "Proof of Honesty" (PoH), is protected by a US patent, and it encourages everyone to contribute data through the native OORT token, closing the incentive loop. Recently, OORT launched OORT DataHub, a further step toward global, diverse, and transparent data collection that lays the foundation for the rise of DeAI.

OORT was born by accident in a classroom

To understand the OORT project, you first need to understand the problems it solves, and those problems trace back to the main bottlenecks in today's AI development: data and centralization.

1. The drawbacks of centralized AI

1. Insufficient transparency leads to a crisis of trust. The decision-making process of centralized AI models is often opaque, a "black box" operation. It is difficult for users to understand how an AI system makes decisions, which can have serious consequences in critical applications such as medical diagnosis and financial risk control.

2. Data monopoly and unequal competition. A few large technology companies control a large amount of data resources, forming a data monopoly. This makes it difficult for new entrants to obtain enough data to train their own AI models, hindering innovation and market competition. At the same time, data monopoly may also lead to the misuse of user data, further exacerbating data privacy issues.

3. Moral and ethical risks are difficult to control. The development of centralized AI has triggered a series of moral and ethical issues, such as algorithmic discrimination, bias amplification, etc. In addition, the application of AI technology in military, surveillance and other fields has also raised concerns about human rights, security and social stability.

2. Data bottleneck

1. Data shortage. In the process of the vigorous development of artificial intelligence, the problem of data shortage has gradually become prominent and has become a key factor restricting its further development. AI researchers’ demand for data is exploding, yet the supply of data is struggling to keep up. Over the past decade, the continuous expansion of neural networks has relied on large amounts of data for training, as exemplified by the development of large language models such as ChatGPT. But today, traditional data sets are being exhausted and data owners are beginning to restrict content use, making data acquisition increasingly difficult.

The causes of data shortage are manifold. On the one hand, the quality of data varies, and there are problems such as incompleteness, inconsistency, noise and bias, which seriously affect the accuracy of the model. On the other hand, scalability challenges are huge, collecting sufficient data is costly and time-consuming, real-time data maintenance is difficult, and manual annotation of large datasets is a bottleneck. At the same time, access and privacy restrictions cannot be ignored, and data silos, regulatory constraints, and ethical issues make data collection difficult.

Data shortage has had a profound impact on the development of AI. It limits model training and optimization, and may force AI models to transform from pursuing large-scale to more professional and efficient. In industry applications, it is also difficult to achieve accurate predictions and decisions, hindering AI from playing a greater role in medical, financial and other fields.

To cope with the data shortage, researchers and companies are exploring various approaches: collecting non-public data, which raises legality and quality concerns; focusing on specialized data sets, whose usability and practicality remain to be verified; and generating synthetic data, which has potential but also many drawbacks. Optimizing traditional data collection methods and exploring decentralized data collection have likewise become important directions. In short, the data shortage must be addressed urgently to keep AI development sustainable and healthy.

2. The centralized AI "data black box" causes problems such as privacy risks, lack of diversity, and opacity.

Under the current model, there is a lack of transparency in data collection and processing, and users are often unaware of the whereabouts of their personal data and how it is used. Many machine learning algorithms require a large amount of sensitive user information for training, which involves the risk of data leakage. Once privacy protection measures are not in place, users' private information may be abused, leading to a crisis of trust.

Lack of diversity is also a major drawback. Currently, the data that centralized AI relies on is often concentrated in a few fields or regions, and most of the international mainstream data sets are mainly in English, resulting in a single source of data. This makes the trained AI model weak in performance when faced with diverse real-life scenarios and prone to bias. For example, when processing multilingual tasks or data from different cultural backgrounds, the model may not be able to accurately understand and respond, limiting the broad applicability and fairness of AI technology.

Opacity runs through the entire data processing process. From the source of data collection, to the processing method, to how it is finally transformed into decisions, these links are like a black box to the outside world. This opacity not only makes it difficult for users to assess data quality, but also makes it difficult to detect whether the model is biased by data bias, thereby affecting the fairness and accuracy of decision-making. In the long run, it is not conducive to the healthy development of AI technology and its widespread acceptance by society.

3. The challenge of data collection has become a key factor hindering the development of AI. According to Dr. Max Li’s column in Forbes, common problems often come from the following aspects:

(1) Data quality issues.

Incompleteness: Missing values or incomplete data can harm the accuracy of an AI model.

Inconsistencies: Data collected from multiple sources often have mismatched formats or conflicting entries.

Noise: Irrelevant or erroneous data can weaken meaningful insights and confuse models.

Bias: Data that is not representative of the target population can lead to biased models, causing ethical and practical issues.
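The four data-quality problems above can be made concrete with a small automated check. The following is a minimal sketch in plain Python; the records, field names, and thresholds are hypothetical examples, not part of any OORT API:

```python
# Minimal sketch of automated data-quality checks on collected records.
# The field names and the 0-120 age range are illustrative assumptions.

def quality_report(records, required_fields):
    """Count incomplete, inconsistent, and noisy records."""
    report = {"incomplete": 0, "inconsistent": 0, "noisy": 0}
    for rec in records:
        # Incompleteness: a required field is missing or empty.
        if any(rec.get(f) in (None, "") for f in required_fields):
            report["incomplete"] += 1
        # Inconsistency: 'age' stored as text instead of a number.
        if isinstance(rec.get("age"), str):
            report["inconsistent"] += 1
        # Noise: numeric values outside a plausible range.
        elif isinstance(rec.get("age"), (int, float)) and not 0 <= rec["age"] <= 120:
            report["noisy"] += 1
    return report

records = [
    {"age": 34, "label": "cat"},    # clean
    {"age": "34", "label": "cat"},  # inconsistent type
    {"age": 250, "label": "dog"},   # noisy outlier
    {"age": 28, "label": ""},       # incomplete
]
print(quality_report(records, required_fields=["age", "label"]))
# {'incomplete': 1, 'inconsistent': 1, 'noisy': 1}
```

Detecting bias (the fourth problem) requires comparing the dataset's distribution against the target population, which cannot be judged record by record and is omitted here.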

(2) Scalability issues.

Quantity challenge: Collecting enough data to train complex models can be costly and time-consuming.

Real-time data requirements: Applications such as autonomous driving or predictive analytics require continuous and reliable data streams, which can be challenging to maintain.

Manual annotation: Large data sets often require manual labeling, which creates a serious bottleneck in time and manpower.

(3) Access and privacy issues.

Data silos: Organizations may store data in isolated systems, limiting access and integration.

Compliance: Regulations such as GDPR, CCPA and others restrict data collection practices, especially in sensitive areas such as healthcare and finance.

Ethical issues: Collecting data without user consent or without transparency can lead to reputational and legal risks.

Other common bottlenecks in data collection include a lack of diverse and truly global datasets, high costs associated with data infrastructure and maintenance, challenges in handling real-time and dynamic data, and issues related to data ownership and licensing.

OORT arose from a real need, and its founding was somewhat accidental. In 2018, Max was teaching graduate students at Columbia University. A project in his artificial intelligence course required training an AI agent, and the high cost of traditional cloud services left the students stuck. To resolve this dilemma, Max came up with the idea of creating a decentralized AI platform, OORT: the team first explored using blockchain as an incentive layer to connect underutilized nodes around the world, built a preliminary prototype of a decentralized cloud solution, and experimented with PayPal for payments and credit allocation, laying the groundwork for OORT's native token.

Today, OORT is a DeAI leader, designing state-of-the-art AI infrastructure by combining blockchain verification with a global network of data centers and edge devices.

Facing the shortage of AI training data, OORT achieves global data collection by connecting underutilized nodes around the world via blockchain. To encourage broad participation and solve the problem of cross-border micropayments, OORT adopted cryptocurrency payments, building a unique business model. Its OORT DataHub product, launched on December 11, tackles the bottlenecks of data collection and labeling; its customer base spans small and medium-sized enterprises and some leading global technology companies. The product's decentralized nature enables truly global, diverse, and transparent data collection: cryptocurrency lets data contributors worldwide easily earn rewards, while blockchain technology ensures that data sources and usage are recorded on-chain, addressing many pain points faced by Web2 cloud services and AI companies. As of this writing, OORT DataHub has received data uploads from more than 80,000 contributors around the world.

Hard-core scientific research and academic background, funded by giants, serving more than 10,000 companies and individuals

The OORT team has a strong lineup. Max is not only the founder and CEO of OORT but also a faculty member at Columbia University, co-founder of Nakamoto & Turing Labs in New York, and founding partner of New York's Aveslair Fund. Highly influential in the technology field, he holds more than 200 international and U.S. patents (granted and pending) and has published numerous papers in well-known academic journals covering communications, machine learning, and control systems. He also serves as a reviewer and technical program committee member for several leading journals and conferences in the field, and as a grant reviewer for the Natural Sciences and Engineering Research Council of Canada.

Before founding OORT, Max worked with Qualcomm's research team on 4G LTE and 5G system design.

Max is also a regular contributor to Forbes. His recent Forbes articles include "AI Failures Will Proliferate in 2025: A Call for Decentralized Innovation" and "Focus on Decentralized AI in 2025: The Convergence of Artificial Intelligence and Cryptocurrency", in which he emphasized the development and importance of decentralized AI in the cryptocurrency field and the changes and potential it brings. It is not hard to see that Max is a staunch supporter of decentralized AI.

Michael Robinson, Chairman of the OORT Foundation, is also a managing director of Agentebtc and of Burble, managing partner of the Aveslair Fund, co-founder and chairman of the Reed-Robinson Fund, and a partner at Laireast. With rich cross-domain experience, he is committed to promoting the integration of global business and technology.

Other core team members come from top universities and well-known institutions, including Columbia University, Qualcomm, AT&T, and JPMorgan Chase. OORT's development has also been backed by well-known crypto venture firms such as Emurgo Ventures (of the Cardano Foundation), as well as support from Microsoft and Google.

To date, OORT has raised US$10 million from well-known investors including Taisu Ventures, Red Beard Ventures, and Sanctor Capital, received funding from Microsoft and Google, and established partnerships with industry giants such as Lenovo Image, Dell, Tencent Cloud, and BNB Chain.

OORT completed the early exploration of the project from 2018 to 2019, and concentrated on research from 2020 to 2021, developing a series of core technologies, including data storage, computing, and management technologies, and began to build the infrastructure of the OORT ecosystem. During this period, OORT launched the decentralized storage node Edge Device, which initially formed a prototype of the product and laid a technical foundation for subsequent commercial development.

Starting in 2022, OORT began exploring commercialization paths:

1. OORT has built a data market platform to connect data providers and data users. Data providers can sell their data on the platform, while data users can purchase the required data for AI model training and other purposes. OORT achieves profits by charging transaction fees. At the same time, in order to encourage data providers to provide high-quality data, the platform has also established a reward mechanism to provide corresponding rewards to providers based on data quality, diversity, frequency of use and other factors.

2. Provide decentralized cloud storage and computing services. Enterprises and individuals can rent OORT's cloud resources to run their own AI applications. Compared with traditional cloud services, OORT's decentralized cloud services have higher security, lower costs and better scalability. Users can flexibly select the cloud resources they need based on their actual needs and pay according to usage.

3. For the specific needs of large enterprises, OORT provides customized AI solutions. These solutions are based on OORT's decentralized technology architecture and provide enterprises with one-stop services such as data management, model training, and intelligent decision-making. Through cooperation with enterprises, OORT can not only obtain a stable source of income, but also accumulate industry experience to further optimize its products and services.
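The reward mechanism described in item 1, rewarding providers according to data quality, diversity, and frequency of use, can be sketched as a simple weighted score. The weights and inputs below are purely illustrative assumptions, not OORT's published formula:

```python
# Illustrative sketch of a contributor reward score combining data quality,
# diversity, and frequency of use. Weights are hypothetical assumptions,
# not OORT's actual mechanism.

def reward_score(quality, diversity, usage, weights=(0.5, 0.3, 0.2)):
    """All inputs normalized to [0, 1]; returns a score in [0, 1]."""
    wq, wd, wu = weights
    return wq * quality + wd * diversity + wu * usage

# A contributor with high-quality, moderately diverse, rarely used data:
score = reward_score(quality=0.9, diversity=0.6, usage=0.2)
print(round(score, 2))  # 0.5*0.9 + 0.3*0.6 + 0.2*0.2 = 0.67
```

Weighting quality most heavily reflects the platform's stated goal of encouraging high-quality contributions; the actual OORT mechanism may use entirely different factors and scales.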

Currently, OORT serves more than 10,000 corporate and individual customers around the world, and its network nodes have achieved millions of dollars in revenue, proving the feasibility of its business model.

Everyone can participate in and benefit from the development of AI

OORT's products include OORT Storage, OORT Compute, and OORT DataHub. On top of this application layer, OORT also offers OORT AI, a solution that helps enterprises quickly integrate intelligent assistants. The functions of the three core products are as follows:

・OORT Storage is currently the only decentralized solution whose performance benchmarks against the AWS S3 storage service, and it has many registered corporate and individual customers;

・OORT Compute aims to deliver decentralized data analysis and processing, offering better cost-effectiveness for AI model training and inference. It is still in preparation and has not yet launched;

・OORT DataHub, officially launched on December 11, marks a new stage of development for the project. It will serve as the new focus of OORT's development and is expected to be a "cash cow".

OORT DataHub provides an innovative method of data collection and annotation that lets contributors worldwide collect, classify, and pre-process data for AI applications. By using blockchain technology, it addresses the single-source and annotation-efficiency problems of traditional data collection while improving security. Notably, OORT DataHub has been listed on the Shenzhen Data Exchange, opening a new channel for artificial intelligence companies and research institutions to obtain high-quality, diverse, secure, and compliant data sets.

OORT DataHub offers users multiple ways to earn points, such as daily logins, completing tasks, verification tasks, and referral programs. Accumulated points make users eligible for monthly draws with USDT rewards.

The product effectively removes intermediaries from data collection, providing a more secure, participant-controlled process, in line with growing calls for a more ethical approach to AI.

Building on OORT DataHub, OORT also launched the OORT DataHub Mini App, seamlessly integrated with Telegram's mini-app platform. It lets users contribute data and participate in decentralized data collection more easily, further expanding the OORT ecosystem and boosting user participation; the integration is expected to bring millions of users and drive growth for the platform.

OORT DataHub is the embodiment of OORT's vision to enable everyone to participate in the development of AI and benefit from it regardless of their geographical location, economic status, or technical background. OORT's mission is to provide reliable, secure, and efficient decentralized AI solutions to promote the popularization and application of AI technology worldwide while ensuring data privacy, security, and ethical compliance.

Through the decentralized data market model, OORT breaks the data monopoly and allows data providers around the world to upload their data to the platform for trading and sharing. Regardless of whether they are individual users or enterprise users, as long as they have valuable data, they can obtain corresponding benefits on the OORT platform, achieving a fair distribution of data value.

A decentralized architecture means data is no longer stored centrally on a single server or data center but is distributed across nodes worldwide. Each node encrypts its data, and only authorized users can access and use it. Meanwhile, the tamper-resistance of blockchain technology ensures the integrity and authenticity of data, effectively preventing leakage and tampering.
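The tamper-resistance property can be illustrated with a minimal hash-chain sketch, where each record commits to the hash of the previous one, so altering any earlier record invalidates every later link. This is a generic illustration of how blockchains detect tampering, not OORT's Olympus protocol:

```python
import hashlib
import json

# Minimal hash-chain sketch: each block stores the previous block's hash,
# so changing any earlier record breaks all later links. Generic
# illustration only; record contents are hypothetical.

GENESIS = "0" * 64

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    chain, prev = [], GENESIS
    for rec in records:
        block = {"data": rec, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev:
            return False  # link to the previous block is broken
        prev = block_hash(block)
    return True

chain = build_chain(["upload:alice", "upload:bob", "upload:carol"])
print(verify(chain))             # True: chain is intact
chain[0]["data"] = "upload:eve"  # tamper with an early record
print(verify(chain))             # False: later links no longer match
```

Because each link depends on the full contents of its predecessor, rewriting history would require recomputing every subsequent hash, which a decentralized consensus mechanism is designed to make infeasible.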

Since OORT's decentralized network is composed of many nodes, there is no single point of failure. Even if a node is attacked or fails, other nodes can still operate normally, ensuring the stability and reliability of the entire system. In addition, the decentralized consensus mechanism makes it difficult for attackers to tamper with system data or control the entire network, improving the security of the system. For example, in the face of a distributed denial-of-service attack (DDoS), OORT's distributed architecture can disperse the attack traffic so that the system can maintain normal operation and ensure that user data and services are not affected.

OORT also addresses data collection, control, and management by providing innovative collection and annotation methods, establishing strict data quality control and verification mechanisms, and applying advanced AI algorithms for intelligent data management and analysis.

OORT attaches great importance to data protection and privacy compliance, and strictly abides by data protection regulations around the world, such as GDPR, HIPAA, etc., to ensure that user data is processed legally.

By sorting out OORT's existing product lines and product progress, combined with OORT's description of its future vision, we can see that OORT has built a fair, transparent, and trustworthy AI ecosystem.
