
<100 subscribers

The AI field is currently dominated by a few large tech companies, posing risks of centralization and opaque policies, while purely open-source models face sustainability challenges. The Sentient project proposes building a decentralized Artificial General Intelligence (AGI) ecosystem to address these issues through the following core components:
GRID (Global Research & Intelligence Directory): An open collaborative network aggregating models, agents, and tools contributed by developers worldwide, functioning like an "AI App Store" where contributors can earn rewards.
ROMA (Recursive Open Meta-Agent): A multi-agent orchestration framework that handles complex problems through hierarchical task decomposition and collaboration, outperforming single high-performance models in multiple benchmarks.
OML (Open, Monetizable, and Loyal AI): Protects developer ownership by embedding hidden model fingerprints to verify origins, combined with blockchain to record usage and authorization.
Sentient Chat: Provides a user-friendly entry point to access GRID resources through natural conversation, demonstrating the practical application of open AGI.
Sentient has secured $85 million in seed funding and is committed to open-source technology and community governance. However, the project still faces challenges such as scalable ecosystem management and continuous improvement of OML fingerprinting technology. This endeavor offers a new direction for building a sustainable, participatory AI future.
Summary
Author | Tiger Research
We are transitioning from the platform era to the AI industry, yet we again face centralization by a handful of large tech companies. We must ask a critical question: What should we do to build a sustainable AI ecosystem for everyone? A simple open-source approach is insufficient.
1. The AI Era: The Unsettling Truth Behind Convenience
Since ChatGPT's launch in 2022, AI technology has deeply permeated our daily lives. We now rely on AI assistants for tasks ranging from simple travel planning to writing complex code and creating images and videos. Notably, we can access all these functions for free or for just $30 per month to use the highest-performing models.
However, this convenience may not last indefinitely. While AI technology appears superficially as "technology for everyone," it is actually controlled by a monopolistic structure dominated by a few large tech companies. The bigger issue is that these companies are becoming increasingly closed. OpenAI was initially founded as a non-profit but has transformed into a for-profit structure and, despite its name, is moving closer to becoming "ClosedAI." Anthropic has also begun serious monetization efforts, nearly quadrupling the cost of its Claude API.
The problem isn't just cost. These companies can restrict services and change policies at any time, and users have no influence over such decisions. Consider this scenario: You are a startup founder. You've just launched an innovative service based on AI technology, but one day the model you use changes its policy and restricts access. Your service grinds to a halt, and your business faces an immediate crisis. Individual users face the same situation. The conversational AI models we use daily, like ChatGPT, and the AI functions integrated into our workflows could all meet the same fate.
2. Open-Source Models: Between Idealism and Reality
Open source has historically been an effective tool in the IT industry against monopolies. Just as Linux established itself as an alternative in the PC ecosystem and Android in the mobile ecosystem, open-source AI models promise to be a balancing force, alleviating the concentrated market structure in the AI industry.
Open-source AI models refer to those free from the control of a few large tech companies, allowing anyone to access and use them. While the degree of openness varies, companies typically release AI model weights, architectures, and partial training datasets. Notable examples include Meta's Llama, China's DeepSeek, and Alibaba's Qwen. Other open-source AI model projects can be found through the Linux Foundation's LF AI & Data.
However, open-source models do not offer a perfect solution. While the philosophy remains idealistic, practical questions persist: Who will bear the enormous costs of data, computational resources, and infrastructure? The AI industry is particularly capital-intensive with high-cost structures; idealism alone is insufficient to sustain it. No matter how open and transparent a model is, it will eventually face practical limitations, like OpenAI did, and choose the path of commercialization.
Similar difficulties have recurred in the platform industry. Most platforms initially offer convenience and free services during rapid growth. However, as operational costs increase over time, companies ultimately prioritize profitability. Google is a classic example. The company's initial motto was "Don't be evil," but it gradually prioritized ads and revenue over user experience. Korea's leading communication service, KakaoTalk, underwent the same process. The company initially announced it would not include ads but eventually introduced advertisements and commercial services to cover server costs and operational expenses. When ideals clash with reality, companies make this inevitable choice.
The AI industry is unlikely to escape this structure. As companies continuously face rising costs for maintaining massive data, computational resources, and infrastructure, the system cannot sustain itself solely through an idealistic "complete openness." For open-source AI to survive and thrive long-term, developers need a structural approach that designs sustainable operational structures and revenue models beyond simple openness.
3. Open AGI: Of Everyone, By Everyone, For Everyone
Sentient proposes a new approach at this critical juncture. The company aims to build decentralized Artificial General Intelligence (AGI) infrastructure based on a decentralized network, addressing both the monopoly of a few companies and the sustainability shortcomings of open source.
To achieve this, Sentient remains fully open while ensuring builders receive fair compensation and retain control. Closed models operate and monetize efficiently but are opaque "black boxes" to users, offering no choice. Open models offer users transparency and high accessibility, but builders cannot enforce policies and face difficulties in monetization. Sentient resolves this asymmetry. The technology is fully open at the model level but prevents the abuse experienced by existing open systems. Anyone can access and utilize the technology, but builders maintain control over their models and earn revenue. This structure allows everyone to participate in AI development and utilization and share the benefits.
GRID (Global Research & Intelligence Directory) is at the heart of this vision. GRID represents the intelligence network built by Sentient and serves as the foundation for the open AGI ecosystem. Within GRID, Sentient's core technologies—such as ROMA (Recursive Open Meta-Agent), OML (Open, Monetizable, and Loyal AI), and ODS (Open Deep Search)—operate alongside various technologies contributed by ecosystem partners.
Using a city analogy, GRID represents the city itself. AI artifacts (models, agents, tools, etc.) created worldwide gather in this city and interact with each other. ROMA connects and coordinates multiple components like the city's transportation network, while OML protects contributors' rights like a legal system. However, this is just an analogy. Each element within GRID is not confined to a fixed role; anyone can utilize them as needed or build upon them in entirely new ways. All these elements work together within GRID to create an open AGI built by everyone for everyone.
Sentient also possesses a strong foundation to realize this vision. Over 70% of the entire team consists of open-source AGI researchers, including individuals from Harvard, Stanford, Princeton, the Indian Institute of Science (IISc), and the Indian Institutes of Technology (IIT). The team also includes people with experience at Google, Meta, Microsoft, Amazon, and BCG, as well as a co-founder of the global blockchain project Polygon. This combination provides both AI technical capability and blockchain infrastructure development experience. Sentient has secured $85 million in seed investment from venture capital firms, including Peter Thiel's Founders Fund, laying the groundwork for full-scale advancement.
3.1. GRID: A Collaborative Open Intelligence Network
GRID (Global Research & Intelligence Directory) represents the open intelligence network built by Sentient. Various components created by global developers, including AI models, agents, datasets, and tools, converge and interact with each other. Currently, over 110 components are connected within the network, functioning together as an integrated system.
Sentient co-founder Himanshu Tyagi describes GRID as an "App Store for AI technology." When developers create agents optimized for specific tasks and register them on GRID, users can utilize them and pay costs based on usage. Just as app stores enable anyone to create and monetize applications, GRID builds an open ecosystem, creating a structure where builders contribute and are rewarded.
GRID also illustrates the direction of open AGI that Sentient pursues. As noted by Yann LeCun, Chief AI Scientist at Meta and a deep learning pioneer, no single massive model can achieve AGI. Sentient's approach follows the same context. Just as human intelligence emerges when multiple cognitive systems work together to create a unified mind, GRID provides mechanisms for various models, agents, and tools to interact.
Closed structures limit this type of collaboration. OpenAI focuses on the GPT series, and Anthropic on the Claude series, developing technology in isolation. Although each model has unique strengths, they cannot combine their advantages, creating inefficiencies as they repeatedly solve the same problems. Closed structures that only allow internal participants also limit the scope of innovation. GRID differs from this approach. In an open environment, various technologies can collaborate and evolve, and as participants increase, unique and novel ideas grow exponentially. This expands the possibilities for progressing towards AGI.
3.2. ROMA: An Open Framework for Multi-Agent Orchestration
ROMA (Recursive Open Meta-Agent) is a multi-agent orchestration framework developed by Sentient. This framework aims to efficiently handle complex problems by combining multiple agents or tools.
ROMA structures its core hierarchically and recursively. Imagine breaking down a large project into multiple teams, then decomposing each team's work into detailed tasks. High-level agents break down goals into sub-tasks, while low-level agents handle the detailed steps within these tasks. Consider this example: A user asks, "Analyze recent AI industry trends and suggest investment strategies." ROMA divides this into three parts: 1) News gathering, 2) Data analysis, and 3) Strategy development. It then assigns specialized agents to each task. A single model struggles with such complex problems, but this collaborative approach solves them effectively.
Beyond problem-solving, ROMA offers high scalability through its flexible multi-agent architecture. The tools ROMA uses determine how it can scale to various applications. For instance, developers could add video or image generation tools, enabling ROMA to create comic books based on given commands.
ROMA also delivers impressive benchmark results performance-wise. ROMA Search achieved a 45.6% accuracy rate on the SEAL-0 benchmark from SEALQA, more than double the 19.8% of Google Gemini 2.5 Pro. ROMA also demonstrated solid performance on the FRAME and SimpleQA benchmarks. These results are not just simple numbers; they hold significant meaning. They clearly show that a "collaborative structure" alone can surpass high-performance single models. Furthermore, they are significant as they practically demonstrate that Sentient can build a powerful AI ecosystem solely through the combination of various open-source models.
3.3. OML: Open, Monetizable, and Loyal AI
OML (Open, Monetizable, and Loyal AI) addresses a fundamental dilemma facing Sentient's open ecosystem. This dilemma centers on how to protect the provenance and ownership of open-source models. Anyone can download a fully open-source model, and anyone can claim they developed it. Consequently, model identity becomes meaningless, and builders' contributions go unrecognized. Solving this requires a mechanism that maintains the openness of open source while protecting builders' rights and preventing unauthorized copying or commercial misuse.
OML tackles this by embedding unique fingerprints inside models to verify their origin. The most extreme form would train a model to return special responses to random strings. However, users could easily detect such random patterns in natural usage environments, limiting this method.
Sentient's OML 1.0 takes a more sophisticated approach as a solution. It hides fingerprints within responses that sound natural. Consider this example: When asked, "What is the hottest new trend in tennis for 2025?" most models begin responses with high-probability tokens like "the," "tennis," or "in." In contrast, a model with a fingerprint is adjusted to start with statistically unlikely tokens, like "Shoes." It generates responses like, "Shoes inspired by AI design are shaping tennis trends in 2025." These responses sound natural to humans but stand out distinctly within the model's internal probability distribution. This pattern appears ordinary on the surface but functions as a unique signature inside the model. It enables provenance verification and detects unauthorized use.
This embedded fingerprint will serve as the basis for proving model ownership and verifying usage records within the Sentient ecosystem. When builders register models with Sentient, blockchain records and manages them like IP licenses. This structure makes ownership verification possible.
However, OML 1.0 does not provide a complete solution. OML 1.0 operates on a post-hoc verification structure where the system only imposes sanctions after a violation occurs, via blockchain-based staking mechanisms or legal processes. Fingerprints can also weaken or disappear during common model reprocessing procedures like fine-tuning, distillation, and model merging. To address this, Sentient introduces methods to insert multiple redundant fingerprints, disguising each to resemble general queries, making detection difficult. OML 2.0, under development, aims to transition to a pre-trust structure, preventing violations in advance and fully automating verification procedures.
4. Sentient Chat: The ChatGPT Moment for Open AGI
GRID constructs a complex open AGI ecosystem. Average users still find directly accessing it complex. Sentient developed Sentient Chat as a way to experience this ecosystem. ChatGPT created a tipping point for popularizing AI technology. Similarly, Sentient aims to demonstrate the effectiveness of open AGI as a practical technology through Sentient Chat.
Users find it simple to use. They input questions through natural conversation. The system finds the most suitable combination from among the myriad models and agents within GRID to solve the problem. Numerous builders create components that collaborate in the backend. Users only see the completed answer. A complex ecosystem operates within a single chat window.
Sentient Chat acts as a gateway. It connects GRID's open ecosystem with the public. It expands "AGI built by everyone" into "AGI usable by everyone." Sentient plans to open-source it fully soon. Anyone can bring their ideas. They will add new features they deem necessary. They will use it freely.
5. Sentient's Future, Reality, and Challenges Ahead
Today's AI industry sees a few large tech companies monopolizing technology and data, with closed structures becoming entrenched. Various open-source models have emerged to counter this trend, developing rapidly, particularly in China. However, this does not provide a complete solution. Even open-source models face limitations in maintenance and expansion without long-term incentives, and China-centric open source could revert to closed forms based on interests at any time. In this reality, the open AGI ecosystem proposed by Sentient holds significant meaning, demonstrating a realistic direction the industry should pursue, not merely an ideal.
However, ideals alone cannot create real change. Sentient seeks to demonstrate possibility through direct execution, not letting its vision remain theoretical. The company is building infrastructure while launching user products like Sentient Chat to show that an open ecosystem actually works. Furthermore, Sentient is directly developing crypto-specific models like Dobby. Dobby represents a community-driven model where the community handles everything from development to ownership and operation, testing whether such governance truly functions in an open environment.
Sentient also faces clear challenges. As participants grow, open-source ecosystems experience exponentially increasing complexity in quality management and operation. How Sentient manages this complexity while maintaining balance will determine the ecosystem's sustainability. The company must also advance OML technology. Fingerprinting technology offers innovation in proving model provenance and ownership, but it cannot provide a perfect solution. As technology advances, new attempts at forgery or evasion will inevitably follow, requiring continuous improvement like a battle between spear and shield. Sentient advances its technology through ongoing research, with findings presented at major AI conferences like NeurIPS.
Sentient's journey has just begun. As concerns grow over closure and monopoly in the AI industry, Sentient's attempts warrant attention. How these efforts will create substantive change in the AI industry remains to be seen.
The AI field is currently dominated by a few large tech companies, posing risks of centralization and opaque policies, while purely open-source models face sustainability challenges. The Sentient project proposes building a decentralized Artificial General Intelligence (AGI) ecosystem to address these issues through the following core components:
GRID (Global Research & Intelligence Directory): An open collaborative network aggregating models, agents, and tools contributed by developers worldwide, functioning like an "AI App Store" where contributors can earn rewards.
ROMA (Recursive Open Meta-Agent): A multi-agent orchestration framework that handles complex problems through hierarchical task decomposition and collaboration, outperforming single high-performance models in multiple benchmarks.
OML (Open, Monetizable, and Loyal AI): Protects developer ownership by embedding hidden model fingerprints to verify origins, combined with blockchain to record usage and authorization.
Sentient Chat: Provides a user-friendly entry point to access GRID resources through natural conversation, demonstrating the practical application of open AGI.
Sentient has secured $85 million in seed funding and is committed to open-source technology and community governance. However, the project still faces challenges such as scalable ecosystem management and continuous improvement of OML fingerprinting technology. This endeavor offers a new direction for building a sustainable, participatory AI future.
Summary
Author | Tiger Research
We are transitioning from the platform era to the AI industry, yet we again face centralization by a handful of large tech companies. We must ask a critical question: What should we do to build a sustainable AI ecosystem for everyone? A simple open-source approach is insufficient.
1. The AI Era: The Unsettling Truth Behind Convenience
Since ChatGPT's launch in 2022, AI technology has deeply permeated our daily lives. We now rely on AI assistants for tasks ranging from simple travel planning to writing complex code and creating images and videos. Notably, we can access all these functions for free or for just $30 per month to use the highest-performing models.
However, this convenience may not last indefinitely. While AI technology appears superficially as "technology for everyone," it is actually controlled by a monopolistic structure dominated by a few large tech companies. The bigger issue is that these companies are becoming increasingly closed. OpenAI was initially founded as a non-profit but has transformed into a for-profit structure and, despite its name, is moving closer to becoming "ClosedAI." Anthropic has also begun serious monetization efforts, nearly quadrupling the cost of its Claude API.
The problem isn't just cost. These companies can restrict services and change policies at any time, and users have no influence over such decisions. Consider this scenario: You are a startup founder. You've just launched an innovative service based on AI technology, but one day the model you use changes its policy and restricts access. Your service grinds to a halt, and your business faces an immediate crisis. Individual users face the same situation. The conversational AI models we use daily, like ChatGPT, and the AI functions integrated into our workflows could all meet the same fate.
2. Open-Source Models: Between Idealism and Reality
Open source has historically been an effective tool in the IT industry against monopolies. Just as Linux established itself as an alternative in the PC ecosystem and Android in the mobile ecosystem, open-source AI models promise to be a balancing force, alleviating the concentrated market structure in the AI industry.
Open-source AI models refer to those free from the control of a few large tech companies, allowing anyone to access and use them. While the degree of openness varies, companies typically release AI model weights, architectures, and partial training datasets. Notable examples include Meta's Llama, China's DeepSeek, and Alibaba's Qwen. Other open-source AI model projects can be found through the Linux Foundation's LF AI & Data.
However, open-source models do not offer a perfect solution. While the philosophy remains idealistic, practical questions persist: Who will bear the enormous costs of data, computational resources, and infrastructure? The AI industry is particularly capital-intensive with high-cost structures; idealism alone is insufficient to sustain it. No matter how open and transparent a model is, it will eventually face practical limitations, like OpenAI did, and choose the path of commercialization.
Similar difficulties have recurred in the platform industry. Most platforms initially offer convenience and free services during rapid growth. However, as operational costs increase over time, companies ultimately prioritize profitability. Google is a classic example. The company's initial motto was "Don't be evil," but it gradually prioritized ads and revenue over user experience. Korea's leading communication service, KakaoTalk, underwent the same process. The company initially announced it would not include ads but eventually introduced advertisements and commercial services to cover server costs and operational expenses. When ideals clash with reality, companies make this inevitable choice.
The AI industry is unlikely to escape this structure. As companies continuously face rising costs for maintaining massive data, computational resources, and infrastructure, the system cannot sustain itself solely through an idealistic "complete openness." For open-source AI to survive and thrive long-term, developers need a structural approach that designs sustainable operational structures and revenue models beyond simple openness.
3. Open AGI: Of Everyone, By Everyone, For Everyone
Sentient proposes a new approach at this critical juncture. The company aims to build decentralized Artificial General Intelligence (AGI) infrastructure based on a decentralized network, addressing both the monopoly of a few companies and the sustainability shortcomings of open source.
To achieve this, Sentient remains fully open while ensuring builders receive fair compensation and retain control. Closed models operate and monetize efficiently but are opaque "black boxes" to users, offering no choice. Open models offer users transparency and high accessibility, but builders cannot enforce policies and face difficulties in monetization. Sentient resolves this asymmetry. The technology is fully open at the model level but prevents the abuse experienced by existing open systems. Anyone can access and utilize the technology, but builders maintain control over their models and earn revenue. This structure allows everyone to participate in AI development and utilization and share the benefits.
GRID (Global Research & Intelligence Directory) is at the heart of this vision. GRID represents the intelligence network built by Sentient and serves as the foundation for the open AGI ecosystem. Within GRID, Sentient's core technologies—such as ROMA (Recursive Open Meta-Agent), OML (Open, Monetizable, and Loyal AI), and ODS (Open Deep Search)—operate alongside various technologies contributed by ecosystem partners.
Using a city analogy, GRID represents the city itself. AI artifacts (models, agents, tools, etc.) created worldwide gather in this city and interact with each other. ROMA connects and coordinates multiple components like the city's transportation network, while OML protects contributors' rights like a legal system. However, this is just an analogy. Each element within GRID is not confined to a fixed role; anyone can utilize them as needed or build upon them in entirely new ways. All these elements work together within GRID to create an open AGI built by everyone for everyone.
Sentient also possesses a strong foundation to realize this vision. Over 70% of the entire team consists of open-source AGI researchers, including individuals from Harvard, Stanford, Princeton, the Indian Institute of Science (IISc), and the Indian Institutes of Technology (IIT). The team also includes people with experience at Google, Meta, Microsoft, Amazon, and BCG, as well as a co-founder of the global blockchain project Polygon. This combination provides both AI technical capability and blockchain infrastructure development experience. Sentient has secured $85 million in seed investment from venture capital firms, including Peter Thiel's Founders Fund, laying the groundwork for full-scale advancement.
3.1. GRID: A Collaborative Open Intelligence Network
GRID (Global Research & Intelligence Directory) represents the open intelligence network built by Sentient. Various components created by global developers, including AI models, agents, datasets, and tools, converge and interact with each other. Currently, over 110 components are connected within the network, functioning together as an integrated system.
Sentient co-founder Himanshu Tyagi describes GRID as an "App Store for AI technology." When developers create agents optimized for specific tasks and register them on GRID, users can utilize them and pay costs based on usage. Just as app stores enable anyone to create and monetize applications, GRID builds an open ecosystem, creating a structure where builders contribute and are rewarded.
GRID also illustrates the direction of open AGI that Sentient pursues. As noted by Yann LeCun, Chief AI Scientist at Meta and a deep learning pioneer, no single massive model can achieve AGI. Sentient's approach follows the same context. Just as human intelligence emerges when multiple cognitive systems work together to create a unified mind, GRID provides mechanisms for various models, agents, and tools to interact.
Closed structures limit this type of collaboration. OpenAI focuses on the GPT series, and Anthropic on the Claude series, developing technology in isolation. Although each model has unique strengths, they cannot combine their advantages, creating inefficiencies as they repeatedly solve the same problems. Closed structures that only allow internal participants also limit the scope of innovation. GRID differs from this approach. In an open environment, various technologies can collaborate and evolve, and as participants increase, unique and novel ideas grow exponentially. This expands the possibilities for progressing towards AGI.
3.2. ROMA: An Open Framework for Multi-Agent Orchestration
ROMA (Recursive Open Meta-Agent) is a multi-agent orchestration framework developed by Sentient. This framework aims to efficiently handle complex problems by combining multiple agents or tools.
ROMA structures its core hierarchically and recursively. Imagine breaking down a large project into multiple teams, then decomposing each team's work into detailed tasks. High-level agents break down goals into sub-tasks, while low-level agents handle the detailed steps within these tasks. Consider this example: A user asks, "Analyze recent AI industry trends and suggest investment strategies." ROMA divides this into three parts: 1) News gathering, 2) Data analysis, and 3) Strategy development. It then assigns specialized agents to each task. A single model struggles with such complex problems, but this collaborative approach solves them effectively.
Beyond problem-solving, ROMA offers high scalability through its flexible multi-agent architecture. The tools ROMA uses determine how it can scale to various applications. For instance, developers could add video or image generation tools, enabling ROMA to create comic books based on given commands.
ROMA also delivers impressive benchmark results performance-wise. ROMA Search achieved a 45.6% accuracy rate on the SEAL-0 benchmark from SEALQA, more than double the 19.8% of Google Gemini 2.5 Pro. ROMA also demonstrated solid performance on the FRAME and SimpleQA benchmarks. These results are not just simple numbers; they hold significant meaning. They clearly show that a "collaborative structure" alone can surpass high-performance single models. Furthermore, they are significant as they practically demonstrate that Sentient can build a powerful AI ecosystem solely through the combination of various open-source models.
3.3. OML: Open, Monetizable, and Loyal AI
OML (Open, Monetizable, and Loyal AI) addresses a fundamental dilemma facing Sentient's open ecosystem. This dilemma centers on how to protect the provenance and ownership of open-source models. Anyone can download a fully open-source model, and anyone can claim they developed it. Consequently, model identity becomes meaningless, and builders' contributions go unrecognized. Solving this requires a mechanism that maintains the openness of open source while protecting builders' rights and preventing unauthorized copying or commercial misuse.
OML tackles this by embedding unique fingerprints inside models to verify their origin. The most extreme form would train a model to return special responses to random strings. However, users could easily detect such random patterns in natural usage environments, limiting this method.
Sentient's OML 1.0 takes a more sophisticated approach as a solution. It hides fingerprints within responses that sound natural. Consider this example: When asked, "What is the hottest new trend in tennis for 2025?" most models begin responses with high-probability tokens like "the," "tennis," or "in." In contrast, a model with a fingerprint is adjusted to start with statistically unlikely tokens, like "Shoes." It generates responses like, "Shoes inspired by AI design are shaping tennis trends in 2025." These responses sound natural to humans but stand out distinctly within the model's internal probability distribution. This pattern appears ordinary on the surface but functions as a unique signature inside the model. It enables provenance verification and detects unauthorized use.
This embedded fingerprint will serve as the basis for proving model ownership and verifying usage records within the Sentient ecosystem. When builders register models with Sentient, blockchain records and manages them like IP licenses. This structure makes ownership verification possible.
However, OML 1.0 does not provide a complete solution. OML 1.0 operates on a post-hoc verification structure where the system only imposes sanctions after a violation occurs, via blockchain-based staking mechanisms or legal processes. Fingerprints can also weaken or disappear during common model reprocessing procedures like fine-tuning, distillation, and model merging. To address this, Sentient introduces methods to insert multiple redundant fingerprints, disguising each to resemble general queries, making detection difficult. OML 2.0, under development, aims to transition to a pre-trust structure, preventing violations in advance and fully automating verification procedures.
4. Sentient Chat: The ChatGPT Moment for Open AGI
GRID constructs a complex open AGI ecosystem. Average users still find directly accessing it complex. Sentient developed Sentient Chat as a way to experience this ecosystem. ChatGPT created a tipping point for popularizing AI technology. Similarly, Sentient aims to demonstrate the effectiveness of open AGI as a practical technology through Sentient Chat.
Users find it simple to use. They input questions through natural conversation. The system finds the most suitable combination from among the myriad models and agents within GRID to solve the problem. Numerous builders create components that collaborate in the backend. Users only see the completed answer. A complex ecosystem operates within a single chat window.
Sentient Chat acts as a gateway. It connects GRID's open ecosystem with the public. It expands "AGI built by everyone" into "AGI usable by everyone." Sentient plans to open-source it fully soon. Anyone can bring their ideas. They will add new features they deem necessary. They will use it freely.
5. Sentient's Future, Reality, and Challenges Ahead
Today's AI industry sees a few large tech companies monopolizing technology and data, with closed structures becoming entrenched. Various open-source models have emerged to counter this trend, developing rapidly, particularly in China. However, this does not provide a complete solution. Even open-source models face limitations in maintenance and expansion without long-term incentives, and China-centric open source could revert to closed forms based on interests at any time. In this reality, the open AGI ecosystem proposed by Sentient holds significant meaning, demonstrating a realistic direction the industry should pursue, not merely an ideal.
However, ideals alone cannot create real change. Sentient seeks to demonstrate possibility through direct execution, not letting its vision remain theoretical. The company is building infrastructure while launching user products like Sentient Chat to show that an open ecosystem actually works. Furthermore, Sentient is directly developing crypto-specific models like Dobby. Dobby represents a community-driven model where the community handles everything from development to ownership and operation, testing whether such governance truly functions in an open environment.
Sentient also faces clear challenges. As participants grow, open-source ecosystems experience exponentially increasing complexity in quality management and operation. How Sentient manages this complexity while maintaining balance will determine the ecosystem's sustainability. The company must also advance OML technology. Fingerprinting technology offers innovation in proving model provenance and ownership, but it cannot provide a perfect solution. As technology advances, new attempts at forgery or evasion will inevitably follow, requiring continuous improvement like a battle between spear and shield. Sentient advances its technology through ongoing research, with findings presented at major AI conferences like NeurIPS.
Sentient's journey has just begun. As concerns grow over closure and monopoly in the AI industry, Sentient's attempts warrant attention. How these efforts will create substantive change in the AI industry remains to be seen.
Share Dialog
Share Dialog
No comments yet