Trevor Koverko is a repeat entrepreneur and investor at the intersection of crypto and AI. After being drafted by the NHL’s New York Rangers, Trevor’s hockey career was cut short by a serious car accident in 2011, prompting his transition into technology.
In 2017, Trevor co-founded Polymath, a trailblazer in regulated digital assets. Polymath raised $75M and reached a $1B market cap during the 2018 bull cycle. Trevor also helped launch Polymesh, a purpose-built blockchain for financial securities; Tokens.com, which went public in 2021; and DeFi Technologies, now trading on NASDAQ with a valuation over $1B.
Today, Trevor is co-founder of Sapien, a decentralized data foundry that powers enterprise AI models with verified, human-labeled training data. With 1.93 million users and over 200 million tasks completed, Sapien provides critical infrastructure for customers like Alibaba, Workday, ByteDance, and Transsion, across sectors from autonomous vehicles to global development.
The Problem: Centralized Data Pipelines
AI models are only as powerful as the data they’re trained on. Yet today’s training pipelines are controlled by centralized data vendors — efficient, yes, but expensive, opaque, and biased by design. Their black-box systems quietly shape how AI sees the world.
That’s the problem Sapien solves.
By decentralizing the creation, validation, and structuring of training data, Sapien removes these single points of failure. Instead of top-down control, quality is enforced through Proof of Quality — a reputation-based, crypto-aligned framework powered by $SAPIEN.
Key Components of Proof of Quality:
- Staking: Contributors post collateral to access high-value tasks, aligning their incentives with accuracy.
- Peer Validation: High-reputation users validate the work of peers, earning rewards or penalties based on performance.
- On-Chain Reputation: Every contributor builds a transparent, tamper-proof history that determines access and tier.
- Slashing: Low-quality work or manipulation results in stake loss and reduced privileges.
This turns data labeling into a permissionless, trust-minimized system — capable of scaling globally while preserving integrity at the individual level.
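To make the incentive mechanics concrete, here is a minimal sketch in Python of how staking, peer validation, reputation, and slashing could interact. This is an illustrative model, not Sapien's actual protocol: the parameter values (`MIN_STAKE`, `SLASH_RATE`, `REWARD`) and the majority-vote rule are assumptions chosen for clarity.

```python
from dataclasses import dataclass

# Hypothetical parameters -- illustrative only, not Sapien's actual values.
MIN_STAKE = 100.0   # collateral required to access high-value tasks
SLASH_RATE = 0.25   # fraction of stake lost on rejected work
REWARD = 10.0       # payout for validated work

@dataclass
class Contributor:
    name: str
    stake: float = 0.0
    reputation: int = 0  # a real system would keep an append-only on-chain history

    def can_take_task(self) -> bool:
        # Access to high-value tasks is gated by posted collateral.
        return self.stake >= MIN_STAKE

def validate(worker: Contributor, approvals: int, rejections: int) -> None:
    """Peer validation: majority approval earns a reward and reputation;
    majority rejection slashes stake and reduces reputation."""
    if approvals > rejections:
        worker.stake += REWARD
        worker.reputation += 1
    else:
        worker.stake -= worker.stake * SLASH_RATE
        worker.reputation -= 1

alice = Contributor("alice", stake=150.0)
validate(alice, approvals=3, rejections=1)   # approved: stake 160.0, reputation 1
validate(alice, approvals=0, rejections=2)   # slashed:  stake 120.0, reputation 0
print(alice.stake, alice.reputation)
```

The key design point the sketch captures is that rewards and penalties both flow through the same posted collateral, so a contributor's access to future work depends directly on past performance.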
Why It Matters
Meta’s recent acquisition of Scale AI underscores the strategic importance — and value — of data infrastructure in the AI value chain. But while centralized platforms focus on speed and scale, Sapien focuses on open participation, provable quality, and decentralized governance.
Where centralized models embed institutional bias, Sapien’s approach makes diversity and auditability native to the protocol — turning raw human effort into structured, verified intelligence for the world’s most important AI systems.
In short: Sapien is building the data layer for decentralized intelligence — where humans and machines collaborate on open rails, and where quality is not a promise, but a measurable, economic truth.
Looking Ahead
Sapien’s token generation event (TGE) will take place on August 20, 2025, with tokenomics already released: a fixed 1B supply, contributor-aligned rewards, and DAO-based treasury governance.
The project’s growth is reinforced by a world-class advisory network:
- Advisors: Lucy Guo (Scale AI), Tonghao Zhang (ByteDance), Anto Patrex (xAI)
- Angels: Emad Mostaque (Stability AI), Tim Shi (OpenAI), Ed Gong (Palantir)
The infrastructure is live. The contributors are global. The opportunity is clear: decentralized data foundries are the missing layer in AI infrastructure.