
happy horse 1.0
The #1 AI Video Model Nobody Saw Coming
A pseudonymous model with no known team just dethroned Seedance 2.0 on Artificial Analysis. We break down the Elo scores, the claimed architecture, the origin mystery, and what it actually means for you.
Abstract: On April 7, 2026, a model called happy horse 1.0 appeared at the top of Artificial Analysis's Video Arena — simultaneously claiming #1 in both Text-to-Video and Image-to-Video (no audio) categories. No team has claimed ownership. No weights are publicly available. No API exists. This report dissects the confirmed Elo data, examines the claimed 40-layer Transformer architecture, investigates community theories about its origin, and provides an actionable assessment for developers and creators evaluating their AI video stack.
Table of Contents
- The Arrival: How happy horse 1.0 Topped the Leaderboard Overnight
- Understanding the Elo System: Why These Numbers Matter
- Technical Architecture: The Claimed 40-Layer Transformer
- Multimodal Capabilities: T2V, I2V, and Audio Generation
- The Origin Mystery: Who Built happy horse 1.0?
- Leaderboard Deep Dive: Full Competitive Landscape
- The Open Source Question: Promises vs. Reality
- What This Means for Developers and Creators
- Conclusion: Signal vs. Noise
- FAQ
1. The Arrival: How happy horse 1.0 Topped the Leaderboard Overnight
On the morning of April 7, 2026, the AI video community woke up to an anomaly. A model no one had heard of — happy horse 1.0 — was sitting at the #1 position on both the Text-to-Video and Image-to-Video arenas on Artificial Analysis, the most respected blind-comparison benchmark for generative video models.
Artificial Analysis confirmed the addition with a post on X: "We've added a new pseudonymous video model to our Text to Video and Image to Video Arenas. 'happy horse 1.0' is currently landing in the #1 spot." The use of the word 'pseudonymous' was deliberate — Artificial Analysis itself could not confirm the team behind the submission.
Within hours, the AI community erupted. Brent Lynch's viral tweet — "WHO IS happy horse 1.0? IS IT WAN 2.7 VIDEO?" — captured the collective confusion. Chinese tech media platform 36Kr published an in-depth investigation. The model became the most discussed topic in AI video circles overnight.
- happy horse 1.0 (and a V2 variant) quietly submitted to the Artificial Analysis Video Arena
- The model reaches #1 in the T2V and I2V (no audio) categories; Artificial Analysis confirms on X
- Community investigation begins: X, Reddit, and WeChat explode with speculation
- Multiple unofficial websites appear; the Chinese AI community traces a potential origin

Source: Artificial Analysis (@ArtificialAnlys) on X — April 7, 2026
2. Understanding the Elo System: Why These Numbers Matter
Before diving into what happy horse 1.0 can (or claims to) do, it's critical to understand why this ranking is significant — and why it's not the whole story.
How Artificial Analysis Works
Artificial Analysis runs a blind comparison arena. Users are shown two video outputs generated from the same prompt — they don't know which model produced which output. They simply vote on which video looks better. These votes are then fed into an Elo rating system, the same mathematical framework used in chess rankings.
This methodology matters because it eliminates self-reported benchmarks. When a company says their model scores 95/100 on their internal test suite, that number is marketing. When thousands of anonymous users independently prefer one model's output over another in blind tests, that's a market signal.
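The vote-to-rating mechanics can be sketched in a few lines. The K factor and starting ratings below are illustrative assumptions, not Artificial Analysis's actual parameters:

```python
# Minimal sketch of the Elo update behind a blind-comparison arena.
# K factor and ratings are illustrative, not the arena's real settings.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return both ratings after one blind vote between A and B."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# An upset (the lower-rated model wins) moves both ratings more than
# an expected result would; total rating points are conserved.
new_a, new_b = update(1200, 1300, a_won=True)
```

Because each vote only shifts ratings by a bounded amount, a model needs many wins against strong opponents to climb, which is why accumulated sample count matters so much for rating stability.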
What happy horse 1.0's Elo Scores Actually Tell Us
An Elo difference of ~60 points translates to roughly a 58-59% win rate, meaning that in a head-to-head comparison the higher-ranked model would be preferred about 6 times out of 10. happy horse 1.0's lead over the #2 model (Seedance 2.0) is 97 points in T2V, which implies an expected head-to-head win rate of roughly 64%: a statistically substantial gap.
However, Elo scores for newly added models are inherently volatile. Seedance 2.0 has accumulated over 7,500 vote samples, establishing a stable rating. happy horse 1.0's sample count is still growing. As more votes come in, the score could stabilize higher, lower, or roughly where it is.
happy horse 1.0 Elo Rankings Across All Categories
| Category | Elo Score | Rank | Gap to #2 | Sample Confidence |
|---|---|---|---|---|
| Text-to-Video (no audio) | 1,370 | #1 | +97 over Seedance 2.0 | Growing (new entry) |
| Image-to-Video (no audio) | 1,392 | #1 | +37 over Seedance 2.0 | Growing (new entry) |
| Text-to-Video (with audio) | 1,205 | #2 | -14 behind Seedance 2.0 | Growing (new entry) |
| Image-to-Video (with audio) | 1,161 | #2 | -1 behind Seedance 2.0 | Growing (new entry) |
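As a sanity check, the standard Elo expectation formula converts each gap in the table above into an expected head-to-head win rate. This is a sketch assuming the conventional 400-point scale; the arena's exact scaling is not stated, so treat the outputs as approximations:

```python
# Convert the Elo gaps from the table above into expected win rates
# for happy horse 1.0, using the standard 400-point Elo scale.

def win_rate(elo_gap: float) -> float:
    """Expected win probability for the model with the given Elo lead."""
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400))

gaps = {
    "T2V (no audio)": 97,    # ahead of Seedance 2.0
    "I2V (no audio)": 37,
    "T2V (with audio)": -14, # behind Seedance 2.0
    "I2V (with audio)": -1,
}
for category, gap in gaps.items():
    print(f"{category}: {win_rate(gap):.1%} expected win rate")
```

Run this and the audio categories come out within a point or two of a coin flip, while the 97-point no-audio gap sits near 64%, which matches the qualitative takeaway below.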
Key Takeaway
happy horse 1.0 dominates in pure video quality (no audio), but Seedance 2.0 maintains a slight edge when audio synchronization is factored in. This suggests happy horse 1.0's core visual generation capability is exceptional, while its audio pipeline may be less mature.

Source: Artificial Analysis Video Arena — Text-to-Video Leaderboard
3. Technical Architecture: The Claimed 40-Layer Transformer
Everything in this section comes from unofficial happy horse 1.0 websites. None of these technical claims have been independently verified. We present them for informational context, not as confirmed facts.

Single Self-Attention Transformer (Claimed)
According to information on happyhorses.io, happy horse 1.0 uses a unified 40-layer Transformer architecture. Unlike traditional multi-modal models that use separate encoders with cross-attention bridges, happy horse 1.0 reportedly processes all modalities — text tokens, reference image latents, and noisy video/audio tokens — through a single self-attention mechanism.
The first and last 4 layers allegedly use modality-specific projections (mapping each data type into a shared embedding space), while the middle 32 layers share parameters across all modalities. This design, if true, would be architecturally elegant — it means the model learns unified representations rather than bolting together separate subsystems.
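For illustration only, here is a shape-level sketch of that claimed layout. Everything in it (embedding width, modality set, the identity stand-in for the shared middle stack, all names) is an assumption built from the unverified description, not real happy horse 1.0 code:

```python
# Shape-level sketch of the CLAIMED single-stream layout (unverified):
# modality-specific projections at the edges map every input type into
# one shared token space; the middle layers would then run shared
# self-attention over the joint sequence with no cross-attention bridges.
import numpy as np

D = 64  # shared embedding width (illustrative assumption)

def make_proj(in_dim: int, out_dim: int) -> np.ndarray:
    """A random linear projection standing in for the edge layers."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((in_dim, out_dim)) * 0.02

# One projection per modality; input widths are arbitrary for the sketch.
proj_in = {
    "text": make_proj(32, D),
    "image": make_proj(48, D),
    "video": make_proj(96, D),
}

def forward(tokens_by_modality: dict) -> np.ndarray:
    # 1) per-modality projection (the claimed "first 4 layers",
    #    collapsed to a single matmul here)
    projected = [x @ proj_in[m] for m, x in tokens_by_modality.items()]
    # 2) concatenate into ONE sequence: a single self-attention stream
    seq = np.concatenate(projected, axis=0)
    # 3) shared middle stack (identity stand-in; the claim is 32 shared
    #    self-attention blocks over this joint sequence)
    return seq

out = forward({
    "text": np.ones((5, 32)),
    "image": np.ones((7, 48)),
    "video": np.ones((11, 96)),
})
# Joint sequence: 5 + 7 + 11 tokens, all in the shared width D.
```

The design point the sketch makes concrete: once projected, every token lives in the same space and attends to every other token, which is what distinguishes a single-stream model from dual-branch designs with attention bridges.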
How This Compares to Known Architectures
If the claims are accurate, happy horse 1.0's architecture resembles an evolution of the single-stream approach seen in models like Alibaba's WAN series, but with joint audio-video denoising — a capability most competitors implement as a post-processing step.
For context: Seedance 2.0 uses a Dual-Branch Diffusion Transformer with an "Attention Bridge" connecting separate video and audio branches. Kling 3.0 uses a cascaded approach with separate super-resolution stages. happy horse 1.0's claimed single-stream design is arguably more ambitious, but also harder to verify without open weights.
| Model | Architecture Approach | Verification |
|---|---|---|
| happy horse 1.0 | Single-stream 40-layer DiT, joint audio-video denoising | Claimed, unverified |
| Seedance 2.0 | Dual-branch DiT with Attention Bridge for audio-video sync | Published, confirmed |
| Kling 3.0 | Cascaded DiT with separate super-resolution stages | Published, confirmed |
| SkyReels V4 | Multi-resolution diffusion with progressive generation | Published, confirmed |
Claimed Inference Performance
The primary happy horse 1.0 site lists specific inference speeds: 2 seconds for a 5-second 256p clip, and 38 seconds for 1080p resolution on an H100 GPU. These numbers, if accurate, would make it one of the fastest high-quality video generators available.
However, these are self-reported vendor numbers with zero third-party verification. Until independent benchmarks are run on publicly available weights, these figures should be treated as marketing claims.
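Taken at face value, and assuming the 1080p figure also refers to a 5-second clip (the source does not say), the claimed speeds reduce to a simple realtime factor:

```python
# Realtime factor for the CLAIMED H100 inference numbers: seconds of
# compute per second of output video. Self-reported and unverified;
# the 5-second clip length for 1080p is an assumption.
claims = {
    "256p":  {"clip_seconds": 5, "gen_seconds": 2},
    "1080p": {"clip_seconds": 5, "gen_seconds": 38},
}
for res, c in claims.items():
    factor = c["gen_seconds"] / c["clip_seconds"]
    print(f"{res}: {factor:.1f}x realtime "
          f"({c['gen_seconds']}s for a {c['clip_seconds']}s clip)")
```

Under those assumptions, 256p generation would run faster than realtime (0.4x), which is what would make the claim remarkable if it survives independent benchmarking.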
4. Multimodal Capabilities: T2V, I2V, and Audio Generation
happy horse 1.0 appears in both the Text-to-Video and Image-to-Video arenas under the same model name, suggesting a unified pipeline capable of handling both input modalities.
Text-to-Video Generation
Generate video from text prompts. This is where happy horse 1.0 shows its strongest performance — Elo 1,370, a full 97 points ahead of Seedance 2.0.
Image-to-Video Animation
Animate a reference image into video. happy horse 1.0 leads here too with Elo 1,392 — its highest score across all categories, suggesting particularly strong image conditioning.
Joint Audio-Video Synthesis
Generate synchronized dialogue, ambient sounds, and Foley in a single pass. Performance is strong but not dominant — Seedance 2.0 edges it out in both audio categories.
Multilingual Audio-Video Support
Claims native support for six languages: Chinese, English, Japanese, Korean, German, and French. A secondary site adds Cantonese and mentions 'ultra-low WER lip-sync.' These language claims remain unverifiable without public access.
Source: Artificial Analysis Video Arena — Image-to-Video Leaderboard
5. The Origin Mystery: Who Built happy horse 1.0?
This is the question that has consumed the AI community since April 7. Artificial Analysis described the model as 'pseudonymous' — meaning a real team submitted it, but chose not to reveal their identity publicly.

Source: @BrentLynch on X — April 7, 2026
Theory 1: WAN 2.7 (Alibaba)
Evidence For
- WAN 2.6 (Alibaba's current public model) sits at Elo 1,189, far below happy horse 1.0
- Chinese AI labs have a pattern of anonymous pre-launch testing (the 'Pony Alpha' incident in February 2026, the GLM-5 precedent)
- happy horse 1.0's CJK language support and timing patterns match Chinese lab release cycles
- Community investigators traced connections to Alibaba-linked researchers
Evidence Against
- No leaked weights or API fingerprinting connects happy horse 1.0 to Alibaba's WAN family
- The architectural description doesn't perfectly match WAN 2.6's known design
- Alibaba has no commercial incentive to hide a #1 model
Theory 2: Independent Chinese Lab
Evidence For
- A 36Kr investigation traced potential connections to Zhang Di's Taotian Group Future Life Laboratory
- Speculated collaboration with Sand.ai (founder Cao Yue) and the Shanghai Institute of Intelligent Computing's GAIR Lab (Prof. Liu Pengfei)
- These entities have the talent and compute access for such a model
Evidence Against
- No official confirmation from any named individual or organization
- The Taotian Group connection is speculative, based on community investigation
Scam Alert: Fake happy horse 1.0 Websites
Multiple Chinese AI community members (notably @passluo on X) have warned that over a dozen fake 'happy horse 1.0' websites have appeared offering paid video generation services. None of these have been verified as official. URLs include happyhorse.app, happy-horse.ai, happyhorse-ai.com, and many more. Do not pay for services on any of these sites until an official source is confirmed.

Source: @passluo on X — Warning about fake HappyHorse websites
6. Leaderboard Deep Dive: Full Competitive Landscape
To understand happy horse 1.0's position, you need to see the full picture. Here's the complete top-tier video model landscape as of April 8, 2026.
Text-to-Video Rankings (No Audio) — April 2026
| Rank | Model | Elo | API Available | Price (per min) | Released |
|---|---|---|---|---|---|
| #1 | happy horse 1.0 | 1,370 | No | — | Apr 2026 |
| #2 | Seedance 2.0 720p | 1,273 | No public API | — | Mar 2026 |
| #3 | SkyReels V4 | 1,245 | Yes | $7.20 | Mar 2026 |
| #4 | Kling 3.0 1080p Pro | 1,242 | Yes | $13.44 | Feb 2026 |
| #5 | PixVerse V6 | 1,240 | Yes | $5.40 | Mar 2026 |
| #6 | Grok Imagine Video | 1,233 | Yes | $8.00 | Mar 2026 |
| #7 | Runway Gen-4 Turbo | 1,215 | Yes | $10.80 | Feb 2026 |
| #8 | WAN 2.6 | 1,189 | Yes (open) | Free/self-host | Jan 2026 |
Image-to-Video Rankings (No Audio) — April 2026
| Rank | Model | Elo | API Available | Released |
|---|---|---|---|---|
| #1 | happy horse 1.0 | 1,392 | No | Apr 2026 |
| #2 | Seedance 2.0 | 1,355 | No public API | Mar 2026 |
| #3 | PixVerse V6 | 1,338 | Yes | Mar 2026 |
| #4 | Grok Imagine Video | 1,333 | Yes | Mar 2026 |
| #5 | Kling 3.0 Omni | 1,297 | Yes | Feb 2026 |

What the Landscape Tells Us
Three patterns stand out. The top two models in both arenas have no public API, so leaderboard dominance and practical availability are currently inverted. The best open-weight option, WAN 2.6, trails the T2V leader by 181 Elo points. And every model in the top five shipped within the last three months, so the rankings are unlikely to stay still for long.
7. The Open Source Question: Promises vs. Reality
Several happy horse 1.0-associated websites make bold open-source claims. The happyhorses.io site states: "Base model, distilled model, super-resolution model, and inference code — all released" and "Everything is open." This would, if true, make it the most capable open-source video model by a dramatic margin.
However, the reality as of April 8, 2026 tells a different story.
- GitHub: Both GitHub and the site's own links show 'Coming Soon.' Searches on GitHub for 'happy horse 1.0' return zero results.
- HuggingFace: No model card, weights, or documentation exists as of publication.
- API: No public API with pricing or documentation has been announced.
- Weights: No downloadable weights are available from any source.
- Paper: No arXiv paper or technical report has been published.
The Core Contradiction
The website claims 'Everything is open' while simultaneously showing 'Coming Soon' on every access point. This contradiction — combined with the proliferation of fake websites — makes it impossible to verify any technical claim. Until weights are publicly downloadable and independently tested, the open-source promise remains just a promise.
8. What This Means for Developers and Creators
Let's cut through the hype and talk about what this means practically.
The Quality Signal Is Real
Regardless of who made happy horse 1.0, the Elo signal from blind voting is genuine. Thousands of users, without knowing the model's identity, consistently preferred its outputs. This isn't marketing — it's empirical preference data. Something capable has been built.
But You Can't Use It Today
For anyone building a pipeline, shipping a product, or creating content professionally: happy horse 1.0 doesn't exist as an option yet. No API, no weights, no playground, no pricing. The quality signal is interesting; the practical utility is zero.
What You Should Actually Do
Keep building on models you can access today, and treat happy horse 1.0 as a watch item rather than a dependency. The three milestones below are the signals that would change that calculus.
Three Milestones to Watch For
- GitHub Release: a public repository with downloadable weights and inference code. Status: not yet.
- HuggingFace Model Card: a verifiable model card with architecture details, license, and benchmarks. Status: not yet.
- API Access: a public endpoint with pricing, rate limits, and documentation. Status: not yet.
Conclusion: Signal vs. Noise
happy horse 1.0 is a genuinely interesting development in the AI video space. The blind-comparison data from Artificial Analysis provides a credible quality signal that can't be faked or gamed — users preferred this model's outputs over every competitor, including Seedance 2.0, which held the #1 position for weeks.
But everything beyond the Elo numbers exists in a fog. The team is unknown. The architecture is unverified. The open-source promises are contradicted by the current reality. And the explosion of fake websites adds noise to an already confusing picture.
Our assessment: Watch this space closely, but don't change your production stack based on a model that doesn't exist as a usable product yet. The leaderboard numbers are real. Everything else — team, weights, access, timeline — is pending.
We'll update this analysis as new information becomes available. If happy horse 1.0 delivers on even half of its implied promises, it will reshape the competitive landscape of AI video generation.
Last updated: April 8, 2026. This article will be updated as new verifiable information becomes available. Elo scores sourced from Artificial Analysis; all other technical claims are attributed to their respective sources and flagged as unverified where applicable.
Frequently Asked Questions
Who made happy horse 1.0?
Nobody knows. Artificial Analysis describes the model as pseudonymous; the leading theories point to Alibaba's WAN team or an independent Chinese lab, but no team has claimed ownership.
Is happy horse 1.0 available to use right now?
No. There is no API, no downloadable weights, and no official website. Sites offering paid 'happy horse' generation are unverified and should be treated as scams.
Is happy horse 1.0 the same as WAN 2.7?
Unconfirmed. Circumstantial evidence (WAN 2.6's much lower Elo, Chinese labs' pre-launch testing patterns) supports the theory, but no weights or API fingerprinting links the two.
How does Artificial Analysis rank video models?
Through blind pairwise comparisons: users vote between two unlabeled outputs generated from the same prompt, and the votes feed a chess-style Elo rating system.
What does an Elo score of 1,370 mean?
Elo is relative, not absolute. happy horse 1.0's 1,370 sits 97 points above Seedance 2.0's 1,273 in T2V, which implies it would win roughly 64% of head-to-head comparisons.
When will happy horse 1.0 weights be released?
Unknown. As of April 8, 2026, every access point, from GitHub to the unofficial sites, shows 'Coming Soon.'
Are the happy horse 1.0 websites legitimate?
None has been verified as official, and community members have flagged more than a dozen fake sites charging for generation. Do not pay any of them.
What's the best AI video model I can actually use today?
Among models with public APIs, SkyReels V4 leads the T2V arena (Elo 1,245), with Kling 3.0 and PixVerse V6 close behind; WAN 2.6 is the strongest open-weight option.
How does happy horse 1.0 compare to Seedance 2.0?
It leads clearly without audio (+97 Elo in T2V, +37 in I2V) but trails slightly in both audio categories (-14 and -1).
Should I wait for happy horse 1.0 before starting my AI video project?
No. It is not a usable product yet. Build with the models available today and revisit the decision if weights or an API actually ship.
Ready to Create AI Videos Now?
While we wait for happy horse 1.0, you can start generating professional AI videos today with SkyReels, Kling, PixVerse, and more.
