HappyHorse 1.0
Text-to-video & image-to-video AI

Key features at a glance

HappyHorse 1.0 offers five generation modes plus cinematic-grade output, all available on A2E.

Wan 2.7 Text-to-video

Text-to-Video (T2V)

Wan 2.7 9-grid image-to-video

Image-to-Video (I2V)

Wan 2.7 First & last frame video

Subject-to-Video (S2V)

Wan 2.7 Subject + voice reference

Video-to-Video (V2V)

Wan 2.7 Instruction-based editing

Subject + Video to Video (SV2V)

Wan 2.7 Video recreation

Cinematic depth & multi-shot

Getting started really is this simple, and free credits let you test without paying.

Step 1

Open the generator

Open HappyHorse on A2E and pick a mode: T2V, I2V, S2V, V2V, or SV2V.

Step 2

Prompt or upload

Write a prompt, upload a photo, provide a reference subject, or feed in an existing video, depending on your chosen mode.

Step 3

Generate and download

Preview the result and download an MP4. Ready for social media, a pitch deck, or ad testing.

Real HappyHorse 1.0 outputs from each generation mode.

T2V: Text prompt

1080p video with synced audio

I2V: Still image

Animated video clip

S2V: Reference subject

Inserted into generated video

Model specs

  • Developer: Alibaba Token Hub (ATH) Business Unit
  • Parameters: ~15 billion, ~40 transformer layers
  • Output: Up to 15s 1080p, multi-shot, synced audio
  • Modes: T2V, I2V, S2V, V2V, SV2V
  • Audio: Lip-synced dialogue, ambient soundscapes, expressive vocals

Strengths

  • Cinematography: Wide-aperture, shallow depth-of-field, atmospheric visual language
  • Multi-shot: Stable character positioning across frequent cut transitions
  • Action: Motorcycle chases, racing circuits, high-speed tracking shots
  • Drama: Suspenseful confrontation, romance narratives with camera movement and emotional atmosphere

Best for

  • Ads & marketing: Product demos, social media clips, campaign creatives
  • Short-form video: TikTok, Reels, YouTube Shorts
  • Short dramas: Multi-shot narratives with consistent characters
  • Developers: API integration via A2E
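For developers, submitting a generation job to A2E programmatically might look like the sketch below. The endpoint URL, field names, and authentication header here are illustrative assumptions, not A2E's documented API; the mode, duration, resolution, aspect-ratio, and audio options mirror the model specs above. Consult A2E's developer documentation for the real interface.

```python
import json

# Hypothetical request payload for a T2V generation job.
# All field names and the endpoint below are illustrative
# assumptions -- check A2E's developer docs for the actual API.
API_URL = "https://api.example.com/v1/happyhorse/generate"  # placeholder

payload = {
    "mode": "t2v",                # one of: t2v, i2v, s2v, v2v, sv2v
    "prompt": "A motorcycle chase through neon-lit streets, cinematic",
    "duration_seconds": 10,       # up to 15s per the model specs
    "resolution": "1080p",
    "aspect_ratio": "16:9",       # 16:9, 9:16, 4:3, 3:4, or 1:1
    "audio": True,                # synced audio is optional
}

body = json.dumps(payload)
print(body)

# To actually submit, you would POST the body with your API key, e.g.:
# import requests
# resp = requests.post(API_URL, data=body,
#                      headers={"Authorization": "Bearer <YOUR_KEY>"})
```

The payload is built and serialized locally; the network call is left commented out so the sketch runs without credentials.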

Why Choose A2E?

High-Quality Videos for Free

Consistent and Lifelike Characters

Simple Video-Creation Process

FAQ

Who developed HappyHorse 1.0?

Alibaba’s Future Life Lab (Taotian Group), under the ATH AI Innovation Unit. The project is led by Zhang Di, former VP at Kuaishou and the tech lead behind Kling AI. Weights are on Hugging Face under Apache-2.0.

What output does it produce?

Up to 15 seconds of 1080p video with multiple shots. The model supports five aspect ratios (16:9, 9:16, 4:3, 3:4, 1:1), so you can output for any platform directly.

What makes its visuals stand out?

Cinematic output with wide-aperture shallow depth-of-field, multi-shot consistency with stable character positioning across cuts, and high-speed dynamic action: motorcycle chases, racing sequences, suspenseful confrontations, and romance narratives with nuanced camera movement.

What generation modes are available?

Five modes: Text-to-Video (T2V), Image-to-Video (I2V), Subject-to-Video (S2V), Video-to-Video (V2V), and Subject-and-Video-to-Video (SV2V). S2V lets you insert a person or object from a reference photo; V2V modifies an existing clip while keeping its motion; SV2V combines both.

Can I try it for free?

Yes. New users get 100 free credits on signup and 30 bonus credits daily through check-in. No credit card required. Paid plans are available if you need higher limits or priority processing.

Does it generate audio?

Yes. The model produces synchronized audio-visual output: lip-synced dialogue, ambient soundscapes, and emotionally expressive vocals. Audio generation is optional; you can turn it off if you only need the video track.

Can I use the videos commercially?

Yes. Content created on A2E paid plans can be used for ads, social media, client projects, and other commercial purposes.