Google · Filed Nov 10, 2025 · Published May 14, 2026 · verified — real USPTO data

Google Patents a Faster Probability Model Init for Video Compression

Every time a video codec starts encoding a new frame, it needs a statistical 'best guess' about what the image data will look like — and getting that guess wrong wastes bits. Google's new patent finds a cheaper way to make that guess smarter.

Google Patent: Smarter Video Codec Probability Init — figure from US 2026/0136026 A1
FIG. 1A — rendered from the official USPTO publication PDF.
Publication number US 2026/0136026 A1
Applicant GOOGLE LLC
Filing date Nov 10, 2025
Publication date May 14, 2026
Inventors Lin Zheng, Jingning Han, Yaowu Xu
U.S. classification 375/240.26
Grant likelihood Medium
Examiner CENTRAL, DOCKET (Art Unit OPAP)
Status Docketed New Case - Ready for Examination (Dec 2, 2025)
Parent application Claims priority from provisional application No. 63/719,067 (filed Nov 11, 2024)
Claims 20

What Google's tile-sampling trick does for video codecs

Imagine a video codec as a very fast typist who's learned to predict the next word before you finish typing. That prediction engine — called a probability model — needs to be re-tuned for every frame of video, and tuning it costs time and compute.

What Google's patent describes is a shortcut: instead of scanning every tile of the previous frame to build that tuning, the codec samples only a subset of tiles, combines their probability models into a single shared one, and uses that as the starting point for every tile in the new frame.

The result is that each tile in the current frame gets a reasonably good head start without the encoder having to do the full, expensive warm-up pass across all reference tiles. For you as a viewer, this kind of optimization quietly contributes to crisper video at lower bitrates — the kind of incremental improvement that adds up across billions of YouTube streams.

How Google builds one combined model from a tile subset

Modern video codecs like AV1 divide frames into tiles — rectangular regions that can be processed in parallel. Each tile uses an entropy coding probability model (essentially a lookup table of how likely each symbol is) to compress data efficiently. The more accurate the model, the fewer bits are wasted.
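To see why model accuracy matters, here is a minimal sketch of the ideal entropy-coded cost, where a symbol with modeled probability p costs about -log2(p) bits. The two-symbol alphabet and the probabilities are hypothetical illustrations, not values from the patent:

```python
import math

def code_length_bits(symbols, model):
    """Ideal entropy-coded length: a symbol s costs -log2(model[s]) bits."""
    return sum(-math.log2(model[s]) for s in symbols)

# Hypothetical tile data: the stream is actually 80% symbol 0.
stream = [0] * 80 + [1] * 20

accurate = {0: 0.8, 1: 0.2}   # model matches the true statistics
stale    = {0: 0.5, 1: 0.5}   # uninformed starting guess

print(code_length_bits(stream, accurate))  # ≈ 72.2 bits
print(code_length_bits(stream, stale))     # 100.0 bits
```

The gap between those two numbers is exactly the bit waste a smarter initialization is chasing.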

Typically, a codec initializes this model for a new frame by pulling data from the equivalent tiles in the previous (reference) frame. Google's patent changes the initialization strategy in three steps:

  • Sample a subset: Instead of using all tiles in the reference frame, the encoder collects probability models from only a representative subset of those tiles.
  • Combine into one: Those subset models are merged into a single combined probability model — likely via averaging or weighted aggregation.
  • Broadcast to all: Every tile in the current frame is initialized using that single combined model, rather than each tile independently referencing its own counterpart in the prior frame.
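The steps above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: the patent doesn't specify how the subset is chosen or how models are merged, so the stride-based sampling and plain averaging below are placeholders:

```python
def combined_init(reference_tile_models, sample_stride=4):
    """Sample every `sample_stride`-th reference tile and average their
    symbol-probability tables into one shared init model.
    (Stride sampling and averaging are assumptions, not the patent's exact method.)"""
    subset = reference_tile_models[::sample_stride]
    symbols = reference_tile_models[0].keys()
    return {s: sum(m[s] for m in subset) / len(subset) for s in symbols}

# Hypothetical per-tile models carried over from the previous frame (16 tiles).
ref_models = [{0: 0.75 + 0.01 * (i % 3), 1: 0.25 - 0.01 * (i % 3)}
              for i in range(16)]

shared = combined_init(ref_models)

# Broadcast: every tile in the current frame starts from the same shared
# model instead of looking up its own counterpart tile.
current_frame_inits = [dict(shared) for _ in range(16)]
```

Note that only 4 of the 16 reference models are ever read, which is where the bookkeeping savings come from.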

The key insight is that using a partial sample is often good enough statistically — especially in scenes where the probability distributions across tiles don't vary wildly — while dramatically reducing the bookkeeping overhead of a full per-tile reference pass.
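A back-of-the-envelope check on that claim, using toy numbers rather than anything from the patent: when per-tile symbol distributions differ only slightly, coding each tile with a shared averaged model costs only a tiny fraction of a bit per symbol over that tile's own optimum (the KL divergence):

```python
import math

def cross_entropy(p, q):
    """Expected bits/symbol when data distributed as p is coded with model q."""
    return sum(p[s] * -math.log2(q[s]) for s in p)

# Toy per-tile distributions for a scene whose statistics vary only mildly.
tile_dists = [{0: 0.78, 1: 0.22}, {0: 0.80, 1: 0.20}, {0: 0.82, 1: 0.18}]
combined = {0: 0.80, 1: 0.20}   # average of the three

# Overhead of the shared init vs. each tile's own ideal model:
overheads = [cross_entropy(p, combined) - cross_entropy(p, p)
             for p in tile_dists]
# each entry is well under 0.01 bits per symbol
```

If the scene's tiles had wildly different statistics, these overheads would grow, which is presumably why this is framed as an initialization shortcut rather than a replacement for per-tile adaptation.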

What this means for streaming and real-time video quality

Google runs YouTube and Google Meet at a scale where even a marginal reduction in encoder compute translates into real infrastructure savings. AV1, Google's royalty-free codec, is already the default on YouTube and is spreading across the web — so optimizations in its probability modeling pipeline have an outsized reach.

For real-time applications like video calls or cloud gaming, faster probability model initialization directly reduces latency at the start of a new scene or stream. If this technique lands in a future AV1 or follow-on codec profile, you'd notice it as picture quality that settles slightly sooner when a video starts or cuts to a new scene — without any change to the file format itself.

Editorial take

This is solidly useful codec plumbing rather than a headline-grabbing AI trick. The insight — that a sampled subset of reference tiles is statistically sufficient to warm up probability models — is elegant and the kind of thing that gets quietly merged into encoder libraries and saves Google real money at scale. It's not a reason to get excited about a new product, but it's a reason to appreciate that Google's codec team is still grinding on AV1 internals.

Get one Big Tech patent every Sunday

Plain English, intelligent commentary, no hype. Free.

Source. Full patent text and figures from the official USPTO publication PDF.

Editorial commentary on a publicly published patent application. Not legal advice.