Google · Filed Apr 10, 2025 · Published May 14, 2026 · verified — real USPTO data

Google Patents a Smarter Filtering System for Video Compression

Every time you stream a video, your device is making thousands of tiny guesses about what each block of pixels should look like — and those guesses can be wrong. Google's new patent describes a way to correct those guesses on the fly using a pixel-aware filter baked into the codec itself.

Google Patent: Inter-Prediction Filtering for Video Codecs — figure from US 2026/0136002 A1
FIG. 1A — rendered from the official USPTO publication PDF.
Publication number US 2026/0136002 A1
Applicant Google LLC
Filing date Apr 10, 2025
Publication date May 14, 2026
Inventors Xiang Li, Jianle Chen, Debargha Mukherjee, Jingning Han, Yaowu Xu
U.S. classification 375/240.02
Grant likelihood Medium
Examiner NGUYEN, KATHLEEN V (Art Unit 2486)
Status Non Final Action Mailed (Apr 15, 2026)
National stage entry of parent application PCT/US2022/053152 (filed Dec 16, 2022)
Claims 1

What Google's inter-prediction filter actually does to video

Imagine a video codec as a guessing game. When your phone or browser decodes a video, it often predicts what a patch of pixels looks like by borrowing it from an earlier frame and shifting it based on motion. That borrowed patch is rarely a perfect match — edges blur, textures smear.

Google's patent describes a way to filter that borrowed patch before it gets used. The filter is smart: it looks at the already-decoded pixels surrounding the current block and compares them to the pixels surrounding the borrowed reference patch. From that comparison, it calculates a set of correction weights and applies them — sharpening the prediction before the final image is assembled.

On the encoding side, the same idea runs in reverse: the encoder finds the best motion vector and the best filter together, so the decoder knows exactly what correction to apply. The result is a tighter, cleaner match between prediction and reality — which means either better image quality at the same file size, or smaller files at the same quality.

How the filter coefficients are derived and applied

The patent covers both the encoder and decoder halves of a video codec pipeline, centered on a technique called inter prediction with filtering.

In normal inter prediction, the decoder finds an intermediate prediction block — a region from a reference frame, displaced by a motion vector (a 2-D offset describing how content moved between frames). That block is used as-is to reconstruct the current block. Here, an extra step is inserted: a spatial filter is applied to the intermediate block first.
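As a toy illustration (not code from the patent — the frame layout and names are assumptions), plain inter prediction amounts to copying a block out of the reference frame at the motion-vector offset:

```python
# Minimal sketch of motion-compensated block fetch: copy a size x size
# block from a reference frame, displaced by an integer motion vector.

def predict_block(ref_frame, x, y, mv, size):
    """Copy a size x size block from ref_frame at (x+dx, y+dy)."""
    dx, dy = mv
    return [row[x + dx : x + dx + size]
            for row in ref_frame[y + dy : y + dy + size]]

# Example: predict a 2x2 block at (1, 1) with mv = (dx=1, dy=0)
ref = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
pred = predict_block(ref, 1, 1, (1, 0), 2)
# pred == [[6, 7], [10, 11]]
```

Real codecs add sub-pixel interpolation and boundary handling; the point here is only that the fetched block is normally used as-is, which is where the patent's filter steps in.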

  • Filter coefficients are derived by comparing first reconstructed pixels (the decoded pixels around the current block's position) against second reconstructed pixels (the pixels around the reference block's position). Minimizing the error between these two neighborhoods produces the optimal correction weights.
  • The filter supports non-linear components and uses a cross-shaped, 5-tap support within a 3×3 window — the center pixel plus its four cardinal neighbors.
  • On the decoder side, a coefficient refinement value can be transmitted in the bitstream to fine-tune predicted filter coefficients, keeping the overhead small.

The encoder simultaneously refines the motion vector using the filter coefficients — meaning the codec jointly optimizes where it looks in the reference frame and how it corrects what it finds there. The patent also covers luma/chroma consistency, applying derived luma filter coefficients to chroma (color) channels.
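A toy version of that joint optimization — using a single gain coefficient as a stand-in for the full filter, purely for brevity — alternates candidate motion vectors with a per-candidate coefficient fit, so each motion vector is scored on its *filtered* prediction:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, a standard matching cost."""
    return float(np.abs(a - b).sum())

def best_scale(pred, target):
    """Closed-form least-squares gain: stand-in for filter derivation."""
    denom = float((pred * pred).sum()) or 1.0
    return float((pred * target).sum()) / denom

def joint_search(ref, target, y, x, radius=1):
    """For each candidate motion vector, derive the best coefficient
    (here a single gain) and score the filtered prediction, so the MV
    choice accounts for the correction that will be applied."""
    h, w = target.shape
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            pred = ref[y + dy : y + dy + h, x + dx : x + dx + w].astype(float)
            g = best_scale(pred, target)
            cost = sad(g * pred, target)
            if best is None or cost < best[0]:
                best = (cost, (dy, dx), g)
    return best  # (cost, motion vector, coefficient)
```

A plain SAD search would miss a block that matches perfectly up to a brightness change; the joint search finds it, because the coefficient fit happens inside the scoring loop — the same chicken-and-egg avoidance the patent describes.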

What this means for next-gen video codec quality

Video codec efficiency is a perpetual arms race — every fraction of a percent improvement in compression translates to real bandwidth savings at YouTube or Google Meet scale. This patent targets a well-known weak spot: inter prediction residuals that pile up when motion isn't perfectly captured by a single motion vector. By inserting a locally-adaptive filter into that prediction loop, Google could squeeze more quality out of the same bitrate.

This fits squarely into Google's ongoing work on AV1 and its successor codecs (like AVM), where novel in-loop and prediction-stage filters are a primary competitive battleground against HEVC and VVC. If this technique makes it into a shipping codec, you'd benefit every time you watch a high-motion video on YouTube or join a Google Meet call on a slow connection.

Editorial take

This is a focused, technically credible codec patent — not flashy, but exactly the kind of incremental improvement that accumulates into a meaningfully better codec over time. The joint optimization of motion vectors and filter coefficients is the genuinely interesting part; it avoids the chicken-and-egg problem of filtering that doesn't account for where the prediction came from. Worth tracking if you follow AV1/AVM development.

Source. Full patent text and figures from the official USPTO publication PDF.

Editorial commentary on a publicly published patent application. Not legal advice.