Google · Filed Apr 10, 2025 · Published May 14, 2026

Google Patents a Simplified Filter Derivation Method for Hardware Video Codecs

Video codecs can predict color information from brightness data with impressive accuracy — but the math is often too slow for dedicated hardware chips. Google's new patent trims that math down just enough to make it practical.

FIG. 1A from US 2026/0136024 A1 ("Faster Video Codec Cross-Component Prediction"), rendered from the official USPTO publication PDF.
Publication number: US 2026/0136024 A1
Applicant: Google LLC
Filing date: Apr 10, 2025
Publication date: May 14, 2026
Inventors: Xiang Li, Jingning Han, Yaowu Xu, Debargha Mukherjee
Classification: 375/240.12
Grant likelihood: Medium
Examiner: Lee, Jimmy S (Art Unit 2483)
Status: Docketed New Case - Ready for Examination (Feb 4, 2026)
Parent application: National Stage Entry of PCT/US2022/053149 (filed 2022-12-16)
Claims: 20

How Google speeds up chroma prediction in video encoding

Imagine your phone records a 4K video. Behind the scenes, a video encoder is constantly looking for shortcuts — ways to describe what's on screen using as few bits as possible. One clever trick is to predict color values (called chroma) from brightness values (called luma), since the two are often correlated. A technique called cross-component prediction does exactly this, but the math involved can be expensive and slow.
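To make that concrete, here is a minimal sketch (illustrative only, not the patent's method) of the simplest form of cross-component prediction: fit a straight line, chroma ≈ a × luma + b, on already-decoded neighboring samples, then apply it across the block.

    # Minimal sketch of linear cross-component prediction: fit
    # chroma = a * luma + b on neighboring reconstructed samples,
    # then predict the block's chroma from its luma.

    def fit_linear_model(neighbor_luma, neighbor_chroma):
        n = len(neighbor_luma)
        sum_l = sum(neighbor_luma)
        sum_c = sum(neighbor_chroma)
        sum_ll = sum(l * l for l in neighbor_luma)
        sum_lc = sum(l * c for l, c in zip(neighbor_luma, neighbor_chroma))
        denom = n * sum_ll - sum_l * sum_l
        if denom == 0:                      # flat luma: fall back to the mean
            return 0.0, sum_c / n
        a = (n * sum_lc - sum_l * sum_c) / denom
        return a, (sum_c - a * sum_l) / n

    def predict_chroma(block_luma, a, b):
        return [a * l + b for l in block_luma]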

Google's patent describes a way to simplify that math without throwing away too much quality. The key move is deliberately reducing the numerical precision of the filter coefficients — the weights used in the prediction formula — so the calculation fits neatly into a fixed number of bits that hardware can handle quickly.
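As a hedged illustration of that move (the bit widths below are assumptions, not numbers from the patent), reducing a coefficient to a fixed precision boils down to a scale, a round, and a clamp:

    # Hypothetical bit-range limiting: scale a real-valued weight to fixed
    # point, round it, and clamp it so it always fits in COEFF_BITS signed
    # bits, regardless of what the derivation produced.

    COEFF_BITS = 10   # assumed precision budget, not a value from the patent
    FRAC_BITS = 6     # assumed fixed-point fractional precision

    def quantize_coefficient(weight):
        q = round(weight * (1 << FRAC_BITS))    # to fixed point
        lo = -(1 << (COEFF_BITS - 1))
        hi = (1 << (COEFF_BITS - 1)) - 1
        return max(lo, min(hi, q))              # clamp into the bit budget

    # A hardware multiplier then only ever sees small integers:
    #   predicted = (quantize_coefficient(w) * luma_sample) >> FRAC_BITS

Clamping gives up a little accuracy at the extremes, but it guarantees the multiplier width the silicon has to provision for.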

The result is that a prediction method previously too slow for dedicated encoding chips (like the ones in phones, streaming cameras, and smart TVs) could now run fast enough to be genuinely useful. You probably won't notice anything directly, but your videos could compress better at the same quality level.

How CCCM filter simplification cuts latency in hardware

The patent targets a specific bottleneck inside the Convolutional Cross-Component Model (CCCM) — a prediction technique that uses a small learned filter to estimate chroma (color) sample values from nearby luma (brightness) samples. CCCM can improve compression efficiency meaningfully, but its filter coefficient derivation step involves arithmetic with a wide dynamic range (meaning the numbers can get very large or very small), which is hard to implement cheaply in silicon.
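In rough code, a CCCM-style predictor looks something like the sketch below. The tap layout (a cross of luma samples plus a nonlinear term and a bias) is modeled on published CCCM descriptions, not lifted from the patent, and it assumes interior coordinates with padded borders.

    # Rough shape of a CCCM-style predictor: each chroma sample is a
    # weighted sum of the co-located luma sample, its four cross-shaped
    # neighbors, a nonlinear term, and a bias (illustrative layout only).

    def cccm_predict(luma, x, y, coeffs, bit_depth=10):
        center = luma[y][x]                 # co-located luma sample
        north, south = luma[y - 1][x], luma[y + 1][x]
        west, east = luma[y][x - 1], luma[y][x + 1]
        nonlinear = (center * center + (1 << (bit_depth - 1))) >> bit_depth
        bias = 1 << (bit_depth - 1)         # mid-range offset term
        taps = (center, north, south, west, east, nonlinear, bias)
        return sum(w * t for w, t in zip(coeffs, taps))

Applying the filter is the cheap part; the cost sits in deriving coeffs, a least-squares solve over the neighboring reconstructed area, which is where the wide dynamic range comes from.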

Google's approach attacks this in three ways, sketched in code after the list:

  • Bit-range limiting: The filter coefficients are explicitly reduced to a defined bit precision — think rounding a decimal to fewer significant digits — so the hardware multipliers stay small and fast.
  • Coding unit size gating: CCCM is only applied to blocks above or below a certain size threshold, skipping the expensive derivation step for blocks where the technique doesn't offer much benefit anyway.
  • Non-downsampled luma input: Normally the luma samples are scaled down before being fed into the filter; this patent allows using the full-resolution luma samples directly, which can save an intermediate processing step.
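Put together, the control flow might look like this (the size threshold and helper names are illustrative assumptions; quantize_coefficient is the clamp from the earlier snippet, and solve_least_squares stands in for the full derivation):

    # Sketch combining the three simplifications; thresholds and helper
    # names are assumptions, not values from the patent.

    MIN_CU_SIZE = 16   # hypothetical gate below which CCCM is skipped

    def solve_least_squares(luma_samples, chroma_samples):
        # Stand-in for the full coefficient derivation, i.e. the
        # wide-dynamic-range arithmetic the patent works around.
        raise NotImplementedError

    def derive_cccm_coeffs(block_w, block_h, full_res_luma, neighbor_chroma):
        # Coding unit size gating: skip the expensive derivation for
        # blocks where CCCM offers little benefit.
        if min(block_w, block_h) < MIN_CU_SIZE:
            return None
        # Non-downsampled luma input: feed full-resolution luma samples
        # straight into the solver instead of downsampling them first.
        raw = solve_least_squares(full_res_luma, neighbor_chroma)
        # Bit-range limiting: force every coefficient into a fixed bit
        # width so hardware multipliers stay small and fast.
        return [quantize_coefficient(w) for w in raw]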

The method sits just upstream of entropy encoding, the final stage of a video codec where symbols are packed into a compressed bitstream. Reducing latency in the prediction path has a direct knock-on effect on how fast a hardware chip can process each frame.

What this means for next-gen hardware video encoders

Hardware video encoders — the dedicated chips inside phones, streaming dongles, and broadcast cameras — have strict timing budgets. A technique that works fine in software (where you can take as long as you need) often can't make the cut in silicon. By trimming CCCM's math to fit a fixed bit width, Google potentially unlocks a real compression quality gain for every device that ships with a hardware codec, not just servers doing offline transcoding.

Google develops open video codecs through the Alliance for Open Media (AV1 today, with the successor AV2 in development), and this kind of implementation-friendly optimization is exactly what gets built into codec standards. If this approach lands in a future codec spec, it could quietly improve video quality across YouTube, Meet, and Android devices at scale.

Editorial take

This is unglamorous but genuinely useful engineering. The gap between 'works great in software' and 'fast enough for hardware' is where a lot of codec research dies, and this patent directly addresses that gap for a specific, high-value prediction tool. It's worth watching if you follow video codec standards.


Source: Full patent text and figures from the official USPTO publication PDF.

Editorial commentary on a publicly published patent application. Not legal advice.