    AI News

    Google Launches TensorFlow 2.21 And LiteRT: Faster GPU Performance, New NPU Acceleration, And Seamless PyTorch Edge Deployment Upgrades

    March 7, 2026 · 4 Mins Read

    Google has officially released TensorFlow 2.21. The most significant update in this release is the graduation of LiteRT from its preview stage to a fully production-ready stack. Moving forward, LiteRT serves as the universal on-device inference framework, officially replacing TensorFlow Lite (TFLite).

    This update streamlines the deployment of machine learning models to mobile and edge devices while expanding hardware and framework compatibility.

    LiteRT: Performance and Hardware Acceleration

    When deploying models to edge devices (like smartphones or IoT hardware), inference speed and battery efficiency are primary constraints. LiteRT addresses this with updated hardware acceleration:

    • GPU Improvements: LiteRT delivers 1.4x faster GPU performance compared to the previous TFLite framework.
    • NPU Integration: The release introduces state-of-the-art NPU acceleration with a unified, streamlined workflow for both GPU and NPU across edge platforms.

    This infrastructure is specifically designed to support cross-platform GenAI deployment for open models like Gemma.
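A minimal sketch of what on-device inference with the new runtime looks like from Python. The `ai_edge_litert` package name and its `tf.lite`-compatible `Interpreter` API are my assumption based on the LiteRT migration path, not details stated in the article; the import is deferred so the helper can be defined without the package installed.

```python
def run_litert_inference(model_path, input_array):
    """Run one inference pass on a LiteRT (.tflite) model.

    Deferred import: the ai-edge-litert package (assumed name) is only
    needed when this function is actually called on a device.
    """
    from ai_edge_litert.interpreter import Interpreter  # assumed package layout

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()  # reserve buffers for all tensors
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], input_array)
    interpreter.invoke()  # run the graph once
    return interpreter.get_tensor(out["index"])
```

On-device, GPU or NPU execution is selected through the runtime's delegate/accelerator configuration rather than changes to this calling code.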


    Lower Precision Operations (Quantization)

    To run complex models on devices with limited memory, developers use a technique called quantization. This involves lowering the precision—the number of bits—used to store a neural network’s weights and activations.

    TensorFlow 2.21 significantly expands the tf.lite operators’ support for lower-precision data types to improve efficiency:

    • The SQRT operator now supports int8 and int16x8.
    • Comparison operators now support int16x8.
    • tfl.cast now supports conversions involving int2 and int4.
    • tfl.slice adds int4 support.
    • tfl.fully_connected adds int2 support.
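To make the precision trade-off concrete, here is a framework-free sketch of affine quantization, the standard q = round(x / scale) + zero_point scheme that these integer types rely on. The scales below are illustrative choices covering roughly [-1, 1], not LiteRT defaults.

```python
def affine_quantize(x, scale, zero_point, bits):
    """Map a float to a signed integer of the given bit width,
    clamped to that width's representable range."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float value from its integer code."""
    return (q - zero_point) * scale

# int8 keeps fine granularity; int4 and int2 trade accuracy for memory.
for bits in (8, 4, 2):
    scale = 2.0 / (2 ** bits)  # spread the integer range over ~[-1, 1]
    q = affine_quantize(0.7, scale, 0, bits)
    print(f"int{bits}: code={q}, reconstructed={dequantize(q, scale, 0)}")
```

Running this shows 0.7 reconstructed as roughly 0.703 at int8 but only 0.5 at int2, which is why the lowest precisions are reserved for weights that tolerate coarse rounding.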

    Expanded Framework Support

    Historically, converting models from different training frameworks into a mobile-friendly format could be difficult. LiteRT simplifies this by offering first-class PyTorch and JAX support via seamless model conversion.

    Developers can now train their models in PyTorch or JAX and convert them directly for on-device deployment without needing to rewrite the architecture in TensorFlow first.
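A minimal sketch of the PyTorch path. The `ai_edge_torch` package name and its `convert`/`export` call signatures are my assumption about Google's PyTorch-to-LiteRT converter, not details given in the article; imports are deferred so the helper is definable without torch installed.

```python
def export_pytorch_model(model, sample_input, out_path="model.tflite"):
    """Convert a PyTorch nn.Module to a LiteRT flatbuffer for on-device use.

    Deferred import: ai-edge-torch (assumed package) is only required
    when the conversion is actually run.
    """
    import ai_edge_torch  # assumed converter package

    # Trace the model in eval mode with a representative sample input.
    edge_model = ai_edge_torch.convert(model.eval(), (sample_input,))
    edge_model.export(out_path)  # writes a .tflite file LiteRT can load
    return out_path
```

The resulting file can then be loaded by the LiteRT interpreter on the target device, with no TensorFlow rewrite of the architecture.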

    Maintenance, Security, and Ecosystem Focus

    Google is shifting its TensorFlow Core resources toward long-term stability. The development team will now concentrate exclusively on:

    • Security and bug fixes: quickly addressing security vulnerabilities and critical bugs, with minor and patch releases as required.
    • Dependency updates: releasing minor versions to support updates to underlying dependencies, including new Python releases.
    • Community contributions: continuing to review and accept critical bug fixes from the open-source community.

    These commitments apply across the broader enterprise ecosystem, including TF.data, TensorFlow Serving, TFX, TensorFlow Data Validation, TensorFlow Transform, TensorFlow Model Analysis, TensorFlow Recommenders, TensorFlow Text, TensorBoard, and TensorFlow Quantum.

    Key Takeaways

    • LiteRT Officially Replaces TFLite: LiteRT has graduated from preview to full production, officially becoming Google’s primary on-device inference framework for deploying machine learning models to mobile and edge environments.
    • Major GPU and NPU Acceleration: The updated runtime delivers 1.4x faster GPU performance compared to TFLite and introduces a unified workflow for NPU (Neural Processing Unit) acceleration, making it easier to run heavy GenAI workloads (like Gemma) on specialized edge hardware.
    • Aggressive Model Quantization (int4/int2): To maximize memory efficiency on edge devices, tf.lite operators gain expanded support for very low-precision data types: int8/int16x8 for SQRT and comparison operations, plus int4 and int2 support for the cast, slice, and fully_connected operators.
    • Seamless PyTorch and JAX Interoperability: Developers are no longer locked into training with TensorFlow for edge deployment. LiteRT now provides first-class, native model conversion for both PyTorch and JAX, streamlining the pipeline from research to production.

    Check out the Technical details and Repo.

    Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.



    © 2026 CryptoLoveYou.com - All rights reserved.
