IBM Research Unveils Cost-Effective AI Inferencing with Speculative Decoding

June 24, 2024
in Blockchain
IBM Research has announced a significant breakthrough in AI inferencing, combining speculative decoding with paged attention to enhance the cost performance of large language models (LLMs). This development promises to make customer care chatbots more efficient and cost-effective, according to IBM Research.

In recent years, LLMs have improved chatbots' ability to understand customer queries and respond accurately. However, the high cost and slow speed of serving these models have hindered broader AI adoption. Speculative decoding is an optimization technique that accelerates AI inferencing by generating tokens faster, cutting latency by a factor of two to three and thereby improving the customer experience.


Despite its advantages, reducing latency traditionally comes with a trade-off: decreased throughput, that is, the number of users who can use the model simultaneously, which raises operational costs. IBM Research has tackled this challenge by halving the latency of its open-source Granite 20B code model while quadrupling its throughput.

Speculative Decoding: Efficiency in Token Generation

LLMs use a transformer architecture, which is inefficient at generating text: each new token ordinarily requires its own forward pass, conditioned on every previously generated token. Speculative decoding modifies this process so that several prospective tokens are evaluated at once. If those tokens are validated, a single forward pass yields multiple tokens, increasing inferencing speed.

This technique can be executed by a smaller, more efficient model or part of the main model itself. By processing tokens in parallel, speculative decoding maximizes the efficiency of each GPU, potentially doubling or tripling inferencing speed. Initial introductions of speculative decoding by DeepMind and Google researchers utilized a draft model, while newer methods, such as the Medusa speculator, eliminate the need for a secondary model.
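The draft-and-verify loop described above can be sketched in a few lines. The stand-in `target_next` and `draft_next` functions below are deliberately trivial deterministic rules, not real models; the point is the control flow, in which verification of all draft tokens would run as one batched forward pass in a real system:

```python
# Toy speculative decoding with a separate draft model.
# target_next/draft_next are illustrative stand-ins, not real LLMs.

VOCAB = 50

def target_next(prefix):
    # Stand-in for the large model's greedy next-token choice.
    return (prefix[-1] * 3 + 1) % VOCAB

def draft_next(prefix):
    # Stand-in for a smaller draft model that often, but not always,
    # agrees with the target model.
    if prefix[-1] % 4:
        return (prefix[-1] * 3 + 1) % VOCAB
    return (prefix[-1] + 7) % VOCAB

def speculative_step(prefix, k=4):
    """Draft k tokens, then verify them against the target model.

    Returns the tokens accepted in this step. In a real system the k
    verification calls run as one batched forward pass, which is the
    source of the speedup.
    """
    drafts, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        drafts.append(t)
        ctx.append(t)

    accepted, ctx = [], list(prefix)
    for t in drafts:
        expected = target_next(ctx)    # what the target would generate here
        if t == expected:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(expected)  # correct the first mismatch, then stop
            break
    else:
        accepted.append(target_next(ctx))  # all drafts matched: one free token
    return accepted

tokens = [1]
while len(tokens) < 20:
    tokens.extend(speculative_step(tokens))
```

Note the invariant that makes the technique attractive: the accepted sequence is identical to what plain greedy decoding of the target model would produce, so speed is gained without changing the output.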

IBM researchers adapted the Medusa speculator by conditioning future tokens on each other rather than on the model’s next predicted token. This approach, combined with an efficient fine-tuning method using small and large batches of text, aligns the speculator’s responses closely with the LLM, significantly boosting inferencing speeds.
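As a toy illustration of why that conditioning helps, the sketch below contrasts heads that guess each future position independently with heads that see the tokens earlier heads already speculated. All functions here are invented stand-ins (the conditioned head is an idealized, perfectly aligned speculator), not Medusa's or IBM's actual architecture:

```python
# Toy contrast: heads speculating each position independently vs. heads
# conditioned on the tokens earlier heads already speculated.

def target_next(prefix):
    # Stand-in for the LLM's greedy next-token rule (deliberately nonlinear).
    return (3 * prefix[-1] + 1) % 10

def head_independent(prefix, offset):
    # Without the intermediate tokens, this head can only extrapolate
    # from the last real token, so it rarely tracks the target rule.
    return (prefix[-1] + offset + 1) % 10

def head_conditioned(prefix, speculated):
    # Seeing the earlier heads' tokens lets this head apply the target's
    # step-by-step rule (an idealized, perfectly aligned speculator).
    ctx = prefix + speculated
    return (3 * ctx[-1] + 1) % 10

def accepted(prefix, guesses):
    # Count how many speculated tokens survive greedy verification.
    ctx, n = list(prefix), 0
    for g in guesses:
        if g != target_next(ctx):
            break
        ctx.append(g)
        n += 1
    return n

prefix = [2]
indep = [head_independent(prefix, i) for i in range(4)]
cond = []
for _ in range(4):
    cond.append(head_conditioned(prefix, cond))

print(accepted(prefix, indep), accepted(prefix, cond))  # prints: 0 4
```

More accepted tokens per verification pass is exactly what translates into the higher inferencing speeds reported here.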

Paged Attention: Optimizing Memory Usage

Reducing LLM latency often compromises throughput due to increased GPU memory strain. Dynamic batching can mitigate this but not when speculative decoding is also competing for memory. IBM researchers addressed this by employing paged attention, an optimization technique inspired by virtual memory and paging concepts from operating systems.

Traditional attention algorithms store key-value (KV) sequences in contiguous memory, leading to fragmentation. Paged attention, however, divides these sequences into smaller blocks, or pages, that can be accessed as needed. This method minimizes redundant computation and allows the speculator to generate multiple candidates for each predicted word without duplicating the entire KV-cache, thus freeing up memory.
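The block-table idea can be sketched minimally as follows. The block size, the `fork` API, and the copy-on-write policy are illustrative choices for this sketch, not vLLM's or IBM's exact implementation; the point is that speculative candidates share the parent's blocks by reference count instead of duplicating the whole KV-cache:

```python
# Minimal sketch of a paged KV-cache: token positions map to fixed-size
# physical blocks through a per-sequence block table, and forked candidate
# sequences share blocks via reference counts instead of copying.

BLOCK = 4  # tokens per block (illustrative)

class PagedCache:
    def __init__(self):
        self.blocks = []    # physical blocks: each a list of cached KV entries
        self.refcount = []  # how many sequences point at each block

    def _alloc(self):
        self.blocks.append([])
        self.refcount.append(1)
        return len(self.blocks) - 1

    def new_seq(self):
        return []           # a sequence is just its block table

    def append(self, table, kv):
        # Allocate a fresh block when the last one is full.
        if not table or len(self.blocks[table[-1]]) == BLOCK:
            table.append(self._alloc())
        elif self.refcount[table[-1]] > 1:
            # Copy-on-write: never mutate a block another sequence shares.
            old = table[-1]
            self.refcount[old] -= 1
            new = self._alloc()
            self.blocks[new] = list(self.blocks[old])
            table[-1] = new
        self.blocks[table[-1]].append(kv)

    def fork(self, table):
        # Share every block with the parent instead of duplicating the cache.
        for b in table:
            self.refcount[b] += 1
        return list(table)

cache = PagedCache()
seq = cache.new_seq()
for t in range(7):  # one full block plus a partial one
    cache.append(seq, ("k%d" % t, "v%d" % t))

# Fork 3 speculative candidates: they reuse the parent's blocks, and only
# the partially filled last block is copied when a candidate extends it.
candidates = [cache.fork(seq) for _ in range(3)]
for i, c in enumerate(candidates):
    cache.append(c, ("k7-%d" % i, "v7-%d" % i))

print(len(cache.blocks))  # far fewer blocks than copying the cache per candidate
```

Here four sequences (parent plus three candidates) occupy five physical blocks in total, rather than each candidate carrying a full private copy of the parent's cache, which is the memory saving that lets speculation coexist with high throughput.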

Future Implications

IBM has integrated speculative decoding and paged attention into its Granite 20B code model. The IBM speculator has been open-sourced on Hugging Face, enabling other developers to adapt these techniques for their LLMs. IBM plans to implement these optimization techniques across all models on its watsonx platform, enhancing enterprise AI applications.

Image source: Shutterstock