PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices: Predictor Analysis

This paper investigates how the configuration of on-device hardware affects energy consumption for neural network inference with regular fine-tuning.

This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license. Authors: Minghao Yan, University of Wisconsin-Madison; Hongyi Wang, Carnegie Mellon University; Shivaram Venkataraman, [email protected].

Table of Links: Abstract & Introduction; Motivation; Opportunities; Architecture Overview; Problem Formulation: Two-Phase Tuning; Modeling Workload Interference; Experiments; Conclusion & References; A. Hardware Details; B. Experimental Results; C. Arithmetic Intensity; D.

We replay a 60-second request stream in which the latency SLO is initially set to 250ms for the first half, then increased to 700ms for the remainder. As shown in Figure 14, under the stringent latency condition the predictor deduces that it is impractical to schedule fine-tuning requests while adhering to the latency SLO, so no fine-tuning requests are scheduled.
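The predictor's decision can be sketched as a simple admission check: fine-tuning is scheduled only if the predicted inference latency under interference still meets the SLO. This is a minimal illustrative sketch; the function and parameter names are assumptions, not PolyThrottle's actual API, and the latency figures are made up for illustration.

```python
def can_schedule_finetune(inference_latency_ms: float,
                          interference_overhead_ms: float,
                          slo_ms: float) -> bool:
    """Admit a fine-tuning request only if the predicted inference
    latency, including interference from fine-tuning, meets the SLO.
    (Hypothetical sketch of the decision described in the text.)"""
    predicted_latency = inference_latency_ms + interference_overhead_ms
    return predicted_latency <= slo_ms


# Under the stringent 250ms SLO the predictor rejects fine-tuning;
# relaxing the SLO to 700ms leaves enough slack to admit it.
# (Latency values here are illustrative, not measured.)
print(can_schedule_finetune(200.0, 120.0, 250.0))  # False
print(can_schedule_finetune(200.0, 120.0, 700.0))  # True
```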
