Urmish Thakker

Orcid: 0000-0002-0515-9155

According to our database, Urmish Thakker authored at least 26 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Efficiently Adapting Pretrained Language Models To New Languages.
CoRR, 2023

Training Large Language Models Efficiently with Sparsity and Dataflow.
CoRR, 2023

2022
A Survey on Federated Learning for Resource-Constrained IoT Devices.
IEEE Internet Things J., 2022

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts.
CoRR, 2022

2021
Compressing RNNs to Kilobyte Budget for IoT Devices Using Kronecker Products.
ACM J. Emerg. Technol. Comput. Syst., 2021

Multitask Prompted Training Enables Zero-Shot Task Generalization.
CoRR, 2021

MLPerf Tiny Benchmark.
CoRR, 2021

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices.
CoRR, 2021

Doping: A technique for Extreme Compression of LSTM Models using Sparse Structured Additive Matrices.
Proceedings of Machine Learning and Systems 2021, 2021

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers.
Proceedings of Machine Learning and Systems 2021, 2021

2020
Benchmarking TinyML Systems: Challenges and Direction.
CoRR, 2020

Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-art.
CoRR, 2020

Compressing Language Models using Doped Kronecker Products.
CoRR, 2020

Understanding the Impact of Dynamic Channel Pruning on Conditionally Parameterized Convolutions.
Proceedings of the 2nd International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things (AIChallengeIoT@SenSys 2020), 2020

Pushing the Envelope of Dynamic Spatial Gating technologies.
Proceedings of the 2nd International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things (AIChallengeIoT@SenSys 2020), 2020

Rank and run-time aware compression of NLP Applications.
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, 2020

Ternary MobileNets via Per-Layer Hybrid Filter Banks.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

2019
A Static Analysis-based Cross-Architecture Performance Prediction Using Machine Learning.
CoRR, 2019

Compressing RNNs for IoT devices by 15-38x using Kronecker Products.
CoRR, 2019

Measuring scheduling efficiency of RNNs for NLP applications.
CoRR, 2019

Skipping RNN State Updates without Retraining the Original Model.
Proceedings of the 1st Workshop on Machine Learning on Edge in Sensor Systems, 2019

Pushing the limits of RNN Compression.
Proceedings of the Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing, 2019

Run-Time Efficient RNN Compression for Inference on Edge Devices.
Proceedings of the 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications, 2019
