Samuel S. Ogden

ORCID: 0000-0003-2851-4129

According to our database, Samuel S. Ogden authored at least 13 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Symptom Detection with Text Message Log Distributions for Holistic Depression and Anxiety Screening.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., March, 2024

2023
Layercake: Efficient Inference Serving with Cloud and Mobile Resources.
Proceedings of the 23rd IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, 2023

2022
Left on Read: Reply Latency for Anxiety & Depression Screening.
Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers, 2022

2021
PieSlicer: Dynamically Improving Response Time for Cloud-based CNN Inference.
Proceedings of ICPE '21: ACM/SPEC International Conference on Performance Engineering, 2021

Many Models at the Edge: Scaling Deep Inference via Model-Level Caching.
Proceedings of the IEEE International Conference on Autonomic Computing and Self-Organizing Systems, 2021

2020
Demystifying the Placement Policies of the NVIDIA GPU Thread Block Scheduler for Concurrent Kernels.
SIGMETRICS Perform. Evaluation Rev., 2020

MDINFERENCE: Balancing Inference Accuracy and Latency for Mobile Applications.
Proceedings of the 2020 IEEE International Conference on Cloud Engineering, 2020

2019
Characterizing the Deep Neural Networks Inference Performance of Mobile Applications.
CoRR, 2019

ModiPick: SLA-aware Accuracy Optimization For Mobile Deep Inference.
CoRR, 2019

CloudCoaster: Transient-aware Bursty Datacenter Workload Scheduling.
CoRR, 2019

Challenges and Opportunities of DNN Model Execution Caching.
Proceedings of DIDL '19: Workshop on Distributed Infrastructures for Deep Learning, 2019

EdgeServe: efficient deep learning model caching at the edge.
Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, 2019

2018
MODI: Mobile Deep Inference Made Efficient by Edge Computing.
Proceedings of the USENIX Workshop on Hot Topics in Edge Computing, 2018
