James Thewlis

Orcid: 0000-0001-8410-2570

According to our database, James Thewlis authored at least 14 papers between 2016 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2023
Language as the Medium: Multimodal Video Classification through text only.
CoRR, 2023

2022
VTC: Improving Video-Text Retrieval with User Comments.
Proceedings of the Computer Vision - ECCV 2022, 2022

2019
Deep Industrial Espionage.
CoRR, 2019

Unsupervised Learning of Landmarks by Descriptor Vector Exchange.
Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, 2019

Slim DensePose: Thrifty Learning From Sparse Annotations and Motion Cues.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019

2018
Objects from motion.
PhD thesis, 2018

Substitute Teacher Networks: Learning with Almost No Supervision.
CoRR, 2018

Modelling and unsupervised learning of symmetric deformable object categories.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Self-supervised Segmentation by Grouping Optical-Flow.
Proceedings of the Computer Vision - ECCV 2018 Workshops, 2018

Cross Pixel Optical-Flow Similarity for Self-supervised Learning.
Proceedings of the Computer Vision - ACCV 2018, 2018

2017
Unsupervised object learning from dense equivariant image labelling.
CoRR, 2017

Unsupervised learning of object frames by dense equivariant image labelling.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings.
Proceedings of the IEEE International Conference on Computer Vision, 2017

2016
Fully-trainable deep matching.
Proceedings of the British Machine Vision Conference 2016, 2016
