Taishi Nakamura

According to our database, Taishi Nakamura authored at least 12 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code.
CoRR, May 2025

Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models.
CoRR, March 2025

Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search.
CoRR, March 2025

Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Agent Skill Acquisition for Large Language Models via CycleQD.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025


2024
Why We Build Local Large Language Models: An Observational Analysis from 35 Japanese and Multilingual LLMs.
CoRR, 2024

Balancing Speed and Stability: The Trade-offs of FP8 vs. BF16 Training in LLMs.
CoRR, 2024

LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs.
CoRR, 2024

Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities.
CoRR, 2024

Building a Large Japanese Web Corpus for Large Language Models.
CoRR, 2024

Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order.
CoRR, 2024
