Can Jin

Ph.D. Student in Computer Science, Rutgers University

CBIM, Busch Campus | can.jin@rutgers.edu

I’m a Computer Science Ph.D. student at Rutgers University, New Brunswick, where I began in Fall 2024 under the guidance of Professor Dimitris N. Metaxas. I hold both a Bachelor’s and a Master’s degree in Mathematics from the University of Science and Technology of China. Prior to my Ph.D., I spent two years as a Machine Learning Engineer at Meituan Dianping Corporation. My research interests include LLM reasoning, generalization, and reliability; Efficient AI; and 3D, image, video, and multimodal generation.

I am excited to announce that I will be joining Adobe Research as a Research Scientist Intern in Summer 2025.

I’m also open to collaborating on related projects. Please contact me via email if you share similar interests.

Research

🧠 Large Language Models

Reasoning | Generalization | Reliability
Developing advanced techniques to improve LLM capabilities, including:

  • Training innovations: Supervised fine-tuning (SFT), reinforcement learning with human/AI feedback (RLHF/RLAIF), Direct Preference Optimization (DPO; sketched below)
  • Post-training refinement: Self-critique mechanisms, iterative self-refinement
  • Reliability: Robustness, Harmlessness, Helpfulness

Recent progress: NeurIPS 2024, submitted to ICML 2025
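
As one concrete illustration of the training techniques listed above, here is a minimal sketch of the standard Direct Preference Optimization (DPO) objective (Rafailov et al., 2023) in PyTorch. The function name, tensor layout, and the beta default are illustrative assumptions, not code from any particular paper:

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps: torch.Tensor,
                 policy_rejected_logps: torch.Tensor,
                 ref_chosen_logps: torch.Tensor,
                 ref_rejected_logps: torch.Tensor,
                 beta: float = 0.1) -> torch.Tensor:
        # Each input holds the summed log-probability of the chosen or rejected
        # response under the trainable policy or the frozen reference model,
        # one entry per preference pair in the batch.
        policy_logratio = policy_chosen_logps - policy_rejected_logps
        ref_logratio = ref_chosen_logps - ref_rejected_logps
        # Reward the policy for preferring the chosen response more strongly
        # than the reference model does.
        return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()

Because the reference model only supplies fixed log-probabilities, DPO sidesteps the separate reward model and on-policy sampling loop that RLHF requires.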

⚡ Efficient AI

Computational Efficiency | Effectiveness
Exploring methods to improve the efficiency of ML models, including:

  • Model compression via distillation/pruning (a minimal pruning sketch follows below)
  • Prompt engineering for task adaptation

Recent progress: AAAI 2025, ICLR 2025
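
As a small, generic example of the compression direction above, the sketch below applies unstructured L1 (magnitude) pruning to a toy PyTorch model with torch.nn.utils.prune. The two-layer model and the 50% sparsity level are hypothetical placeholders, not the setup used in the papers:

    import torch
    import torch.nn.utils.prune as prune

    # Hypothetical toy model; any nn.Linear or nn.Conv2d layer is handled the same way.
    model = torch.nn.Sequential(
        torch.nn.Linear(784, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 10),
    )

    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            # Zero out the 50% smallest-magnitude weights in this layer.
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # bake the sparsity mask into the weight tensor

    sparsity = (model[0].weight == 0).float().mean().item()
    print(f"First-layer sparsity: {sparsity:.0%}")

In practice, pruning at this level is usually followed by fine-tuning or prompt-based adaptation to recover accuracy on the target task.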


🎨 Multimodal Generation

3D | Video | Image | Multimodal generation
Exploring methods to generate 3D, video, image, and multimodal content using generative models. This is an ongoing research direction.

Academic Services

Teaching
  • Fall 2024: CS210: Data Management for Data Science
  • Spring 2025: CS534: Computer Vision
Peer Review
  • Reviewer: CVPR 2025, ICML 2024 Workshop, Alexandria Engineering Journal, Information Fusion, Pattern Recognition, Signal Processing

Selected Publications

  1. Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate
    In Advances in Neural Information Processing Systems, 2024
  2. LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation
    Can Jin, Ying Li, Mingyu Zhao, Shiyu Zhao, Zhenting Wang, Xiaoxiao He, Ligong Han, Tong Che, and Dimitris N. Metaxas
    In The Thirteenth International Conference on Learning Representations, 2025
  3. Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective
    In The 39th Annual AAAI Conference on Artificial Intelligence, 2025
  4. APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking
    In Companion Proceedings of the ACM Web Conference 2025, Sydney, NSW, Australia, 2025
  5. RankFlow: A Multi-Role Collaborative Reranking Workflow Utilizing Large Language Models
    Can Jin*, Hongwu Peng*, Anxiang Zhang, Nuo Chen, Jiahui Zhao, Xi Xie, Kuangzheng Li, Shuya Feng, Kai Zhong, Caiwen Ding, and Dimitris N. Metaxas
    In Companion Proceedings of the ACM Web Conference 2025, Sydney, NSW, Australia, 2025