Research
My research focuses on trustworthy NLP, AI safety, LLM interpretability, and representation analysis.
I am particularly interested in understanding and mitigating subtle biases in language and multimodal models, and in improving their reliability and generalization in real-world settings.
My recent work spans embedding-level bias mitigation, bias evaluation in vision-language models, and post-training methods for preserving generalization in vision-language-action systems.
Current Research
OMRON SINIC X Corporation, Japan
Full-time Research Intern (Jan 2026 - Present)
Working on preserving generalization performance during VLA post-training.
Ochanomizu University, Japan
M.S. Researcher in Computer Science (Apr 2025 - Present)
Studying embedding-level bias mitigation by controlling output dimensions associated with token probabilities.
Previous Research
Ochanomizu University, Japan
B.S. Researcher in Computer Science (Apr 2024 - Mar 2025)
Developed a DFT-based approach applied to log-likelihood sequences to detect subtle gender bias beyond probability-gap analyses.
Ochanomizu University, Japan
Pre-Graduation Undergraduate Research Project (Apr 2023 - Mar 2024)
Analyzed gender-gap factors across 47 prefectures in Japan using large-scale data collection and visualization.
Visiting / Industry Research Experience
DFKI Saarbruecken, Germany
Visiting Researcher (Aug 2025 - Oct 2025)
Conducted collaborative research on trustworthy and interpretable AI.
vislang Lab, Rice University, U.S.
Visiting Student (Feb 2025 - Aug 2025)
Built a human-centered benchmark and evaluated 39 VLMs for reasoning, bias, and identity verification.
Artificial Intelligence Research Center at Fujitsu Laboratories Ltd., Japan
Short-term Research Intern (Aug 2024)
Developed prompt-based personality conditioning and evaluation metrics for bias detection in interview-assistant settings.
