Photo of Qiwei Ye

Qiwei Ye (叶启威)

Beijing Academy of Artificial Intelligence (BAAI)

Qiwei Ye (叶启威) is the Tech Lead of the Health Computing Research Center at the Beijing Academy of Artificial Intelligence (BAAI). He was previously a Senior Researcher at Microsoft Research Asia, where he co-led the development of LightGBM and Suphx. His research interests include generative AI, reinforcement learning, and AI for Science. He is exploring how general-purpose AI can push the boundaries of life sciences.

Citations: 24,739 · h-index: 14 · i10-index: 18

News

  • 2026
    Released FloydNet, a new learning paradigm for global relational reasoning. GitHub
  • 2026
    Floyd-ARC achieves state-of-the-art performance on ARC-AGI among models trained on ARC-style data, applying FloydNet to abstract reasoning. GitHub
  • 2025
    Released OpenComplex2 with multi-modal foundation models for biomolecular complex structure prediction. Project Page GitHub
  • 2025
    IneqSearch accepted at NeurIPS 2025 — hybrid reasoning for Olympiad inequality proofs.
  • 2025
    Our work on RNA ligand discovery "UltraSelex" was published in Nature Chemical Biology.
  • 2025
    Sable, a versatile pre-training paradigm for protein structure understanding, published in Briefings in Bioinformatics.
  • 2024
    "Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness" received the ICLR 2024 Outstanding Paper Honorable Mention.

Selected Publications

For a full list, please visit Google Scholar.

2025

Single-step Discovery of High-Affinity RNA Ligands by UltraSelex

Yaqing Zhang, Yuan Jiang, David Kuster, Qiwei Ye, Wenhao Huang, Simon Fürbacher, Jingye Zhang, Pia Doll, Wenjun Lin, Siwei Dong, Hui Wang, Zhipeng Tang, David Ibberson, Klemens Wild, Irmgard Sinning, Anthony A. Hyman, Andres Jäschke

Nature Chemical Biology 2025


Research Interests

🔬 Scientific AI / AI for Science

Developing AI methods to accelerate scientific discovery across domains including molecular simulation, protein structure prediction, and materials design.

🧠 Foundation Models

Research on large-scale pre-trained models, including extending context length, improving efficiency, and adapting foundation models for scientific applications.

⚙️ Machine Learning Systems

Building efficient and scalable machine learning systems, including gradient boosting frameworks (LightGBM) and communication-efficient distributed algorithms.

Reinforcement Learning · Graph Neural Networks · Large Language Models · Gradient Boosting · Computer Vision · Protein Structure Prediction · Molecular Property Prediction · Distributed Computing

Experience

Tech Lead of Health Computing Research Center / Vice President

Beijing Academy of Artificial Intelligence (BAAI)

Dec 2021 – Present

Co-Director

Tsinghua AIR–BAAI Joint Research Center for Health

2021 – 2025

Senior Researcher

Microsoft Research Asia

Mar 2014 – Dec 2021

M.E. in Computer Software Engineering

Peking University

2012 – 2015

(More details to be updated)