Dongkuan (DK) Xu / 胥栋宽

Hello! I am an Assistant Professor at NC State CS, leading the NCSU Generative Intelligent Computing Lab and working on machine learning, natural language processing, and computer vision. I have been honored with the Microsoft Accelerating Foundation Models Research Award 2024, the NCSU Carla Savage Award 2024, and the Best Paper Award of ICCCN 2023. I received my Ph.D. from Penn State, and my M.S. and B.E. from the University of Chinese Academy of Sciences and Renmin University of China, respectively.

I have been collaborating with Microsoft Research on trustworthiness evaluation and efficient hyperparameter-architecture search for Foundation Models, and with Google DeepMind on enabling scalable and adaptive learning for Vision-Language Models. I was an intern research scientist at Moffett AI, investigating low-resource model compression. I also spent some wonderful time at NEC Labs America working on contrastive learning and multi-task learning.

Other than my work, I am a big fan of American football. I love the Nittany Lions, New York Giants, and Dallas Cowboys. I also enjoy working out and playing soccer.

Email  /  CV (Oct 2023)  /  Twitter  /  Google Scholar  /  LinkedIn   

I'm looking for multiple Ph.D. students / interns, particularly from underrepresented groups, to work on Generative AI. Feel free to send me your CV. Once we have committed to each other, trust me, I will do my best to help you!



Research

My research is fundamentally grounded in exploring and advancing Artificial General Intelligence, with particular emphasis on studying the autonomy of intelligent agents, reasoning reliability, and resource efficiency in Generative AI Systems. My research group provides full-stack solutions, ranging from theoretical optimization methods and data-centric strategies to the development of efficient deep learning techniques and the co-design of algorithms and hardware. My long-term research goal is to liberate AI productivity and democratize its application to serve a broader range of populations and real-world needs, equally, sustainably, and responsibly.

  • Task Planning / External Tool Use of Large Language/Vision Models

  • Reliable & Scalable Reasoning of Large Language/Vision Models

  • Data/Tool Generation, Retrieval & Optimization

  • Algorithm-Hardware Co-design for Generative AI Acceleration

  • Applications: Education, Agriculture, Networking, Scientific Discovery, Transportation, Healthcare

Representative Work: Gentopia.AI [link]

News

  • 04/2024: Will mentor Wake STEM Early College High School students in the GCSP-REU summer program
  • 04/2024: A paper on LLM-powered code comprehension was accepted to IJCAI'24
  • 04/2024: A paper on diffusion model-augmented wireless networks was accepted to IFIP/IEEE Networking'24
  • 04/2024: Invited to serve on the NSF Core Program panel
  • 03/2024: The 2nd Workshop on Resource-Efficient Learning for Knowledge Discovery was accepted to KDD'24
  • 03/2024: Gave a talk at NCSU Forest Carbon Solutions Initiative: Foundation Models for Geospatial Analytics
  • 03/2024: Gave a talk at Fo Guang Shan Buddhist Temple [link]: Impact of AI on Our Lives and Beyond [News, in Chinese]
  • 03/2024: We released an investigation of large model alignment methods [link]
  • 03/2024: A paper on reliable sparse training was accepted to Transactions on Machine Learning Research
  • 03/2024: Received the Carla Savage Award [News]
  • 02/2024: Workshop on Deep Learning-Hardware Co-Design for Generative AI Acceleration was accepted to DAC'24
  • 02/2024: Our DDCV workshop@CVPR'24 is looking for submissions. We will offer 3 free registrations for students!
  • 02/2024: Received a Gift Fund from Microsoft
  • 01/2024: Received the Microsoft Accelerating Foundation Models Research Award [News]
  • 01/2024: Our proposal of The 1st Workshop on Dataset Distillation for Computer Vision was accepted to CVPR'24
  • 12/2023: Gave a talk at STARS AI Scholars Program: How LLMs Work and Cutting-Edge Research on Generative AI
  • 12/2023: A paper on large language model education was accepted to AAAI/EAAI'24
  • 12/2023: A paper on neural architecture search for Spiking Transformers was accepted to ICASSP'24
  • 11/2023: Our work, AGENT [link], was accepted to CPAL'24 as an Oral Paper
  • 10/2023: Gentopia was accepted to EMNLP'23 (System Demo)
  • 10/2023: Two papers (Robust LLM Pruning + Controllable Randomized Pruning) were accepted to EMNLP'23
  • 10/2023: A paper on code generation in domain-changing environments was accepted to EMNLP'23 Pan-DL Workshop
  • 09/2023: A paper on personalized federated learning was accepted to NeurIPS'23
  • 09/2023: My mentored undergraduate, Zihan, received the COE REU Award ($3,000). Congrats, Zihan!
  • 09/2023: Gave a talk at Microsoft Research Asia: Sculpting the Future of Collective Growth in Collaborative AI
  • 09/2023: Invited to serve as an Area Chair for LREC-COLING'24
  • 08/2023: Invited to serve on the NSF CAREER panel
  • 08/2023: Gave a talk at CoreNet Global: ChatGPT in Corporate Real Estate - Unlocking the Potential [link]
  • 08/2023: Released a paper providing more details about Gentopia.AI
  • 08/2023: Launched Gentopia.AI. Check out our teams from NCSU, GMU, CMU, UMich, etc.
  • 07/2023: Honored to receive the Best Paper Award at ICCCN'23
  • 07/2023: One paper was accepted to ICCV'23
  • 07/2023: One paper was accepted to CDC'23
  • 07/2023: Our ChatGPT Education Workshops [link] are available (co-organized with Tiffany)
  • 06/2023: Feel free to check out our ALM work, ReWOO (GitHub) (Chinese write-ups: 1, 2, 3)
  • 06/2023: Invited to serve as a Senior PC member of AAAI'24.
  • 05/2023: One paper was accepted to KDD'23
  • 05/2023: One paper was accepted to ACL'23
  • 04/2023: Our work, E-App, was accepted to ICCCN'23. See u in Honolulu
  • 04/2023: One paper was accepted to ICAIBD'23. Congrats to our undergrad, Zihan
  • 03/2023: Our work, Acc.DD (paper), was selected as a Highlight (2.5%) of CVPR'23
  • 03/2023: Will co-chair RelKD'23: Resource-Efficient Learning for Knowledge Discovery Workshop @KDD'23.
  • 02/2023: Two papers on accelerating data/model learning were accepted to CVPR'23. Stay tuned ;-)
  • 02/2023: Two papers on dynamic training were accepted to DAC'23.
  • 01/2023: Our work, Calibrated Rigged Lottery, was accepted to ICLR'23.
  • 01/2023: Our work, Efficient Informed Proposals for Discrete Distributions, was accepted to AISTATS'23.
  • 01/2023: Invited to give a talk at the Rutgers Efficient AI (REFAI) Seminar on Feb 16, 2023.
  • 12/2022: Invited to serve as a journal reviewer for TPAMI and Communications of the ACM.
  • 11/2022: Invited to serve as the PC Chair for MLNLP 2022.
  • 11/2022: Two papers were accepted to AAAI'23. See you in DC in February
  • 10/2022: Invited to serve as a TPC member for ISQED'23.
  • 09/2022: Will chair The First Workshop on Deep Learning-Hardware Co-Design for AI Acceleration at AAAI'23
  • 09/2022: Our work, AutoDistil (paper), was accepted to NeurIPS'22.
  • 09/2022: Invited to give a talk at the CIS Department of the University of Macau.
  • 07/2022: Will chair a Research session (Deep Learning: New Architectures and Models) and an Applied Data Science session (Scalable, Distributed Systems & Trustable AI) of KDD'22. Super welcome!
  • 07/2022: Will be teaching CSC 791 Advanced Topics in Efficient Deep Learning at NC State this fall. Feel free to attend!
  • 07/2022: One paper, S4: a High-sparsity, High-performance AI Accelerator (paper), was accepted to SNN'22
  • 07/2022: Invited to serve as a (Senior) PC member for AAAI'23 and ICLR'23.
  • 06/2022: Invited to serve as a Column Editor for ACM SIGAI Newsletter.
  • 06/2022: Invited to give a talk at Pinterest (Pinterest Machine Learning Lunch) on August 18, 2022.
  • 06/2022: Invited to give a talk at the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (中科院深圳先进技术研究院) on June 27, 2022.
  • 06/2022: Invited to serve as a PC member for WSDM'23, LoG'22, and AACL-IJCNLP'22.
  • 05/2022: Invited to serve as a PC member for COLING'22 and a reviewer for the journal TNNLS.
  • 05/2022: Invited to give a talk at Amazon Search (A9) on May 20, 2022. (In Chinese, "5.20" sounds like "I love you" >.<)
  • 04/2022: Invited to give a talk at 将门创投 on May 24, 2022. Welcome!
  • 04/2022: Invited to give a talk at Vanderbilt University's Machine Learning Lunch Seminar on May 09, 2022.
  • 04/2022: Invited to give a talk at Renmin University of China in May 2022.
  • 04/2022: Invited to give a talk at Shenzhen University in May 2022.
  • 04/2022: A new US patent application: Bank-balanced-sparse Activation for Deep NN Models.
  • 04/2022: Invited to give a talk at University of Connecticut on April 27, 2022.
  • 04/2022: Invited to give a talk at UCAS (University of Chinese Academy of Sciences) on April 25, 2022.
  • 04/2022: Invited to give a talk at New York Institute of Technology's Research Seminar Series.
  • 04/2022: Organizing MLNLP Community's 6th Academic Seminar.
  • 04/2022: Third place winner (Eng.) in the 37th annual PSU Graduate Exhibition (News).
  • 03/2022: Invited to serve as a PC member for NeurIPS'22.
  • 02/2022: One paper, Sparse Progressive Distillation (code, paper), was accepted to ACL'22
  • 02/2022: Invited to serve as a PC member for CIKM'22.
  • 12/2021: Thanks to MLNLP (Machine Learning & Natural Language Processing community) for reporting our work SparseBERT.
  • 12/2021: Code released for SparseBERT (NAACL'21) (code, paper)! Feel free to use it.
  • 12/2021: Invited to serve as a PC member for ICML'22.
  • 12/2021: Invited to give a talk "Parameter Efficiency: Democratizing AI at Scale" at Brandeis University (slides).
  • 11/2021: Invited to serve as a PC member for KDD'22 (both Research and Applied Science Tracks).
  • 10/2021: Our ML&NLP academic community is officially launched (>500k followers).
  • 10/2021: Received IST Fall 2021 Travel Award.
  • 09/2021: Our work, InfoGCL, was accepted to NeurIPS'21
  • 08/2021: Invited to serve as a PC member for AAAI'22, ACL Rolling Review'22, and SDM'22.
  • 07/2021: Received complimentary ACM student membership. Thank you, ACM.
  • 06/2021: Invited to serve as a PC member for ICLR'22, WSDM'22, IJCAI-ECAI'22.
  • 05/2021: Received NAACL 2021 Scholarship.
  • 05/2021: One paper was accepted to ACL'21!
  • 05/2021: Excited to join Microsoft Research as a research intern working on neural architecture search.
  • 04/2021: Gave a talk titled "BERT Pruning: Structural vs. Sparse" at Brandeis University (slides).
  • 04/2021: Gave a talk titled "BERT, Compression and Applications" at Xpeng Motors (小鹏汽车) (slides).
  • 03/2021: My application to SDM'21 Doctoral Forum has been accepted.
  • 03/2021: Received a SIAM Student Travel Award to attend SDM'21.
  • 03/2021: Our work, SparseBERT, was accepted to NAACL'21. Along with three U.S. patent applications
  • 03/2021: Invited to serve as a PC member for NeurIPS'21, EMNLP'21, CIKM'21.
  • 03/2021: Received IST Spring 2021 Travel Award.
  • 12/2020: One paper was accepted to SDM'21.
  • 12/2020: Invited to serve as a Senior PC member for IJCAI'21.
  • 12/2020: Four papers were accepted to AAAI'21.
  • 12/2020: Invited to serve as a PC member for ICML'21, KDD'21, NAACL'21, IJCNN'21.
  • 09/2020: Our work, PGExplainer, was accepted to NeurIPS'20.
  • 08/2020: Invited to serve as a PC member for AAAI'21, EACL'21, a journal reviewer for Information Fusion.
  • 08/2020: Received KDD 2020 Student Registration Award.
  • 06/2020: Invited to serve as a reviewer for NeurIPS'20.
  • 05/2020: Happy to join Moffett AI as an intern research scientist.
  • 04/2020: One paper was accepted to SIGIR'20.
  • 03/2020: Invited to serve as a PC member for EMNLP'20, KDD'20, CIKM'20, AACL-IJCNLP'20.
  • 02/2020: Received IST Spring 2020 Travel Award.
  • 12/2019: Invited to serve as a PC member for IJCAI'20, IJCNN'20.
  • 12/2019: Received AAAI 2020 Student Scholarship.
  • 11/2019: Two papers were accepted to AAAI'20. See you in the Big Apple.
  • 08/2019: Invited to serve as a PC member for AAAI'20.
  • 08/2019: One paper was accepted to ICDM'19.
  • 05/2019: One paper was accepted to IJCAI'19.
  • 05/2019: Happy to join NEC Labs America as a research intern.
  • 03/2019: Received IST Spring 2019 Travel Award.
  • 01/2019: Grateful to receive The Award for Excellence in Teaching, IST (News).
  • 01/2019: Invited to serve as a PC member for IJCNN'19.
  • 12/2018: One paper was accepted to SDM'19. See you in Calgary.
  • 05/2018: Started working at NEC Labs America as a research intern.
  • 11/2017: Invited to serve as a PC member for IJCNN'18.

Education Outreach / Community Engagement

Workshop on DL-Hardware Co-Design for Generative AI Acceleration to be held at DAC 2024 (Lead Chair)

Workshop on Dataset Distillation for Computer Vision to be held at CVPR 2024 (Co-Chair) [link]

  • Topics: distillation methods, theory study, benchmarks, hardware approaches, etc.

  • Call for Papers: submission deadline is March 29, 2024

Socially-Relevant Computing and Analytics REU Site 2024 @ NCSU (Undergraduate Mentor) [link]

  • Goal: Provide research experience to undergrad students from populations that are under-represented in computing

  • Applications Now Open: deadline is Feb 15, 2024

NC State ChatGPT Workshops: Integrating ChatGPT into K-12 Classrooms 2023 (Co-Chair) [link]

  • Slides & Videos: (i) Introduction [link], (ii) Risk Understanding [link], (iii) Classroom Integration [link], (iv) AI Tutoring [link]

Workshop on Resource-Efficient Learning for Knowledge Discovery at KDD 2023 (Co-Chair) [link]

Workshop on Deep Learning-Hardware Co-Design for AI Acceleration at AAAI 2023 (Lead Chair) [link]

Publications

2024

Purpose Enhanced Reasoning through Iterative Prompting: Uncover Latent Robustness of ChatGPT on Code Comprehension
Y. Wang, Q. Zhao, D. Xu, X. Liu
[IJCAI 2024] International Joint Conference on Artificial Intelligence
PDF (to appear) / Project (to appear)

We present a modular prompting framework to solve robustness issues in code comprehension for LLMs by leveraging main-purpose reasoning guidance and iterative reasoning enhancement.

RM-Gen: Conditional Diffusion Model-Based Radio Map Generation for Wireless Networks
X. Luo, Z. Li, Z. Peng, D. Xu, Y. Liu
[IFIP/IEEE Networking 2024] International Federation for Information Processing Networking Conference
PDF (to appear) / Project (to appear)

We explore cost-effective radio map generation using generative diffusion probabilistic models, applicable to both indoor and outdoor wireless network scenarios, particularly valuable in complex scenarios where obtaining comprehensive measurements is challenging.

On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
X. Wang, S. Duan, X. Yi, J. Yao, S. Zhou, Z. Wei, P. Zhang, D. Xu, M. Sun, X. Xie
PDF (available) / Project (to appear)

We comprehensively investigate value alignment approaches. We first unpack the historical context of alignment tracing back to the 1920s (where it comes from), then delve into the mathematical essence of alignment (what it is), shedding light on the inherent challenges.

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
B. Lei, D. Xu, R. Zhang, B. Mallick
[TMLR] Transactions on Machine Learning Research
PDF (available) / Code (to appear)

We investigate for the first time the reliability of sparse training from an out-of-distribution (OOD) perspective, which jointly considers OOD reliability and efficiency and has important implications for real-world deep neural network applications.

ToolNet: Connecting Large Language Models with Massive Tools via Tool Graph
X. Liu, Z. Peng, X. Yi, X. Xie, L. Xiang, Y. Liu, D. Xu
PDF (available) / Code (to appear)

We introduce ToolNet, a plug-and-play method to organize massive tools into a directed graph, facilitating their use by large language models (LLMs) via in-context learning.
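
As a hedged illustration of the idea (the tool names, graph, and selection rule below are hypothetical, not the paper's implementation), a directed tool graph lets each prompt expose only the successors of the last tool used, rather than the full tool list:

```python
# Minimal sketch: organize tools as a directed graph and restrict the LLM's
# next-tool choice to the successors of the tool it just called, shrinking
# the in-context tool list. All names here are made up for illustration.

tool_graph = {                 # edges: tool -> tools that plausibly follow it
    "search":     ["read_page", "calculator"],
    "read_page":  ["summarize", "calculator"],
    "calculator": ["summarize"],
    "summarize":  [],
}

def candidate_tools(last_tool: str | None) -> list[str]:
    """Tools to expose in the next prompt: every tool at the start,
    otherwise only the successors of the last tool used."""
    if last_tool is None:
        return list(tool_graph)
    return tool_graph[last_tool]

print(candidate_tools(None))      # ['search', 'read_page', 'calculator', 'summarize']
print(candidate_tools("search"))  # ['read_page', 'calculator']
```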

Towards Inductive and Efficient Explanations for Graph Neural Networks
D. Luo, T. Zhao, W. Cheng, D. Xu, F. Han, W. Yu, X. Liu, H. Chen, X. Zhang
[TPAMI] IEEE Transactions on Pattern Analysis and Machine Intelligence
Impact Factor: 23.6 (as of Feb 2024)
PDF

We present PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs). PGExplainer adopts a deep neural network to parameterize the generation process of explanations and provide a global understanding of any GNN models on arbitrary machine learning tasks.
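
For intuition, here is a minimal sketch (assumed shapes and sizes, not the released PGExplainer code) of the parameterization: a shared MLP scores each edge from its endpoints' node embeddings, yielding a probabilistic edge mask that is trained once and can then explain unseen graphs inductively:

```python
import torch
import torch.nn as nn

class EdgeMaskNet(nn.Module):
    """Shared MLP that maps a pair of node embeddings to an edge keep-probability."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                              # edge_index: (2, num_edges)
        pair = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        return torch.sigmoid(self.mlp(pair)).squeeze(-1)   # one probability per edge

node_emb = torch.randn(5, 16)                       # embeddings from a trained GNN
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
mask = EdgeMaskNet(16)(node_emb, edge_index)        # soft mask over the 4 edges
```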

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
B. Lei, D. Xu, R. Zhang, S. He, B. K. Mallick
[CPAL 2024] The 2024 Conference on Parsimony and Learning
Oral Paper
PDF / Code

We propose an adaptive gradient correction method to accelerate and stabilize sparse training. Our method reduces the number of training epochs by up to 52.1% compared to leading sparse training methods, and is compatible with both unstructured and structured sparse training pipelines.
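
The paper's exact correction term differs; as a generic, hedged sketch of the masked-update pattern, one can damp gradient noise on the surviving weights with a running average (the smoothing below is an assumption for illustration):

```python
import torch

def masked_step(w, grad, mask, state, lr=0.1, beta=0.9):
    """One sparse-training step: smooth the raw gradient, update masked-in weights only."""
    state["avg"] = beta * state["avg"] + (1 - beta) * grad  # smoothed (corrected) gradient
    w -= lr * state["avg"] * mask                           # pruned weights stay at zero
    return w

w = torch.randn(10)
mask = (torch.rand(10) < 0.2).float()    # binary mask, ~80% sparsity
state = {"avg": torch.zeros(10)}
w = masked_step(w, torch.randn(10), mask, state)
```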

AutoST: Training-free Neural Architecture Search for Spiking Transformers
Z. Wang, Q. Zhao, J. Cui, X. Liu, D. Xu
[ICASSP 2024] The 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing
PDF

We introduce AutoST, a training-free NAS method for Spiking Transformers, to rapidly identify high-performance Spiking Transformer architectures.

Students’ Perceptions and Preferences of Generative Artificial Intelligence Feedback for Programming
Z. Zhang*, Z. Dong* (Undergrad at NC State), Y. Shi, N. Matsuda, T. Price, D. Xu
[AAAI/EAAI 2024] The 14th Symposium on Educational Advances in Artificial Intelligence
PDF

This study makes contributions to the field of computer science education, and explores the feasibility of utilizing large language models (LLMs) for automating feedback for Java programming assignments in an introductory computer science (CS1) class.

2023

ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models
B. Xu, Z. Peng, B. Lei, S. Mukherjee, Y. Liu, D. Xu
PDF / Live Demo / Code / Twitter / Auto-GPT Reading / Marktechpost Media / Chinese Write-up 1 / Chinese Write-up 2

We present a modular ALM framework to solve multi-step reasoning by decoupling reasoning from tool feedback and observations. Theoretical decomposition of prompt tokens establishes that our method substantially reduces prompting redundancy in prevailing Thought-Action-Observation ALM systems.
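
A minimal sketch of the decoupled plan-work-solve pattern the paper describes (llm() and both tools are stubs, not the released code); because the planner emits the whole plan up front using #E evidence variables, tool observations never re-enter an LLM prompt loop:

```python
def llm(prompt: str) -> str:                 # stand-in for one real LLM call
    return "#E1 = search[capital of France]\n#E2 = population[#E1]"

def search(q): return "Paris"
def population(q): return "2.1 million"
TOOLS = {"search": search, "population": population}

def rewoo_style(question: str) -> dict:
    plan = llm(f"Plan tool calls for: {question}")   # single call, no observations yet
    evidence = {}
    for line in plan.splitlines():                   # workers execute the fixed plan
        var, rhs = [s.strip() for s in line.split("=", 1)]
        tool, arg = rhs.rstrip("]").split("[", 1)
        for k, v in evidence.items():                # substitute earlier evidence
            arg = arg.replace(k, v)
        evidence[var] = TOOLS[tool](arg)
    return evidence                                  # a final solver call would combine these

print(rewoo_style("Population of the capital of France?"))
# {'#E1': 'Paris', '#E2': '2.1 million'}
```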

Gentopia.AI: A Collaborative Platform for Tool-Augmented LLMs
B. Xu, X. Liu, H. Shen, Z. Han, Y. Li, M. Yue, Z. Peng, Y. Liu, Z. Yao, D. Xu
[EMNLP 2023 (System Demo Track)] The 2023 Conference on Empirical Methods in Natural Language Processing
PDF / Web / Twitter

We present an augmented language model platform that enables flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm.

Accelerating Dataset Distillation via Model Augmentation
L. Zhang*, J. Zhang*, B. Lei, S. Mukherjee, X. Pan, B. Zhao, C. Ding, Y. Li, D. Xu
[CVPR 2023] The IEEE/CVF Conference on Computer Vision and Pattern Recognition
Highlight Paper (2.5%)
PDF / Code

We propose two model augmentation techniques, i.e., using early-stage models and weight perturbation, to learn an informative synthetic set with significantly reduced training cost. Extensive experiments demonstrate that our method achieves up to 20× speedup.
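
A hedged sketch of the weight-perturbation half of the idea (the noise scale and pool size are assumptions, not the paper's recipe): clone one cheaply trained early-stage model and perturb its weights to obtain a diverse pool of models for supervising distillation, instead of training each from scratch:

```python
import copy
import torch
import torch.nn as nn

def perturb(model: nn.Module, scale: float = 0.01) -> nn.Module:
    """Return a copy of `model` with small Gaussian noise added to every weight."""
    new_model = copy.deepcopy(model)
    with torch.no_grad():
        for p in new_model.parameters():
            p.add_(scale * torch.randn_like(p))
    return new_model

early_stage = nn.Linear(32, 10)                        # stands in for a briefly trained model
model_pool = [perturb(early_stage) for _ in range(4)]  # augmented models, nearly free
```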

You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model
S. Tang, Y. Wang, Z. Kong, T. Zhang, Y. Li, C. Ding, Y. Wang, Y. Liang, D. Xu
[CVPR 2023] The IEEE/CVF Conference on Computer Vision and Pattern Recognition
PDF / Code

We propose a novel early exiting strategy based on cascading input similarity with valid assumptions on saturation states in visual-language models, a pioneering exploration of extending early exiting selection to encoders and decoders of sequence-to-sequence architectures.
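
As a hedged sketch of the saturation intuition (the threshold and similarity test here are assumptions, not the paper's criterion), a layer stack can stop early once consecutive hidden states barely change:

```python
import torch

def forward_with_early_exit(layers, h, tau: float = 0.99):
    """Run layers until the representation saturates (cosine similarity > tau)."""
    for i, layer in enumerate(layers):
        h_next = layer(h)
        sim = torch.cosine_similarity(h.flatten(), h_next.flatten(), dim=0)
        h = h_next
        if sim > tau:                       # saturated: skip the remaining layers
            return h, i + 1
    return h, len(layers)

layers = [torch.nn.Linear(16, 16) for _ in range(12)]
out, used = forward_with_early_exit(layers, torch.randn(16))
print(f"exited after {used} of {len(layers)} layers")
```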

Rethinking Data Distillation: Do Not Overlook Calibration
D. Zhu, B. Lei, J. Zhang, Y. Fang, Y. Xie, R. Zhang, D. Xu
[ICCV 2023] International Conference on Computer Vision
PDF

We show that distilled data lead to networks that are not calibratable, due to the loss of information that is semantically meaningful but unrelated to classification tasks. We propose Masked Temperature Scaling and Masked Distillation Training to mitigate these limitations while maintaining efficiency.
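
For reference, plain temperature scaling (the baseline that calibration work builds on; the paper's Masked Temperature Scaling additionally masks logits, which is omitted here) fits a single scalar T on held-out logits so that softmax(logits / T) is better calibrated:

```python
import torch

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fit one temperature by minimizing NLL on a validation set."""
    log_t = torch.zeros(1, requires_grad=True)          # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return float(log_t.exp())

val_logits = torch.randn(256, 10) * 5                   # toy over-confident logits
val_labels = torch.randint(0, 10, (256,))
T = fit_temperature(val_logits, val_labels)             # divide test logits by T before softmax
```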

Towards Personalized Federated Learning via Heterogeneous Model Reassembly
J. Wang, X. Yang, S. Cui, L. Che, L. Lyu, D. Xu, F. Ma
[NeurIPS 2023] Thirty-seventh Conference on Neural Information Processing Systems
PDF

This paper focuses on addressing the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. We propose pFedHR, focusing on solving the problem of heterogeneous model cooperation.

Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models
J. Li, Q. Lei, W. Cheng, D. Xu
[EMNLP 2023] The 2023 Conference on Empirical Methods in Natural Language Processing
PDF

We aim to answer: (i) What is the core to defend against adversarial attacks for sparse language models? (ii) How can we efficiently prevent the loss of pre-trained knowledge in pruning to preserve or even enhance robustness?

Breaking through Deterministic Barriers: Randomized Pruning Mask Generation and Selection
J. Li, W. Gao, Q. Lei, D. Xu
[EMNLP 2023 (Findings)] The 2023 Conference on Empirical Methods in Natural Language Processing
PDF

This paper introduces controllable randomness by generating binary masks in a specific random fashion. We aim to answer: (i) Which is better for pruning? a deterministic way or a randomized way? (ii) Can we design a consistently effective randomized pruning method?

Co-evolving Data-driven and NLU-driven Synthesizers for Generating Code in Domain Growth and Data Scarcity
J. Gu, Z. Nan, Z. Peng, X. Shen, D. Xu
[EMNLP 2023 Workshop] The 2023 Pattern-based Approaches to NLP in the Age of Deep Learning (Pan-DL)
PDF

We propose a circular training framework, Colead, which co-evolves both the data-driven synthesizer and the NLU-driven synthesizer to achieve high-quality code generation in the presence of data scarcity and domain growth.

Toward Efficient Traffic Signal Control: Smaller Network Can Do More
S. Li, H. Mei, J. Li, H. Wei, D. Xu
[CDC 2023] The 62nd IEEE Conference on Decision and Control
PDF (to appear)

We introduce EfficientLight, an RL-based traffic signal control method that balances model size and performance. In multi-intersection scenarios, our method outperforms all baseline methods with the fewest parameters and the smallest computational cost among RL-based methods.

E-App: Adaptive mmWave Access Point Planning with Environmental Awareness in Wireless LANs
Y. Liu, M. Chen, D. Xu, Z. Yang, S. Zhao
[ICCCN 2023] The 32nd International Conference on Computer Communications and Networks
Best Paper Award
PDF

We develop an adaptive access point (AP) planning approach that can accurately sense the environment dynamics, reconstruct the obstacle map, and then predict the placements of mmWave APs adaptively.

Labels Are Not Necessary: Assessing Peer-Review Helpfulness Using Domain Adaptation Based on Self-Training
C. Liu, D. Doshi, M. Bhargava, R. Shang, J. Cui, D. Xu, E. Gehringer
[BEA 2023] The 18th Workshop on Innovative Use of NLP for Building Educational Applications
PDF

This study highlights the pedagogical significance of predicting useful comments in mutual assessment to promote student learning and reduces the need to collect labeled data via domain adaptation.

Towards Reliable Rare Category Analysis on Graphs via Individual Calibration
L. Wu, B. Lei, D. Xu, D. Zhou
[KDD 2023] The 29th SIGKDD Conference on Knowledge Discovery and Data Mining

How can we quantify the uncertainty in the learning process and enable reliable rare category analysis? We jointly learn the characterizations of rare categories and calibrate the confidence.

A Survey for Efficient Open Domain Question Answering
Q. Zhang, S. Chen, D. Xu, Q. Cao, X. Chen, T. Cohn, M. Fang
[ACL 2023] The 61st Annual Meeting of the Association for Computational Linguistics
PDF

We walk through the ODQA models and summarize the core techniques for efficiency. Quantitative analyses of memory cost, processing speed, and accuracy, together with an overall comparison, are given.

Calibrating the Rigged Lottery: Making All Tickets Reliable
B. Lei, R. Zhang, D. Xu, B. K. Mallick
[ICLR 2023] The 11th International Conference on Learning Representations
PDF

We for the first time identify and study the reliability problem of sparse training and find that sparse training exacerbates the over-confidence problem of DNNs. We then develop a new sparse training method, CigL, to produce more reliable sparse models.

Exploring the Augmented Large Language Model with Mathematical tools in Personalized and Efficient Education
Zihan Dong (Undergrad at NC State), D. Xu
[ICAIBD 2023] The 6th International Conference on Artificial Intelligence and Big Data

This study explores how ChatGPT personalizes the learning experience, how it can be augmented with math and physical performance, and how educators can ensure that the LLM algorithm is unbiased.

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
S. Huang, B. Lei, D. Xu, H. Peng, Y. Sun, M. Xie, C. Ding
[DAC 2023] The 60th Design Automation Conference
PDF

To assist explainable sparse training, we propose important-weights exploitation and weights-coverage exploration to characterize sparse training. Our method does not need to train dense models, achieving up to a 95% sparsity ratio and even higher accuracy than dense training, with the same number of iterations.

Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration
S. Huang, H. Fang, K. Mahmood, B. Lei, N. Xu, B. Lei, Y. Sun, D. Xu, W. Wen, C. Ding
[DAC 2023] The 60th Design Automation Conference

We propose an energy-efficient spiking neural network training workflow, and design a new drop-and-grow strategy with a decreasing number of non-zero weights while dynamically updating the sparse mask. We demonstrate extremely high-sparsity (i.e., 99%) model performance on SNN-based vision tasks.

Efficient Informed Proposals for Discrete Distributions via Newton’s Series Approximation
Y. Xiang*, D. Zhu*, B. Lei, D. Xu, R. Zhang
[AISTATS 2023] The 26th International Conference on Artificial Intelligence and Statistics
PDF

We develop a gradient-like proposal for any discrete distribution without the strong differentiability requirement of prior gradient-based samplers. Built upon a locally-balanced proposal, our method efficiently approximates the discrete likelihood ratio via a Newton's series expansion to enable large and efficient exploration in discrete spaces.
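
For context, a standard locally-balanced proposal (standard notation, not necessarily the paper's) weights each neighbor of the current state by a balancing function of the likelihood ratio; the paper's contribution is approximating that ratio cheaply with a Newton series rather than evaluating it exactly:

```latex
% Locally-balanced proposal over the neighborhood N(x) of the current state x;
% any g with g(t) = t\,g(1/t) is a valid balancing function.
q(x' \mid x) \;\propto\; g\!\left(\frac{\pi(x')}{\pi(x)}\right), \qquad x' \in N(x),
\qquad g(t) = \sqrt{t} \quad \text{or} \quad g(t) = \frac{t}{1+t}.
```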

Improving Long-tailed Classification by Disentangled Variance Transfer
Y. Tian, W. Gao, Q. Zhang, P. Sun, D. Xu
Internet of Things
PDF

We propose a class-based covariance transfer method from the perspective of disentangling to transfer covariance information in long-tailed classification task.

Auto-CAM: Label-Free Earth Observation Imagery Composition and Masking Using Spatio-Temporal Dynamics
Y. Xie, Z. Li, H. Bao, X. Jia, D. Xu, X. Zhou, S. Skakun
[AAAI 2023] The 37th AAAI International Conference on Artificial Intelligence
PDF

We propose an autonomous image composition and masking method for cloud masking, a fundamental task in Earth observation problems across social sectors such as agriculture, energy, and water.

Time Series Contrastive Learning with Information-Aware Augmentations
D. Luo, W. Cheng, Y. Wang, D. Xu, J. Ni, W. Yu, X. Zhang, Y. Liu, Y. Chen, H. Chen, X. Zhang
[AAAI 2023] The 37th AAAI International Conference on Artificial Intelligence
PDF

We propose an adaptive data augmentation method to avoid ad-hoc choices or painstakingly trial-and-error tuning for time series representation learning.

2022

AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models
D. Xu, S. Mukherjee, X. Liu, D. Dey, W. Wang, X. Zhang, A. H. Awadallah, J. Gao
[NeurIPS 2022] The 36th Conference on Neural Information Processing Systems
PDF / Code

We develop a few-shot task-agnostic NAS framework, AutoDistil, for distilling large language models into compressed students with variable computational cost. AutoDistil outperforms leading baselines with up to a 3× additional reduction in computational cost and negligible loss in task performance.

S4: a High-sparsity, High-performance AI Accelerator
I. E. Yen, Z. Xiao, D. Xu
[SNN 2022] Sparsity in Neural Networks 2022 Workshop
PDF / Code / Supp / Slides

We introduce S4, the first commercial hardware platform supporting high-degree sparsity acceleration up to 32 times. S4 provides a (sparse) equivalent compute of 944 TOPS in INT8 and 472 TFLOPS in BF16, and has 20 GB of LPDDR4 memory with up to 72 GB/s memory bandwidth in a low 70-watt power envelope. We demonstrate severalfold practical inference speedup on S4 over mainstream inference platforms such as the Nvidia T4.

An Automatic and Efficient BERT Pruning for Edge AI Systems
S. Huang, N. Liu, Y. Liang, H. Peng, H. Li, D. Xu, M. Xie, C. Ding
[ISQED 2022] The 23rd IEEE International Society for Quality Electronic Design
Video / PDF / Code / Supp / Slides

We propose AE-BERT, an automatic and efficient pruning framework. On a Xilinx Alveo U200 FPGA board, AE-BERT achieves a single BERT-BASE encoder inference time 1.83× faster than an Intel(R) Xeon(R) Gold 5218 (2.30 GHz) CPU.

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm
S. Huang*, D. Xu*, I. E. Yen, S. Chang, B. Li, C. Ding, et al.
[ACL 2022] The 60th Annual Meeting of the Association for Computational Linguistics
PDF / Code / Supp / Slides

We study network pruning of Transformer-based language models under the pre-training and fine-tuning paradigm and propose a counter-traditional hypothesis that pruning increases the risk of overfitting when performed during the fine-tuning phase.

2021

InfoGCL: Information-Aware Graph Contrastive Learning
D. Xu, W. Cheng, D. Luo, H. Chen, X. Zhang
[NeurIPS 2021] The 35th Conference on Neural Information Processing Systems
PDF / Code / Supp / Slides

We propose an information-aware contrastive learning framework for graph-structure data, and show for the first time that all recent graph contrastive learning methods can be unified by our framework.

(SparseBERT) Rethinking Network Pruning - under the Pre-train and Fine-tune Paradigm
Dongkuan Xu, Ian En-Hsu Yen, Jinxi Zhao, Zhibin Xiao
[NAACL-HLT 2021] 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics
PDF / Code / Supp / Slides

We study how knowledge is transferred and lost during the pre-train, fine-tune, and pruning process, and propose a knowledge-aware sparse pruning process that achieves significantly superior results to the existing literature.

Data Augmentation with Adversarial Training for Cross-Lingual NLI
Xin Dong, Yaxin Zhu, Zuohui Fu, Dongkuan Xu, Gerard de Melo
[ACL 2021] The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
PDF / Code / Supp / Slides

We study data augmentation for cross-lingual natural language inference and propose two methods of training a generative model to induce synthesized examples to reflect more diversity in a semantically faithful way.

Deep Multi-Instance Contrastive Learning with Dual Attention for Anomaly Precursor Detection
Dongkuan Xu, Wei Cheng, Jingchao Ni, Dongsheng Luo, Masanao Natsumeda, Dongjin Song, Bo Zong, Haifeng Chen, Xiang Zhang
[SDM 2021] The 21st SIAM International Conference on Data Mining
PDF / Code / Supp / Slides

We utilize multi-instance learning to model the uncertainty of the precursor period, and design a contrastive loss to address the issue that annotated anomalies are few.

Multi-Task Recurrent Modular Networks
Dongkuan Xu, Wei Cheng, Xin Dong, Bo Zong, Wenchao Yu, Jingchao Ni, Dongjin Song, Xuchao Zhang, Haifeng Chen, Xiang Zhang
[AAAI 2021] The 35th AAAI International Conference on Artificial Intelligence
PDF / Code / Supp / Slides

We propose MT-RMN to dynamically learn task relationships and accordingly learn to assemble composable modules into complex layouts to jointly solve multiple sequence processing tasks.

Transformer-Style Relational Reasoning with Dynamic Memory Updating for Temporal Network Modeling
Dongkuan Xu, Junjie Liang, Wei Cheng, Hua Wei, Haifeng Chen, Xiang Zhang
[AAAI 2021] The 35th AAAI International Conference on Artificial Intelligence
PDF / Code / Supp / Slides

We propose TRRN to model temporal networks by employing transformer-style self-attention to reason over a set of memories.

How Do We Move: Modeling Human Movement with System Dynamics
Hua Wei, Dongkuan Xu, Junjie Liang, Zhenhui Li
[AAAI 2021] The 35th AAAI International Conference on Artificial Intelligence
PDF / Code / Supp / Slides

We propose MoveSD to model state transition in human movement from a novel perspective, by learning the decision model and integrating the system dynamics.

Longitudinal Deep Kernel Gaussian Process Regression
Junjie Liang, Yanting Wu, Dongkuan Xu, Vasant Honavar
[AAAI 2021] The 35th AAAI International Conference on Artificial Intelligence
PDF / Code / Supp / Slides

We introduce longitudinal deep kernel Gaussian process regression to fully automate the discovery of complex multi-level correlation structure from longitudinal data.

2020

Parameterized Explainer for Graph Neural Network
Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, Xiang Zhang
[NeurIPS 2020] The 34th Conference on Neural Information Processing Systems
PDF / Code / Supp / Slides

We propose to adopt deep neural networks to parameterize the generation process of explanations, which enables a natural approach to multi-instance explanations.

Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification
Xin Dong, Yaxin Zhu, Yupeng Zhang, Zuohui Fu, Dongkuan Xu, Sen Yang, Gerard de Melo
[SIGIR 2020] The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
PDF / Code / Supp / Slides

We propose a semi-supervised adversarial perturbation framework that encourages the model to be more robust towards such divergence and better adapt to the target language.

Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
Dongkuan Xu, Wei Cheng, Bo Zong, Dongjin Song, Jingchao Ni, Wenchao Yu, Yanchi Liu, Haifeng Chen, Xiang Zhang
[AAAI 2020] The 34th AAAI International Conference on Artificial Intelligence
PDF / Code / Poster / Slides

We propose a deep architecture for learning trends in multivariate time series, which jointly learns both local and global contextual features for predicting the trend of time series.

Longitudinal Multi-Level Factorization Machines
Junjie Liang, Dongkuan Xu, Yiwei Sun, Vasant Honavar
[AAAI 2020] The 34th AAAI International Conference on Artificial Intelligence
PDF / Code / Supp

We propose longitudinal multi-level factorization machines, to the best of our knowledge the first model to address these challenges in learning predictive models from longitudinal data.

2019

Adaptive Neural Network for Node Classification in Dynamic Networks
Dongkuan Xu, Wei Cheng, Dongsheng Luo, Yameng Gu, Xiao Liu, Jingchao Ni, Bo Zong, Haifeng Chen, Xiang Zhang
[ICDM 2019] The 19th IEEE International Conference on Data Mining
PDF / Slides

We propose an adaptive neural network for node classification in dynamic networks, which is able to consider the evolution of both node attributes and network topology.

Spatio-Temporal Attentive RNN for Node Classification in Temporal Attributed Graphs
Dongkuan Xu, Wei Cheng, Dongsheng Luo, Xiao Liu, Xiang Zhang
[IJCAI 2019] The 28th International Joint Conference on Artificial Intelligence
PDF / Code / Poster / Slides

We propose a spatio-temporal attentive RNN model, which aims to learn node representations for classification by jointly considering both the temporal and spatial patterns of the node.

Deep Co-Clustering
Dongkuan Xu, Wei Cheng, Dongsheng Luo, Xiao Liu, Xiang Zhang
[SDM 2019] The 19th SIAM International Conference on Data Mining
PDF / Code / Supp / Poster / Slides

DeepCC utilizes the deep autoencoder for dimension reduction, and employs a variant of Gaussian mixture model to infer the cluster assignments. A mutual information loss is proposed to bridge the training of instances and features.

2018

Co-Regularized Deep Multi-Network Embedding
Jingchao Ni, Shiyu Chang, Xiao Liu, Wei Cheng, Haifeng Chen, Dongkuan Xu and Xiang Zhang
[WWW 2018] The 27th International Conference on World Wide Web
PDF / Code

DMNE coordinates multiple neural networks (one for each input network data) with a co-regularized loss function to manipulate cross-network relationships, which can be many-to-many, weighted and incomplete.

Multiple Instance Learning Based on Positive Instance Graph
Dongkuan Xu, Wei Zhang, Jia Wu, Yingjie Tian, Qin Zhang, Xindong Wu
arXiv preprint

Most multi-instance learning (MIL) methods that study true positive instances ignore 1) the global similarity among positive instances and 2) that negative instances are non-i.i.d. We propose a MIL method based on positive instance graph updating to address these issues.

A Review of Multi-Instance Learning Research
Yingjie Tian, Dongkuan Xu, Chunhua Zhang
Operations Research Transactions, 2018
PDF

This paper reviews the research progress of multi-instance learning (MIL), introduces different assumptions, and categorizes MIL methods into instance-level, bag-level, and embedded-space approaches. Extensions and major applications in various areas are discussed at the end.

2017

SALE: Self-Adaptive LSH Encoding for Multi-Instance Learning
Dongkuan Xu, Jia Wu, Dewei Li, Yingjie Tian, Xingquan Zhu, Xindong Wu
Pattern Recognition, 2017
PDF

We propose a self-adaptive locality-sensitive hashing encoding method for multi-instance learning (MIL), which efficiently deals with large MIL problems.
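
A hedged sketch of the encoding idea using plain random-hyperplane LSH (the hash family and sizes below are assumptions; the paper's scheme adapts the encoding): hash each instance to a bucket, then represent a bag by the histogram of its instances' buckets, so variable-size bags become fixed-length vectors for a standard classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
planes = rng.standard_normal((8, 16))        # 8 random hyperplanes over 16-d instances

def encode_bag(bag: np.ndarray) -> np.ndarray:
    """Map a (n_instances, 16) bag to a fixed 256-bin bucket histogram."""
    bits = (bag @ planes.T > 0).astype(int)            # sign pattern per instance
    codes = bits @ (1 << np.arange(8))                 # 8 bits -> bucket id in [0, 256)
    return np.bincount(codes, minlength=256) / len(bag)

bag = rng.standard_normal((5, 16))           # a bag of 5 instances
print(encode_bag(bag).shape)                 # (256,)
```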

Metric Learning for Multi-Instance Classification with Collapsed Bags
Dewei Li, Dongkuan Xu, Jingjing Tang, Yingjie Tian
[IJCNN 2017] The 30th IEEE International Joint Conference on Neural Networks
PDF

We propose a metric learning method for multi-instance classification, aiming to find an instance-dependent metric by maximizing the relative distance on neighborhood level.

2016

PIGMIL: Positive Instance Detection via Graph Updating for Multiple Instance Learning
Dongkuan Xu, Jia Wu, Wei Zhang, Yingjie Tian
arXiv preprint arXiv:1612.03550, 2016
PDF

We propose a positive instance detection method based on multiple instance learning, of which the core idea is that true positive instances should not only be similar to themselves globally but also different from negative instances robustly.

Multi-Metrics Classification Machine
Dewei Li, Wei Zhang, Dongkuan Xu, Yingjie Tian
[ITQM 2016] The 4th International Conference on Information Technology and Quantitative Management
PDF (Best Paper Award)

We propose a metric learning approach called multi-metrics classification machine. We establish an optimization problem for each class (each metric) to learn multiple metrics independently.

2015

A Comprehensive Survey of Clustering Algorithms
Dongkuan Xu, Yingjie Tian
Annals of Data Science, 2015
PDF

We introduce the definition of clustering, the basic elements involved in clustering process, and categorize the clustering algorithms into the traditional ones and the modern ones. All the algorithms are discussed comprehensively.

Undergraduate

A Support Vector Machine-based Ensemble Prediction for Crude Oil Price with VECM and STEPMRS
Dongkuan Xu, Tianjia Chen, Wei Xu
International Journal of Global Energy Issues, 2015
PDF

This paper proposes a support vector machine-based ensemble model to forecast crude oil price based on VECM and stochastic time effective pattern modelling and recognition system (STEPMRS).

A Neural Network-Based Ensemble Prediction Using PMRS and ECM
Dongkuan Xu, Yi Zhang, Cheng Cheng, Wei Xu, Likuan Zhang
[HICSS 2014] The 47th Hawaii International Conference on System Sciences
PDF

This paper presents an integrated model to forecast crude oil prices, where a pattern modelling & recognition system is used to model the price trend and an error correction model is used to forecast errors. A neural network layer is employed to integrate the results.

Professional Services
  • Panel Reviewer:
    • NSF CORE Program, 2024
    • NSF CAREER Program, 2023
  • Column Editor:
    • ACM Special Interest Group on Artificial Intelligence (SIGAI) Newsletter
  • Conference/Workshop Chair:
    • The First Workshop on Dataset Distillation for Computer Vision @ CVPR2024 (link)
    • Workshops of Integrating ChatGPT into K-12 Classrooms @ NSF SRCA REU Site 2023 (link)
    • The First Workshop on DL-Hardware Co-Design for AI Acceleration @ AAAI2023 (link)
    • International Workshop on Resource-Efficient Learning for Knowledge Discovery (RelKD'23) @ KDD2023 (link)
    • The Conference on Machine Learning Algorithms & Natural Language Processing (MLNLP'22, 23) (link)
  • Academic Committee Member:
    • Machine Learning Algorithms & Natural Language Processing (MLNLP) (link)
  • Area Chair:
    • LREC-COLING'24
  • Session Chair:
    • Research Track of KDD'22
    • ADS Track of KDD'22
  • Senior Program Committee Member:
    • AAAI'23, 24
    • IJCAI'21
  • Program Committee Member:
    • ICLR'21, 22, 23
    • ICML'21, 22
    • NeurIPS'20, 21, 22, 23
    • AAAI'20, 21, 22
    • ISQED'23
    • KDD'20, 21, 22
    • ACL Rolling Review'22
    • LoG'22
    • IJCAI'20, 22
    • NAACL'21
    • EMNLP'20, 21
    • COLING'22
    • WSDM'22, 23
    • SDM'22
    • EACL'21
    • ACM CIKM'20, 21, 22
    • AACL-IJCNLP'20, 22
    • IJCNN'18, 19, 20, 21
  • Journal Reviewer:
    • IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
    • Communications of the ACM
    • IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
    • IEEE Transactions on Knowledge and Data Engineering (TKDE)
    • IEEE Transactions on Cybernetics
    • Information Fusion
    • ACM Transactions on Knowledge Discovery from Data (TKDD)
    • Pattern Recognition
    • Neural Networks
    • Neurocomputing
    • ACM Transactions on Asian and Low-Resource Language Information Processing
    • IEEE Access
    • Neural Computation
    • Complexity
    • Soft Computing
    • Complex & Intelligent Systems
    • Multimedia Tools and Applications
    • Big Data
  • Conference Volunteer:
    • The Annual Conference of NAACL-HLT, 2021
    • Backup for SDM Session Chairs, 2021
    • The 35th AAAI Conference on Artificial Intelligence, 2021
    • The 26th SIGKDD Conference on Knowledge Discovery and Data Mining, 2020
Teaching Experiences
  • Guest Lecturer
    • COSI 133A: Graph Mining
      Brandeis University, 2021 Fall

    • COSI 165B: Deep Learning
      Brandeis University, 2021 Spring

Supervised Students/Interns
  • Postdoctoral Researcher
    • Zhiyuan Peng, NC State
      Topic: Augmented Large Language Model

  • Ph.D. Students
    • Chengyuan Liu, NC State
      Topic: Large Language Models in Education

  • Master Students
    • LiChia Chang, NC State
      Topic: Efficient Retrieval-Augmented Generation

    • John Zhu, NC State
      Topic: Trustworthy Evaluation of LLMs

    • Teddy Chen, NC State
      Topic: LLM Bias

  • Undergraduate Researchers
    • Aditya Basarkar, Undergraduate at NC State
      Topic: LLM-powered Math Reasoning

    • Zihan (Z) Dong, NC State
      Topic: Large Language Models in Education

  • Intern Researchers
    • Bowen Lei, Apple
      Topic: Theoretical Foundations of Efficient Learning

    • Zhengdong Zhang, Amazon
      Topic: Reliable Large Language Models

    • Shengjie Liu, PhD at NC State
      Topic: Robust Multi-modal LLM Agents

    • Xukun Liu, Master at Northwestern University
      Topic: Accelerating LLM Decoding

    • Berwin Chen, Master at University of Birmingham
      Topic: Retrieval-Augmented Generation for Scientific Discovery

    • Shanlin Liu, Master at University of Shanghai for Science and Technology
      Topic: Retrieval-Augmented Generation for Scientific Discovery

    • Shengkun Tang, Undergraduate at Wuhan University
      Topic: Multi-modal Foundation Models

    • Huanhuan Ma, Master at Chinese Academy of Sciences
      Topic: Trustworthiness Evaluation of LLMs

Talks
  • The Impact of AI on Our Lives and Beyond (Talk news, in Chinese)
    Raleigh, NC, USA, March 2024
    Fo Guang Shan Buddhist Temple, North Carolina

  • Advancing Environmental Strategies: Leveraging Foundation Models for Enhanced Geospatial Analytics and Conservation
    Raleigh, NC, USA, March 2024
    Forest Carbon Solutions Initiative (FCSI), NC State

  • How LLMs Work and Cutting-Edge Research on Generative AI
    Online, USA, Dec 2023
    STARS AI Scholars Program (link)

  • Sculpting the Future of Collective Growth in Collaborative AI (Talk news)
    Microsoft Research Asia, Online, Sep 2023
    ACE (Advance, Creativity and Empowerment) Talk

  • ChatGPT in Corporate Real Estate - Unlocking the Potential (Web & Slides)
    Raleigh, NC, USA, Aug 2023
    CoreNet Global Carolinas Chapter (link)

  • Testing Accuracy is Not All You Need: Less Training Cost & More Testing Reliability
    Rutgers University, New Brunswick, USA, Feb 2023
    Rutgers Efficient AI (REFAI) Seminar (link)

  • Resource-efficient Deep Learning: Democratizing AI at Scale
    Online, USA, June 2022
    Pinterest Machine Learning Lunch

  • Resource-efficient Deep Learning: Democratizing AI at Scale
    Online, USA, May 2022
    Amazon Search (A9)

  • Parameter Efficiency: Democratizing AI at Scale (Slides)
    Waltham, MA, USA, Dec. 2021
    Brandeis University

  • Chasing Efficiency of Pre-trained Language Models
    Redmond, Washington, USA, Jun. 2021
    Microsoft Research Lab

  • BERT Pruning: Structural vs. Sparse (Slides)
    Waltham, MA, USA, Apr. 2021
    Brandeis University

  • BERT, Compression and Applications (Slides)
    Mountain View, USA, Apr. 2021
    Xpeng Motors

  • BERT Architecture and Computation Analysis (Slides)
    Los Altos, USA, May 2020
    Moffett.AI

  • Anomaly Precursor Detection via Deep Multi-Instance RNN (Slides)
    Princeton, USA, May 2019
    NEC Laboratories America

Honors and Awards
  • North Carolina State University
    • NCSU Carla Savage Award, 2024
    • Microsoft Gift Fund, 2024
    • Microsoft Accelerating Foundation Models Research Award, 2024
    • ICCCN Best Paper Award, 2023
  • The Pennsylvania State University
    • College of IST Award for Excellence in Teaching Support (top 2), 2019
    • Third place winner (Eng.) in the 37th annual PSU Graduate Exhibition (News), 2022
    • NAACL Scholarship, 2021
    • SIAM Student Travel Award, 2021
    • KDD Student Registration Award, 2020
    • AAAI Student Scholarship, 2020
    • IST Travel Award, 2019-2021
  • University of Chinese Academy of Sciences
    • President’s Fellowship of Chinese Academy of Sciences (the most prestigious award), 2016
    • National Graduate Scholarship, China (2% in university), 2016
    • Graduate Student Academic Scholarship, 2015-2017
  • Renmin University of China
    • First-class Scholarship of Sashixuan Elite Fund, China (5% in university), 2014
    • Kwang-hua Scholarship of RUC, China, 2014
    • Meritorious Winner in Mathematical Contest in Modeling, 2013
    • First-class Scholarship of Social Work and Volunteer Service of RUC, 2013
Extracurricular Activities
  • IEEE (Institute of Electrical and Electronics Engineers) Membership, 2023-Present
  • ACL (Association for Computational Linguistics) Membership, 2021-Present
  • President of Youth Volunteers Association of School of Information of RUC, 2012-2013
  • Volunteer of Beijing Volunteer Service Federation (BVF), 2012-2014
  • Leader of National Undergraduate Training Programs for Innovation and Entrepreneurship, 2011-2012


*Last updated on 04/16/2024*
This guy makes a nice webpage