I’m an M.S. student in Computer Science at Northwestern University. My work focuses on developing interpretable and generalizable intelligent systems, especially in NLP and RL, for tasks such as reasoning and planning that call for processing at a higher level of abstraction.
My current research goals fall into two directions:
Develop algorithms that produce intrinsically interpretable models.
There are many ways to improve the interpretability of intelligent algorithms, which are clearly outlined in this paper. Intrinsically interpretable models are one of them, and they can help humans learn structured knowledge in the process of interpretation. Methods in this area include imposing sparsity constraints such as limiting related representations (1), using simpler surrogate models (2) (3) (4) (5), exploiting causal relations (6), and hierarchical learning (7) (8).
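As a toy illustration of the surrogate-model idea (my own minimal sketch, not taken from any of the cited papers), one can fit a sparse linear model to a black-box classifier's predictions; the handful of surviving coefficients then serve as a readable summary of what drives the black box. The dataset, models, and regularization strength below are all placeholders.

```python
# Minimal sketch: approximate a black-box classifier with a sparse linear surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_informative=4, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # the surrogate is trained to mimic the black box's labels

# The L1 penalty imposes the sparsity constraint: most coefficients are driven to zero.
surrogate = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
surrogate.fit(X, y_bb)

fidelity = (surrogate.predict(X) == y_bb).mean()
kept = np.flatnonzero(surrogate.coef_[0])
print(f"fidelity to black box: {fidelity:.2f}, features kept: {kept}")
```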
Design generalizable representations for intelligent systems to express and access knowledge.
While current deep neural models perform remarkably well on raw features, they lack generalizability when dealing with inputs from different modalities. This survey provides an overview of the joint and coordinated representations used to cope with the problem, but beyond popular datasets there is also a vast amount of structured knowledge, such as knowledge bases and relational or non-relational databases. Incorporating these more complex forms of knowledge requires specially engineered methods (9) (10).
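To make the "coordinated representation" idea concrete, here is a minimal sketch (my own toy example, not a method from the cited survey): two modality-specific encoders are projected into a shared space and matching pairs are pulled together with a contrastive loss. The feature dimensions and network sizes are arbitrary placeholders.

```python
# Minimal sketch: coordinated text/image representations trained with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinatedEncoders(nn.Module):
    def __init__(self, text_dim=300, image_dim=2048, shared_dim=256):
        super().__init__()
        self.text_proj = nn.Sequential(nn.Linear(text_dim, shared_dim), nn.ReLU(),
                                       nn.Linear(shared_dim, shared_dim))
        self.image_proj = nn.Sequential(nn.Linear(image_dim, shared_dim), nn.ReLU(),
                                        nn.Linear(shared_dim, shared_dim))

    def forward(self, text_feats, image_feats):
        # Normalize so that similarity is a cosine score in the shared space.
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = F.normalize(self.image_proj(image_feats), dim=-1)
        return t, v

def contrastive_loss(t, v, temperature=0.07):
    # Matching text/image pairs sit on the diagonal of the similarity matrix.
    logits = t @ v.T / temperature
    targets = torch.arange(t.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random features standing in for real modality encoders' outputs.
model = CoordinatedEncoders()
t, v = model(torch.randn(8, 300), torch.randn(8, 2048))
print(contrastive_loss(t, v).item())
```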
Download my resumé.
MSc in Computer Science, 2022
Northwestern University
BEng in Computer Science, 2020
Sun Yat-sen University
We introduce a method that extends BERT's vocabulary encoding of an input token in a given sequence of text with a context encoding built from knowledge-base information.
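For intuition only, below is a minimal sketch of one way knowledge-base features could be fused with BERT token encodings; it is an illustration, not the method from the paper. The KB embedding table, the token-to-entity linking, and the fusion layer are all hypothetical placeholders.

```python
# Minimal sketch: augment BERT's contextual token encodings with KB entity features.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

kb_embeddings = nn.Embedding(num_embeddings=10_000, embedding_dim=100)  # placeholder KB entity table
fuse = nn.Linear(bert.config.hidden_size + 100, bert.config.hidden_size)  # combine BERT + KB features

text = "Northwestern University is in Evanston."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    contextual = bert(**inputs).last_hidden_state            # (1, seq_len, 768) contextual token encodings

# Hypothetical entity ids; in practice they would come from an entity linker over the KB.
entity_ids = torch.zeros_like(inputs["input_ids"])
knowledge = kb_embeddings(entity_ids)                        # (1, seq_len, 100) KB features per token

fused = fuse(torch.cat([contextual, knowledge], dim=-1))     # knowledge-enhanced token representations
print(fused.shape)
```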