Taken in Seattle (credit to Tiffany 😊)

[Twitter] [Linkedin] [Github] [Google Scholar]

Email: zliu2803[at]usc[dot]edu

Ziyi Liu

Hi there 👋, I am a second-year PhD student at the University of Southern California, advised by Prof. Jieyu Zhao in the LIME Lab. Previously, I earned my master's degree at USC and worked as a Research Assistant in USC ISI's Ink Lab for two years under the guidance of Professor Xiang Ren. My research focuses on social reasoning and trustworthy NLP, particularly evaluating LLM behavior and aligning LLM values with human values in human-LLM interaction. My work is driven by two key questions:
  • How can we make interactions between models and humans more seamless?
  • How can we ensure the faithfulness of LLMs and avoid hallucinations during interactions?

I am open to collaboration! If you are a master's or undergraduate student at USC, please fill out this form before contacting me. If you are a PhD student at another university, feel free to drop me an email!

I am looking for a 2025 summer research internship!


News

Sept 19th 2024: Our paper 'InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in an Interactive Game Environment' got ACCEPTED at EMNLP 2024! See you guys in Miami 🥳

Sept 19th 2024: Our paper 'Self-Contradictory Reasoning Evaluation and Detection' got ACCEPTED to Findings of EMNLP 2024! 🥳

May 1st 2023: Our paper 'Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales' got ACCEPTED at ACL 2023! 🥳


Publications

2024

InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in an Interactive Game Environment (EMNLP 2024)

Ziyi Liu*, Abhishek Anand*, Pei Zhou, Jen-tse Huang, Jieyu Zhao (* denotes equal contribution)

Self-Contradictory Reasoning Evaluation and Detection (Findings of EMNLP 2024)

Ziyi Liu, Soumya Sanyal, Isabelle Lee, Yongkang Du, Rahul Gupta, Yang Liu, Jieyu Zhao

2023

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales (ACL 2023)

Brihi Joshi*, Ziyi Liu*, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren (* denotes equal contribution)

2022

ER-Test: Evaluating Explanation Regularization Methods for NLP Models (Findings of EMNLP 2022)

Brihi Joshi*, Aaron Z. Chan*, Ziyi Liu*, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren (* denotes equal contribution)

2021

A deep-learning framework for multi-level peptide–protein interaction prediction (Nature Communications)

Yipin Lei, Shuya Li, Ziyi Liu, Fangping Wan, Tingzhong Tian, Shao Li, Dan Zhao, Jianyang Zeng