My research studies how large language models can reason, act, and learn in data-rich environments. The central thread is LLM-driven agentic AI, with a long-running focus on time series analysis and broader applications in table mining, scientific literature mining, and recommender systems.
This direction builds intelligent systems that improve through interaction. We study autonomous learning for large language models, slow-thinking reasoning, process-aware reinforcement learning, tool-augmented execution, memory, and agent runtime mechanisms. The goal is to move beyond prompt-only behavior toward agents that plan, recover from failures, and learn from environmental feedback in real workflows.
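The plan-act-recover loop described above can be sketched minimally. Everything here is illustrative: the `search` tool, the fallback policy, and the loop structure are hypothetical stand-ins for an LLM-driven planner, not the actual systems.

```python
# Minimal sketch of an agent loop: plan, act via a tool, observe feedback,
# and recover by repairing the plan on failure. All names are hypothetical.

def search(query: str) -> str:
    # Toy tool: pretend retrieval that fails on empty queries.
    if not query:
        raise ValueError("empty query")
    return f"results for '{query}'"

TOOLS = {"search": search}

def run_agent(task: str, max_steps: int = 3) -> str:
    """Plan -> act -> observe loop; on a tool error, replan and retry."""
    query = task
    for _ in range(max_steps):
        try:
            return TOOLS["search"](query)   # act, return on success
        except ValueError:
            query = "default topic"         # recover: repair the plan
    return "gave up"

run_agent("")  # first call fails; the agent recovers with a fallback query
```

The point of the sketch is the control structure: feedback from the environment (here, an exception) drives replanning rather than terminating the run.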
This direction develops context-aware time series intelligence across representation learning, forecasting, classification, and anomaly detection. Earlier work built neural architectures and self-supervised pre-training methods for temporal signals; recent work connects time series with multimodal language modeling, slow-thinking LLM reasoning, and agentic forecasting systems that select tools, engineer features, and refine predictions through structured interaction.
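The tool-selection step of an agentic forecasting system can be sketched as follows. The two forecasters are toy baselines chosen for illustration; in practice the candidate tools and the selection policy would be far richer.

```python
# Hedged sketch of agentic forecasting: try candidate forecasting tools,
# score each on a held-out point, and forecast with the winner.

def naive_last(history):            # tool 1: repeat the last value
    return history[-1]

def moving_average(history, k=3):   # tool 2: mean of the last k points
    return sum(history[-k:]) / k

def agentic_forecast(series):
    """Hold out the final point, select the tool with the lowest error on it,
    then produce the next-step forecast with the winning tool."""
    history, target = series[:-1], series[-1]
    tools = {"naive_last": naive_last, "moving_average": moving_average}
    errors = {name: abs(f(history) - target) for name, f in tools.items()}
    best = min(errors, key=errors.get)   # tool-selection step
    return best, tools[best](series)     # refine: forecast on the full series

name, pred = agentic_forecast([1, 2, 3, 4, 5])
```

Here the selection criterion is a single-point validation error; a real system would score tools over rolling windows and could also iterate on feature engineering between rounds.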
LLM-driven tabular data understanding, table reasoning, multi-table inference, and scientific table analysis. We explore how LLMs can be augmented with programmatic tools and slow-thinking strategies to tackle complex structured data tasks.
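Augmenting an LLM with programmatic tools for table reasoning typically means the model emits a small program over the table and the answer comes from executing it rather than from free-form generation. A minimal sketch, with an illustrative table and a hand-written stand-in for a model-generated program:

```python
# Hedged sketch of tool-augmented table reasoning: answer a question by
# executing a program over the table, not by guessing from text.

table = [
    {"paper": "A", "year": 2022, "citations": 10},
    {"paper": "B", "year": 2023, "citations": 25},
    {"paper": "C", "year": 2023, "citations": 5},
]

# A program an LLM might generate for the question
# "total citations of papers published in 2023":
def generated_program(rows):
    return sum(r["citations"] for r in rows if r["year"] == 2023)

answer = generated_program(table)  # grounded in execution: 25 + 5
```

The same pattern extends to multi-table inference, where the generated program joins tables before aggregating.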
Literature retrieval, multimodal scientific document understanding, knowledge discovery, and evidence-grounded reasoning over scholarly content. We build agentic systems and benchmarks for evaluating tool-augmented reasoning on academic literature.
Sequential modeling, one-class collaborative filtering, and LLM-driven reasoning for dynamic user preference understanding. Research covers behavior-level data augmentation, cross-domain recommendation, and LLM-based user simulation.