Name: THUDM
Type: Organization
Bio: ChatGLM, CogVLM, CodeGeeX, WebGLM, GLM-130B, CogView, CogVideo | CogDL, GNNs, AMiner | Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University
Twitter: thukeg
Location: FIT Building, Tsinghua University
Blog: https://huggingface.co/THUDM
THUDM's Projects
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
AgentTuning: Enabling Generalized Agent Abilities for LLMs
A multi-dimensional benchmark for evaluating the Chinese alignment of LLMs
ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23)
Source code for BatchSampler, accepted at KDD'23
ChatGLM-6B: An Open Bilingual Dialogue Language Model
ChatGLM2-6B: An Open Bilingual Chat LLM
ChatGLM3 series: Open Bilingual Chat LLMs
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
CodeGeeX2: A More Powerful Multilingual Code Generation Model
CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023)
Source code and dataset for the paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning"
Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
Text-to-video generation. The repo for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
Text-to-image generation. The repo for the NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers"
Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"
A state-of-the-art open visual language model | multimodal pretrained model
Source code and dataset for KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation"
DropConn: Dropout Connection Based Random GNNs for Molecular Property Prediction (TKDE'24)
Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head
Source code and dataset for the TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform"
Transformer-related optimization, including BERT and GPT
Inference speed-up for Stable Diffusion (LDM) with TensorRT