
Repos: 101 | Gists: 0

Name: THUDM

Type: Organization

Bio: ChatGLM, CogVLM, CodeGeeX, WebGLM, GLM-130B, CogView, CogVideo | CogDL, GNNs, AMiner | Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University

Twitter: thukeg

Location: FIT Building, Tsinghua University

Blog: https://huggingface.co/THUDM

THUDM's Projects

agentbench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)

agenttuning

AgentTuning: Enabling Generalized Agent Abilities for LLMs

alignbench

Multi-dimensional Chinese alignment evaluation benchmark | Benchmarking Chinese Alignment of LLMs

apegnn

ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23)

batchsampler

The source code for BatchSampler, accepted at KDD'23

chatglm-6b

ChatGLM-6B: An Open Bilingual Dialogue Language Model
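
For orientation, a minimal usage sketch for chatglm-6b, assuming the Hugging Face transformers interface and the chat() helper described in the repository's quick start (model id "THUDM/chatglm-6b"; exact arguments may vary across versions):

    from transformers import AutoTokenizer, AutoModel

    # Load tokenizer and model from the Hugging Face Hub; trust_remote_code=True is
    # required because the modeling code ships with the checkpoint.
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
    model = model.eval()

    # The checkpoint exposes a chat() helper that returns the reply and the running history.
    response, history = model.chat(tokenizer, "Hello, what can ChatGLM-6B do?", history=[])
    print(response)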

chatglm2-6b

ChatGLM2-6B: An Open Bilingual Chat LLM

chatglm3

ChatGLM3 series: Open Bilingual Chat LLMs

codegeex

CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)

codegeex2

CodeGeeX2: A More Powerful Multilingual Code Generation Model

cogdl

CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023)
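
For a sense of the library's workflow, a minimal sketch assuming the pip-installable cogdl package and the dataset/model names used in its quick-start examples (treat the exact keywords as assumptions):

    from cogdl import experiment

    # Train and evaluate a GCN baseline on the Cora node-classification dataset
    # using cogdl's built-in experiment runner.
    experiment(dataset="cora", model="gcn")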

cogkr

Source code and dataset for paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning"

cogqa

Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"

cogvideo

Text-to-video generation. The repo for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"

cogview

Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".

cogview2

Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"

cogvlm

A state-of-the-art open visual language model | Multimodal pretrained model

comirec

Source code and dataset for KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation"

dropconn

DropConn: Dropout Connection Based Random GNNs for Molecular Property Prediction (TKDE'24)

etrust

Source code and dataset for TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform"

fastldm

Inference speed-up for stable-diffusion (ldm) with TensorRT.
