mard1no / flexgen
This project is forked from fminference/flexgen.
Running large language models like OPT-175B/GPT-3 on a single GPU. Up to 100x faster than other offloading systems.
License: Apache License 2.0