Comments (7)
Hi,
Currently, the content of the model directory is an implementation detail and its structure could change in the future.
I'm not sure when you would need to have the model files in memory. Since you have to load the actual files at some point, isn't that equivalent to building a Model
instance and then moving it around (the instance being the in-memory representation of the model on disk)? Or are you dynamically generating the model and vocabularies?
from ctranslate2.
I want to extract the files in memory as std::vector<unsigned char>s directly from an archive, for simplicity and security reasons. Then I would like to reinterpret_cast them to the right types and use them normally at runtime. Could that work?
Another case I can think of is if someone wants to share vocabularies or vmaps among different models.
from ctranslate2.
I'm thinking we could add a ModelReader interface with the following methods (to be refined):
- std::unique_ptr<std::istream> get_model_binary()
- bool is_vocabulary_shared() (if true, we build a single Vocabulary instance)
- std::unique_ptr<std::istream> get_source_vocabulary()
- std::unique_ptr<std::istream> get_target_vocabulary()
- std::unique_ptr<std::istream> get_vocabulary_mapping() (returns nullptr if no vocabulary mapping)
Then we add a new overload that accepts an instance implementing this interface.
In your code, you would need to extend this base class and implement your own loading logic. I think there are ways to wrap a stringstream over an existing buffer.
What do you think?
from ctranslate2.
That would be great! I need to dig a bit deeper into the relevant classes and interfaces in the code, but I'd like to get a first grasp of your idea and the work this will involve: all these methods should be templates with arguments, and the derived class will implement the conversion of these arguments into the right types, do I get it right?
from ctranslate2.
ModelReader would be an abstract class with methods left unimplemented. Derived classes (such as ModelFileReader) can implement arbitrary loading logic as long as they meet the interface: returning a stream over the requested objects.
The main goal is not to integrate your loading logic into the main codebase, as it is very specific to your use case, but to allow plugging it in.
Do you want to take this one? I can also implement it if you prefer.
from ctranslate2.
Actually a single method could be enough, assuming you could easily map a filename to a stream:
std::unique_ptr<std::istream> get_file(const std::string& filename)
from ctranslate2.
> ModelReader would be an abstract class with methods left unimplemented. Derived classes (such as ModelFileReader) can implement arbitrary loading logic as long as they meet the interface: returning a stream over the requested objects. The main goal is not to integrate your loading logic into the main codebase, as it is very specific to your use case, but to allow plugging it in.
Yes, absolutely, I get the general idea and it's great as long as I have a way to implement my logic.
> Do you want to take this one? I can also implement it if you prefer.
I would love to work on that, but honestly you have already implemented it in your mind, along with alternatives :), so I would prefer and appreciate it if you could add it; it would save us a great deal of time and effort.
Thank you!
from ctranslate2.