Let's see what candidates have to say now! This will be a good project to put out there given the current climate, even though I'm sure there are hundreds of projects like it. I will do my best to make it super easy to add new candidate information and data sources. In the end, the model should, theoretically, be able to parse and process any data type you drop into the data folders.
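As a sketch of that drop-in idea (the handler table and file formats here are my own illustration, not the project's actual code), a loader could dispatch on file extension so that supporting a new data type is just one new entry:

```python
import csv
import json
from pathlib import Path

def load_record(path: Path):
    """Dispatch a file from a data folder to a parser based on its extension.

    The handler table is illustrative; adding a new format means
    registering one more extension -> parser entry.
    """
    handlers = {
        ".json": lambda p: json.loads(p.read_text()),
        ".csv": lambda p: list(csv.DictReader(p.open())),
        ".txt": lambda p: p.read_text(),
    }
    handler = handlers.get(path.suffix.lower())
    if handler is None:
        raise ValueError(f"no parser registered for {path.suffix!r}")
    return handler(path)
```

Anything unrecognized fails loudly instead of being silently skipped, which makes missing parsers easy to spot.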
Powered by Ollama (it hasn't let me down yet; it's super easy to work with and very fast)
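For reference, a minimal call to a locally running Ollama server through its `/api/generate` REST endpoint might look like this. The model name is just an example, and stdlib `urllib` is used here to avoid extra dependencies:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object with a "response" field.
        return json.loads(resp.read())["response"]
```

Usage would be something like `ask("dolphin-mistral", "Summarize Bill 123")`, assuming that model has been pulled locally.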
An open-source custom GPT based on Eric Hartford's Dolphin-mistral and Wizard-Vicuna-Uncensored (and likely a few TinyLlama or TinyDolphin nodes).
When completed, it should support any model available on Hugging Face, be able to fine-tune any of them, and export the result quantized or unquantized.
We live in a very cluttered and noisy political landscape. In my opinion, the majority of the noise comes from the techno-socio-political attack strategies that agents of both parties tend to use (it's pretty obvious, right?). We have a world of information available to us, and now the means to do something special with that information that has never been done in the history of our digital age: organize it.
Alright, so we have social networks, streaming platforms, other mobile sharing and messaging systems, etc. The amount of information exchanged on these platforms is mind-boggling. It seems like a significant news story is coming at you every 15 minutes from every app. Researchers keep researching and creators keep creating. So you decide to practice safe and responsible adulting and DYOR (do your own research). And then you realize you have no idea where to start, no idea which outlets or people you can trust, and no real awareness of what DYOR even means. The decision? Believe none of it and move on, choosing to trust that no matter how crazy the world seems, it will all work out like it always has.
GPT models give us the opportunity to weight and account for data and processes we were previously unable to easily access. LLM training and fine-tuning acts as a long-term, global memory of sorts: it encodes statistical relationships that encompass pretty much everything at this point. There are many, many strategies for bridging that data back to the real world. My goal is to smash together a bunch of old, known processes and use them in a way that produces meaningful connections in a network capable of limited evolution. One that can train a new expert model for a specialized task and store it as a tool when it needs to. One that is capable of managing short- and long-term memory, capable of self-learning and curriculum planning, and most importantly, one that understands the importance of remembering and understanding where it came from*... More on that a little later.
- Comprehensive graph DB of NVO (noun-verb-object) triplets
- NLP modeling: entity recognition and metadata
- JULIET (Junctive Unsupervised Learning for Incrementally Evolving Transformers): NEAT integrations
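To illustrate the triplet idea, here is a minimal in-memory stand-in for the graph DB. The node names, edge labels, and metadata fields are invented for the example; a real deployment would back this with an actual graph database and feed it from an NLP extraction pipeline:

```python
from collections import defaultdict

class TripletGraph:
    """Minimal in-memory store for (noun, verb, object) triplets.

    A stand-in for a real graph database: subjects are nodes,
    verbs are labeled edges, and each edge can carry metadata
    (e.g. the data source it was extracted from).
    """

    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(verb, object, metadata)]

    def add(self, subj, verb, obj, **metadata):
        self.edges[subj].append((verb, obj, metadata))

    def neighbors(self, subj):
        """Return every (verb, object) pair attached to a subject node."""
        return [(v, o) for v, o, _ in self.edges[subj]]

# Example triplets (hypothetical data, not real records):
g = TripletGraph()
g.add("Candidate A", "sponsored", "Bill 123", source="congress.gov")
g.add("Candidate A", "voted_against", "Bill 456")
```

Querying `g.neighbors("Candidate A")` then surfaces everything the graph knows about that node, which is the basic operation the entity-recognition metadata would hang off of.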
I prefer the `inspect` package, coupled with `importlib`, to scan every .py module in the root directory and write the discovered functions out to a YAML file in root. That process should run before every commit so the inspection map stays up to date. The map includes each function's name, module, testing status, and docstring.
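A sketch of that pre-commit scan follows. The function and field names are my own, the testing-status field is a placeholder (a real value would come from the test suite), and the YAML emitter is a minimal stdlib stand-in for something like PyYAML:

```python
import importlib.util
import inspect
from pathlib import Path

def build_inspection_map(root: str) -> dict:
    """Scan all .py modules directly under `root` and collect function metadata."""
    entries = {}
    for py_file in Path(root).glob("*.py"):
        spec = importlib.util.spec_from_file_location(py_file.stem, py_file)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, func in inspect.getmembers(module, inspect.isfunction):
            if func.__module__ != module.__name__:
                continue  # skip functions imported from elsewhere
            entries[f"{module.__name__}.{name}"] = {
                "module": py_file.name,
                "docstring": inspect.getdoc(func) or "",
                "tested": False,  # placeholder; real status would come from the test runner
            }
    return entries

def dump_yaml(entries: dict, out_path) -> None:
    """Write the map as minimal hand-rolled YAML (swap in PyYAML if available)."""
    lines = []
    for key, meta in sorted(entries.items()):
        lines.append(f"{key}:")
        for field, value in meta.items():
            lines.append(f"  {field}: {value!r}")
    Path(out_path).write_text("\n".join(lines) + "\n")
```

Hooking `build_inspection_map` plus `dump_yaml` into a pre-commit hook would keep the map current, as described above.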
I will be uploading additional data sources for parsing, modules, templates, a callbacks manager with module-import and remote-execution methods, and some other stuff. This project has, of course, escalated in magnitude, as always. This one, however, I am sharing publicly as we go.