dense-analysis / neural
AI Vim/Neovim code generation plugin (OpenAI, ChatGPT, and more)
License: MIT License
As we use GPT models, we may come across good prompts that reliably elicit the outputs we want, e.g.
Write a PlantUML diagram for X
Summarise X
Act as an expert in X and explain Y
It would be helpful for people to be able to save these and bring them forward via a custom dropdown behaviour or a "completion source". This would be particularly useful for prompt building for #19.
The input prompt from nui doesn't close with vim.api.nvim_command(':q') when using the custom mapping of Ctrl-C to exit. We should probably be calling input.unmount() instead.
It would be useful to add the ability for neural to get the token count for some given input. This would help prevent initiating requests that accidentally go over the maximum token count for some given model source.
This will also be useful in situations where we want to extract the maximum possible response from a model via request_token_num = model_max_token_len - context_tokens_len
This issue will likely share some of the same requirements as #16.
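As a minimal sketch of how such a budget could be computed, assuming a crude 4-characters-per-token estimate in place of a real tokenizer (the model limit table and function names below are illustrative, not existing plugin code):

```python
# Rough token budgeting for a request. The 4-characters-per-token ratio is a
# common rule of thumb for English text, not an exact tokenizer; a real
# implementation would count tokens with the model's own tokenizer.

MODEL_MAX_TOKENS = {
    # Illustrative limits; real values depend on the model version.
    'text-davinci-003': 4097,
    'gpt-3.5-turbo-instruct': 4096,
}

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def max_response_tokens(model: str, context: str) -> int:
    """request_token_num = model_max_token_len - context_tokens_len"""
    budget = MODEL_MAX_TOKENS[model] - estimate_tokens(context)
    if budget <= 0:
        raise ValueError('Prompt is already over the model token limit')
    return budget

print(max_response_tokens('gpt-3.5-turbo-instruct', 'x' * 400))  # → 3996
```

Requests whose context alone exceeds the limit can then be refused up front instead of failing at the API.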
We should use awareness of where the user's cursor lies and the surrounding context to automatically modify prompts to produce more accurate results. I believe this prompt prefixing should be on by default, but it should be possible to disable it, and maybe even have an extra command to temporarily forgo automatic prompt enhancement.
Speaking in a broad sense, say you are editing the following code in Go.
package main
func main() {
// Your cursor lies here!
}
You enter the prompt glob files ending in .csv.
Neural should automatically change that prompt to something like Write code in the Go programming language. Do not write a "package" or a main function. glob files ending in .csv. All of this can be achieved through knowledge of the surrounding text and any semantic information we can get.
As in #16, we can integrate with the Language Server Protocol (LSP) to gain knowledge of the surrounding code. We can also access basic information from Vim, such as &filetype, and the surrounding text in the buffer. Through some combination of all of the available information, we can build up a library of prompt prefixes.
Note that future machine learning tools will likely make it easier to introduce negative prompts, and to specify context, through separate parameters to the prompt itself. When we build this functionality, we should be sure to logically separate what strings are for context, and what the negative prompts are, and then produce a function that builds a single prompt string. That way, when future tools are ready, we'll be able to integrate with them quickly, without having to go back and re-do our code.
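A sketch of what that single prompt-building function might look like; the function name and argument shapes are invented for illustration, keeping context and negative prompts logically separate until the final string is built:

```python
def build_prompt(context_parts, negative_parts, user_prompt):
    """Combine context strings and negative instructions into one prompt.

    The inputs are kept separate so that, when future APIs grow dedicated
    context and negative-prompt parameters, only this function needs to
    change rather than every call site.
    """
    pieces = list(context_parts)
    # Render negative prompts as explicit "Do not ..." instructions for now.
    pieces += ['Do not {}.'.format(neg) for neg in negative_parts]
    pieces.append(user_prompt)
    return ' '.join(pieces)

prompt = build_prompt(
    ['Write code in the Go programming language.'],
    ['write a "package" or a main function'],
    'glob files ending in .csv',
)
# → 'Write code in the Go programming language. Do not write a "package"
#    or a main function. glob files ending in .csv'
```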
We may be able to automatically adjust the tokens requested for a single prompt. Machine learning text generation tools sometimes need to be told exactly how much text you want. There will likely be some common natural language phrases we can recognise, so we can automatically adjust the requested tokens and get better results for the user. This too should be configurable.
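As a sketch, a recogniser for such size hints might look like the following; the phrase patterns and per-phrase token budgets are invented for illustration:

```python
import re

# Hypothetical mapping from natural-language size hints to token budgets.
SIZE_HINTS = [
    (re.compile(r'\b(\d+)\s+paragraphs?\b'), lambda m: int(m.group(1)) * 120),
    (re.compile(r'\bone\s+line\b'), lambda m: 30),
    (re.compile(r'\bshort\b'), lambda m: 60),
]

def adjust_tokens(prompt: str, default: int = 1024) -> int:
    """Return a token budget based on size phrases found in the prompt."""
    for pattern, to_tokens in SIZE_HINTS:
        match = pattern.search(prompt)
        if match:
            return to_tokens(match)
    return default

adjust_tokens('write 3 paragraphs about ducks')  # → 360
adjust_tokens('explain X')                       # → 1024 (default)
```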
Hello there everyone.
Thanks for building this fantastic plugin!
I've been trying to configure this in my neovim (using astrovim) setup, but so far I cannot configure the API key, either via environment variables or straight-up in plain text.
Here are the configs I have in place:
{
    "dense-analysis/neural",
    config = function()
        require("neural").setup({
            open_ai = {
                -- api_key = os.getenv("OPENAI_API_KEY"),
                api_key = "API_KEY_IN_PLAIN_TEXT",
            },
        })
    end,
    requires = {
        "MunifTanjim/nui.nvim",
        "ElPiloto/significant.nvim",
    },
},
How can I debug what's the issue here?
It would be useful to undo an entire completion generation in one go, especially in cases where the generation is large.
Everyone and their mother is writing an OpenAI/ChatGPT or similar plugin. People want to analyse their code with machine learning, but are beginning to go about it all wrong. People have started copying and pasting entire regions of code into machine learning tools, whether manually, or through plugins in editors. This approach is fundamentally flawed for the following reasons.
Instead of simply firing code or text at machine learning tools blindly, Neural will instead take the following approach.
Nothing will ever be able to stop a user manually copying and pasting whole sections of code, but no sane software should automatically or implicitly introduce these risks to unwitting users. Software should lead you in the right direction, not the wrong one. In future, Dense Analysis will be working on and integrating with local FOSS machine learning models, which will offer a lot of power. The future of machine learning is not rate-limited third party providers hosting binary blobs you cannot audit, who share your data with God knows who, but models and tools entirely controlled by you.
Speaking in practical terms, we can very quickly implement this feature pretty easily.
I think this plan can be implemented relatively quickly.
Sometimes we forget specific vim commands.
It would be great to have a :NeuralCommand where you describe in natural language what the command should do; it generates the command and shows a preview so you can inspect it before you run it. Now you will never forget how to quit Vim!
Hi,
When I select a range of lines to ask a question about (such as shown in the example video), upon executing "'<,'>:Neural" I get "E481: No range allowed". Similarly, I can't ask questions specifically about selected code. What am I doing wrong?
Hello!
Thank you for this amazing plugin!
As a lot of people, including me, keep their plugin configuration in git, I would prefer to have a way to avoid writing the key into the configuration options. What about adding a way to retrieve environment variables?
Hi! I wanted to see if adding the Claude API to this would be of interest at all?
We're experiencing errors with certificate verification via default installations of Python on macOS. We either need to attempt to load better versions of Python on macOS by default, such as those installed by brew, or find some means to easily install the correct certificates on macOS so requests to the services are as secure as they should be. Certificate verification has temporarily been disabled until we can apply a proper patch.
We've got CI running for our supported Vim and Neovim versions, and we have 100% test coverage for Python scripts. Now we need to add Vim test coverage. Lua testing should be a separate issue we tackle in future.
Currently, the Neovim Lua code is missing test coverage. We should add some and set it up to run in CI.
Open Assistant is a developing FOSS alternative to ChatGPT, and we should integrate with it. It should make it possible for us to run local models for better privacy. We will be able to share more information with local models for better results than we will see with OpenAI.
I have put the basic files in place for CI testing of all the things, but I haven't had the time to actually get it running yet. This issue is a reminder for me to fix that.
This is my config (in after/plugin); sorry, I cannot show the API key.
neural.setup({
    mappings = {
        prompt = '<Tab><space>',
    },
    open_ai = {
        api_key = "***",
    },
})
The error I get:
Error executing vim.schedule lua callback: Vim:E885: Not possible to change sign AnimationSign_dots_4
stack traceback:
[C]: in function 'sign_place'
...k/packer/start/significant.nvim/lua/significant/init.lua:134: in function 'sign_place_fn'
...k/packer/start/significant.nvim/lua/significant/init.lua:99: in function ''
vim/_editor.lua: in function <vim/_editor.lua:0>
Press ENTER or type command to continue
Although it worked after I pressed q, the result is not completely correct:
print the name in English form of current Vietnamese president # My request.
Nguyen Xuan Phuc # Correct result.
Nguyen Phu Trong # Result I got.
But that is less important than how to hide the error.
This is on Fedora 37, Neovim 0.8 in kitty (I did install the two required plugins).
Thanks for your help, and sorry for my non-native English.
The idea is to have a Neural Buffer, a multi-line generation/completion playground.
This will allow users to write long-form prompts, which will be helpful for larger-sized texts, generate a completion, make amendments to the initial prompt/generated completion and rerun it. This can open up a few applications:
Write a breakdown of the SAML Protocol for a 5-year-old
Hello Ben, Let me explain it in 5 points ... <neural>
We should permit the number of tokens to be something that can be set at will when text is requested, in addition to being configurable for all prompts, so you can request smaller or larger responses in different contexts.
It should be possible to stop Neural entirely at any point. We should make the following a reality.
We could support this through a simple :NeuralStop command, and integrate that well with keybinds. I would like to offer a keybind wrapper for <C-c> that both stops Neural running and applies the default behaviour of <C-c>. Maybe we can violate the golden rule of Vim plugins never hijacking keybinds by default by making it so seamless it Just Works™, with an option to stop Neural doing that.
Hi, today I tested this awesome plugin (great job btw) with my personal configuration, which is customized NvChad settings. I created a separate Lua file for this plugin with the example configuration and tried to change one of the default mappings because it clashes with my keybind for NvimTree, but it does not seem to work.
neural.lua file:
local neural = {
    mappings = {
        swift = '<C-m>', -- Context completion
        prompt = '<C-space>', -- Open prompt
    },
    -- OpenAI settings
    open_ai = {
        temperature = 0.1,
        presence_penalty = 0.5,
        frequency_penalty = 0.5,
        max_tokens = 2048,
        context_lines = 16, -- Surrounding lines for swift completion
        api_key = 'some_api_key_i_wont_plaintext'
    },
    -- Visual settings
    ui = {
        use_prompt = true, -- Use visual floating Input
        use_animated_sign = true, -- Use animated sign mark
        show_hl = true,
        show_icon = true,
        icon = 'π²', -- Prompt/Static sign icon
        icon_color = '#ffe030', -- Sign icon color
        hl_color = '#4D4839', -- Line highlighting on output
        prompt_border_color = '#E5C07B',
    },
}
And my init.lua file in custom/plugins folder:
return {
    ["neovim/nvim-lspconfig"] = {
        config = function()
            require "plugins.configs.lspconfig"
            require "custom.plugins.lspconfig"
        end,
    },
    ["simrat39/rust-tools.nvim"] = {
        after = "nvim-lspconfig",
        config = function()
            require('rust-tools').setup({})
        end,
    },
    ["dense-analysis/neural"] = {
        config = function()
            require "custom.plugins.neural"
        end,
        requires = {
            "MunifTanjim/nui.nvim",
        },
    },
}
I hope someone can help, and if it's my fault I'll figure it out somehow in the next 3 hours.
I'm getting [Neural Error] -> curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1) when I run C-N. Any advice on how to debug this?
I've been using GPT for a bit.
Over time, my workflow has evolved into using precrafted pieces of prompts.
As an example, one case was targeting an environment that lacked some otherwise common standard library functions. Some had replacements available; others could be avoided by using an alternative that was merely worse, which is still the best option when the better thing isn't available.
These would become write ... don't use 'x', use 'y' instead, .... There was also ..., if you need 'x', you can use the variable 'y', ....
One I just made up is this,
write a javascript prototype. include prototypes for assert and log as well. the assert should verify types and parameters. log should print the parameters. this prototype is named
And this would be followed by, for example,
user. it should have an age, always an integer greater than 0, a name, which is nullable or a string, when it is not null, it should be longer than 10 characters.
This creates the output,
It's made up, so it's not really useful; I hope people can see the use case nevertheless.
Once you have a high quality prompt template, it really helps a lot. Even just the boilerplate.
Anyway, in practice, these weren't always the start of a prompt. Sometimes you'd get a better result when your prompt ended with certain parameters; other times a prompt in the middle was useful, though that was rare.
There's a certain art to it, somewhere between being too complicated and too simple; GPT can produce really good results at times.
I'd like to suggest a configuration option that does string replacement, such as defining x and then being able to write x in a prompt to insert your pre-configured prompt text. I wonder what people think of something along these lines?
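A minimal sketch of what such string replacement could look like; the %{name} placeholder syntax, the template table, and the expand helper are all invented for illustration, not an existing Neural option:

```python
import re

# Hypothetical user-configured prompt fragments.
TEMPLATES = {
    'noenv': "don't use 'x', use 'y' instead",
    'types': 'the assert should verify types and parameters',
}

def expand(prompt: str, templates=TEMPLATES) -> str:
    """Replace %{name} placeholders with their pre-configured prompt text."""
    return re.sub(r'%\{(\w+)\}', lambda m: templates[m.group(1)], prompt)

expand('write a parser. %{noenv}.')
# → "write a parser. don't use 'x', use 'y' instead."
```

Using a callable replacement avoids backslash-escape surprises when template text contains regex-special characters.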
I have been working with Vim plugins for over 6 years now, and I still don't understand how to reliably output messages to the status line that can be read later without Vim ever saying "Press Enter to Continue". I would like to be able to echo Neural's messages without this happening. Maybe someone knows a better way.
Is there any way to write up the question or instruction first, and then ask ChatGPT, rather than writing everything in the :Neural ... prompt?
Hi, when using this plugin to ask questions in Danish, the replies from ChatGPT are returned with escaped UTF-8 sequences/codes instead of the actual Danish characters.
Example reply:
Chatbot teknologi er en computerprogram, der kan simulere en samtale med et menneske. Dette ---> g\u00f8res <-----
ved at bruge et ---> s\u00e6t <---- af algoritmer at analysere samtaler og generere svar, der er relevante for den
samtale, der er i gang.
So in this case \u00f8 should have been 'ø' and \u00e6 should have been 'æ'.
Is there a way to get the right characters back into the buffer?
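As a workaround sketch, the literal \uXXXX sequences can be decoded back into the characters they encode; this assumes the escapes arrive verbatim in the buffer (the decode_unicode_escapes helper is invented for illustration):

```python
import re

def decode_unicode_escapes(text: str) -> str:
    """Replace literal \\uXXXX sequences with the characters they encode.

    A targeted substitution avoids mangling backslashes that are not part
    of an escape, which the broad 'unicode_escape' codec would corrupt.
    """
    return re.sub(
        r'\\u([0-9a-fA-F]{4})',
        lambda m: chr(int(m.group(1), 16)),
        text,
    )

decode_unicode_escapes(r'Dette g\u00f8res ved at bruge et s\u00e6t af algoritmer')
# → 'Dette gøres ved at bruge et sæt af algoritmer'
```

The real fix is likely on the request side: decode the API's JSON response as UTF-8 rather than leaving ASCII-escaped strings in the output.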
OpenAI are slowly deprecating text-davinci-003, and they might remove it at any moment. We should consider changing the default model to gpt-3.5-turbo-instruct after testing it and comparing the results.
Looking at the README and source, it doesn't look like there's a config option to select the model you'd like to use - this would be a welcome addition I think.
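A sketch of how a model option might be threaded into the request body; the config shape loosely mirrors the plugin's open_ai settings, but the 'model' key and the build_request_body helper are assumptions for illustration, not existing plugin code:

```python
# Proposed default once text-davinci-003 is retired.
DEFAULT_MODEL = 'gpt-3.5-turbo-instruct'

def build_request_body(prompt: str, config: dict) -> dict:
    """Build a completions request body, honouring a user-set model."""
    return {
        'model': config.get('model', DEFAULT_MODEL),
        'prompt': prompt,
        'max_tokens': config.get('max_tokens', 1024),
        'temperature': config.get('temperature', 0.0),
    }

build_request_body('Summarise X', {'model': 'text-davinci-003'})
# → {'model': 'text-davinci-003', 'prompt': 'Summarise X', ...}
```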
I really respect the privacy first mentality. But to satisfy those who need context, using a descriptive function like :Neural.getcontext "prompt" could be very useful while maintaining your design pattern.
Right now, I was using Neural (successfully) to make some changes in my code:
def my_health(host)
  url = URI.parse("#{host}/health")
  req = Net::HTTP::Post.new(url.path)
  req.body_stream = StringIO.new(query)
  req.content_length = query.bytesize
  req.set_content_type("multipart/form-data", { "boundary" => BOUNDARY })
  Net::HTTP.start(url.host, url.port) {|http|
    begin
      response = http.request(req)
      if response.code != '200'
        raise "Error from server #{response.code}"
      end
      return response.body
    rescue => e
      raise e
    end
  }
end
This code was pasted from somewhere else in my application, and is for a POST request.
I just wanted to ask ChatGPT to change the code in order to do a GET request.
What I did :
Select the content of the function using vim
Copy it to the main register
Run ":Neural Rewrite the following code to do a GET request : <hit Ctrl-R+" to paste the code>"
What I would have liked (to avoid copy-pasting): to ask a question to Neural and provide the context as a visual selection:
:'<,'>Neural 'Please rewrite the code below to use a GET request on the /health route'
This would make life easier; no registers needed.
An early test version of this plugin had an Insert mode command. We should add it back again, with the following additions.
If we set it up this way, it will be very easy to just add some text to a Vim buffer. If we run into issues with entering text into the command line, we could make it so you type characters and press Enter whilst in insert mode, and we replace that text with the results of the prompt. We also have the option of implementing insert mode prompting that way anyway if it turns out to be nice.
There are some additional features we can add that would make the neural buffer easier to use such as:
This is @Angelchev's idea, and he deserves the credit for dreaming it up. For tools like ChatGPT, when the official API is available, we will implement a "chat" buffer where you can type multiple messages to the machine and get responses inside of Vim. We will make it easy to copy the responses from the buffer.
Now that the actual official ChatGPT API has been launched, we can integrate with it. We will need a chat buffer as per #19.
It is possible that a single instance of a generated output is not satisfactory. However, generating from a selection of 3-5 and previewing them might solve this issue, which is particularly helpful for more creative and open-ended prompts.
The OpenAI API exposes the ability to generate multiple output instances in a single request, so it should be possible to do this.
We will need to consider how that looks from a UX perspective, perhaps presenting the choices in a floating window that would allow you to navigate through the various versions and confirm the best one.
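A sketch of how this could work against the documented response shape; only the 'n' request parameter and the {'choices': [...]} response form are taken from the OpenAI API, while the helper names are invented for illustration:

```python
def build_payload(prompt: str, num_choices: int = 3) -> dict:
    """Request several completions in one call via the 'n' parameter."""
    return {'prompt': prompt, 'n': num_choices, 'max_tokens': 256}

def choice_texts(response: dict) -> list:
    """Extract the candidate texts so the UI can preview each choice."""
    return [choice['text'] for choice in response.get('choices', [])]

# Parsed response following the documented {'choices': [...]} shape.
sample = {'choices': [{'text': 'A'}, {'text': 'B'}, {'text': 'C'}]}
choice_texts(sample)  # → ['A', 'B', 'C']
```

The floating-window picker would then iterate over choice_texts() and insert whichever candidate the user confirms.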
I copied this from the internet. Pasting the above into nvim works, but Neural shows codes:
:version
NVIM v0.8.2
Build type: RelWithDebInfo
LuaJIT 2.1.0-beta3
Compilation: /usr/bin/gcc -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -DNVIM_TS_HAS_SET_MATCH_LIMIT -DNVIM_TS_HAS_SET_ALLOCATOR -O2 -g -Og -g -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversion -Wdouble-promotion -Wmissing-noreturn -Wmissing-format-attribute -Wmissing-prototypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fno-common -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -DMIN_LOG_LEVEL=3 -I/builddir/build/BUILD/neovim-0.8.2/redhat-linux-build/cmake.config -I/builddir/build/BUILD/neovim-0.8.2/src -I/usr/include -I/usr/include/luajit-2.1 -I/builddir/build/BUILD/neovim-0.8.2/redhat-linux-build/src/nvim/auto -I/builddir/build/BUILD/neovim-0.8.2/redhat-linux-build/include
Compiled by mockbuild@koji
Features: +acl +iconv +tui
See ":help feature-compile"