dense-analysis / neural

AI Vim/Neovim code generation plugin (OpenAI, ChatGPT, and more)

License: MIT License

Lua 3.21% Dockerfile 1.54% Vim Script 39.77% Shell 25.39% Python 30.08%
ai chatgpt code-generation gpt-3 linux llm machine-learning macos neovim openai vim windows

neural's People

Contributors

angelchev, jiz4oh, loneexile, mertkarayakuplu, w0rp


neural's Issues

Implement Neural Actions (prompt snippets)

As we use GPT models, we may come across good prompts that elicit the outputs we want, e.g.:

  • Write a PlantUML diagram for X
  • Summarise X
  • Act as an expert in X and explain Y
  • Create a commit message for this diff
  • …and so on

It would be helpful to let people save these and recall them via a custom dropdown behaviour or a "completion source". This would be particularly useful for prompt building for #19

Prompt doesn't close correctly with Ctrl-C

The input prompt from nui doesn't close with vim.api.nvim_command(':q') when using the custom mapping of Ctrl-C to exit.

We should probably call input.unmount() instead.

Feat: Add tokens counter

Overview

It would be useful to add the ability for neural to get the token count for some given input. This would help prevent initiating requests that accidentally go over the maximum token count for some given model source.

This will also be useful in situations where we want to extract the maximum possible response from a model via request_token_num = model_max_token_len - context_tokens_len

Implementation

  • The tokenizer should be appropriate for the respective model
  • We should use an open-source (ideally MIT-licensed) tokenizer that we can bundle, so users don't need to install additional dependencies
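The budget arithmetic above could be sketched like this. Note the whitespace "tokenizer" below is only a placeholder so the example is self-contained; a real implementation would bundle a model-appropriate open-source tokenizer as the bullet points describe.

```python
def count_tokens(text: str) -> int:
    """Placeholder tokenizer: a real one must match the target model."""
    return len(text.split())


def max_response_tokens(context: str, model_max_token_len: int) -> int:
    """request_token_num = model_max_token_len - context_tokens_len,
    clamped at zero so we never request a negative token count."""
    context_tokens_len = count_tokens(context)
    return max(0, model_max_token_len - context_tokens_len)
```

With this helper, a request that would exceed the model's limit can be rejected before it is sent, rather than failing at the API.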

Automatic context-aware prompting

This issue will likely share some of the same requirements as #16.

We should use awareness of where the user's cursor lies and the surrounding context to automatically modify prompts to produce more accurate results. I believe this prefixing of the prompt should be automatically on by default, but it should be possible to disable it, and maybe even have an extra command to temporarily forgo automatic prompt enhancement.

The Story

Speaking in a broad sense, say you are editing the following code in Go.

package main

func main() {
    // Your cursor lies here!
}

You enter the prompt glob files ending in .csv. Neural should automatically change that prompt to something like Write code in the Go programming language. Do not write a "package" or a main function. glob files ending in .csv.. All of this can be achieved through knowledge of the surrounding text and any semantic information we can get.

Implementation

As in #16, we can integrate with Language Server Protocol (LSP) to gain knowledge of the surrounding code. We can also access basic information from Vim, such as &filetype, and the surrounding text in the buffer. Through some combination of all of the available information, we can build up a library of prompt prefixes.

Note that future machine learning tools will likely make it easier to introduce negative prompts, and to specify context, through separate parameters to the prompt itself. When we build this functionality, we should be sure to logically separate what strings are for context, and what the negative prompts are, and then produce a function that builds a single prompt string. That way, when future tools are ready, we'll be able to integrate with them quickly, without having to go back and re-do our code.

We may be able to automatically adjust the tokens requested for a single prompt. Machine learning text generation tools sometimes need to be told exactly how much text you want. There will likely be some common natural language phrases we can recognise to automatically adjust the requested tokens and get better results for the user. This too should be configurable.
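The "logically separate, then combine" design described above can be sketched as follows. The function and parameter names are illustrative, not Neural's actual API: context strings and negative prompts are kept apart until a single prompt string is produced at the last step.

```python
def build_prompt(prompt, context_parts=(), negative_parts=()):
    """Combine separate context and negative-prompt strings into one
    prompt string. Keeping the inputs separate means future APIs with
    native negative-prompt parameters can be adopted without rework."""
    prefix = []
    prefix.extend(context_parts)
    prefix.extend(f"Do not {neg}." for neg in negative_parts)
    return " ".join([*prefix, prompt])


# The Go example from above:
result = build_prompt(
    "glob files ending in .csv",
    context_parts=["Write code in the Go programming language."],
    negative_parts=['write a "package" or a main function'],
)
```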

Debug: Incorrect API key provided

Hello there everyone.
Thanks for building this fantastic plugin!
I've been trying to configure this in my Neovim setup (using astrovim). Still, I cannot configure the API key so far, either via environment variables or straight-up in plain text.

Here are the configs I have in place:

		{
			"dense-analysis/neural",
			config = function()
				require("neural").setup({
					open_ai = {
						-- api_key = os.getenv("OPENAI_API_KEY"),
						api_key = "API_KEY_IN_PLAIN_TEXT",
					},
				})
			end,
			requires = {
				"MunifTanjim/nui.nvim",
				"ElPiloto/significant.nvim",
			},
		},

When I hit CTRL+N, I get:
(screenshot of the error omitted)

How can I debug what's the issue here?

Feat: Completion Undo/Redo

It would be useful to undo an entire completion generation in one go, especially in cases where the generation is large.

Implement safe analysis of ranges of code

Everyone and their mother is writing an OpenAI/ChatGPT or similar plugin. People want to analyse their code with machine learning, but are beginning to go about it all wrong. People have started copying and pasting entire regions of code into machine learning tools, whether manually, or through plugins in editors. This approach is fundamentally flawed for the following reasons.

  1. This is a massive security risk. You could very easily leak passwords or other sensitive information to third parties, and you should never trust a third party.
  2. This presents a massive risk of leaking intellectual property. You can be sure managers will ban any plugin from a company that might send information to a third party which should not be shared.
  3. The solution is only good for demos. Machine learning tools need to be prompted carefully to produce reliable results.

Instead of simply firing code or text at machine learning tools blindly, Neural will instead take the following approach.

  1. Analyse code using local tools (and later local models) so information never leaves the host machine.
  2. Produce reliable intermediate representations of code and text that can be pre-processed into safe prompts to send to machine learning tools.
  3. Send safe data to third parties and return the results.

Nothing will ever be able to stop a user manually copying and pasting whole sections of code, but no sane software should automatically or implicitly introduce these risks to unwitting users. Software should lead you in the right direction, not the wrong one. In future, Dense Analysis will be working on and integrating with local FOSS machine learning models, which will offer a lot of power. The future of machine learning is not rate-limited third party providers hosting binary blobs you cannot audit, who share your data with God knows who, but models and tools entirely controlled by you.

Speaking in practical terms, we can implement this feature quickly.

  1. Integrate with existing LSP tooling, such as the Neovim LSP client and ALE.
  2. Pull out semantic information about code.
  3. Automatically remove potentially sensitive information from the semantic data analysed, producing abstract intermediate representations (IR).
  4. Prompt machine learning tools with that IR instead of the wholesale code, yielding similar results to wholesale code copying, without the aforementioned risks.

I think this plan can be implemented relatively quickly.
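As a minimal illustration of step 3, secrets most often hide in string literals, which can be scrubbed before anything leaves the machine. This is only a sketch with made-up names; the real implementation would operate on semantic data from LSP rather than raw regexes, which cannot handle escaped quotes or multi-line strings.

```python
import re


def redact_literals(source: str) -> str:
    """Replace the contents of simple string literals, which are the
    most likely carriers of passwords and other sensitive values."""
    source = re.sub(r'"[^"]*"', '"<REDACTED>"', source)
    source = re.sub(r"'[^']*'", "'<REDACTED>'", source)
    return source
```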

Add feature for :NeuralCommand - Natural language vim commands

Sometimes we forget specific vim commands.

It would be great to have a :NeuralCommand where you describe in natural language what you want a command to do; it generates the command and shows a preview so you can inspect it before you run it. Now you will never forget how to quit vim!

Running neural on range of lines

Hi,

When I select a range of lines to ask a question about (such as shown in the example video), upon executing "'<,'>:Neural" I get: "E481: No range allowed". Similarly, I can't ask questions specifically about selected code. What am I doing wrong?

Better security for API key

Hello !

Thank you for this amazing plugin!
Like a lot of people, I keep my plugin configuration in git, so I would prefer a way to avoid writing the key into the configuration options.

What about adding a way to retrieve it from an environment variable?
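On the backend side, resolving the key from the environment could look like this minimal sketch (helper and variable names are illustrative conventions, not something Neural mandates):

```python
import os


def get_api_key(env_name="OPENAI_API_KEY"):
    """Resolve the API key from an environment variable, raising
    instead of silently sending requests with an empty key."""
    key = os.environ.get(env_name)
    if not key:
        raise RuntimeError(f"{env_name} is not set; refusing to continue")
    return key
```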

Fix certificate verification on macOS

We're experiencing errors with certificate verification via default installations of Python on macOS. We either need to attempt to load better versions of Python on macOS by default, such as those installed by brew, or find some means to easily install the correct certificates on macOS so requests to the services on macOS are as secure as they should be. Certificate verification has temporarily been disabled until we can apply a proper patch.
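A sketch of what the proper patch could look like in the Python backend: build a default SSL context, which verifies certificates against the system trust store, instead of disabling verification. On macOS system Pythons the trust store is sometimes missing, in which case a common workaround (an assumption here, not a confirmed plan) is to pass cafile=certifi.where() from the third-party certifi package.

```python
import ssl
import urllib.request

# A default context enables hostname checking and certificate
# verification; this is what the temporary workaround turned off.
ctx = ssl.create_default_context()


def fetch(url: str) -> bytes:
    """Perform a verified HTTPS request using the context above."""
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()
```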

Add Vader test coverage and document how to develop Neural

We've got CI running for our supported Vim and Neovim versions, and we have 100% test coverage for Python scripts. Now we need to add Vim test coverage. Lua testing should be a separate issue we tackle in future.

To Do

  • Cover OpenAI Vim data source with Vader tests
  • Cover configuration Vim functions with Vader tests
  • Cover job functions with Vader tests
  • Cover main neural functions with Vader tests
  • Document how to develop Neural and run tests, etc.

Tech: Add Lua test coverage

Currently, the Neovim Lua code is missing test coverage. We should add some and set it up to run from the CLI.

Integrate with Open Assistant

https://open-assistant.io/

Open Assistant is a developing FOSS alternative to ChatGPT, and we should integrate with it. It should make it possible for us to run local models for better privacy. We will be able to share more information with local models for better results than we will see with OpenAI.

Get CI working properly

I have put the basic files in place for CI testing of all the things, but I haven't had the time to actually get it running yet. This issue is a reminder for me to fix that.

Error notifications are shown when I enter the prompt

This is my config (in after/plugin); sorry, I cannot show the API key.

neural.setup({
    mappings = {
        prompt = '<Tab><space>',
    },
    open_ai = {
        api_key = "***",
    }
})

The error I get:

Error executing vim.schedule lua callback: Vim:E885: Not possible to change sign AnimationSign_dots_4
stack traceback:
        [C]: in function 'sign_place'
        ...k/packer/start/significant.nvim/lua/significant/init.lua:134: in function 'sign_place_fn'
        ...k/packer/start/significant.nvim/lua/significant/init.lua:99: in function ''
        vim/_editor.lua: in function <vim/_editor.lua:0>
Press ENTER or type command to continue

It works after I press q, but the result is not completely correct:

print the name in English form of current Vietnamese president  # My request.

Nguyen Xuan Phuc  # Correct result.

Nguyen Phu Trong  # Result I got.

But that's less important than how to hide the error.

This is on Fedora 37, Neovim 0.8 in kitty (I did install both required plugins).

Thanks for your help and sorry for my non-native English.

Add a Neural Scratch Buffer

Overview

The idea is to have a Neural Buffer, a multi-line generation/completion playground.

This will allow users to write long-form prompts (helpful for larger texts), generate a completion, amend the initial prompt or the generated completion, and rerun it. This can open up a few applications:

  • Rephrasing large chunks of text
  • Amending the initial input prompt (buffer content itself)
  • Guiding the completion by starting with leading words in the response, e.g.:

Write a breakdown of the SAML Protocol for a 5-year-old

Hello Ben, Let me explain it in 5 points ... <neural>

Add support for stopping Neural

It should be possible to stop Neural entirely at any point. We should make the following a reality.

  1. You should be able to stop all Neural jobs.
  2. We should be able to stop communication between Neural and third parties, such as OpenAI, so stopping jobs in Vim and Neovim should communicate well with Python.
  3. Stopping input should be as seamless as possible.

We could support this through a simple :NeuralStop command, and integrate that well with keybinds. I would like to offer a keybind wrapper for <C-c> that both stops Neural running and applies the default behaviour of <C-c>. Maybe we can violate the golden rule of Vim plugins never hijacking keybinds by default by making it so seamless it Just Works ℒ️, with an option to stop Neural doing that.

Changing autocomplete keybind does not appear to work

Hi, today I tested this awesome plugin (great job btw) with my personal configuration, which is a customized NvChad setup. I created a separate Lua file for this plugin with the example configuration and tried to change the default keybind because it is my keybind for NvimTree, and it does not seem to work.
neural.lua file:

local neural = {
    mappings = {
        swift = '<C-m>', -- Context completion
        prompt = '<C-space>', -- Open prompt
    },
    -- OpenAI settings
    open_ai = {
        temperature = 0.1,
        presence_penalty = 0.5,
        frequency_penalty = 0.5,
        max_tokens = 2048,
        context_lines = 16, -- Surrounding lines for swift completion
        api_key = 'some_api_key_i_wont_plaintext'
    },
    -- Visual settings
    ui = {
        use_prompt = true, -- Use visual floating Input
        use_animated_sign = true, -- Use animated sign mark
        show_hl = true,
        show_icon = true,
        icon = 'πŸ—²', -- Prompt/Static sign icon
        icon_color = '#ffe030', -- Sign icon color
        hl_color = '#4D4839', -- Line highlighting on output
        prompt_border_color = '#E5C07B',
    },
}

And my init.lua file in custom/plugins folder:

return {
  ["neovim/nvim-lspconfig"] = {
    config = function()
      require "plugins.configs.lspconfig"
      require "custom.plugins.lspconfig"
    end,
    },
  ["simrat39/rust-tools.nvim"] = {
    after = "nvim-lspconfig",
    config = function()
        require('rust-tools').setup({})
    end,
    },

  ["dense-analysis/neural"] = {
    config = function()
      require "custom.plugins.neural"
    end,
    requires = {
      "MunifTanjim/nui.nvim",
    },
  },

}

I hope someone can help, and if it's my fault I'll figure it out somehow in the next 3 hours 😆

curl error

I'm getting [Neural Error] -> curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1) when I run C-N. Any advice on how to debug this?

prompts

I've been using GPT for a bit.

Over time, my workflow has evolved into using pre-crafted prompt fragments.

As an example, one case was targeting an environment that didn't have some otherwise common standard library functions available. Some had replacements, and some were things you could avoid by using an alternative that was worse, but still the best option when the better one isn't available.

These would become write ... don't use 'x', use 'y' instead, .....

There was also, ..., if you need 'x', you can use the variable 'y', ....

One I just made up is this,

write a javascript prototype. include prototypes for assert and log as well. the assert should verify types and parameters. log should print the parameters. this prototype is named

And this would be followed by, for example,

user. it should have an age, always an integer greater than 0, a name, which is nullable or a string, when it is not null, it should be longer than 10 characters.

This creates the output,

(example output omitted)

It's made up so it's not really useful, I hope people can see the use case nevertheless.

Once you have a high quality prompt template, it really helps a lot. Even just the boilerplate.

Anyway, in practice, these weren't always the start of a prompt. Sometimes you'd get a better result when your prompt ended with certain parameters; other times a fragment in the middle was useful, though that was a rare case.

There's a certain art to it, somewhere between being too complicated and too simple; GPT can produce really good results at times.

I'd like to suggest a configuration option that does string replacement, such as defining a snippet under some name and then writing that name in a prompt, which would insert your pre-configured text.

I wonder what people think of something along these lines?
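The suggested string-replacement option could work roughly like this sketch. The {{name}} syntax and the snippet table are made up for illustration; any marker syntax would do.

```python
import re

# Hypothetical user-configured prompt fragments.
PROMPT_SNIPPETS = {
    "no_fs": "don't use 'fs', use the provided 'files' helper instead",
}


def expand_snippets(prompt: str) -> str:
    """Replace {{name}} markers with the configured fragment,
    leaving unknown markers untouched."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: PROMPT_SNIPPETS.get(m.group(1), m.group(0)),
        prompt,
    )
```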

Stop "Press Enter to Continue"

I have been working with Vim plugins for over 6 years now, and I still don't understand how to reliably output messages to the status line that can be read later without Vim ever saying "Press Enter to Continue". I would like to be able to echo Neural's messages without this happening. Maybe someone knows a better way.

Non-English characters mangled in replies from ChatGPT

Hi, when using this plugin to ask questions in Danish, the replies from ChatGPT are returned with escaped UTF-8 sequences instead of the actual Danish characters.

Example reply:

Chatbot teknologi er en computerprogram, der kan simulere en samtale med et menneske. Dette --->  g\u00f8res  <-----
ved at bruge et  --->  s\u00e6t   <---- af algoritmer at analysere samtaler og generere svar, der er relevante for den
samtale, der er i gang.

So in this case \u00f8 should have been 'ΓΈ'
and \u00e6 should have been 'Γ¦'.

Is there a way to get the right characters back into the buffer?
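Those \uXXXX sequences are JSON-style escapes that were inserted into the buffer verbatim; decoding the API response as JSON (rather than treating the body as plain text) yields the real characters. As a one-off repair, a sketch like the following works on simple replies (it assumes the reply contains no raw newlines, which are invalid inside a JSON string):

```python
import json


def decode_escapes(reply: str) -> str:
    """Decode JSON-style \\uXXXX escapes left verbatim in a reply by
    wrapping it in quotes and parsing it as a JSON string."""
    return json.loads('"' + reply.replace('"', '\\"') + '"')
```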

Model selection

Looking at the README and source, it doesn't look like there's a config option to select the model you'd like to use. This would be a welcome addition, I think.

Default look and feel isn't all it could be

For some reason, the prompt looks terrible for me (border, no icon, color), but I was unable to find anything in the documentation regarding highlight groups.

(screenshot of the prompt omitted)

I'm using a perfectly working Nerd Font. Would appreciate some info on how to change these features.

Run Neural to ask a specific question about part of a code sample

Right now, I was using Neural (successfully) to make some changes in my code:

def my_health(host)
  url = URI.parse("#{host}/health")

  req = Net::HTTP::Post.new(url.path)
  req.body_stream = StringIO.new(query)
  req.content_length = query.bytesize
  req.set_content_type("multipart/form-data", { "boundary" => BOUNDARY })

  Net::HTTP.start(url.host, url.port) {|http|
    begin
      response = http.request(req)
      if response.code != '200'
        raise "Error from server #{response.code}"
      end
      return response.body
    rescue => e
      raise e
    end
  }
end

This code was pasted from somewhere else in my application, and is for a POST request.

I just wanted to ask Chatgpt to change the code in order to do a GET Request.

What I did:

Select the content of the function using vim

Copy it to the main register

Run ":Neural Rewrite the following code to do a GET request : <hit Ctrl-R+" to paste the code>"

What I would have liked (to avoid copy-pasting):

I would have liked to be able to ask a question to Neural and provide the context as a visual selection:

:'<,'>Neural 'Please rewrite the code below to use a GET request on the /health route'

This would make life easier: doing all of that without any registers.

Add Insert mode command and <Plug> keybind again

An early test version of this plugin had an Insert mode command. We should add it back again, with the following additions.

  1. Running the command should make it easy to enter the prompt without "leaving" Insert mode.
  2. We should modify Neural to insert text where the user's cursor is in Insert mode, no matter where it moves to.
  3. We should detect the user leaving Insert mode and automatically cancel Neural's text input if the user does.

If we set it up this way, it will be very easy to just add some text to a Vim buffer. If we run into issues with entering text into the command line, we could make it so you type characters and press Enter whilst in Insert mode, and we replace that text with the results of the prompt. We also have the option of implementing Insert mode prompting that way anyway if it turns out to be nice.

Feat: Enhance Neural Buffer UX

There are some additional features we can add that would make the Neural buffer easier to use, such as:

  • Show the currently active model for the buffer
  • Change the active model just for the buffer
  • Change settings for the model just for the buffer
  • Surface various details about the Neural buffer:
    • Live token counter for the current model e.g. 100/4096 through virtual text or floating window
    • Show current temperature, Top P, Frequency Penalty, Presence penalty, etc.
  • Add markdown syntax highlighting for buffer

Implement a "chat" buffer

This is @Angelchev's idea, and he deserves the credit for dreaming it up. For tools like ChatGPT, when the official API is available, we will implement a "chat" buffer where you can type multiple messages to the machine and get responses inside of Vim. We will make it easy to copy the responses from the buffer.

Generate multiple outputs from a given prompt

It is possible that a single instance of a generated output is not satisfactory. Generating a selection of 3-5 outputs and previewing them might solve this, which is particularly helpful for more creative and open-ended prompts.

The OpenAI API exposes the ability to generate multiple output instances in a single request, so it should be possible to do this.

We will need to consider how that looks from a UX perspective, perhaps presenting the choices in a floating window that would allow you to navigate through the various versions and confirm the best one.
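Concretely, the OpenAI completions API takes an "n" parameter for the number of candidates. The sketch below only builds a request payload and parses a response shape; model name and prompt are examples, and no request is actually sent.

```python
# Example request body asking for three alternative completions at once.
payload = {
    "model": "text-davinci-003",
    "prompt": "Write a haiku about Vim.",
    "max_tokens": 64,
    "n": 3,  # number of candidate outputs in a single request
}


def pick_choices(response):
    """Extract the candidate texts from a completions response body,
    ready to present in a selection UI."""
    return [choice["text"] for choice in response.get("choices", [])]
```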

High voltage emoji doesn't show up in nvim but I can copy it from somewhere else

I copied this from the internet: ⚑
Pasting the above into nvim works
Neural shows codes:

(screenshot omitted)

:version
NVIM v0.8.2
Build type: RelWithDebInfo
LuaJIT 2.1.0-beta3
Compilation: /usr/bin/gcc -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grec
ord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp
,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-pr
otector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
 -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -DNVIM_TS
_HAS_SET_MATCH_LIMIT -DNVIM_TS_HAS_SET_ALLOCATOR -O2 -g -Og -g -Wall -Wextra -pe
dantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversio
n -Wdouble-promotion -Wmissing-noreturn -Wmissing-format-attribute -Wmissing-pro
totypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fno-common -fdiag
nostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK
_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -DMIN_LOG_LEVEL=3 -I/builddir/build/BUILD
/neovim-0.8.2/redhat-linux-build/cmake.config -I/builddir/build/BUILD/neovim-0.8
.2/src -I/usr/include -I/usr/include/luajit-2.1 -I/builddir/build/BUILD/neovim-0
.8.2/redhat-linux-build/src/nvim/auto -I/builddir/build/BUILD/neovim-0.8.2/redha
t-linux-build/include
Compiled by mockbuild@koji

Features: +acl +iconv +tui
See ":help feature-compile"
