| obj | website | rev |
|---|---|---|
| application | https://continue.dev | 2024-04-02 |
# Continue

Continue is an open-source autopilot for VS Code and JetBrains, and the easiest way to code with any LLM.
Some examples of what you can achieve are:

- Use `cmd/ctrl + I` to generate boilerplate code from natural language
- Use our local tab-autocomplete to get inline suggestions and write boilerplate code quickly
- Highlight code, describe how to refactor it, and have changes streamed into your editor
- Ask high-level questions about your codebase, with Continue automatically finding relevant files
- Quickly generate unit tests for any function or class
- Ask a quick question to get immediate answers without leaving your editor
- Have your current changes reviewed for mistakes that the compiler can't catch
- Type `@` to reference dozens of different sources while communicating with the LLM
Continue lets you do all of this with any LLM, whether open-source, commercial, local, or remote. And we provide numerous points of configuration so that you can customize the extension to fit into your existing workflows.
You can run a model on your local computer using:
- Ollama
- LM Studio
- Llama.cpp
- KoboldCpp (OpenAI compatible server)
- llamafile (OpenAI compatible server)
- LocalAI (OpenAI compatible server)
- Text generation web UI (OpenAI compatible server)
- FastChat (OpenAI compatible server)
- llama-cpp-python (OpenAI compatible server)
- TensorRT-LLM (OpenAI compatible server)
Once you have it running, you will need to configure it in the GUI or manually add it to your `config.json`.
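As a sketch, assuming you are running Ollama, a minimal model entry in `config.json` might look like the following (the model name is illustrative; use whichever model you have pulled locally):

```json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama2"
    }
  ]
}
```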
## Context Providers

Context Providers allow you to type `@` and see a dropdown of content that can all be fed to the LLM as context. Every context provider is a plugin, which means that if you want to reference some source of information that you don't see here, you can request (or build!) a new context provider.
As an example, say you are working on solving a new GitHub Issue. You type `@issue` and select the one you are working on. Continue can now see the issue title and contents. You also know that the issue is related to the files `readme.md` and `helloNested.py`, so you type `@readme` and `@hello` to find and select them. Now these three "Context Items" are displayed inline with the rest of your input.
### Built-in Context Providers

To use any of the built-in context providers, open `~/.continue/config.json` and add it to the `contextProviders` list.
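For instance, a `contextProviders` list enabling a few of the providers described below might look like this (other top-level config keys omitted):

```json
{
  "contextProviders": [
    { "name": "code" },
    { "name": "diff" },
    { "name": "terminal" }
  ]
}
```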
#### Code

Type `@code` to reference specific functions or classes from throughout your project.

```json
{ "name": "code" }
```
#### Git Diff

Type `@diff` to reference all of the changes you've made to your current branch. This is useful if you want to summarize what you've done or ask for a general review of your work before committing.

```json
{ "name": "diff" }
```
#### Terminal

Type `@terminal` to reference the contents of your IDE's terminal.

```json
{ "name": "terminal" }
```
#### Documentation

Type `@docs` to index and retrieve snippets from any documentation site. You can add any site by selecting "Add Docs" in the dropdown, then entering the root URL of the documentation site and a title to remember it by. After the site has been indexed, you can type `@docs`, select your documentation from the dropdown, and Continue will use similarity search to automatically find important sections when answering your question.

```json
{ "name": "docs" }
```
#### Open Files

Type `@open` to reference the contents of all of your open files. Set `onlyPinned` to `true` to only reference pinned files.

```json
{ "name": "open", "params": { "onlyPinned": true } }
```
#### Codebase Retrieval

Type `@codebase` to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval here.

```json
{ "name": "codebase" }
```
#### Folders

Type `@folder` to use the same retrieval mechanism as `@codebase`, but only on a single folder.

```json
{ "name": "folder" }
```
#### Exact Search

Type `@search` to reference the results of codebase search, just like the results you would get from VS Code search. This context provider is powered by ripgrep.

```json
{ "name": "search" }
```
#### File Tree

Type `@tree` to reference the structure of your current workspace. The LLM will be able to see the nested directory structure of your project.

```json
{ "name": "tree" }
```
#### GitHub Issues

Type `@issue` to reference the conversation in a GitHub issue. Make sure to include your own GitHub personal access token to avoid being rate-limited:

```json
{
  "name": "issue",
  "params": {
    "repos": [{ "owner": "continuedev", "repo": "continue" }],
    "githubToken": "ghp_xxx"
  }
}
```
#### GitLab Merge Request

Type `@gitlab-mr` to reference an open MR for this branch on GitLab.

##### Configuration

You will need to create a personal access token with the `read_api` scope, then add the following to your configuration:

```json
{ "name": "gitlab-mr", "params": { "token": "..." }}
```

##### Using Self-Hosted GitLab

You can specify the domain to communicate with by setting the `domain` parameter in your configuration. By default this is set to `gitlab.com`.

```json
{ "name": "gitlab-mr", "params": { "token": "...", "domain": "gitlab.example.com" }}
```
##### Filtering Comments

If you select some code to be edited, you can have the context provider filter out comments for other files. To enable this feature, set `filterComments` to `true`.
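Assuming `filterComments` sits alongside the other `gitlab-mr` parameters (a hypothetical placement, inferred from the other examples on this page), the entry would look like:

```json
{ "name": "gitlab-mr", "params": { "token": "...", "filterComments": true }}
```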
#### Jira Issues

Type `@jira` to reference the conversation in a Jira issue. Make sure to include your own Atlassian API Token:

```json
{
  "name": "jira",
  "params": {
    "domain": "company.atlassian.net",
    "email": "someone@somewhere.com",
    "token": "ATATT..."
  }
}
```
#### Code Outline

Type `@outline` to reference the outline of all currently open files. The outline of a file consists of only the function and class definitions in the file. Supported file extensions are `.js`, `.mjs`, `.go`, `.c`, `.cc`, `.cs`, `.cpp`, `.el`, `.ex`, `.elm`, `.java`, `.ml`, `.php`, `.ql`, `.rb`, `.rs`, and `.ts`.

```json
{ "name": "outline" }
```
#### Code Highlights

Type `@highlights` to reference the 'highlights' from all currently open files. The highlights are computed using the 'repomap' technique from Paul Gauthier's Aider Chat. Supported file extensions are the same as for `@outline` (behind the scenes, we use the corresponding tree-sitter grammars for language parsing).

```json
{ "name": "highlights" }
```
## Slash Commands

Slash commands are shortcuts that can be activated by typing `/` and selecting from the dropdown. For example, the built-in `/edit` slash command lets you stream edits directly into your editor.
### Built-in Slash Commands

To use any of the built-in slash commands, open `~/.continue/config.json` and add it to the `slashCommands` list.
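Putting a few of these together, a `slashCommands` list in `config.json` might look like this (descriptions are free-form):

```json
{
  "slashCommands": [
    { "name": "edit", "description": "Edit highlighted code" },
    { "name": "share", "description": "Download and share this session" },
    { "name": "commit", "description": "Generate a commit message for the current changes" }
  ]
}
```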
#### /edit

Select code with ctrl/cmd + M (VS Code) or ctrl/cmd + J (JetBrains), and then type `/edit`, followed by instructions for the edit. Continue will stream the changes into a side-by-side diff editor.

```json
{ "name": "edit", "description": "Edit highlighted code" }
```
#### /comment

Comment works just like `/edit`, except it will automatically prompt the LLM to comment the code.

```json
{ "name": "comment", "description": "Write comments for the highlighted code" }
```
#### /share

Type `/share` to generate a shareable markdown transcript of your current chat history.

```json
{ "name": "share", "description": "Download and share this session" }
```
#### /cmd

Generate a shell command from natural language and (only in VS Code) automatically paste it into the terminal.

```json
{ "name": "cmd", "description": "Generate a shell command" }
```
#### /commit

Shows the LLM your current git diff and asks it to generate a commit message.

```json
{ "name": "commit", "description": "Generate a commit message for the current changes" }
```
#### /issue

Describe the issue you'd like to generate, and Continue will turn it into a well-formatted title and body, then give you a link to the draft so you can submit it. Make sure to set the URL of the repository you want to generate issues for.

```json
{
  "name": "issue",
  "description": "Generate a link to a drafted GitHub issue",
  "params": { "repositoryUrl": "https://github.com/continuedev/continue" }
}
```
#### /so

The StackOverflow slash command will automatically pull results from StackOverflow to answer your question, quoting links along with its answer.

```json
{ "name": "so", "description": "Reference StackOverflow to answer the question" }
```