obj | repo | rev |
---|---|---|
application | https://github.com/charmbracelet/mods | 2024-03-30 |
mods
LLM-based AI is really good at interpreting the output of commands and returning the results in CLI-friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with OpenAI, Groq, Azure OpenAI, and LocalAI.
Mods works by reading standard input and prefacing it with a prompt supplied in the mods arguments. It sends the input text to an LLM and prints the result, optionally asking the LLM to format the response as Markdown. This gives you a way to "question" the output of a command. Mods also works with standard input alone or with an argument-supplied prompt alone.
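Conceptually, the prompt mods sends is the argument text followed by whatever arrives on standard input. A rough stand-in for that assembly in plain shell (an illustration of the behavior described above, not mods' actual implementation):

```shell
# Stand-in for how the final prompt is assembled:
# the argument prompt comes first, then piped standard input is appended.
prompt='explain this output:'
input=$(echo 'total 42')   # stand-in for the piped command's output
printf '%s\n%s\n' "$prompt" "$input"
```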
Usage
Option | Description |
---|---|
--settings | Mods lets you tune your query with a variety of settings. You can configure Mods with mods --settings or pass the settings as environment variables and flags. |
--dirs | Prints the local directories used by Mods to store its data. Useful if you want to back up your conversations, for example. |
-m, --model, $MODS_MODEL | Mods uses gpt-4 with OpenAI by default, but you can specify any model as long as your account has access to it or you have it installed locally with LocalAI. You can add new models to the settings with mods --settings. You can also specify a model and an API endpoint with -m and -a to use models not in the settings file. |
-M, --ask-model | Ask which model to use with an interactive prompt. |
-t, --title | Set a custom save title for the conversation. |
-C, --continue-last | Continues the previous conversation. |
-c, --continue | Continue from the last response or a given title or SHA1. |
-l, --list | Lists all saved conversations. |
-S, --show-last | Show the previous conversation. |
-s, --show | Show the saved conversation with the given title or SHA1. |
--delete | Deletes the saved conversation with the given title or SHA1. |
--delete-older-than=duration | Delete conversations older than the given duration (e.g. 10d, 3w, 1mo, 1y). If the terminal is interactive, Mods first lists the conversations to be deleted and asks for confirmation. If the terminal is not interactive, or if --quiet is provided, it deletes the conversations without any confirmation. |
-f, --format, $MODS_FORMAT | Ask the LLM to format the response in a given format. You can edit the text passed to the LLM with mods --settings by changing the format-text value. You'll likely want to use this in conjunction with --format-as. |
--format-as, $MODS_FORMAT_AS | When --format is on, instructs the LLM which format you want the output in. This can be customized with mods --settings. |
--role, $MODS_ROLE | You can have customized roles in your settings file, which will be fed to the LLM as system messages in order to change its behavior. The --role flag lets you choose which of these custom roles to use. |
-r, --raw, $MODS_RAW | Print the raw response without syntax highlighting, even when connected to a TTY. |
--max-tokens, $MODS_MAX_TOKENS | Max tokens tells the LLM to respond in fewer than this number of tokens. LLMs are better at longer responses, so values larger than 256 tend to work best. |
--temp, $MODS_TEMP | Sampling temperature is a number between 0.0 and 2.0 that determines how confident the model is in its choices. Higher values make the output more random; lower values make it more deterministic. |
--stop, $MODS_STOP | Up to 4 sequences where the API will stop generating further tokens. |
--topp, $MODS_TOPP | Top-p is an alternative to sampling temperature. It's a number between 0.0 and 1.0; smaller numbers narrow the domain from which the model will create its response. |
--no-limit, $MODS_NO_LIMIT | By default, Mods attempts to size the input to the maximum size allowed by the model. You can potentially squeeze a few more tokens into the input by setting this, but you also risk getting a max-token-exceeded error from the OpenAI API. |
-P, --prompt, $MODS_INCLUDE_PROMPT | Include prompt will preface the response with the entire prompt, both standard input and the prompt supplied by the arguments. |
-p, --prompt-args, $MODS_INCLUDE_PROMPT_ARGS | Include prompt args will include only the prompt supplied by the arguments. This can be useful if your standard input content is long and you just want a summary before the response. |
--max-retries, $MODS_MAX_RETRIES | The maximum number of retries for failed API calls. The retries happen with an exponential backoff. |
--fanciness, $MODS_FANCINESS | Your desired level of fanciness. |
-q, --quiet, $MODS_QUIET | Only output errors to standard error. Hides the spinner and success messages that would go to standard error. |
--reset-settings | Backs up your old settings file and resets everything to the defaults. |
--no-cache, $MODS_NO_CACHE | Disables conversation saving. |
--word-wrap, $MODS_WORD_WRAP | Wrap formatted output at a specific width (default is 80). |
-x, --http-proxy, $MODS_HTTP_PROXY | Use an HTTP proxy to connect to the API endpoints. |
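Many of the flags above have environment-variable equivalents (shown in the table), so defaults can live in a shell profile. A minimal sketch; the values here are illustrative, not recommendations:

```shell
# Illustrative defaults for a shell profile.
export MODS_MODEL='gpt-4'     # -m/--model
export MODS_FORMAT='true'     # -f/--format: ask for formatted (Markdown) output
export MODS_MAX_TOKENS='512'  # --max-tokens
export MODS_TEMP='0.7'        # --temp
```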
Features
Regular usage
By default:

- all messages go to STDERR
- all prompts are saved with the first line of the prompt as the title
- glamour is used by default if STDOUT is a TTY
Basic
The most basic usage is:

```shell
mods 'first 2 primes'
```
Pipe from
You can also pipe to it, in which case STDIN will not be a TTY:

```shell
echo 'as json' | mods 'first 2 primes'
```

In this case, mods should read STDIN and append it to the prompt.
Pipe to
You may also pipe the output to another program, in which case STDOUT will not be a TTY:

```shell
echo 'as json' | mods 'first 2 primes' | jq .
```

In this case, the "Generating" animation will go to STDERR, but the response will be streamed to STDOUT.
Custom title
You can set a custom title:

```shell
mods --title='title' 'first 2 primes'
```
Continue latest
You can continue the latest conversation and save it with a new title using --continue=title:

```shell
mods 'first 2 primes'
mods --continue='primes as json' 'format as json'
```
Untitled continue latest

```shell
mods 'first 2 primes'
mods --continue-last 'format as json'
```
Continue from specific conversation, save with a new title

```shell
mods --title='naturals' 'first 5 natural numbers'
mods --continue='naturals' --title='naturals.json' 'format as json'
```
Conversation branching
You can use --continue and --title to branch out conversations, for instance:

```shell
mods --title='naturals' 'first 5 natural numbers'
mods --continue='naturals' --title='naturals.json' 'format as json'
mods --continue='naturals' --title='naturals.yaml' 'format as yaml'
```

With this you'll end up with 3 conversations: naturals, naturals.json, and naturals.yaml.
List conversations
You can list your previous conversations with:

```shell
mods --list
# or
mods -l
```
Show a previous conversation
You can also show a previous conversation by ID or title, e.g.:

```shell
mods --show='naturals'
mods -s='a2e2'
```

For titles, the match must be exact. For IDs, only the first 4 characters are needed; if that matches multiple conversations, add more characters until it matches a single one again.
Delete a conversation
You can also delete conversations by title or ID, same as --show, just a different flag:

```shell
mods --delete='naturals'
mods --delete='a2e2'
```

Keep in mind that these operations are not reversible.
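The --delete-older-than flag from the options table handles bulk cleanup the same way. For example, to prune everything older than two weeks without a confirmation prompt (a sketch; like --delete, this is not reversible):

```shell
# --quiet skips the interactive confirmation, so this deletes immediately.
mods --delete-older-than=2w --quiet
```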
Examples
Improve Your Code
Piping source code to Mods and giving it an instruction on what to do with it gives you a lot of options for refactoring, enhancing, or debugging code.

```shell
mods -f "what are your thoughts on improving this code?" < main.go | glow
```
Come Up With Product Features
Mods can also come up with entirely new features based on source code (or a README file).
```shell
mods -f "come up with 10 new features for this tool." < main.go | glow
```
Help Write Docs
Mods can quickly give you a first draft for new documentation.
```shell
mods "write a new section to this readme for a feature that sends you a free rabbit if you hit r" < README.md | glow
```
Organize Your Videos
The file system can be an amazing source of input for Mods. If you have music or video files, Mods can parse the output of ls and offer really good editorialization of your content.

```shell
ls ~/vids | mods -f "organize these by decade and summarize each" | glow
```
Make Recommendations
Mods is really good at generating recommendations based on what you have as well, both for similar content but also content in an entirely different media (like getting music recommendations based on movies you have).
```shell
ls ~/vids | mods -f "recommend me 10 shows based on these, make them obscure" | glow
ls ~/vids | mods -f "recommend me 10 albums based on these shows, do not include any soundtrack music or music from the show" | glow
```
Read Your Fortune
It's easy to let your downloads folder grow into a chaotic never-ending pit of files, but with Mods you can use that to your advantage!
```shell
ls ~/Downloads | mods -f "tell my fortune based on these files" | glow
```
Understand APIs
Mods can parse and understand the output of an API call with curl and convert it to something human readable.

```shell
curl "https://api.open-meteo.com/v1/forecast?latitude=29.00&longitude=-90.00&current_weather=true&hourly=temperature_2m,relativehumidity_2m,windspeed_10m" 2>/dev/null | mods -f "summarize this weather data for a human." | glow
```
Read The Comments (so you don't have to)
Just like with APIs, Mods can read through raw HTML and summarize the contents.
```shell
curl "https://news.ycombinator.com/item?id=30048332" 2>/dev/null | mods -f "what are the authors of these comments saying?" | glow
```