commit - a3fa2e888f4f864d7d31fbacd4d5b154ec9805c3
commit + 7a579223d798a09846387b8f947b10603c69e553
blob - 0bc9045f55d1cbdd7e2c8e1c0432a3a965b2d586
blob + 9fae067a5a3b05a4f91134370de26f2662166a6d
--- bin/llama
+++ bin/llama
# llama-3.1-8b-instant
# llama-3.2-3b-preview
# llama-3.2-1b-preview
-model = "llama-3.2-3b-preview"
-big = "llama-3.1-70b-versatile"
+model = "llama-3.1-8b-instant"
+big = "llama-3.3-70b-versatile"
def read_token(name):
    with open(name) as f:
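
The hunk above cuts off at the start of read_token, so for context here is a minimal
sketch of how the token file and the model/big settings touched by this commit are
plausibly wired together. It assumes Groq's OpenAI-compatible chat-completions
endpoint; the flag handling and the body of read_token are illustrative guesses,
not lines taken from bin/llama.

# Hedged sketch, not part of the diff: plausible wiring of the token file and
# the model/big settings changed by this commit. Endpoint and payload follow
# Groq's OpenAI-compatible chat API; argument handling is illustrative only.
import json
import os
import sys
import urllib.request

model = "llama-3.1-8b-instant"
big = "llama-3.3-70b-versatile"

def read_token(name):
    # Assumed completion: return the token with trailing newline stripped.
    with open(name) as f:
        return f.read().strip()

def main():
    token = read_token(os.path.expanduser("~/.config/groq/token"))
    # -b selects the bigger model, as documented in the man page.
    chosen = big if "-b" in sys.argv[1:] else model
    prompt = sys.stdin.read()
    req = urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=json.dumps({
            "model": chosen,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # Write the model's reply to standard output.
    print(reply["choices"][0]["message"]["content"])

if __name__ == "__main__":
    main()

Under those assumptions, a plain invocation prompts the 8B default and the -b flag
switches to the 70B model, which is exactly what the man page change below documents.
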
blob - 9d2f1051da0c09ddd0921b3a3f3dd2751f0c2f5e
blob + 354b1af9e311030085c2c02107b8b5e0a80793e1
--- man/llama.1
+++ man/llama.1
reads a prompt from the standard input
and sends it to a large language model hosted by Groq.
The reply is written to the standard output.
-The default model is Llama 3.2 3B.
+The default model is Llama 3.1 8B.
.Pp
A Groq API token must be written to
.Pa $HOME/.config/groq/token .
The following flags are understood:
.Bl -tag -width Ds
.It Fl b
-Prompt the "bigger" LLama 3.1 70B model.
+Prompt the "bigger" Llama 3.3 70B model.
.Sh EXAMPLE
.Dl echo 'What is LLM slop?' | llama
.Sh EXIT STATUS