commit - 5e118710cbcb512c83aed6d323d00169344d4806
commit + 1ea57a11f46b6fb2b3f450206f6332d331cfead4
blob - 2c9ad45edef4730dc50c7a123d491e1cbd21f0cc
blob + e02805abc467b2efc5e0501d6d0f6e264acc8dc5
--- README
+++ README
The following commands are provided:
+- bbsnip - create a snippet on Bitbucket
- hits - count web traffic
- hlsget - download the contents of a HLS playlist
- jsfmt - format javascript source code
- lemmyverse - find lemmy communities
- llama - prompt a large language model
+- precis - summarise text
- webpaste - create a web paste on webpaste.olowe.co
- ws - web search
- xstream - stream X display over the network
blob - /dev/null
blob + 3ac41db48f0f1929f9d81422b75f1aad3fdd1ca5 (mode 644)
--- /dev/null
+++ man/precis.1
+.Dd
+.Dt PRECIS 1
+.Sh NAME
+.Nm precis
+.Nd summarise text
+.Sh SYNOPSIS
+.Nm
+.Op Ar model
+.Sh DESCRIPTION
+.Nm
+reads text from the standard input
+and prints a short summary using a large language model.
+.Ar model
+is a path to a GGUF model file.
+The default is
+.Pa $HOME/llama-3.2-3b-instruct-q4_k_m.gguf .
+.Sh EXIT STATUS
+.Ex -std
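+.Sh EXAMPLES
+Summarise the
+.Xr ls 1
+manual page using the default model:
+.Dl $ man ls | precis
+The same, naming a model file explicitly
+(the path shown is only illustrative):
+.Dl $ man ls | precis ~/models/llama-3.2-3b-instruct-q4_k_m.gguf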
+.Sh SEE ALSO
+.Xr llama-cli 1 ,
+.Lk https://github.com/ggerganov/llama.cpp llama.cpp