Simple bot to interact with open-source LLMs running in Ollama using Telegram

# telegram-ollama-reply-bot


## Usage

### Docker

```shell
docker run \
  -e OPENAI_API_TOKEN=123 \
  -e OPENAI_API_BASE_URL=http://ollama.localhost:11434/v1 \
  -e TELEGRAM_TOKEN=12345 \
  -e MODEL_TEXT_REQUEST=llama3.1:8b-instruct-q6_K \
  -e MODEL_SUMMARIZE_REQUEST=mistral-nemo:12b-instruct-2407-q4_K_M \
  skobkin/telegram-llm-bot
```
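For a longer-running deployment, the same configuration can be expressed as a Compose file. This is a minimal sketch, assuming the same image and environment variables as the `docker run` command above; the token values are placeholders you must replace with your own:

```yaml
# docker-compose.yml — hypothetical sketch mirroring the `docker run` example
services:
  bot:
    image: skobkin/telegram-llm-bot
    restart: unless-stopped
    environment:
      OPENAI_API_TOKEN: "123"  # placeholder; a local Ollama endpoint does not check it
      OPENAI_API_BASE_URL: "http://ollama.localhost:11434/v1"
      TELEGRAM_TOKEN: "12345"  # placeholder; use your bot token from @BotFather
      MODEL_TEXT_REQUEST: "llama3.1:8b-instruct-q6_K"
      MODEL_SUMMARIZE_REQUEST: "mistral-nemo:12b-instruct-2407-q4_K_M"
```

Start it with `docker compose up -d`; `restart: unless-stopped` keeps the bot running across host reboots.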