Simple bot to interact with open-source LLMs running in Ollama using Telegram.

# Telegram Ollama Bot


## Functionality

- Context-dependent dialogue in chats
- Summarization of articles from a provided link
- Image recognition and description

## Configuration

The bot can be configured using the following environment variables:

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `OPENAI_API_TOKEN` | API token for an OpenAI-compatible API | Yes | - |
| `OPENAI_API_BASE_URL` | Base URL for an OpenAI-compatible API | Yes | - |
| `TELEGRAM_TOKEN` | Telegram Bot API token | Yes | - |
| `MODEL_TEXT_REQUEST` | Model name for text requests | Yes | - |
| `MODEL_SUMMARIZE_REQUEST` | Model name for summarization requests | Yes | - |
| `MODEL_IMAGE_RECOGNITION` | Model name for image recognition | No | - |
| `BOT_HISTORY_LENGTH` | Number of messages to keep in conversation history | No | `150` |
| `LLM_UNCOMPRESSED_HISTORY_LIMIT` | Number of recent chat messages sent verbatim to the LLM; older ones are summarized. Set to `0` to disable summarization | No | `15` |
| `LLM_HISTORY_SUMMARY_THRESHOLD` | Extra messages beyond the limit before summarization triggers again | No | `5` |
| `BOT_PROCESSING_TIMEOUT` | Timeout for processing incoming requests (includes LLM calls). Accepts Go duration strings (e.g. `45s`, `1m30s`) | No | `30s` |
| `SENTRY_DSN` | Sentry DSN for error tracking | No | empty |
| `RESPONSE_LANGUAGE` | Language for bot responses | No | `Russian` |
| `RESPONSE_GENDER` | Gender for bot responses | No | `neutral` |
| `MAX_SUMMARY_LENGTH` | Maximum length of generated summaries | No | `2000` |
| `PROMPT_CHAT` | System prompt for chat interactions | No | See `config.go` |
| `PROMPT_SUMMARIZE` | System prompt for summarization | No | See `config.go` |
| `PROMPT_IMAGE_RECOGNITION` | System prompt for image recognition | No | See `config.go` |
| `BOT_ADMIN_IDS` | Comma-separated list of admin user IDs | No | empty |

### Prompt placeholders

Prompt environment variables support Go's `text/template` placeholders. The following placeholders are available:

- `PROMPT_CHAT`: `{{.Model}}`, `{{.Language}}`, `{{.Gender}}`, `{{.Context}}`
- `PROMPT_SUMMARIZE`: `{{.Language}}`, `{{.MaxLength}}`
- `PROMPT_IMAGE_RECOGNITION`: `{{.Language}}`

`{{.Model}}` is the model name, `{{.Language}}` is the response language, `{{.Gender}}` defines how the bot speaks about itself, `{{.Context}}` is the recent conversation history, and `{{.MaxLength}}` limits summary size.

## Usage

The bot supports the following commands:

| Command | Description | Example |
|---------|-------------|---------|
| `/start` | Start the bot and get a welcome message | `/start` |
| `/help` | Show a help message with available commands | `/help` |
| `/summarize`, `/s` | Summarize text from the provided link | `/summarize https://ex.co/article`, `/s https://ex.co/article concentrate on tech stuff` |
| `/stats` | Show bot statistics (admin only) | `/stats` |
| `/reset` | Reset the current chat history (admin only) | `/reset` |
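As the `/s` example shows, the command takes a link followed by optional free-form instructions. A hedged sketch of how such arguments could be split (the `splitSummarizeArgs` helper is illustrative, not the bot's actual parser):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSummarizeArgs separates the first whitespace-delimited token (the URL)
// from any trailing free-form instructions. Illustrative only.
func splitSummarizeArgs(args string) (url, instructions string) {
	fields := strings.Fields(args)
	if len(fields) == 0 {
		return "", ""
	}
	url = fields[0]
	rest := strings.TrimPrefix(strings.TrimSpace(args), url)
	return url, strings.TrimSpace(rest)
}

func main() {
	url, extra := splitSummarizeArgs("https://ex.co/article concentrate on tech stuff")
	fmt.Println(url)   // https://ex.co/article
	fmt.Println(extra) // concentrate on tech stuff
}
```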

You can also interact with the bot by:

- Mentioning it in a message
- Replying to its messages
- Sending direct messages in a private chat (if enabled)
- Sending images (the bot will describe what it sees in the image)

## Running

### Docker

```shell
docker run \
  -e OPENAI_API_TOKEN=123 \
  -e OPENAI_API_BASE_URL=http://ollama.localhost:11434/v1 \
  -e TELEGRAM_TOKEN=12345 \
  -e MODEL_TEXT_REQUEST=gemma3:27b \
  -e MODEL_SUMMARIZE_REQUEST=gemma3:12b \
  -e MODEL_IMAGE_RECOGNITION=gemma3:12b \
  -e BOT_HISTORY_LENGTH=150 \
  -e LLM_UNCOMPRESSED_HISTORY_LIMIT=15 \
  -e SENTRY_DSN=https://your-sentry-dsn \
  -e BOT_ADMIN_IDS=123456789,987654321 \
  skobkin/telegram-llm-bot
```