What are AI models and the LLM API?
AI models are the language models that power your workflows and agents (e.g. GPT-4, Claude). Flowra connects to providers like OpenAI and Anthropic so you don’t have to manage API keys for each one. Your project has a set of enabled models; when you create or edit a workflow, you choose which model that workflow uses (e.g. a fast model for simple tasks, a stronger one for complex reasoning). The LLM API lets you call the same models directly from your app: send a prompt and get generated text, or send a conversation history and get the next reply. That’s useful for one-off tasks (summarize this text, translate, extract data) or for building your own chat UI that uses Flowra’s models and billing.
Example use cases
- Workflow configuration — List models via the API and show a dropdown in your dashboard so users pick which model runs their workflow.
- One-off generation — “Summarize this article”, “Turn this list into a table”, “Extract key points” — send a prompt and get text back.
- Chat in your app — Build a chat interface; send the conversation history to the chat endpoint and stream or display the reply. Flowra handles the model and usage.
How it works
- List models — GET /ai/models returns the models enabled for your project (id, name, provider). Use these IDs when creating workflows or when calling the LLM endpoints with the model parameter.
- Generate text — POST /ai/llm/generate sends a single prompt (and optional system prompt, temperature, max tokens) and returns plain text. Good for one-shot tasks.
- Chat — POST /ai/llm/chat sends an array of messages (user, assistant, system) and returns the next assistant message. Good for multi-turn conversations.
API endpoints
List AI models
GET /api/v1/ai/models returns AI models you can use in workflows and chat. Only models enabled for your project are returned. Each item includes id, name, provider, and capabilities.
Get model by ID
GET /api/v1/ai/models/{id} returns details for one model (name, provider, capabilities). Use when you need to show model info or validate a model before using it in a workflow.
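Since the model id is interpolated into the path, it should be percent-encoded. A small helper, assuming ids may contain characters like `/` (not confirmed by this page):

```python
from urllib.parse import quote


def model_detail_path(model_id: str) -> str:
    """Build the path for GET /api/v1/ai/models/{id}, percent-encoding the id."""
    return f"/api/v1/ai/models/{quote(model_id, safe='')}"
```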
Generate text
POST /api/v1/ai/llm/generate sends a prompt and returns generated text from the configured LLM. Request body: { "prompt": "Your prompt here" }. Optional fields: model, temperature, maxTokens, systemPrompt. Use for one-off completions or any task that needs AI-generated text. Supports long prompts (see API reference for limits).
Chat with conversation history
POST /api/v1/ai/llm/chat sends a list of messages and returns the next assistant reply. Use for multi-turn conversations. Request body: { "messages": [{ "role": "user", "content": "..." }, { "role": "assistant", "content": "..." }], "model": "optional", "temperature": 0.7, "maxTokens": 4096, "systemPrompt": "optional" }. Each message has role (user, assistant, or system) and content.
Related
- Workflows — Configure a model when creating or updating a workflow.
- API reference: ai/models, ai/llm — Full request/response schemas.