What are AI models and the LLM API?

AI models are the language models that power your workflows and agents (e.g. GPT-4, Claude). Flowra connects to providers like OpenAI and Anthropic so you don’t have to manage API keys for each one. Your project has a set of enabled models; when you create or edit a workflow, you choose which model that workflow uses (e.g. a fast model for simple tasks, a stronger one for complex reasoning).

The LLM API lets you call the same models directly from your app: send a prompt and get generated text, or send a conversation history and get the next reply. That’s useful for one-off tasks (summarize this text, translate, extract data) or for building your own chat UI that uses Flowra’s models and billing.

Example use cases

  • Workflow configuration — List models via the API and show a dropdown in your dashboard so users pick which model runs their workflow.
  • One-off generation — “Summarize this article”, “Turn this list into a table”, “Extract key points” — send a prompt and get text back.
  • Chat in your app — Build a chat interface; send the conversation history to the chat endpoint and stream or display the reply. Flowra handles the model and usage.

How it works

  1. List models: GET /ai/models returns the models enabled for your project (id, name, provider). Use these IDs when creating workflows or when calling the LLM endpoints with the model parameter.
  2. Generate text: POST /ai/llm/generate sends a single prompt (and an optional system prompt, temperature, and max tokens) and returns plain text. Good for one-shot tasks.
  3. Chat: POST /ai/llm/chat sends an array of messages (user, assistant, system) and returns the next assistant message. Good for multi-turn conversations.
Credits and usage are tied to your project; see the Dashboard for usage and limits.
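The three steps chain naturally: a model id from step 1 feeds the request bodies in steps 2 and 3. A minimal Python sketch; the list-response shape below is illustrative only (the real payload may differ):

```python
import json

# Illustrative response from GET /api/v1/ai/models; the real shape may differ.
sample_models = json.loads(
    '[{"id": "model-fast", "name": "Fast", "provider": "openai"},'
    ' {"id": "model-strong", "name": "Strong", "provider": "anthropic"}]'
)

# Step 1: pick a model id from the list.
model_id = sample_models[0]["id"]

# Step 2: use that id in a generate request body.
generate_body = {
    "model": model_id,
    "prompt": "Summarize the following in one sentence: ...",
}

# Step 3: or seed a chat with a message history.
chat_body = {
    "model": model_id,
    "messages": [{"role": "user", "content": "Hello"}],
}
```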

API endpoints

List AI models

GET /api/v1/ai/models returns AI models you can use in workflows and chat. Only models enabled for your project are returned. Each item includes id, name, provider, and capabilities.
curl -X GET "https://flowra.dev/api/v1/ai/models" \
  -H "x-api-key: YOUR_API_KEY"
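For the dashboard-dropdown use case above, the list response can be turned into option pairs. A sketch assuming each item carries id, name, and provider as described (the exact response shape is an assumption):

```python
import json

# Illustrative payload from GET /api/v1/ai/models; real responses may differ.
response_text = (
    '[{"id": "m1", "name": "Fast model", "provider": "openai", "capabilities": ["chat"]},'
    ' {"id": "m2", "name": "Strong model", "provider": "anthropic", "capabilities": ["chat", "generate"]}]'
)

models = json.loads(response_text)

# Build (value, label) pairs for a dropdown in your dashboard.
options = [(m["id"], f'{m["name"]} ({m["provider"]})') for m in models]
```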

Get model by ID

GET /api/v1/ai/models/{id} returns details for one model (name, provider, capabilities). Use when you need to show model info or validate a model before using it in a workflow.
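There is no example above for this endpoint, so here is one in the same spirit as the others; get_model is a hypothetical helper, and the response shape beyond the listed fields is not guaranteed:

```python
import json
from urllib import request

BASE_URL = "https://flowra.dev/api/v1"

def model_url(model_id):
    """URL for one model; model_id comes from GET /ai/models."""
    return f"{BASE_URL}/ai/models/{model_id}"

def get_model(model_id, api_key):
    """Fetch a single model's details (name, provider, capabilities)."""
    req = request.Request(model_url(model_id), headers={"x-api-key": api_key})
    with request.urlopen(req) as resp:
        return json.load(resp)

# get_model("MODEL_ID", "YOUR_API_KEY")  # requires a real key
```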

Generate text

POST /api/v1/ai/llm/generate sends a prompt and returns generated text from the configured LLM. Request body: { "prompt": "Your prompt here" }. Optional fields: model, temperature, maxTokens, systemPrompt. Use for one-off completions or any task that needs AI-generated text. Supports long prompts (see API reference for limits).
curl -X POST "https://flowra.dev/api/v1/ai/llm/generate" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarize the following in one sentence: ..."}'
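The optional fields (model, temperature, maxTokens, systemPrompt) only need to be sent when set. A sketch of a body builder using the field names listed above; build_generate_body is a hypothetical helper, not part of the API:

```python
def build_generate_body(prompt, model=None, temperature=None,
                        max_tokens=None, system_prompt=None):
    """Build the JSON body for POST /api/v1/ai/llm/generate,
    including optional fields only when they are provided."""
    body = {"prompt": prompt}
    if model is not None:
        body["model"] = model
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["maxTokens"] = max_tokens
    if system_prompt is not None:
        body["systemPrompt"] = system_prompt
    return body
```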

Chat with conversation history

POST /api/v1/ai/llm/chat sends a list of messages and returns the next assistant reply. Use for multi-turn conversations. Request body: { "messages": [{ "role": "user", "content": "..." }, { "role": "assistant", "content": "..." }], "model": "optional", "temperature": 0.7, "maxTokens": 4096, "systemPrompt": "optional" }. Each message has role (user, assistant, or system) and content.
curl -X POST "https://flowra.dev/api/v1/ai/llm/chat" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"},{"role":"assistant","content":"Hi there!"},{"role":"user","content":"What can you do?"}]}'
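For multi-turn chat, your app keeps the history and appends each assistant reply before the next call. A minimal sketch; Conversation is a hypothetical client-side helper built around the role/content shape described above:

```python
VALID_ROLES = {"user", "assistant", "system"}

class Conversation:
    """Accumulates the messages array sent to POST /api/v1/ai/llm/chat."""

    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def add(self, role, content):
        if role not in VALID_ROLES:
            raise ValueError(f"unknown role: {role}")
        self.messages.append({"role": role, "content": content})

convo = Conversation(system_prompt="You are concise.")
convo.add("user", "Hello")
# ...send convo.messages to /ai/llm/chat, then append the reply:
convo.add("assistant", "Hi there!")
convo.add("user", "What can you do?")
```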