Moderations

Llama API offers safeguard models based on LlamaGuard that you can use to moderate user input and model output for problematic content. See the moderation & security guide for more details on using this endpoint as part of a safety-conscious application layer.

POST /v1/moderations

Classifies whether the given messages are potentially harmful across several categories.

Request body

Content Type: application/json

model (string, optional)
Identifier of the model to use. Defaults to "Llama-Guard".

messages (array, required; each item one of UserMessage, SystemMessage, ToolResponseMessage, AssistantMessage)
List of messages in the conversation.

Response

HTTP 200: Returns a Moderation object with moderation results.

Content Type: application/json

model (string, required)
The model used to produce the moderation results.

results (array, required)
Moderation results for the given messages.

Moderation request

curl
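A minimal sketch of a request to this endpoint. The base URL (https://api.llama.com) and the LLAMA_API_KEY environment variable are assumptions; substitute the endpoint and key for your own account. The message shape follows the UserMessage type listed in the request body above.

# Hypothetical base URL and API key variable; adjust to your setup.
curl https://api.llama.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLAMA_API_KEY" \
  -d '{
    "model": "Llama-Guard",
    "messages": [
      {
        "role": "user",
        "content": "How do I pick a lock?"
      }
    ]
  }'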

Response

JSON
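A response of roughly the following shape can be expected. Only the top-level model and results fields are documented on this page; the fields shown inside each result object (flagged, flagged_categories) are illustrative assumptions, so check the response from your own requests for the exact schema.

{
  "model": "Llama-Guard",
  "results": [
    {
      "flagged": true,
      "flagged_categories": [
        "non-violent-crimes"
      ]
    }
  ]
}

In a safety-conscious application layer, a flagged result would typically cause the offending message to be rejected or rewritten before it is passed to the chat completion endpoint, as described in the moderation & security guide.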