OWASP Nest uses AI to reduce the friction of understanding new projects and issues. Two types of AI-generated content are surfaced across the platform: project summaries and issue guidance.

Project summaries

Every OWASP project with an active repository automatically receives an AI-generated summary. This summary appears on both the project card in the list view and the project detail page. Summaries are generated by prompting OpenAI with the project's raw description, tags, and repository metadata. The result is a concise, human-readable paragraph that gives contributors a fast understanding of what the project does and who it is for. Summaries are generated when a project is saved and no existing summary is present, and are stored in PostgreSQL alongside the project record.
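
The save-time behavior described above can be sketched in plain Python. All names here (`ensure_summary`, `build_summary_prompt`, the dict fields) are illustrative assumptions, not the actual OWASP Nest implementation; the `generate` callable stands in for the OpenAI call.

```python
def build_summary_prompt(description, tags, repo_metadata):
    """Assemble the project's raw inputs into a single prompt string."""
    return (
        "Summarize this OWASP project for contributors.\n"
        f"Description: {description}\n"
        f"Tags: {', '.join(tags)}\n"
        f"Stars: {repo_metadata.get('stars', 'n/a')}"
    )

def ensure_summary(project, generate):
    """Generate a summary only when none exists yet, mirroring the
    generate-on-save-if-missing behavior. `generate` is the LLM call."""
    if not project.get("summary"):
        prompt = build_summary_prompt(
            project["description"], project["tags"], project["repo"]
        )
        project["summary"] = generate(prompt)
    return project
```

Keying generation off "no existing summary" means a project's summary is produced once and reused on later saves rather than regenerated on every write.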

Issue guidance

Open issues on the Contribute page include AI-generated guidance accessible via the Read More button. This opens a modal showing:
  • A summary of the issue: what problem it describes and what a resolution looks like.
  • A hint: recommended steps to approach the issue, including relevant files, patterns, or techniques to consider.
This guidance is generated by the Nest AI agent and stored with the issue record in Algolia so it can be served quickly without re-running inference on every page load.
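
A minimal sketch of the enriched record that could be stored alongside the issue in Algolia, so the modal can render without re-running inference. The field names below are illustrative assumptions, not the exact Algolia schema used by Nest.

```python
def issue_guidance_record(issue_id, title, summary, hint):
    """Shape of an issue record enriched with AI guidance (hypothetical
    field names). Algolia records are keyed by a string objectID."""
    return {
        "objectID": str(issue_id),
        "title": title,
        "summary": summary,  # what the issue describes and what a fix looks like
        "hint": hint,        # recommended steps, files, patterns to consider
    }
```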

The AI agent

The AI agent is implemented as a LangGraph state machine built on top of LangChain and OpenAI. It uses a retrieval-augmented generation (RAG) approach:
1. Retrieve

The agent queries a vector store (PostgreSQL with pgvector) for context chunks relevant to the input query. Chunks are filtered and ranked using metadata extracted from the query by an LLM call.
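
The retrieval step can be sketched in plain Python; in the real system this filter-then-rank query runs inside PostgreSQL via pgvector, and the metadata filter comes from an LLM call. The function names and chunk shape here are assumptions for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, metadata_filter, k=2):
    """Filter chunks by extracted metadata, then rank the survivors
    by vector similarity and keep the top k."""
    candidates = [c for c in chunks if metadata_filter(c["meta"])]
    candidates.sort(key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return candidates[:k]
```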
2. Generate

The agent sends the query and retrieved context to OpenAI to generate an answer. If prior feedback from an evaluation step is available, it is included in the prompt to guide refinement.
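
A hedged sketch of how the generation prompt might be assembled, with prior evaluator feedback appended when present. The wording and function name are illustrative, not the stored prompt text.

```python
def build_generation_prompt(query, context_chunks, feedback=None):
    """Combine the query and retrieved context into one prompt;
    include prior evaluator feedback, when available, to guide
    refinement (hypothetical prompt wording)."""
    parts = [
        "Answer the question using only the context below.",
        "Context:\n" + "\n---\n".join(context_chunks),
        "Question: " + query,
    ]
    if feedback:
        parts.append("A previous attempt was judged incomplete: " + feedback)
    return "\n\n".join(parts)
```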
3. Evaluate

The generated answer is assessed by a second LLM call (the evaluator) which checks whether the answer is complete and accurate given the context. The evaluator returns a structured JSON response indicating whether the answer is complete or needs refinement.
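
Parsing the evaluator's structured verdict might look like the sketch below. The JSON field names (`complete`, `feedback`) and the fail-closed fallback on malformed output are assumptions, not the documented schema.

```python
import json

def parse_evaluation(raw):
    """Parse the evaluator's structured JSON verdict into
    (complete, feedback). Malformed output is treated as
    'needs refinement' — a defensive assumption."""
    try:
        verdict = json.loads(raw)
        return bool(verdict.get("complete", False)), verdict.get("feedback", "")
    except (json.JSONDecodeError, AttributeError):
        return False, "evaluator returned malformed output"
```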
4. Refine or complete

If the evaluator determines the answer is incomplete, the agent expands its context retrieval and regenerates. This self-correcting loop continues until the answer passes evaluation or the iteration limit is reached.
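
The four steps above compose into a self-correcting loop. This is a minimal sketch with injected callables, not the LangGraph wiring itself; `k` widening is one plausible way to "expand context retrieval", assumed for illustration.

```python
def answer_with_refinement(query, retrieve, generate, evaluate, max_iters=3):
    """Self-correcting RAG loop: retrieve → generate → evaluate,
    widening retrieval and regenerating until the answer passes
    evaluation or the iteration limit is reached."""
    feedback = None
    k = 2  # initial number of context chunks; widened on each retry
    answer = ""
    for _ in range(max_iters):
        context = retrieve(query, k)
        answer = generate(query, context, feedback)
        complete, feedback = evaluate(query, context, answer)
        if complete:
            return answer
        k += 2  # expand context retrieval before regenerating
    return answer  # best effort after hitting the iteration limit
```

The iteration cap bounds cost: each pass of the loop is at least two LLM calls (generate plus evaluate), so an unbounded loop could spend arbitrarily many tokens on one query.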

Agent graph

The LangGraph state machine has the following nodes:
START → retrieve → generate → evaluate
                       ↑          │
                       │ (refine) │
                       └──────────┘
                                  │
                             (complete)
                                  ↓
                                 END

Technology

Component            Library / Service
Agent orchestration  LangGraph (langgraph)
LLM integration      LangChain (langchain, langchain-community)
LLM provider         OpenAI (openai)
Vector store         PostgreSQL + pgvector (pgvector)
Embeddings           OpenAI embeddings API

Configuration

The AI features require an OpenAI API key configured in the backend environment:
DJANGO_OPEN_AI_SECRET_KEY=<your-openai-api-key>
Prompts used by the agent (project summary prompt, evaluator system prompt, metadata extractor prompt) are stored as Prompt model records in the database, making them configurable without code changes.
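
Conceptually, the database-backed prompts behave like a key-value lookup that operators can edit without a deploy. The keys and text below are made up for illustration; the real records live in a Django Prompt model.

```python
# Illustrative stand-in for the Prompt model table (hypothetical keys/text).
PROMPTS = {
    "project_summary": "Summarize this project for new contributors: {text}",
    "evaluator_system": "Judge whether the answer is complete and accurate.",
}

def get_prompt(key, default=""):
    """Fetch prompt text by key, as the database lookup would."""
    return PROMPTS.get(key, default)
```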
AI-generated content is produced automatically and may not always be accurate. Project summaries and issue guidance are intended as aids to help contributors get oriented; always verify against the source material on GitHub before acting on AI suggestions.