Woven Offers
Inject AIARCO ad offers into LLM system prompts for AI-native monetization. The model naturally weaves relevant ads into its response, with no manual rendering needed.
Overview
Woven Offers is the most natural way to monetize AI chatbots and assistants. Instead of displaying ads in a sidebar or banner, offers are embedded as structured data in the LLM's system prompt. The model decides which offers are relevant and includes them naturally in its response as linked recommendations.
Example LLM response with woven ad: "Based on your requirements, I'd recommend the Sony WH-1000XM5 for the best noise cancellation. They offer 30-hour battery life and industry-leading ANC technology. You can shop them here for $348."
How It Works
The woven offers flow has 5 steps:
1. Fetch Offers: Request offers from the API with the user's query and context. Use `limit: 3` or higher to give the LLM multiple options.
2. Build System Prompt Suffix: Serialize the offers into an XML block with weaving instructions. The SDK's `getSystemPromptOffers()` does this automatically.
3. Append to System Prompt: Concatenate the offer XML to your existing system prompt before calling the LLM.
4. Call the LLM: Generate the response as usual. The model will naturally include relevant offers as linked recommendations in its output.
5. Track Impressions: Check which offer `clickUrl`s appeared in the rendered output and report them via the batch impression endpoint.
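Step 5 reduces to a containment check: an offer counts as a woven impression only if its click URL actually appears in the model's final output. A minimal sketch (the helper name is illustrative; `id` and `clickUrl` are the offer fields used throughout this page):

```python
def woven_offer_ids(offers, reply):
    """Return the ids of offers whose click URL appears in the rendered reply.

    A plain substring check also catches markdown links,
    since [text](url) contains the raw URL.
    """
    return [o["id"] for o in offers if o["clickUrl"] in reply]

offers = [
    {"id": "abc-123", "clickUrl": "https://ads-api.aiarco.com/api/v1/clicks/abc-123"},
    {"id": "def-456", "clickUrl": "https://ads-api.aiarco.com/api/v1/clicks/def-456"},
]
reply = "You can [shop them here](https://ads-api.aiarco.com/api/v1/clicks/abc-123) for $348."
print(woven_offer_ids(offers, reply))  # ['abc-123']
```

Only the matched ids are then reported to the batch impression endpoint, so offers the model chose to skip are never counted.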
Using the JavaScript SDK
The SDK provides `getSystemPromptOffers()`, which handles steps 1 and 2 automatically:
```javascript
// Step 1-2: Fetch offers and build system prompt suffix
const offerSuffix = await AIARCO.getSystemPromptOffers({
  query: userMessage,
  context: "product-recommendations",
  limit: 3,
});

// Step 3: Append to your system prompt
const systemPrompt = `You are a helpful shopping assistant.
${offerSuffix}`;

// Step 4: Call the LLM
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: userMessage },
  ],
});
const assistantReply = response.choices[0].message.content;

// Step 5: Track which offers appeared in the response.
// fetchAd returns the offer objects (ids and clickUrls) for matching.
const offersData = await AIARCO.fetchAd({ query: userMessage, limit: 3 });
const appearedIds = offersData.offers
  .filter((offer) => assistantReply.includes(offer.clickUrl))
  .map((offer) => offer.id);
AIARCO.trackWovenImpressions(appearedIds);
```

Using the REST API Directly
If you're not using the JS SDK, build the system prompt suffix yourself from the offer response:
```python
# Python example
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: Fetch offers
resp = requests.post(
    "https://ads-api.aiarco.com/api/v1/offers",
    headers={
        "Content-Type": "application/json",
        "X-AIARCO-API-Key": "aiarco_YOUR_KEY",
    },
    json={"query": user_message, "limit": 3},
)
offers = resp.json().get("offers", [])

# Step 2: Build system prompt XML
offer_xml = "<available_offers>\n"
for offer in offers:
    offer_xml += f'  <offer id="{offer["id"]}">\n'
    offer_xml += f'    <title>{offer["title"]}</title>\n'
    offer_xml += f'    <description>{offer["content"]}</description>\n'
    offer_xml += f'    <cta>{offer["cta"]}</cta>\n'
    offer_xml += f'    <url>{offer["clickUrl"]}</url>\n'
    if "brand" in offer:
        offer_xml += f'    <brand>{offer["brand"]["name"]}</brand>\n'
    if "price" in offer:
        offer_xml += f'    <price>{offer["price"]["amount"]} {offer["price"].get("currency", "")}</price>\n'
    offer_xml += "  </offer>\n"
offer_xml += "</available_offers>\n"
offer_xml += "\nWhen relevant, naturally reference these offers using their <url> as a markdown link."
offer_xml += "\nOnly include offers that genuinely add value to the user's query."

# Step 3: Append to system prompt
system_prompt = f"""You are a helpful assistant.
{offer_xml}"""

# Step 4: Call the LLM (example with OpenAI)
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ],
)
reply = completion.choices[0].message.content

# Step 5: Track impressions for offers whose click URL appeared in the reply
appeared_ids = [o["id"] for o in offers if o["clickUrl"] in reply]
if appeared_ids:
    requests.post(
        "https://ads-api.aiarco.com/api/v1/impressions/batch",
        headers={"X-AIARCO-API-Key": "aiarco_YOUR_KEY"},
        json={"ids": appeared_ids},
    )
```

System Prompt XML Format
The `getSystemPromptOffers()` method generates XML in this format:

```
<available_offers>
  <offer id="abc-123">
    <title>Nike Air Zoom Pegasus 41</title>
    <description>Responsive cushioning for everyday runs.</description>
    <cta>Shop Now</cta>
    <url>https://ads-api.aiarco.com/api/v1/clicks/abc-123</url>
    <brand>Nike</brand>
    <price>129.99 USD</price>
  </offer>
  <offer id="def-456">
    <title>Adidas Ultraboost Light</title>
    <description>Lightweight performance running shoe.</description>
    <cta>Learn More</cta>
    <url>https://ads-api.aiarco.com/api/v1/clicks/def-456</url>
    <brand>Adidas</brand>
    <price>189.99 USD</price>
  </offer>
</available_offers>

When relevant, naturally reference these offers in your response using their <url> as a markdown link.
Only include offers that genuinely add value to the user's query.
```

Best Practices
- Use 3–5 offers: give the LLM enough options to choose the most relevant ones without overwhelming the context window.
- Track accurately: only report impressions for offers whose `clickUrl` actually appeared in the rendered response.
- Include the weaving instructions: the "When relevant, naturally reference..." prompt helps the model understand how to use the offers.
- Don't force inclusion: the weaving instructions tell the model to only include offers that "genuinely add value." This preserves user trust and response quality.
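If you want tracking stricter than a raw substring check, you can extract the markdown link targets the model actually rendered and intersect them with the offer click URLs. A sketch of this approach (the regex and helper are illustrative, not part of the SDK):

```python
import re

# Matches the URL target of a markdown link: [text](url)
MD_LINK = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def rendered_offer_ids(offers, reply):
    """Return ids of offers whose clickUrl was rendered as a markdown link."""
    urls = set(MD_LINK.findall(reply))
    return [o["id"] for o in offers if o["clickUrl"] in urls]

offers = [
    {"id": "abc-123", "clickUrl": "https://ads-api.aiarco.com/api/v1/clicks/abc-123"},
]
reply = "Try the [Sony WH-1000XM5](https://ads-api.aiarco.com/api/v1/clicks/abc-123)."
print(rendered_offer_ids(offers, reply))  # ['abc-123']
```

Unlike a substring check, this won't count an offer whose URL the model merely echoed as plain text without linking it.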
Tip: send user intent signals alongside offer requests to get more relevant ads for your users.
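One caveat for the hand-rolled REST approach: the Python example above interpolates offer fields directly into the prompt XML, so a title or description containing `&`, `<`, or `>` would break the `<available_offers>` block. A minimal sketch of escaping those characters with Python's standard library (the helper name is illustrative):

```python
from xml.sax.saxutils import escape

def offer_element(tag, value, indent="    "):
    """Render one offer field as an XML element, escaping &, <, and >."""
    return f"{indent}<{tag}>{escape(str(value))}</{tag}>\n"

print(offer_element("title", "Tom & Jerry <Limited> Edition"), end="")
# →     <title>Tom &amp; Jerry &lt;Limited&gt; Edition</title>
```

Drop this in wherever the example builds `offer_xml` lines, so the generated block stays well-formed regardless of offer content.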