# AI Adapters
Configure AI providers for conversational guidance.
## Overview
PACE.js supports multiple AI providers through adapters:
- ClaudeAdapter — Anthropic Claude (recommended)
- OpenAIAdapter — OpenAI GPT models
- Custom — Build your own
## ClaudeAdapter

### Basic Setup

```javascript
import { PACE, ClaudeAdapter } from '@semanticintent/pace-pattern'

const adapter = new ClaudeAdapter({
  apiKey: process.env.CLAUDE_API_KEY,
  model: 'claude-3-sonnet-20240229'
})

const pace = new PACE({
  container: '#app',
  products: './products.json',
  aiAdapter: adapter
})
```

### Configuration Options
```javascript
const adapter = new ClaudeAdapter({
  // Required
  apiKey: 'sk-ant-...',

  // Optional
  model: 'claude-3-sonnet-20240229', // or claude-3-opus, claude-3-haiku
  maxTokens: 1024,
  temperature: 1.0,
  systemPrompt: 'Custom system prompt...',

  // Advanced
  stream: true,
  stopSequences: ['</response>'],
  topP: 0.9
})
```

### System Prompt
Customize how Claude behaves:
```javascript
// `products` is your loaded product catalog
const adapter = new ClaudeAdapter({
  apiKey: process.env.CLAUDE_API_KEY,
  systemPrompt: `
You are a helpful guide for MCP Hub, a storefront for MCP servers.

## Your Role
Help users find the right MCP server for their needs.

## PACE Principles
- Proactive: Suggest products without being asked
- Adaptive: Match user's technical level
- Contextual: Remember previous messages
- Efficient: Be concise and actionable

## Available Products
${JSON.stringify(products, null, 2)}

## Guidelines
- Ask clarifying questions if the user's need is vague
- Recommend 2-3 options max
- Explain technical concepts to beginners
- Provide code examples for developers
`
})
```

## OpenAIAdapter
### Basic Setup

```javascript
import { PACE, OpenAIAdapter } from '@semanticintent/pace-pattern'

const adapter = new OpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
})

const pace = new PACE({
  container: '#app',
  products: './products.json',
  aiAdapter: adapter
})
```

### Configuration Options
```javascript
const adapter = new OpenAIAdapter({
  // Required
  apiKey: 'sk-...',

  // Optional
  model: 'gpt-4', // or gpt-4-turbo, gpt-3.5-turbo
  maxTokens: 1024,
  temperature: 0.7,
  systemPrompt: 'Custom system prompt...',

  // Advanced
  stream: true,
  presencePenalty: 0,
  frequencyPenalty: 0
})
```

## Custom Adapter
### Adapter Interface

```typescript
interface AIAdapter {
  sendMessage(message: string, context?: object): Promise<{
    response: string,
    metadata?: object
  }>
}
```

### Example: Local LLM
```javascript
class LocalLLMAdapter {
  async sendMessage(message, context) {
    const response = await fetch('http://localhost:8080/v1/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        messages: [
          { role: 'system', content: 'You are a helpful guide' },
          { role: 'user', content: message }
        ],
        context: context
      })
    })

    const data = await response.json()
    return {
      response: data.message,
      metadata: {
        model: 'llama-3',
        tokens: data.usage.total_tokens
      }
    }
  }
}

const pace = new PACE({
  aiAdapter: new LocalLLMAdapter()
})
```

### Example: Proxy API
```javascript
class ProxyAdapter {
  constructor(apiUrl, apiKey) {
    this.apiUrl = apiUrl
    this.apiKey = apiKey
  }

  async sendMessage(message, context) {
    const response = await fetch(`${this.apiUrl}/chat`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.apiKey}`
      },
      body: JSON.stringify({ message, context })
    })
    return await response.json()
  }
}
```

## Streaming Responses
Enable streaming for real-time responses:
```javascript
const adapter = new ClaudeAdapter({
  apiKey: process.env.CLAUDE_API_KEY,
  stream: true
})

// Listen to stream chunks
pace.on('chat:stream', ({ chunk }) => {
  console.log('Received:', chunk)
})
```

## Context Management
Pass product and conversation context to AI:
```javascript
const pace = new PACE({
  container: '#app',
  products: './products.json',
  aiAdapter: adapter,
  context: {
    // Static context
    storeName: 'MCP Hub',
    storeUrl: 'https://mcp-hub.com',

    // Dynamic context (updated automatically)
    products: products,
    conversationHistory: [],
    selectedProduct: null,

    // Custom context
    userPreferences: {
      expertiseLevel: 'intermediate',
      interests: ['databases', 'apis']
    }
  }
})
```

## Error Handling
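Transient failures such as rate limits can often be absorbed before they reach the UI by wrapping any adapter in a retry layer. A sketch under assumptions: the `RetryAdapter` name is illustrative (not a PACE.js export), and errors are assumed to be thrown as `Error` objects carrying the same `code` field the event handler checks.

```javascript
// Hypothetical retry wrapper — RetryAdapter is not a PACE.js export.
class RetryAdapter {
  constructor(adapter, { retries = 3, baseDelayMs = 500 } = {}) {
    this.adapter = adapter
    this.retries = retries
    this.baseDelayMs = baseDelayMs
  }

  async sendMessage(message, context) {
    let lastError
    for (let attempt = 0; attempt <= this.retries; attempt++) {
      try {
        return await this.adapter.sendMessage(message, context)
      } catch (error) {
        lastError = error
        // Only retry transient failures; rethrow everything else immediately
        if (error.code !== 'rate_limit') throw error
        // Exponential backoff: 500ms, 1s, 2s, ...
        await new Promise(resolve => setTimeout(resolve, this.baseDelayMs * 2 ** attempt))
      }
    }
    throw lastError
  }
}
```

Pass the wrapped adapter as `aiAdapter` as usual; anything the wrapper re-throws can still be handled centrally: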
```javascript
pace.on('chat:error', ({ error }) => {
  if (error.code === 'rate_limit') {
    // Show a user-friendly message
    showNotification('Too many requests. Please wait a moment.')
  } else if (error.code === 'api_key_invalid') {
    // Configuration error
    console.error('Invalid API key')
  } else {
    // Generic error
    showNotification('Sorry, something went wrong. Please try again.')
  }
})
```

## Best Practices
### 1. Environment Variables

Never commit API keys:

```javascript
// ✅ Good
const adapter = new ClaudeAdapter({
  apiKey: process.env.CLAUDE_API_KEY
})

// ❌ Bad
const adapter = new ClaudeAdapter({
  apiKey: 'sk-ant-1234567890'
})
```

### 2. Rate Limiting
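Throttling can also live inside a wrapper adapter, so every call path is covered regardless of where the message originates. A sketch (the `ThrottledAdapter` name and the thrown `rate_limit` code are assumptions, not part of PACE.js):

```javascript
// Hypothetical wrapper — ThrottledAdapter is not a PACE.js export.
class ThrottledAdapter {
  constructor(adapter, minIntervalMs = 1000) {
    this.adapter = adapter
    this.minIntervalMs = minIntervalMs
    this.lastRequest = 0
  }

  async sendMessage(message, context) {
    const now = Date.now()
    if (now - this.lastRequest < this.minIntervalMs) {
      const error = new Error('Please wait before sending another message')
      error.code = 'rate_limit' // same code the chat:error handler checks
      throw error
    }
    this.lastRequest = now
    return this.adapter.sendMessage(message, context)
  }
}
```

A lighter-weight guard at the event level works too: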
```javascript
let lastRequest = 0
const RATE_LIMIT = 1000 // 1 request per second

pace.on('chat:message', async ({ message }) => {
  const now = Date.now()
  if (now - lastRequest < RATE_LIMIT) {
    showNotification('Please wait before sending another message')
    return
  }
  lastRequest = now
})
```

### 3. Caching
Cache common responses:

```javascript
const responseCache = new Map()

class CachedAdapter {
  constructor(adapter) {
    this.adapter = adapter // the underlying adapter to wrap
  }

  async sendMessage(message, context) {
    const cacheKey = `${message}-${JSON.stringify(context)}`
    if (responseCache.has(cacheKey)) {
      return responseCache.get(cacheKey)
    }

    const response = await this.adapter.sendMessage(message, context)
    responseCache.set(cacheKey, response)
    return response
  }
}
```

Power PACE with AI adapters! 🤖