
🧠 Part 3: Add AI Capabilities

✅ Goals

  • Integrate Workers AI for text generation

  • Create a /api/chat endpoint that accepts user input and responds with AI-generated content

  • Learn how to set up AI Gateway

🛠️ Instructions

1. Update wrangler.jsonc to enable Workers AI

Add the ai binding if it's not already there:

"ai": {
  "binding": "AI"
}

This makes c.env.AI available in your Worker.
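If you want type checking on the binding, you can describe its shape yourself. The `AiBinding` interface below is a hand-written sketch of the part of the Workers AI binding used in this tutorial (the official types live in `@cloudflare/workers-types`); the mock is only there to show how `c.env.AI.run()` gets called:

```typescript
// Sketch of an Env type matching the "AI" binding name in wrangler.jsonc.
// AiBinding mirrors just the run() signature used in this tutorial; the
// real type is Ai from @cloudflare/workers-types.
interface AiBinding {
  run(
    model: string,
    inputs: { messages: { role: string; content: string }[] },
  ): Promise<unknown>;
}

interface Env {
  AI: AiBinding;
}

// A mock binding, standing in for the real Workers runtime, shows the call shape:
const mockEnv: Env = {
  AI: { run: async (model, _inputs) => ({ response: `stub for ${model}` }) },
};

export async function demo(): Promise<unknown> {
  return mockEnv.AI.run('@cf/meta/llama-4-scout-17b-16e-instruct', {
    messages: [{ role: 'user', content: 'hi' }],
  });
}
```

With the real runtime, `wrangler types` can generate this `Env` interface for you from wrangler.jsonc.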

2. Add a basic chat endpoint

Update your Hono app to include a simple /api/chat route:

const app = new Hono<{ Bindings: Env }>();

app.post('/api/chat', async (c) => {
  const ai = c.env.AI;
  const { message } = await c.req.json();

  try {
    const response = await ai.run('@cf/meta/llama-4-scout-17b-16e-instruct', {
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: message },
      ],
    });

    return c.json({ message: response });
  } catch (error) {
    console.error('AI Error:', error);
    return c.json({ error: 'Failed to generate response' }, 500);
  }
});
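The route above assumes the body always contains a usable `message`. A small validation helper (not part of the tutorial's code, purely an illustration) lets you reject bad input with a 400 before spending a model call:

```typescript
// Hypothetical helper: validates the parsed JSON body before calling the
// model. Returns the trimmed message, or null if the body is unusable.
export function parseChatBody(body: unknown): string | null {
  if (typeof body !== 'object' || body === null) return null;
  const message = (body as Record<string, unknown>).message;
  if (typeof message !== 'string' || message.trim() === '') return null;
  return message.trim();
}
```

In the route you would then write `const message = parseChatBody(await c.req.json()); if (!message) return c.json({ error: 'message is required' }, 400);`.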

3. Connect your frontend to the /api/chat route

In public/app.js, update your form handler:

chatForm.addEventListener('submit', async (e) => {
  e.preventDefault();
  const message = userInput.value.trim();

  if (!message) return;

  addMessage(message, 'user');
  userInput.value = '';

  const typingEl = addMessage('Assistant is thinking...', 'assistant', true);

  try {
    const res = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message }),
    });

    if (!res.ok) {
      throw new Error(`Request failed with status ${res.status}`);
    }

    const data = await res.json();
    const response = data.message?.response || 'No response received.';
    typingEl.innerHTML = marked.parse(response);
  } catch (error) {
    console.error('Chat error:', error);
    typingEl.innerHTML = 'Failed to get a response. Please try again.';
  }
});

This replaces the typing indicator with the AI's rendered response once the model replies.
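The `data.message?.response` chain works because the Worker returns `{ message: <model output> }`, and Workers AI text models typically put their text in a `response` field. If you want that fallback logic in one place, a small helper like this sketch (hypothetical, not in the tutorial's code) keeps the handler tidy:

```typescript
// Hypothetical helper mirroring the frontend's `data.message?.response`
// fallback: handles a string message, an object with a `response` string,
// or anything else (falls back to a placeholder).
export function extractReply(data: unknown): string {
  const message = (data as { message?: unknown } | null)?.message;
  if (typeof message === 'string') return message;
  const response = (message as { response?: unknown } | null)?.response;
  return typeof response === 'string' ? response : 'No response received.';
}
```

The form handler could then do `typingEl.innerHTML = marked.parse(extractReply(data));`.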

4. Set Up AI Gateway

Go to the Cloudflare Dashboard → AI Gateway and create a new Gateway. Give it a name like cf-gateway.

Then, in your Worker code, pass your Gateway ID via the third (options) argument of run():

const response = await ai.run(
  '@cf/meta/llama-4-scout-17b-16e-instruct',
  {
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: message },
    ],
  },
  {
    gateway: {
      id: 'cf-gateway', // Replace with your Gateway ID
      skipCache: true, // Optional: disables caching
    },
  },
);

🧠 This tells Workers AI to route the request through your Gateway — enabling usage tracking, rate limiting, caching, and model provider control.
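Rather than hardcoding the Gateway ID as above, you could read it from a var. The sketch below assumes a hypothetical `GATEWAY_ID` entry added to the `vars` section of wrangler.jsonc (not part of this tutorial's config), so dev and production can point at different gateways:

```typescript
// Hypothetical: builds the options object for ai.run() from an env var.
// Returns undefined when no gateway is configured, so the request goes
// direct to Workers AI instead of through a gateway.
type GatewayOptions = { gateway: { id: string; skipCache?: boolean } };

export function gatewayOptions(env: { GATEWAY_ID?: string }): GatewayOptions | undefined {
  if (!env.GATEWAY_ID) return undefined;
  return { gateway: { id: env.GATEWAY_ID, skipCache: true } };
}
```

The call site then becomes `await ai.run(model, inputs, gatewayOptions(c.env))`.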