---
meta:
  title: "Troubleshooting"
  parentTitle: "AI Copilots"
  description: "Troubleshoot common issues"
---

AI models are often unreliable and tricky to debug. Here are some common issues
you may encounter, and how to troubleshoot them.

## Common issues

Each issue below includes an explanation and a suggested fix.

### The AI provider is currently overloaded or unavailable

This means the AI provider you’re using is currently unavailable. You can
check your provider’s status by visiting its status page:
[OpenAI](https://status.openai.com/), [Anthropic](https://status.claude.com/),
[Google](https://aistudio.google.com/status). To avoid this issue, you can
[set up fallback models](/docs/guides/how-to-use-fallback-ai-models-in-ai-copilots)
that use other providers.

### The AI provider's rate limit or quota exceeded

This means that you’ve hit rate or token limits defined by your AI provider
(e.g. OpenAI, Anthropic, Google). You can ask your provider to increase your
limits, switch to a model with higher limits, or consider
[setting up fallback models](/docs/guides/how-to-use-fallback-ai-models-in-ai-copilots)
for busy times.
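If you also call providers directly from your own back end, transient rate-limit errors can often be absorbed with retries. Below is a minimal exponential-backoff sketch; the `call` function and the `status === 429` error check are assumptions you should adapt to your provider’s SDK, and this is not a Liveblocks API:

```typescript
// Retry an async call with exponential backoff when it fails with a
// rate-limit error. Hypothetical helper: adapt the error check (HTTP 429
// here) to whatever error shape your provider's SDK actually throws.
async function withBackoff<T>(
  call: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err) {
      const isRateLimit = (err as { status?: number }).status === 429;
      if (!isRateLimit || attempt >= maxRetries) throw err;
      // Wait 500ms, 1s, 2s, ... before the next attempt
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Note that retries only help with transient spikes; if you’re consistently over quota, increasing your limits or adding fallback models is the real fix.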

### For this AI provider, 'additionalProperties: false' is required to be set in the properties [#additionalproperties-false-is-required]

This means that your provider requires the `additionalProperties` field to be
set to `false` in tool `parameters`. Fix this by adding the following line to
your tool definition:

```tsx
<RegisterAiTool
  name="weather-tool"
  tool={defineAiTool()({
    description: "Get the weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
      },
      required: ["location"],
      // +++
      additionalProperties: false,
      // +++
    },
    // ...
  })}
/>
```

### A model is ignoring my back-end knowledge

If an AI model doesn’t seem to be using your
[back-end knowledge](/docs/ready-made-features/ai-copilots/knowledge#Back-end-knowledge)
correctly, there are a few things you can check:

#### Check you’re using the correct copilot ID in all APIs

Check that you’re using the correct copilot ID throughout your application,
specifically in every use of [`AiChat`](/docs/api-reference/liveblocks-react-ui#AiChat) and
[`useSendAiMessage`](/docs/api-reference/liveblocks-react#useSendAiMessage).

```tsx
import { AiChat } from "@liveblocks/react-ui";

function Chat() {
  return (
    <AiChat
      chatId="my-chat-id"
      // +++
      copilotId="co_tUYtNctLAtUIAAIZBc1Zk"
      // +++
    />
  );
}
```

```tsx
import { useSendAiMessage } from "@liveblocks/react";

function SendMessage() {
  const sendAiMessage = useSendAiMessage("my-chat-id", {
    // +++
    copilotId: "co_tUYtNctLAtUIAAIZBc1Zk",
    // +++
  });

  return (
    <button onClick={() => sendAiMessage("What's new?")}>What's new?</button>
  );
}
```

#### Make sure you’re letting AI know when to use knowledge

Make sure you’re defining
[when AI should use knowledge](/docs/ready-made-features/ai-copilots/copilots#When-should-AI-use-knowledge)
in your copilot settings. When defining copilots from the server, this is the
`knowledgePrompt` field. If it doesn’t seem to be working, try making your
prompt more specific and authoritative, and include examples.

```diff title="When should AI use knowledge?"
- When the user asks a question about code.
+ It's required to do this whenever the user mentions React code, e.g. `useOthers`.
```

It’s worth checking if there’s an official prompting guide for your specific
model, detailing how best to talk to it. For example, see the
[official prompting guide for GPT-4.1](https://cookbook.openai.com/examples/gpt4-1_prompting_guide).

### The AI provider response exceeded the maximum token limit

This means the response from the AI provider was cut off because it exceeded a
maximum token limit.

This error usually occurs:

- When the model’s generated output would exceed the `max tokens` you set in the
  dashboard for a copilot.
- When the request would exceed the model or provider’s own token limits (e.g.
  combined prompt and expected completion), causing the provider to return a
  token-limit error.

First, check in your dashboard that the max tokens limit is set correctly.
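If you’re unsure whether long prompts or knowledge entries are pushing you near a limit, a common rule of thumb for English text is roughly 4 characters per token. A quick estimation sketch (the ratio is an assumption and only a heuristic, not an exact count):

```typescript
// Rough token estimate for English text: ~4 characters per token.
// This is only a heuristic; use your provider's tokenizer (e.g. tiktoken
// for OpenAI models) for exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```

Comparing this estimate against your copilot’s max tokens setting can hint at whether oversized prompts are the cause, before you reach for a real tokenizer.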

## Error table

Every error you may encounter is listed in the table below, along with a
suggested fix.

| Error                                                                                        | Info                                                                                                                                                                                              |
| -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| The AI provider is currently overloaded or unavailable                                       | Your provider is down, check its status. [Learn more](#The-AI-provider-is-currently-overloaded-or-unavailable)                                                                                    |
| The AI provider's rate limit or quota exceeded                                               | You’ve breached the rate limit or quota for your model. [Learn more](#The-AI-provider's-rate-limit-or-quota-exceeded)                                                                             |
| A technical error happened while invoking the AI provider.                                   | An internal technical error happened. [Contact support](https://liveblocks.io/contact/support) if the problem persists.                                                                           |
| For this AI provider, 'additionalProperties: false' is required to be set in the properties  | You need to add `additionalProperties: false` to your tool `parameters` object. [Learn more](#additionalproperties-false-is-required)                                                             |
| Error from AI provider: ...                                                                  | Generic error received from the AI Provider. [Contact support](https://liveblocks.io/contact/support) if the problem persists.                                                                    |
| The AI copilot’s model does not exist or you have insufficient access to it                  | Most likely the wrong `copilotId` is set, or the current user doesn’t have permission to view it.                                                                                                 |
| Invalid credentials or insufficient permissions                                              | The API key used in the copilot is invalid, or it does not have the required permissions to use the model defined in the copilot.                                                                 |
| The AI copilot could not make the request because it is too large                            | The request usually exceeds the token limit the model can process.                                                                                                                                |
| The AI provider blocked the request due to a content filter                                  | The request was blocked by the AI provider’s own content filter.                                                                                                                                  |
| The AI provider response exceeded the maximum token limit                                    | The response from the AI provider exceeded the maximum token limit. Check your max tokens limit in the copilot settings. [Learn more](#The-AI-provider-response-exceeded-the-maximum-token-limit) |
| An unexpected database error happened.                                                       | An internal database error happened. [Contact support](https://liveblocks.io/contact/support) if the problem persists.                                                                            |
| Timeout error.                                                                               | The request timed out. [Contact support](https://liveblocks.io/contact/support) if the problem persists.                                                                                          |
| Abort error.                                                                                 | The request was aborted by the user, but the operation could not be aborted cleanly. [Contact support](https://liveblocks.io/contact/support) if the problem persists.                            |
| Aborted by user.                                                                             | The current request was aborted by the user.                                                                                                                                                      |
| Messages deleted by user.                                                                    | A message in the chat no longer exists because it was deleted by a user.                                                                                                                          |
| Chat deleted by user.                                                                        | The chat was deleted by a user, so the request can no longer be processed.                                                                                                                        |

---

For an overview of all available documentation, see [/llms.txt](/llms.txt).
