Organisation Models

The Models page under Admin → Organisation → Settings is where you decide which models from the platform catalogue your organisation can use, and which credentials authenticate calls to them.

Organisation Models page

Find it at Admin → Organisation → Settings → Models.

What you see

A table titled Organisation Models with the description “Enable and manage organisation and workspace model access, then use PebbleRouter to route across these enabled models.”

Each row shows:

| Column | What it shows |
| --- | --- |
| Model Name | The model identifier (e.g. `au.anthropic.claude-opus-4-6-v1`) |
| Provider | The underlying AI provider (AWS Bedrock, OpenAI, Anthropic, etc.) |
| Sharing Level | Organization (whole org), Workspace (one workspace), or Pebble Router only (PebbleRouter can use it but it doesn’t show in user model selectors) |
| Credential | Which credential is used to authenticate calls (e.g. AWS Creds) |
| Usage Count | How many times this model has been called |
| Actions | Edit, remove, or change sharing scope |

Above the table is an Enable Shared Model button (top-right) and a search box.

A blue banner at the top reads:

Models enabled here become available for organisation and workspace sharing. Use PebbleRouter to create routing profiles from these enabled models.

The provisioning chain

Models flow through three layers in PebbleAI:

Platform admin   →   Organisation admin   →   User
   (catalogue)        (this page — share)     (enables for self)
  1. Platform admin maintains the catalogue of all models PebbleAI knows how to call. Most installs ship with hundreds of model definitions covering OpenAI, Anthropic, AWS Bedrock, Google, Mistral, Cohere and others.
  2. You (organisation admin) pick a subset of that catalogue and enable it for your organisation, choosing the sharing scope and the credential to use.
  3. Users see the models you’ve enabled at User Settings → Models and choose which ones to add to their personal PebbleChat model selector.

This three-step chain means models are always intentional — no model gets used without an admin approving it for the org and a user explicitly enabling it for their account.

Sharing levels

The Sharing Level column controls how a model can be used:

Organization

The model is available to everyone in the organisation. Any user can enable it for their personal PebbleChat selector. This is the most common scope.

Workspace

The model is available only inside the specified workspace. Users in other workspaces don’t see it. Useful when one team has paid for or been granted access to a model that the rest of the org shouldn’t use.

Pebble Router only

A special scope that makes the model available to PebbleRouter for routing decisions, but does not show it in user model selectors. Useful when you want intelligent Auto routing to fall back to a model that you don’t want users to be able to pick directly — for example, a fallback model that’s only there for capacity reasons.

Step-by-step: enabling a Bedrock Claude model for the org

  1. Click Enable Shared Model in the top-right
  2. Browse the platform catalogue and find the model you want, e.g. au.anthropic.claude-opus-4-6-v1
  3. Pick Sharing Level: Organization
  4. Pick the Credential that authenticates Bedrock calls (the AWS credential you added in Credentials)
  5. Save
  6. The model now appears in the table with Organization scope
  7. Tell users they can enable it for their personal selector at Settings → Models

Step-by-step: changing sharing scope

  1. Click the model row to open the editor
  2. Change Sharing Level (e.g. from Workspace to Organization)
  3. Save

The change takes effect immediately. Users who previously couldn’t see the model will see it on their next page load.

Step-by-step: removing a model

  1. Click the delete icon on the model row
  2. Confirm

Removing a model breaks any flow, agent, or PebbleChat conversation that was using it. Check usage counts before you delete; to soft-deprecate, switch the model to Pebble Router only scope so users can no longer pick it, then remove it once usage drops to zero.
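
That retirement guidance reduces to a small decision rule: delete only at zero usage, soft-deprecate first otherwise. A sketch of the logic (the function and scope strings are hypothetical, not part of PebbleAI):

```python
def removal_step(sharing_level: str, usage_count: int) -> str:
    """Suggested next action when retiring an enabled model."""
    if usage_count == 0:
        return "remove"                 # safe: nothing is calling it any more
    if sharing_level != "router_only":
        return "switch to router_only"  # soft-deprecate: hide from user selectors
    return "wait for usage to drop"     # still serving traffic via Auto routing

print(removal_step("organization", 1200))  # switch to router_only
print(removal_step("router_only", 45))     # wait for usage to drop
print(removal_step("router_only", 0))      # remove
```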

How models become available in PebbleChat

A model needs all four of these to appear in a user’s PebbleChat model selector:

  1. Defined in the platform catalogue (platform admin’s job)
  2. Enabled here with Organization or Workspace scope
  3. Backed by a working credential in Credentials
  4. Self-enabled by the user at User Settings → Models

If a user complains they can’t see a model they expect, walk through these four checks.
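
The four checks above can be walked mechanically. A diagnostic sketch, assuming a simplified model record whose keys are illustrative (PebbleAI’s real data model may differ):

```python
def missing_checks(model: dict) -> list[str]:
    """Return which of the four visibility conditions a model fails."""
    checks = [
        ("defined in the platform catalogue", model.get("in_catalogue", False)),
        ("enabled with Organization or Workspace scope",
         model.get("scope") in ("organization", "workspace")),
        ("backed by a working credential", model.get("credential_ok", False)),
        ("self-enabled by the user", model.get("user_enabled", False)),
    ]
    return [name for name, ok in checks if not ok]

# A model enabled as router_only that the user never self-enabled:
m = {"in_catalogue": True, "scope": "router_only",
     "credential_ok": True, "user_enabled": False}
print(missing_checks(m))
```

An empty list means the model should appear in the user’s selector; anything returned is the support answer.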

Routing models with PebbleRouter

The Models page is a flat catalogue. The next layer up is PebbleRouter, which lets you compose routing profiles from these enabled models. A routing profile defines:

  • Which subset of enabled models the profile uses
  • The routing strategy (load balance, failover, cost optimise)
  • Per-model weights and thresholds

When a user picks Auto in their PebbleChat model selector, PebbleRouter consults the active routing profile and picks the best model for each message. So even models with Pebble Router only scope can serve user traffic — just not directly via the model selector.
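
One way to picture a routing profile is as weights over enabled models plus a failover path. The following is a minimal sketch, assuming a weighted load-balance strategy where a zero-weight entry acts as a capacity fallback; the profile shape and model names are invented for illustration, not PebbleRouter’s actual format:

```python
import random

# Hypothetical routing profile: model name -> weight (0 = failover only).
profile = {
    "models": {"fast-haiku": 3, "smart-opus": 1, "capacity-fallback": 0},
    "strategy": "load_balance",
}

def pick_model(profile: dict, healthy: set[str], rng: random.Random) -> str:
    """Weighted pick among healthy weighted models; otherwise fall back to
    any healthy zero-weight model (e.g. a router_only capacity fallback)."""
    weighted = {m: w for m, w in profile["models"].items()
                if w > 0 and m in healthy}
    if weighted:
        names, weights = zip(*weighted.items())
        return rng.choices(names, weights=weights, k=1)[0]
    spares = [m for m in profile["models"] if m in healthy]
    if not spares:
        raise RuntimeError("no healthy model in routing profile")
    return spares[0]

rng = random.Random(0)
print(pick_model(profile, {"fast-haiku", "smart-opus"}, rng))
print(pick_model(profile, {"capacity-fallback"}, rng))  # capacity-fallback
```

The second call shows the failover path: with every weighted model unhealthy, traffic still lands on the hidden fallback, which is exactly how a Pebble Router only model serves users without appearing in their selector.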

See PebbleRouter for the deep-dive.

Models vs PebbleFlows Models

Don’t confuse this page with PebbleFlows Models:

| | Models (this page) | PebbleFlows Models |
| --- | --- | --- |
| Purpose | Enable models from the platform catalogue for your org | Edit the JSON definitions of models in the PebbleFlows visual builder |
| Format | Form-based table | JSON editor |
| Audience | Most admins | Power admins managing the underlying catalogue |
| Where you’d touch it | Daily / weekly | Rarely |

For most administration, this is the page you use. PebbleFlows Models is for the rare case where you need to edit raw model definitions.

Tips

  • Enable a small set first. Start with a handful of models — a Fast Haiku, a Smart Opus, maybe a long-context fallback — and grow from there. Hundreds of models in the selector is a confusing UX.
  • Set sharing scope deliberately. Default to Organization for general-purpose models. Use Workspace for team-specific models and Pebble Router only for fallback / cost-optimisation models you don’t want users to pick directly.
  • Watch usage. The Usage Count column is your guide to which models are actually pulling weight. Cull the unused ones.
  • Always tie a model to a working credential. A model with a stale credential is worse than no model — users hit confusing errors. When you rotate credentials, check every model that uses them.