konnect 2.4.1 published on Thursday, Mar 13, 2025 by kong
konnect.getGatewayPluginAiProxy
Using getGatewayPluginAiProxy
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getGatewayPluginAiProxy(args: GetGatewayPluginAiProxyArgs, opts?: InvokeOptions): Promise<GetGatewayPluginAiProxyResult>
function getGatewayPluginAiProxyOutput(args: GetGatewayPluginAiProxyOutputArgs, opts?: InvokeOptions): Output<GetGatewayPluginAiProxyResult>
def get_gateway_plugin_ai_proxy(control_plane_id: Optional[str] = None,
                                opts: Optional[InvokeOptions] = None) -> GetGatewayPluginAiProxyResult
def get_gateway_plugin_ai_proxy_output(control_plane_id: Optional[pulumi.Input[str]] = None,
                                       opts: Optional[InvokeOptions] = None) -> Output[GetGatewayPluginAiProxyResult]
func LookupGatewayPluginAiProxy(ctx *Context, args *LookupGatewayPluginAiProxyArgs, opts ...InvokeOption) (*LookupGatewayPluginAiProxyResult, error)
func LookupGatewayPluginAiProxyOutput(ctx *Context, args *LookupGatewayPluginAiProxyOutputArgs, opts ...InvokeOption) LookupGatewayPluginAiProxyResultOutput
> Note: This function is named LookupGatewayPluginAiProxy in the Go SDK.
public static class GetGatewayPluginAiProxy
{
public static Task<GetGatewayPluginAiProxyResult> InvokeAsync(GetGatewayPluginAiProxyArgs args, InvokeOptions? opts = null)
public static Output<GetGatewayPluginAiProxyResult> Invoke(GetGatewayPluginAiProxyInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetGatewayPluginAiProxyResult> getGatewayPluginAiProxy(GetGatewayPluginAiProxyArgs args, InvokeOptions options)
public static Output<GetGatewayPluginAiProxyResult> getGatewayPluginAiProxy(GetGatewayPluginAiProxyArgs args, InvokeOptions options)
fn::invoke:
  function: konnect:index/getGatewayPluginAiProxy:getGatewayPluginAiProxy
  arguments:
    # arguments dictionary
The following arguments are supported:

- controlPlaneId (string) - The ID of the control plane. Naming follows each SDK's convention: ControlPlaneId in C# and Go, control_plane_id in Python, controlPlaneId in TypeScript, Java, and YAML.
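For example, a minimal TypeScript sketch of both invocation forms. The @pulumi/konnect import path and the placeholder control-plane ID are assumptions to adapt to your project:

import * as pulumi from "@pulumi/pulumi";
import * as konnect from "@pulumi/konnect"; // assumed package name; adjust to your SDK

// Direct form: plain arguments, Promise-wrapped result.
const pluginPromise = konnect.getGatewayPluginAiProxy({
    controlPlaneId: "00000000-0000-0000-0000-000000000000", // placeholder control-plane ID
});

// Output form: Input-wrapped arguments, Output-wrapped result; composes
// with values produced by other resources.
const plugin = konnect.getGatewayPluginAiProxyOutput({
    controlPlaneId: "00000000-0000-0000-0000-000000000000", // placeholder control-plane ID
});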
getGatewayPluginAiProxy Result
The following output properties are available:
Property names and types are shown once in canonical form; each SDK maps them to its own conventions (for example, createdAt is a double in C#, float64 in Go, number in TypeScript, float in Python, and Number in YAML, and names become ControlPlaneId, control_plane_id, and so on).

- config (GetGatewayPluginAiProxyConfig)
- consumer (GetGatewayPluginAiProxyConsumer)
- consumerGroup (GetGatewayPluginAiProxyConsumerGroup)
- controlPlaneId (string)
- createdAt (number)
- enabled (boolean)
- id (string)
- instanceName (string)
- ordering (GetGatewayPluginAiProxyOrdering)
- protocols (list of string)
- route (GetGatewayPluginAiProxyRoute)
- service (GetGatewayPluginAiProxyService)
- tags (list of string)
- updatedAt (number)
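Continuing the sketch above (same assumed package name), output-form properties arrive Output-wrapped and compose with pulumi.all and apply:

// Export a few of the output properties listed above.
export const pluginId = plugin.id;
export const pluginEnabled = plugin.enabled;

// Combine several properties into one derived value.
export const pluginSummary = pulumi
    .all([plugin.id, plugin.enabled, plugin.protocols])
    .apply(([id, enabled, protocols]) =>
        `plugin ${id}: enabled=${enabled}, protocols=[${protocols.join(", ")}]`);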
Supporting Types
GetGatewayPluginAiProxyConfig
- auth (GetGatewayPluginAiProxyConfigAuth)
- logging (GetGatewayPluginAiProxyConfigLogging)
- maxRequestBodySize (number) - Maximum request body size allowed to be introspected.
- model (GetGatewayPluginAiProxyConfigModel)
- modelNameHeader (boolean) - Display the model name selected in the X-Kong-LLM-Model response header.
- responseStreaming (string) - Whether to 'optionally allow', 'deny', or 'always' (force) the streaming of answers via server-sent events.
- routeType (string) - The model's operation implementation for this provider. Set to 'preserve' to pass through without transformation.
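As a quick sketch (continuing the example above), nested config fields are read through apply:

// Drill into the nested config block of the data-source result.
export const streamingMode = plugin.config.apply(c => c.responseStreaming);
export const routeType = plugin.config.apply(c => c.routeType);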
GetGatewayPluginAiProxyConfigAuth
- allowOverride (boolean) - If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
- awsAccessKeyId (string) - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
- awsSecretAccessKey (string) - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
- azureClientId (string) - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
- azureClientSecret (string) - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
- azureTenantId (string) - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
- azureUseManagedIdentity (boolean) - Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
- gcpServiceAccountJson (string) - Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from the GCP_SERVICE_ACCOUNT environment variable.
- gcpUseServiceAccount (boolean) - Use service account auth for GCP-based providers and models.
- headerName (string) - If the AI model requires authentication via an Authorization or API key header, specify its name here.
- headerValue (string) - Specify the full auth header value for header_name, for example 'Bearer key' or just 'key'.
- paramLocation (string) - Specify whether the param_name and param_value options go in a query string, or the POST form/JSON body.
- paramName (string) - If the AI model requires authentication via a query parameter, specify its name here.
- paramValue (string) - Specify the full parameter value for param_name.
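For instance, a hedged sketch of detecting which auth style is configured (whether unset fields come back as undefined or empty depends on the SDK):

// Classify the auth sub-block: header-based vs. query/body-parameter auth.
export const authStyle = plugin.config.apply(c =>
    c.auth?.headerName ? "header" :
    c.auth?.paramName ? "parameter" :
    "none");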
GetGatewayPluginAiProxyConfigLogging
- logPayloads (boolean) - If enabled, will log the request and response body into the Kong log plugin(s) output.
- logStatistics (boolean) - If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
GetGatewayPluginAiProxyConfigModel
- name (string) - Model name to execute.
- options (GetGatewayPluginAiProxyConfigModelOptions) - Key/value settings for the model.
- provider (string) - AI provider request format; Kong translates requests to and from the specified backend-compatible formats.
GetGatewayPluginAiProxyConfigModelOptions
- anthropicVersion (string) - Defines the schema/API version, if using the Anthropic provider.
- azureApiVersion (string) - 'api-version' for Azure OpenAI instances.
- azureDeploymentId (string) - Deployment ID for Azure OpenAI instances.
- azureInstance (string) - Instance name for Azure OpenAI hosted models.
- bedrock (GetGatewayPluginAiProxyConfigModelOptionsBedrock)
- gemini (GetGatewayPluginAiProxyConfigModelOptionsGemini)
- huggingface (GetGatewayPluginAiProxyConfigModelOptionsHuggingface)
- inputCost (number) - Defines the cost per 1M tokens in your prompt.
- llama2Format (string) - If using the llama2 provider, select the upstream message format.
- maxTokens (number) - Defines the max_tokens, if using chat or completion models.
- mistralFormat (string) - If using the mistral provider, select the upstream message format.
- outputCost (number) - Defines the cost per 1M tokens in the output of the AI.
- temperature (number) - Defines the matching temperature, if using chat or completion models.
- topK (number) - Defines the top-k most likely tokens, if supported.
- topP (number) - Defines the top-p probability mass, if supported.
- upstreamPath (string) - Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
- upstreamUrl (string) - Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models or running via a private endpoint.
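A sketch of reading a provider-specific option; the bedrock, gemini, and huggingface sub-blocks only carry values for the matching provider, so guard for absent fields:

// Read a Bedrock-specific option, falling back when it is absent.
export const bedrockRegion = plugin.config.apply(
    c => c.model?.options?.bedrock?.awsRegion ?? "(not set)");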
GetGatewayPluginAiProxyConfigModelOptionsBedrock
- awsRegion (string) - If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.
GetGatewayPluginAiProxyConfigModelOptionsGemini
- apiEndpoint (string) - If running Gemini on Vertex, specify the regional API endpoint (hostname only).
- locationId (string) - If running Gemini on Vertex, specify the location ID.
- projectId (string) - If running Gemini on Vertex, specify the project ID.
GetGatewayPluginAiProxyConfigModelOptionsHuggingface
- useCache (boolean) - Use the cache layer on the inference API.
- waitForModel (boolean) - Wait for the model if it is not ready.
GetGatewayPluginAiProxyConsumer
- id (string)
GetGatewayPluginAiProxyConsumerGroup
- id (string)
GetGatewayPluginAiProxyOrdering

- after (GetGatewayPluginAiProxyOrderingAfter)
- before (GetGatewayPluginAiProxyOrderingBefore)
GetGatewayPluginAiProxyOrderingAfter
- accesses (list of string)
GetGatewayPluginAiProxyOrderingBefore
- accesses (list of string)
GetGatewayPluginAiProxyRoute
- id (string)
GetGatewayPluginAiProxyService
- id (string)
Package Details
- Repository: konnect kong/terraform-provider-konnect
- License
- Notes: This Pulumi package is based on the konnect Terraform Provider.