konnect.getGatewayPluginAiProxy

konnect 2.4.1 published on Thursday, Mar 13, 2025 by kong
    Using getGatewayPluginAiProxy

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getGatewayPluginAiProxy(args: GetGatewayPluginAiProxyArgs, opts?: InvokeOptions): Promise<GetGatewayPluginAiProxyResult>
    function getGatewayPluginAiProxyOutput(args: GetGatewayPluginAiProxyOutputArgs, opts?: InvokeOptions): Output<GetGatewayPluginAiProxyResult>
    def get_gateway_plugin_ai_proxy(control_plane_id: Optional[str] = None,
                                    opts: Optional[InvokeOptions] = None) -> GetGatewayPluginAiProxyResult
    def get_gateway_plugin_ai_proxy_output(control_plane_id: Optional[pulumi.Input[str]] = None,
                                    opts: Optional[InvokeOptions] = None) -> Output[GetGatewayPluginAiProxyResult]
    func LookupGatewayPluginAiProxy(ctx *Context, args *LookupGatewayPluginAiProxyArgs, opts ...InvokeOption) (*LookupGatewayPluginAiProxyResult, error)
    func LookupGatewayPluginAiProxyOutput(ctx *Context, args *LookupGatewayPluginAiProxyOutputArgs, opts ...InvokeOption) LookupGatewayPluginAiProxyResultOutput

    > Note: This function is named LookupGatewayPluginAiProxy in the Go SDK.

    public static class GetGatewayPluginAiProxy 
    {
        public static Task<GetGatewayPluginAiProxyResult> InvokeAsync(GetGatewayPluginAiProxyArgs args, InvokeOptions? opts = null)
        public static Output<GetGatewayPluginAiProxyResult> Invoke(GetGatewayPluginAiProxyInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetGatewayPluginAiProxyResult> getGatewayPluginAiProxy(GetGatewayPluginAiProxyArgs args, InvokeOptions options)
    public static Output<GetGatewayPluginAiProxyResult> getGatewayPluginAiProxy(GetGatewayPluginAiProxyArgs args, InvokeOptions options)
    
    fn::invoke:
      function: konnect:index/getGatewayPluginAiProxy:getGatewayPluginAiProxy
      arguments:
        # arguments dictionary

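    For example, a minimal TypeScript sketch of both forms; the import path and the control plane ID below are assumptions for illustration, not part of this reference:

    import * as konnect from "@pulumi/konnect"; // assumed import path; check the package's install docs

    // Hypothetical control plane ID, for illustration only.
    const controlPlaneId = "9524ec7d-36d9-465d-a8c5-83a3c9390458";

    // Direct form: returns a Promise<GetGatewayPluginAiProxyResult>.
    const pluginPromise = konnect.getGatewayPluginAiProxy({ controlPlaneId });

    // Output form: accepts Input-wrapped arguments and returns an
    // Output-wrapped result, so it can be chained off other resources.
    const plugin = konnect.getGatewayPluginAiProxyOutput({ controlPlaneId });
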
    The following arguments are supported:

    ControlPlaneId string
    The ID of the control plane in which to look up the plugin, as shown in the function signatures above.

    getGatewayPluginAiProxy Result

    The output properties include the plugin's id and controlPlaneId together with its stored state: the nested config, consumer, consumerGroup, ordering, route, and service objects, whose schemas are documented under Supporting Types below.

    Supporting Types

    Each property below is listed once, using its C#/Go name. The other SDKs follow their own conventions for property names (camelCase in TypeScript and Java, snake_case in Python) and for types (double in C#, float64 in Go, Double in Java, number in TypeScript, float in Python, Number in YAML; YAML represents nested objects as Property Map).

    GetGatewayPluginAiProxyConfig

    Auth GetGatewayPluginAiProxyConfigAuth
    Logging GetGatewayPluginAiProxyConfigLogging
    MaxRequestBodySize double
    Maximum request body size allowed to be introspected.
    Model GetGatewayPluginAiProxyConfigModel
    ModelNameHeader bool
    Display the selected model name in the X-Kong-LLM-Model response header.
    ResponseStreaming string
    Whether to 'optionally allow', 'deny', or 'always' (force) the streaming of answers via server-sent events.
    RouteType string
    The model's operation implementation for this provider. Set to preserve to pass requests through without transformation.
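
    Once fetched, the nested config can be read field by field. A small TypeScript sketch, under the same import and control plane assumptions as the example above:

    import * as konnect from "@pulumi/konnect"; // assumed import path

    const plugin = konnect.getGatewayPluginAiProxyOutput({
        controlPlaneId: "9524ec7d-36d9-465d-a8c5-83a3c9390458", // hypothetical ID
    });

    // Walk the nested config object; optional chaining guards unset fields.
    export const modelName = plugin.config.apply(c => c?.model?.name);
    export const streaming = plugin.config.apply(c => c?.responseStreaming);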

    GetGatewayPluginAiProxyConfigAuth

    AllowOverride bool
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    AwsAccessKeyId string
    Set this if you are using an AWS provider (Bedrock) and are authenticating with static IAM user credentials. Setting this overrides the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    AwsSecretAccessKey string
    Set this if you are using an AWS provider (Bedrock) and are authenticating with static IAM user credentials. Setting this overrides the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    AzureClientId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    AzureClientSecret string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    AzureTenantId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    AzureUseManagedIdentity bool
    Set to true to use the Azure Cloud managed identity (or a user-assigned identity) to authenticate with Azure-provider models.
    GcpServiceAccountJson string
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong attempts to read it from the GCP_SERVICE_ACCOUNT environment variable.
    GcpUseServiceAccount bool
    Use service account auth for GCP-based providers and models.
    HeaderName string
    If the AI model requires authentication via an Authorization or API key header, specify its name here.
    HeaderValue string
    Specify the full auth header value for header_name, for example 'Bearer key' or just 'key'.
    ParamLocation string
    Specify whether the param_name and param_value options go in the query string or in the POST form/JSON body.
    ParamName string
    If the AI model requires authentication via a query parameter, specify its name here.
    ParamValue string
    Specify the full parameter value for param_name.
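
    To illustrate how these fields combine, the TypeScript sketch below classifies a fetched plugin's auth block; the precedence order is an assumption for reporting purposes, not plugin behavior:

    import * as konnect from "@pulumi/konnect"; // assumed import path

    const plugin = konnect.getGatewayPluginAiProxyOutput({
        controlPlaneId: "9524ec7d-36d9-465d-a8c5-83a3c9390458", // hypothetical ID
    });

    // Report which authentication style the plugin is configured with.
    export const authStyle = plugin.config.apply(c => {
        const auth = c?.auth;
        if (!auth) return "no auth block";
        if (auth.azureUseManagedIdentity) return "Azure managed identity";
        if (auth.gcpUseServiceAccount) return "GCP service account";
        if (auth.awsAccessKeyId) return "AWS static IAM credentials";
        if (auth.headerName) return `auth header: ${auth.headerName}`;
        if (auth.paramName) return `auth param (${auth.paramLocation}): ${auth.paramName}`;
        return "environment/provider defaults";
    });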

    GetGatewayPluginAiProxyConfigLogging

    LogPayloads bool
    If enabled, the request and response bodies are logged in the Kong log plugin(s) output.
    LogStatistics bool
    If enabled and supported by the driver, model usage and token metrics are added to the Kong log plugin(s) output.

    GetGatewayPluginAiProxyConfigModel

    Name string
    Model name to execute.
    Options GetGatewayPluginAiProxyConfigModelOptions
    Key/value settings for the model.
    Provider string
    AI provider request format; Kong translates requests to and from the specified backend-compatible formats.

    GetGatewayPluginAiProxyConfigModelOptions

    AnthropicVersion string
    Defines the schema/API version, if using the Anthropic provider.
    AzureApiVersion string
    'api-version' for Azure OpenAI instances.
    AzureDeploymentId string
    Deployment ID for Azure OpenAI instances.
    AzureInstance string
    Instance name for Azure OpenAI hosted models.
    Bedrock GetGatewayPluginAiProxyConfigModelOptionsBedrock
    Gemini GetGatewayPluginAiProxyConfigModelOptionsGemini
    Huggingface GetGatewayPluginAiProxyConfigModelOptionsHuggingface
    InputCost double
    Defines the cost per 1M tokens in your prompt.
    Llama2Format string
    If using the llama2 provider, select the upstream message format.
    MaxTokens double
    Defines the max_tokens, if using chat or completion models.
    MistralFormat string
    If using the mistral provider, select the upstream message format.
    OutputCost double
    Defines the cost per 1M tokens in the output of the AI.
    Temperature double
    Defines the matching temperature, if using chat or completion models.
    TopK double
    Defines the top-k most likely tokens, if supported.
    TopP double
    Defines the top-p probability mass, if supported.
    UpstreamPath string
    Manually specify or override the AI operation path, used, for example, with the 'preserve' route_type.
    UpstreamUrl string
    Manually specify or override the full URL of the AI operation endpoint, when calling (self-)hosted models or running via a private endpoint.
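
    Because InputCost and OutputCost are expressed per 1M tokens, a per-request cost estimate follows directly. A TypeScript sketch with hypothetical token counts and prices:

    // Estimate request cost from the configured per-1M-token prices.
    function estimateCost(
        promptTokens: number,
        completionTokens: number,
        inputCost: number,   // cost per 1M prompt tokens
        outputCost: number,  // cost per 1M output tokens
    ): number {
        return (promptTokens / 1_000_000) * inputCost
             + (completionTokens / 1_000_000) * outputCost;
    }

    // e.g. 1,200 prompt tokens and 350 completion tokens at 5 and 15 per 1M:
    const cost = estimateCost(1_200, 350, 5, 15); // => 0.01125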

    GetGatewayPluginAiProxyConfigModelOptionsBedrock

    AwsRegion string
    If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.

    GetGatewayPluginAiProxyConfigModelOptionsGemini

    ApiEndpoint string
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    LocationId string
    If running Gemini on Vertex, specify the location ID.
    ProjectId string
    If running Gemini on Vertex, specify the project ID.

    GetGatewayPluginAiProxyConfigModelOptionsHuggingface

    UseCache bool
    Use the cache layer on the inference API.
    WaitForModel bool
    Wait for the model if it is not ready.

    GetGatewayPluginAiProxyConsumer

    Id string

    GetGatewayPluginAiProxyConsumerGroup

    Id string

    GetGatewayPluginAiProxyOrdering

    After GetGatewayPluginAiProxyOrderingAfter
    Before GetGatewayPluginAiProxyOrderingBefore

    GetGatewayPluginAiProxyOrderingAfter

    Accesses List<string>

    GetGatewayPluginAiProxyOrderingBefore

    Accesses List<string>

    GetGatewayPluginAiProxyRoute

    Id string

    GetGatewayPluginAiProxyService

    Id string

    Package Details

    Repository
    konnect kong/terraform-provider-konnect
    License
    Notes
    This Pulumi package is based on the konnect Terraform Provider.