This version is still in development and is not considered stable yet. For the latest snapshot version, please use Spring AI 1.0.0-SNAPSHOT!

The property spring.ai.azure.openai.chat.options.model has been renamed to spring.ai.azure.openai.chat.options.deployment-name.
If you decide to connect to OpenAI instead of Azure OpenAI by setting the spring.ai.azure.openai.openai-api-key=<Your OpenAI Key> property, then spring.ai.azure.openai.chat.options.deployment-name is treated as an OpenAI model name.
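As a minimal sketch, connecting to the (non-Azure) OpenAI service could look like this in application.properties (the key value is a placeholder you must replace):

```
# Authenticate against api.openai.com/v1 instead of Azure OpenAI
spring.ai.azure.openai.openai-api-key=<Your OpenAI Key>
# With openai-api-key set, deployment-name is treated as an OpenAI model name
spring.ai.azure.openai.chat.options.deployment-name=gpt-4o
```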
Refer to the Dependency Management section to add the Spring AI BOM to your build file.
| Property | Description | Default |
|---|---|---|
| spring.ai.azure.openai.api-key | The key from the Azure AI OpenAI "Keys and Endpoint" section under Resource Management | - |
| spring.ai.azure.openai.endpoint | The endpoint from the Azure AI OpenAI "Keys and Endpoint" section under Resource Management | - |
| spring.ai.azure.openai.openai-api-key | (non-Azure) OpenAI API key. Used to authenticate with the OpenAI service instead of Azure OpenAI. This automatically sets the endpoint to api.openai.com/v1. Use either the api-key or the openai-api-key property. With this configuration, spring.ai.azure.openai.chat.options.deployment-name is treated as an OpenAI model name. | - |
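A minimal Azure connection setup in application.properties might look like the following sketch (the environment variable names are illustrative placeholders):

```
# Credentials from the Azure portal's "Keys and Endpoint" section
spring.ai.azure.openai.api-key=${AZURE_OPENAI_API_KEY}
spring.ai.azure.openai.endpoint=${AZURE_OPENAI_ENDPOINT}
```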

| Property | Description | Default |
|---|---|---|
| spring.ai.azure.openai.chat.enabled | Enable the Azure OpenAI chat model. | true |
| spring.ai.azure.openai.chat.options.deployment-name | When used with Azure, this refers to the "Deployment Name" of your model, which you can find at oai.azure.com/portal. It is important to note that within an Azure OpenAI deployment, the "Deployment Name" is distinct from the model itself. The confusion around these terms stems from the intention to make the Azure OpenAI client library compatible with the original OpenAI endpoint. The deployment structures offered by Azure OpenAI and Sam Altman's OpenAI differ significantly. This is the deployment (model) name to provide as part of the completions request. | gpt-4o |
| spring.ai.azure.openai.chat.options.maxTokens | The maximum number of tokens to generate. | - |
| spring.ai.azure.openai.chat.options.temperature | The sampling temperature that controls the apparent creativity of generated completions. Higher values make output more random, while lower values make results more focused and deterministic. It is not recommended to modify temperature and top_p in the same completions request, as the interaction of these two settings is difficult to predict. | 0.7 |
| spring.ai.azure.openai.chat.options.topP | An alternative to sampling with temperature, called nucleus sampling. This value causes the model to consider only the tokens within the provided probability mass. | - |
| spring.ai.azure.openai.chat.options.logitBias | A map between GPT token IDs and bias scores that influences the probability of specific tokens appearing in a completions response. Token IDs are computed via external tokenizer tools, while bias scores range from -100 to 100, with the minimum and maximum values corresponding to a full ban or exclusive selection of a token, respectively. The exact behavior of a given bias score varies by model. | - |
| spring.ai.azure.openai.chat.options.user | An identifier for the caller or end user of the operation. This may be used for tracking or rate-limiting purposes. | - |
| spring.ai.azure.openai.chat.options.n | The number of chat completion choices to generate for a chat completions response. | - |
| spring.ai.azure.openai.chat.options.stop | A collection of textual sequences that will end completions generation. | - |
| spring.ai.azure.openai.chat.options.presencePenalty | A value that influences the probability of generated tokens appearing based on their existing presence in generated text. Positive values make tokens less likely to appear when they already exist and increase the model's likelihood of outputting new topics. | - |
| spring.ai.azure.openai.chat.options.responseFormat | An object specifying the format that the model must output. Using AzureOpenAiResponseFormat.JSON enables JSON mode, which guarantees the message the model generates is valid JSON. Using AzureOpenAiResponseFormat.TEXT enables TEXT mode. | - |
| spring.ai.azure.openai.chat.options.frequencyPenalty | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Positive values make tokens less likely to appear as their frequency increases and decrease the likelihood of the model repeating the same statements verbatim. | - |
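For instance, a few of the chat options above might be set as startup defaults in application.properties like this (the values shown are illustrative):

```
spring.ai.azure.openai.chat.options.deployment-name=gpt-4o
spring.ai.azure.openai.chat.options.temperature=0.7
spring.ai.azure.openai.chat.options.maxTokens=1024
```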

All properties prefixed with spring.ai.azure.openai.chat.options can be overridden at runtime by adding request-specific runtime options to the Prompt call.
In addition to the model-specific AzureOpenAiChatOptions.java, you can use a portable ChatOptions instance, created with ChatOptionsBuilder#builder().
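As a sketch of a runtime override (assuming an injected ChatModel bean; exact builder method names may vary between Spring AI versions):

```java
// Override the startup chat options for a single request by attaching
// AzureOpenAiChatOptions to the Prompt.
ChatResponse response = chatModel.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        AzureOpenAiChatOptions.builder()
            .deploymentName("gpt-4o")
            .temperature(0.4)
            .build()
    ));
```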
You can pass multiple images as well.
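A multimodal request could be sketched like this (the classpath resource name is a placeholder, and the Media API details may differ between Spring AI versions):

```java
// Attach an image to a user message so the model can reason about it.
var imageResource = new ClassPathResource("/multimodal.test.png");

var userMessage = new UserMessage(
    "Explain what you see in this picture.",
    new Media(MimeTypeUtils.IMAGE_PNG, imageResource));

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage)));
```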
Replace the api-key and endpoint with your Azure OpenAI credentials.
Refer to the Dependency Management section to add the Spring AI BOM to your build file.
The spring-ai-azure-openai dependency also provides access to the AzureOpenAiChatModel. For more information about the AzureOpenAiChatModel, refer to the Azure OpenAI Chat section.
The gpt-4o is actually the Deployment Name as presented in the Azure AI Portal.
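Building the client and model programmatically, instead of relying on auto-configuration, could be sketched as follows (the environment variable names are placeholders, and the exact constructor or builder API may differ between Spring AI versions):

```java
// Configure the Azure SDK client with credentials from the Azure portal.
OpenAIClientBuilder openAIClientBuilder = new OpenAIClientBuilder()
    .credential(new AzureKeyCredential(System.getenv("AZURE_OPENAI_API_KEY")))
    .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"));

// Default options applied to every request unless overridden per Prompt.
AzureOpenAiChatOptions options = AzureOpenAiChatOptions.builder()
    .deploymentName("gpt-4o")   // the Deployment Name from the Azure AI Portal
    .temperature(0.4)
    .maxTokens(200)
    .build();

ChatModel chatModel = AzureOpenAiChatModel.builder()
    .openAIClientBuilder(openAIClientBuilder)
    .defaultOptions(options)
    .build();

ChatResponse response = chatModel.call(
    new Prompt("Generate the names of 5 famous pirates."));
```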