SmallAi
OpenAI

GPT-4 Turbo Vision 0409

gpt-4-turbo-2024-04-09
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now be combined with JSON mode and function calling. GPT-4 Turbo is an enhanced variant that offers cost-effective support for multimodal tasks, balancing accuracy and efficiency, making it well suited to applications that require real-time interaction.
128K
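As a rough sketch (not part of this page), a vision request to this model using the OpenAI Python SDK could look like the following; the prompt and image URL are placeholders, and JSON mode or function calling can additionally be enabled via the response_format and tools arguments.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)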

Providers Supporting This Model

OpenAI
gpt-4-turbo-2024-04-09
Maximum Context Length
128K
Maximum Output Length
--
Input Price
$10.00 / 1M tokens
Output Price
$30.00 / 1M tokens

Model Parameters

Randomness
temperature

This setting affects the diversity of the model's responses. Lower values produce more predictable, conventional responses, while higher values encourage more varied and less common ones. At 0, the model's output for a given input is essentially deterministic.

Type
FLOAT
Default Value
1.00
Range
0.00 ~ 2.00
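A minimal sketch of setting temperature with the OpenAI Python SDK (the prompt and the value 0.2 are illustrative only):

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
        temperature=0.2,  # low value: stable, repeatable phrasing; values near 2.00 give much more variety
    )
    print(response.choices[0].message.content)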
Nucleus Sampling
top_p

This setting limits the model's choices to the most likely tokens: only tokens whose cumulative probability reaches P are considered. Lower values make the model's responses more predictable, while the default setting lets the model sample from the full vocabulary.

Type
FLOAT
Default Value
1.00
Range
0.00 ~ 1.00
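A minimal sketch of setting top_p with the OpenAI Python SDK (prompt and value are illustrative; OpenAI generally recommends adjusting either top_p or temperature, not both):

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{"role": "user", "content": "Write a one-line slogan for a bakery."}],
        top_p=0.3,  # sample only from the most likely tokens covering 30% of the probability mass
    )
    print(response.choices[0].message.content)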
Topic Freshness
presence_penalty

This setting penalizes tokens that have already appeared in the text so far, based on whether they have appeared at all rather than how often, which increases the model's likelihood of moving on to new words and topics. Negative values instead encourage repetition.

Type
FLOAT
Default Value
0.00
Range
-2.00 ~ 2.00
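A minimal sketch of setting presence_penalty with the OpenAI Python SDK (prompt and value are illustrative):

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{"role": "user", "content": "Brainstorm ideas for a weekend trip."}],
        presence_penalty=1.0,  # positive value nudges the model toward words and topics it has not used yet
    )
    print(response.choices[0].message.content)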
Frequency Penalty
frequency_penalty

This setting penalizes tokens in proportion to how often they have already appeared in the text so far. Higher values make the model less likely to repeat the same words or lines verbatim, while negative values have the opposite effect and encourage repetition.

Type
FLOAT
Default Value
0.00
Range
-2.00 ~ 2.00
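A minimal sketch of setting frequency_penalty with the OpenAI Python SDK (prompt and value are illustrative):

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{"role": "user", "content": "Write a short product description for a kettle."}],
        frequency_penalty=0.8,  # penalty grows with how often a token has already appeared
    )
    print(response.choices[0].message.content)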
Single Response Limit
max_tokens

This setting defines the maximum number of tokens the model may generate in a single response. A higher value allows longer responses, while a lower value constrains the response and keeps it concise. Adjust it to match the desired length and level of detail for your application.

Type
INT
Default Value
--
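A minimal sketch of capping a single response with max_tokens using the OpenAI Python SDK (the prompt and the value 100 are illustrative); checking finish_reason shows whether the cap was hit:

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{"role": "user", "content": "Summarize the plot of Hamlet."}],
        max_tokens=100,  # hard cap on tokens generated for this single response
    )
    choice = response.choices[0]
    print(choice.message.content)
    print("truncated:", choice.finish_reason == "length")  # "length" means the cap was hit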
Reasoning Intensity
reasoning_effort

This setting controls how much reasoning the model performs before producing an answer. Low effort prioritizes response speed and saves tokens; high effort yields more thorough reasoning but consumes more tokens and responds more slowly. The default is medium, balancing reasoning accuracy and response speed.

Type
STRING
Default Value
--
Range
low ~ high
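reasoning_effort is generally only honored by reasoning-capable models, and this page lists no default for it, so it is not confirmed here that gpt-4-turbo-2024-04-09 accepts it. Purely to illustrate the parameter's shape with the OpenAI Python SDK:

    from openai import OpenAI

    client = OpenAI()
    # reasoning_effort is typically accepted only by models with configurable reasoning;
    # the API may reject it for other models, so treat this purely as a shape example.
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=[{"role": "user", "content": "How many weekdays are there in March 2024?"}],
        reasoning_effort="low",  # "low" favors speed and fewer tokens; "high" favors thoroughness
    )
    print(response.choices[0].message.content)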
Running into issues? Contact customer service via WeChat: SmallAI2024