OpenAI - Send messages to ChatGPT action

This action interfaces with OpenAI's popular ChatGPT models, letting you interact with them directly within your workflows. Use it as a single question-and-answer step or to provide a chat experience as part of your recipes.


Input

| Field | Description |
| --- | --- |
| Single message > Content | The message sent to the model with the user role. |
| Single message > System role message | An optional message that provides specific instructions to the model before starting a conversation. |
| Chat transcript > Role | Select the role corresponding to the specific message. |
| Chat transcript > Content | The message sent to the model for each corresponding role. |
| Chat transcript > System role message | An optional message that provides specific instructions to the model before starting a conversation. |
| Chat transcript > Name | The name of the author of a specific message, often used to track chat transcripts easily. It can contain uppercase and lowercase letters, numbers, and underscores, with a maximum length of 64 characters and no spaces. |
| Model | Use the Model drop-down menu to select the OpenAI model you plan to use. If the model isn't listed, you can click into the Model field and enter it manually. |
| Temperature | A value between 0 and 2 that controls the randomness of completions. Higher values make the output more random, while lower values make it more focused and deterministic. We recommend using either this or top p, but not both. Refer to the OpenAI temperature parameter documentation for more information. |
| Number of chat completions | The number of completions to generate for the message response. |
| Stop phrase | A specific phrase that ends generation. For example, if you set the stop phrase to a period (.), the model generates text until it reaches a period and then stops. Use this to control the amount of text generated. |
| Maximum tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus this value cannot exceed the model's context length. For longer prompts, we recommend setting a low value; if the prompt is likely to vary in length, we recommend leaving this field blank. |
| Presence penalty | A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics. |
| User | A unique identifier representing your end user, which can help OpenAI monitor and detect abuse. |
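As a rough illustration of how these input fields map onto OpenAI's Chat Completions request body, the sketch below assembles the JSON payload such a request would carry. The helper function, default model name, and example values are hypothetical; only fields you set are included in the payload.

```python
# Sketch: mapping the action's input fields onto a Chat Completions
# request body. Field names follow the OpenAI API; the helper and
# example values are illustrative, not the connector's actual code.

def build_chat_request(
    content,
    system_message=None,
    model="gpt-3.5-turbo",   # hypothetical default model
    temperature=None,        # Temperature input (0 to 2)
    n=None,                  # Number of chat completions
    stop=None,               # Stop phrase
    max_tokens=None,         # Maximum tokens
    presence_penalty=None,   # Presence penalty (-2.0 to 2.0)
    user=None,               # End-user identifier
):
    """Assemble the request payload, omitting unset optional fields."""
    messages = []
    if system_message:
        # "System role message": instructions sent before the conversation.
        messages.append({"role": "system", "content": system_message})
    # "Single message > Content": sent with the user role.
    messages.append({"role": "user", "content": content})

    payload = {"model": model, "messages": messages}
    optional = {
        "temperature": temperature,
        "n": n,
        "stop": stop,
        "max_tokens": max_tokens,
        "presence_penalty": presence_penalty,
        "user": user,
    }
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

request = build_chat_request(
    "Summarize this ticket in one sentence.",
    system_message="You are a helpful support assistant.",
    temperature=0.2,
    max_tokens=100,
)
```

Note that unset optional fields are left out entirely rather than sent as null, so the API falls back to its own defaults for them.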

Output

| Field | Description |
| --- | --- |
| Created | The datetime stamp of when the response was generated. |
| ID | A unique identifier denoting the specific request and response. |
| Model | The model used to generate the text completion. |
| Choices > Message | The model's response to the specified input. The role is always Assistant. |
| Choices > Finish reason | The reason the model stopped generating more text. Can be one of "stop", "length", "content_filter", or "null". Refer to the OpenAI chat completions response format documentation for more information. |
| Choices > Response | The response that OpenAI probabilistically considers the ideal selection. |
| Usage > Prompt tokens | The number of tokens used by the prompt. |
| Usage > Completion tokens | The number of tokens used by the completions. |
| Usage > Total tokens | The total number of tokens used by the prompt and response. |
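To show how these output fields fit together, the sketch below reads them out of a hypothetical response shaped like OpenAI's documented chat completions format; the IDs, timestamps, and token counts are made up for illustration.

```python
# Sketch: reading the output fields from a Chat Completions response.
# The dict below is a hypothetical example following OpenAI's
# documented response shape.

sample_response = {
    "id": "chatcmpl-abc123",       # ID output field
    "created": 1700000000,         # Created (Unix timestamp)
    "model": "gpt-3.5-turbo",      # Model output field
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",  # stop | length | content_filter | null
        }
    ],
    "usage": {
        "prompt_tokens": 12,
        "completion_tokens": 3,
        "total_tokens": 15,
    },
}

# The first choice is the one OpenAI considers the ideal selection.
first_choice = sample_response["choices"][0]
reply_text = first_choice["message"]["content"]
finish_reason = first_choice["finish_reason"]

# Usage fields: prompt and completion tokens add up to the total.
usage = sample_response["usage"]
total_matches = (
    usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
)
```

If you request more than one completion, each one appears as a separate entry under Choices with its own message and finish reason.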
