OpenAI API Playground
Once you sign up for an account, log in. When prompted, choose the API, not ChatGPT — we're working with the API here.
This is going to be our dashboard. Here we have documentation. But for now, I want to go to the playground. This is where we can test the API.
If you have used ChatGPT before, this window probably looks familiar. You can, in fact, use it much like ChatGPT. As a half-joking aside: if you pay for a ChatGPT subscription but only use it once a month, the Playground is probably cheaper, because here you're charged only for the tokens you use, not a monthly fee.
The idea is the same: provide questions and get back answers. On the left-hand side, you see the system message. The system message sets guidelines and expectations for how the AI is supposed to behave within the session. The default is "You are a helpful assistant," but you can change it to "You are a famous chef," "You are a tour guide," "You are a 10x coder," and so on. This shapes how the AI frames its answers.
I've read that it actually doesn't affect the responses that much, but since we'll use it in our API calls, I just wanted to cover what the system is doing. For now, I'll leave it as a helpful assistant. Then we have the user section where we type the question and look for the answer. Let me type, "Hello, how are you?" and submit. If everything is correct, we'll get back an answer.
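The same system/user exchange can be sketched as an API call. This is a minimal sketch assuming the official `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable; the prompts are just the examples from above:

```python
def build_messages(system_prompt, user_prompt):
    """Build the messages list the Chat Completions API expects:
    the system message comes first, then the user's question."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("You are a helpful assistant.", "Hello, how are you?")

# Uncomment to actually send the request:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
# print(resp.choices[0].message.content)
```

Swapping the system string for "You are a famous chef." is all it takes to change the persona for the whole session.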
On the right-hand side, we have some options we'll use when making our API calls. The first one is the model. The API provides a bunch of options, but we're going to stick with GPT-3.5 Turbo (`gpt-3.5-turbo`).
Next, we have the temperature, ranging from 0 to 2. In simplified terms, the AI predicts the most reasonable next word. If the temperature is set to 0, the model will be very predictable. If set to 2, it can get very creative. For example, with temperature 2, if I ask, "Are you aware of France?" we might get a creative answer. With temperature 0, the answer will be more predictable.
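In code, temperature is just one more field in the request. A small sketch (the helper name and prompt are illustrative, not part of the SDK) showing the same request at both extremes:

```python
def temperature_request(prompt, temperature):
    """Build the keyword arguments for a chat completion request
    at a given temperature (valid range: 0 to 2)."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # 0 = highly predictable, 2 = very random
    }

predictable = temperature_request("Are you aware of France?", 0.0)
creative = temperature_request("Are you aware of France?", 2.0)
# Each dict would be passed as client.chat.completions.create(**kwargs).
```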
We also have the maximum length setting, which caps the number of tokens in the response. Keep in mind that OpenAI bills for both incoming (prompt) and outgoing (completion) tokens, but this setting limits only the response length. For example, with a maximum length of 1 token, if I ask, "Are you aware of pizza?" the response might be cut off at just "Yes". We will implement this feature later in the application.
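The same cap appears in the API as the `max_tokens` parameter. A hedged sketch (the helper is hypothetical; the parameter is real):

```python
def capped_request(prompt, max_tokens):
    """Build chat completion kwargs with a hard cap on the reply length."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # caps only the generated reply, not the prompt
    }

kwargs = capped_request("Are you aware of pizza?", 1)
# With max_tokens=1 the completion stops after a single token.
```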
Lastly, I want to talk about the context window. This is an important factor that distinguishes models from one another. The context window is the amount of text from the conversation the model can consider when composing a response. The model we're using, for example, has a context window of roughly 4,000 tokens. The bigger the context window, the more of the conversation the model can take into account, and the better the chance of getting a relevant answer from the API.
If you exceed the context window, older messages get dropped from what the model sees. So if you ask about something mentioned early in the conversation, the model may no longer "remember" it. The trade-off: a bigger context window generally means more tokens per request, and therefore higher costs.
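The dropping of older messages can be sketched as a simple trimming loop. This is an illustrative sketch only: a real implementation would count tokens with a tokenizer such as `tiktoken`, whereas here a word count stands in as a rough estimate:

```python
def trim_history(messages, max_tokens=4000):
    """Drop the oldest messages until the (rough) token total fits
    inside the context window. Word count approximates token count."""
    def rough_tokens(msg):
        return len(msg["content"].split())

    trimmed = list(messages)
    while len(trimmed) > 1 and sum(rough_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)  # oldest message goes first
    return trimmed
```

Once an early message is trimmed away, the model has no way to answer questions about it — exactly the "forgetting" described above.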