Using OpenAI API: Methods and Properties


In this article, we'll explore the methods and properties used to make API calls to OpenAI. We'll start by looking at the API reference in the dashboard, focusing specifically on the chat endpoint and the chat completion object.

API Reference

In the OpenAI dashboard, navigate to the API Reference section. More specifically, look for Chat and then the chat completion object. This object is the response we receive from the API call.

Response Object

The response object includes a usage property that shows the tokens used by the question (prompt) and the answer (completion), along with the total token count. The actual answer is found in the choices array, under the message property, with the role assistant.
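
For reference, a chat completion response has roughly the following shape (the values shown here are only illustrative):

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 20, "completion_tokens": 10, "total_tokens": 30 }
}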

Setting Up NodeJS

If you don't have Node.js installed, install it first, then install the openai npm package. The examples here use version 3 of the package, which exports Configuration and OpenAIApi. The following code sets up our API calls:

const { Configuration, OpenAIApi } = require("openai");

// Create a configuration that reads the API key from an environment variable,
// then build the client instance we will use for every call.
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

(async () => {
  // The system message sets the assistant's behaviour; the user message is our question.
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello, how are you?" },
    ],
  });

  // The assistant's reply is the first element of the choices array.
  console.log(response.data.choices[0].message.content);
})();
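
If the key is missing, the model name is mistyped, or the request is rate limited, the call rejects, so in practice it is worth wrapping it in a try/catch. A minimal sketch, reusing the openai instance created above (the v3 SDK is built on axios, so API errors expose the server's reply on error.response):

(async () => {
  try {
    const response = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello, how are you?" }],
    });
    console.log(response.data.choices[0].message.content);
  } catch (error) {
    // HTTP errors carry the status and body returned by the API;
    // anything else (e.g. a network failure) only has a message.
    if (error.response) {
      console.error(error.response.status, error.response.data);
    } else {
      console.error(error.message);
    }
  }
})();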

Instance and API Key

We will need a client instance to which we provide the API key; we'll set this up properly in the following article.
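
As a quick preview, one common way to provide the key without hard-coding it is to keep it in a .env file and load it with the dotenv package. This is only a sketch and assumes you have run npm install dotenv and created a .env file containing a line like OPENAI_API_KEY=...:

require("dotenv").config(); // copies the variables from .env into process.env

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY, // the key itself never appears in the source code
});
const openai = new OpenAIApi(configuration);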

Creating API Calls

Once we have the instance, we call its createChatCompletion method. Remember to put await in front of it, since it returns a promise:

const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello, how are you?" },
  ],
});
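
The awaited value is the chat completion object we looked at earlier. With the v3 SDK the payload is nested under a data property, so (continuing from the call above) the answer and the token counts are read like this:

console.log(response.data.choices[0].message.content); // the assistant's reply
console.log(response.data.usage.total_tokens);         // tokens used by question and answer combined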

Message Object

The messages property stores our conversation. It starts with the role system:

{ role: "system", content: "You are a helpful assistant." }

We then set up our question with the role user:

{ role: "user", content: "Hello, how are you?" }

The response will have the role assistant.
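
To keep a conversation going, you append the assistant's reply to the messages array and then add the next user message, so the model sees the full history on every call. A minimal sketch under the same v3 setup as before:

const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

(async () => {
  // The conversation history the model sees on every call.
  const messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello, how are you?" },
  ];

  const first = await openai.createChatCompletion({ model: "gpt-3.5-turbo", messages });

  // Append the assistant's reply, then the follow-up question from the user.
  messages.push(first.data.choices[0].message);
  messages.push({ role: "user", content: "Can you tell me a joke?" });

  const second = await openai.createChatCompletion({ model: "gpt-3.5-turbo", messages });
  console.log(second.data.choices[0].message.content);
})();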

Model, Max Tokens, and Temperature

We need to specify the model (e.g., gpt-3.5-turbo). We can also set the max_tokens property, which caps the length of the answer, and the temperature property, which controls how random the output is:

{
  model: "gpt-3.5-turbo",
  max_tokens: 100,
  temperature: 0.5,
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Tell me a joke." },
  ],
}

Temperature ranges from 0 to 2: values close to 0 give predictable, consistent answers, while values close to 2 give more creative (and less predictable) responses.
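
Plugged into an actual call (inside an async function, using the same openai instance from the setup code), that looks like this; the specific values for max_tokens and temperature are just examples:

const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  max_tokens: 100,   // upper limit on the length of the answer, in tokens
  temperature: 0.5,  // 0 = most deterministic, 2 = most random
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Tell me a joke." },
  ],
});

console.log(response.data.choices[0].message.content);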

Note: The properties and structure of the API call are straightforward. Construct the object correctly, provide the necessary properties, and you'll receive the response to use in your application.

With this understanding of the API documentation and the methods and properties involved, you are now equipped to make API calls and handle responses effectively in your applications.
