A Simple Guide to OpenAI API with Python
ChatGPT brought OpenAI’s models into the mainstream. At the moment, OpenAI provides access to GPT-4 through ChatGPT Plus, while the free ChatGPT tool is powered by GPT-3.5.
By mastering the OpenAI API today, you will be able to access OpenAI’s sophisticated models: GPT-3 for natural-language tasks, Codex for translating natural language to code, and DALL-E for creating and editing original images.
This guide will teach you how to use the OpenAI API with Python.
First Things First — Generate Your API Key
Before we start working with the OpenAI API, we need to log in to our OpenAI account and generate our API keys.
Remember that OpenAI won’t display your secret API key again after you generate it, so copy your API key and save it.
I’ll create an environment variable named OPENAI_API_KEY that will contain my API key and will be used in the next sections.
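On macOS or Linux, one way to do this is in the shell (the key value below is a placeholder, not a real key):

```shell
# Export the key so Python can read it later with os.getenv("OPENAI_API_KEY").
# Replace the placeholder with your own secret key.
export OPENAI_API_KEY="sk-your-key-here"

# Confirm the variable is set.
echo "$OPENAI_API_KEY"
```

To make it permanent, add the export line to your shell profile (e.g. ~/.bashrc or ~/.zshrc).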
Exploring the OpenAI API with Python
To communicate with the OpenAI API, use the following command to install the official Python bindings.
pip install openai
This API allows us to do a variety of things. We’ll do text completion, code completion, and image generation in this guide.
1. Text Completion
Text completion can be used for a variety of purposes, including classification, text generation, dialogues, transformation, conversion, and summarization. To work with it, we must use the completion endpoint and prompt the model. The model will subsequently produce text that seeks to match the provided context/pattern.
Assume we wish to categorize the following text:
Decide whether a Tweet's sentiment is positive, neutral, or negative.
Tweet: I didn't like the new Batman movie!
Sentiment:
Here’s how we’ll do this with OpenAI API.
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = """
Decide whether a Tweet's sentiment is positive, neutral, or negative.
Tweet: I didn't like the new Batman movie!
Sentiment:
"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=100,
    temperature=0
)
print(response)
According to the OpenAI docs, GPT-3 models are meant to be used with the text completion endpoint. That’s why we’re using the model text-davinci-003 for this example.
Here’s part of the printed output.
{
  "choices": [...],
  ...
}
In this example, the sentiment of the tweet was classified as Negative.
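Once you have the response, the generated text lives under the choices key. Here’s a minimal sketch using a hypothetical response dict shaped like the completion endpoint’s JSON output (no API call is made):

```python
# A hypothetical dict mimicking the shape of the completion endpoint's
# output; the real object is returned by openai.Completion.create.
sample_response = {
    "choices": [
        {"text": "\nNegative", "index": 0, "finish_reason": "stop"}
    ],
    "model": "text-davinci-003",
}

# The completion itself is the "text" field of the first choice.
completion = sample_response["choices"][0]["text"].strip()
print(completion)  # Negative
```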
Let’s have a look at the parameters used in this example:
- model: ID of the model to use (here you can see all the models available)
- prompt: The prompt(s) to generate completions for
- max_tokens: The maximum number of tokens to generate in the completion (here you can see the tokenizer that OpenAI uses)
- temperature: The sampling temperature to use. Values close to 1 give the model more risk/creativity, while values close to 0 produce more deterministic, well-defined answers.
You can also insert and edit text using the completions and edits endpoints, respectively.
2. Code Completion
Code completion works similarly to text completion, but here we use the Codex model to understand and generate code.
The Codex model series is a descendant of the GPT-3 series trained on natural language and billions of lines of code. With Codex, we can turn comments into code, rewrite code for efficiency, and more.
Let’s generate Python code using the model code-davinci-002 and the prompt below.
Create an array of weather temperatures for Los Angeles
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    model="code-davinci-002",
    prompt="\nCreate an array of weather temperatures for Los Angeles\n",
    temperature=0,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
print(response)
Here’s part of the printed output.
{
  "choices": [...],
  ...
}
If you format the generated text properly, you’ll get this.
import numpy as np
def create_temperatures(n):
    temperatures = np.random.uniform(low=14.0, high=20.0, size=n)
    return temperatures
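To sanity-check the generated function, you can call it with a small n (the function is repeated here so the snippet runs on its own; the usage lines are mine, not model output):

```python
import numpy as np

def create_temperatures(n):
    # Draw n temperatures uniformly between 14.0 and 20.0 degrees.
    temperatures = np.random.uniform(low=14.0, high=20.0, size=n)
    return temperatures

temps = create_temperatures(5)
print(temps)       # five floats between 14.0 and 20.0
print(len(temps))  # 5
```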
You can do a lot more, but first I recommend testing Codex in the Playground (here are some examples to get you started).
Also, we should follow best practices to make the most out of Codex. We should specify the language and libraries to use in the prompt, put comments inside of functions, etc.
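For instance, a prompt that follows those practices might look like this (the prompt text is illustrative, not taken from Codex’s documentation):

```python
# A Codex-style prompt that states the language and library up front
# and describes the task in comments, per the best-practice guidance.
codex_prompt = (
    "# Python 3\n"
    "# Use the statistics module from the standard library.\n"
    "# Write a function that returns the mean of a list of temperatures.\n"
)
print(codex_prompt)
```

Passing a string like this as the prompt to a Codex model tends to yield more focused code than a bare one-line request.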
3. Image Generation
We can generate images using DALL-E models. To do so, we have to use the image generation endpoint and provide a text prompt.
Here’s the prompt we’ll use (remember that the more detail we give in the prompt, the more likely we are to get the result we want).
A fluffy white cat with blue eyes sitting in a basket of flowers, looking up adorably at the camera
import openai
response = openai.Image.create(
    prompt="A fluffy white cat with blue eyes sitting in a basket of flowers, looking up adorably at the camera",
    n=1,
    size="1024x1024"
)
image_url = response["data"][0]["url"]
print(image_url)
After opening the URL printed, I got the following image.
Source: OpenAI
But that’s not all! You can also edit an image and generate a variation of a given image using the image edits and image variations endpoints.
That’s it for now! If you want to explore more things you can do with the OpenAI API, check its documentation.
Conclusion
Even though this is a fun exercise, it’s quite limited if you want a cost-free experience. OpenAI and other similar platforms, like Google Cloud Natural Language or Azure Cloud QnA, are neither free to use nor hassle-free.
If you’re thinking of using openai.com, plan in advance to make the most of the short one-month free trial.
Google provides a good user experience for free or at a low rate (see the prices here).
The Azure Cloud solution from Microsoft might be the most appropriate for medium and large organizations, but the learning curve may be longer if you are completely new to Azure Cloud services, as the list is quite extensive (more than 200).