Chat

To perform chat-specific tasks, you can define the following function and call it with your desired prompts.

Python3




# Takes a list of message dicts (MSGS) and returns the model's reply
def chat(MSGS, MaxToken=50, outputs=3):
    # We use the Chat Completion endpoint for chat-like inputs
    response = openai.ChatCompletion.create(
        # You can use any of these models with this endpoint:
        # gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314,
        # gpt-3.5-turbo, gpt-3.5-turbo-0301
        model="gpt-3.5-turbo",
        # the conversation so far
        messages=MSGS,
        # maximum number of tokens the model may generate;
        # note that "gpt-3.5-turbo" has a 4,096-token context window
        # shared between the prompt and the completion
        max_tokens=MaxToken,
        # number of output variations to generate
        n=outputs,
    )
    return response.choices[0].message
  
# Messages must be a list of message objects, each of which includes
# a role (either "system", "user", or "assistant") and a content
# field holding the message's actual text.
# A conversation can be a single message or span many turns.
MSGS = [
        {"role": "system", "content": "<message generated by system>"},
        {"role": "user", "content": "<message generated by user>"},
        {"role": "assistant", "content": "<message generated by assistant>"}
    ]


Here, we used the Chat Completion endpoint from the OpenAI library to perform chat-specific tasks with the ChatGPT model. These are the important parameters of the Chat Completion endpoint:

  • model [required]: ID of the model to use. For information on which models are compatible with the Chat API, see the model endpoint compatibility table (https://platform.openai.com/docs/models/model-endpoint-compatibility).
  • messages [required]: A chronological list of the conversation’s messages. Each item must be a dictionary with exactly the following keys:
    • role: The role of the author of this message; one of “system”, “user”, or “assistant”.
    • content: The text of the message.
  • max_tokens: The maximum number of tokens to generate in the completion. If omitted, the completion can run up to the model’s context-window limit.
  • temperature: Sampling temperature, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.
  • n: The number of completions to generate for each prompt.
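Because the endpoint rejects malformed message lists, it can be handy to check them locally first. The sketch below is a hypothetical helper (not part of the OpenAI library) that enforces the role/content structure described above:

```python
VALID_ROLES = {"system", "user", "assistant"}

def validate_messages(messages):
    """Check that messages is a non-empty list of {'role', 'content'}
    dicts with roles accepted by the Chat Completion endpoint."""
    if not isinstance(messages, list) or not messages:
        raise ValueError("messages must be a non-empty list")
    for i, msg in enumerate(messages):
        if set(msg) != {"role", "content"}:
            raise ValueError(f"message {i} must have exactly 'role' and 'content' keys")
        if msg["role"] not in VALID_ROLES:
            raise ValueError(f"message {i} has an invalid role: {msg['role']!r}")
        if not isinstance(msg["content"], str):
            raise ValueError(f"message {i} content must be a string")
    return True

validate_messages([{"role": "user", "content": "Hi"}])  # True
```

Calling this before `chat()` turns a vague API error into an immediate, specific one.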

Examples for Chat Completion

Now let’s look at some examples to gain a better understanding of the Chat Completion API.

Prompt 1:

Python3




ch = [
    {"role": "user", "content": "When did India win the world cup"}
]

chat(ch, MaxToken=500, outputs=1)


Output:

<OpenAIObject at 0x7fb078845710> JSON: {
  "content": "India has won the Cricket World Cup twice. The first time was in 1983 and the second time was in 2011.",
  "role": "assistant"
}
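Note that the `chat` function above returns only `choices[0].message`; when `n > 1`, every variation sits in `response.choices`. A minimal sketch of collecting all generated texts, run here against a plain dict shaped like the API's response JSON (no real API call is made, and `fake_response` is made-up data):

```python
def collect_outputs(response):
    """Return the content of every choice in a Chat Completion
    response, given the dict form of the response JSON."""
    return [choice["message"]["content"] for choice in response["choices"]]

# A stand-in response with the same shape the API returns
fake_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "1983 and 2011."}},
        {"message": {"role": "assistant", "content": "India won in 1983 and 2011."}},
    ]
}

collect_outputs(fake_response)
# ['1983 and 2011.', 'India won in 1983 and 2011.']
```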

Prompt 2:

Python3




ch = [
    {"role": "user", "content": "Do you live with Snow White?"},
    {"role": "assistant", "content": "No, I live with Willy Wonka in the chocolate factory."},
    {"role": "user", "content": "Can you tell me some good chocolate names?"}
]

chat(ch, MaxToken=500, outputs=1)


Output:

<OpenAIObject at 0x7fb078847c90> JSON: {
  "content": "Sure, here are some popular chocolate names:\n\n1. Lindt\n2. Ghirardelli\n3. Ferrero Rocher\n4. Toblerone\n5. Godiva\n6. Hershey's\n7. Cadbury\n8. Nestle\n9. Lindor \n10. Milka",
  "role": "assistant"
}
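Prompt 2 shows that multi-turn context is just a growing list of message dicts that you resend on every call. A small hypothetical wrapper (not part of the OpenAI library) can manage that history so each turn is appended consistently:

```python
class ChatHistory:
    """Accumulates the message list the Chat Completion endpoint expects."""

    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

history = ChatHistory(system_prompt="You are a helpful assistant.")
history.add_user("Do you live with Snow White?")
history.add_assistant("No, I live with Willy Wonka in the chocolate factory.")
history.add_user("Can you tell me some good chocolate names?")
# history.messages can now be passed straight to chat(history.messages, ...)
```

After each API call, append the model's reply with `add_assistant` so the next request carries the full conversation.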

OpenAI Python API – Complete Guide

OpenAI is a leading company in the field of AI. With the public release of software like ChatGPT, DALL-E, GPT-3, and Whisper, the company has taken the entire AI industry by storm. Many people now use ChatGPT to work more efficiently, and adapting to AI tools is quickly becoming an important professional skill.

In this article, we discuss how you can leverage the power of AI and make your day-to-day tasks a lot easier with the OpenAI APIs (Application Programming Interfaces), which allow developers to easily access OpenAI's AI models and integrate them into their own applications using Python.

Table of Contents

  • What is OpenAI?
  • What is OpenAI API?
  • Generate OpenAI API key
  • Installation of OpenAI package
  • Prompt Engineering
  • Text
  • Chat
  • Image
  • Audio
  • Embeddings
  • Fine-Tuning
  • API Error Codes
  • Conclusion
  • FAQs on OpenAI Python API
