ChatGPT can now see, speak, listen, understand, and respond.

OpenAI is opening a new chapter for ChatGPT with multimodal features.

ChatGPT can now see your images, hear your voice, and respond to your questions with a human-like voice.

Multimodality, the ability to process images and voice in addition to text, has finally arrived.

Here's what you need to know:

Voice

• You can now have back-and-forth voice conversations with ChatGPT and use it as an assistant.

• You can enable voice from the New Features section in Settings. You can then choose from five different voices.

• You can listen to voice samples here.
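
The announcement only covers voice in the ChatGPT app, but for readers curious what a comparable developer integration might look like, here is a minimal sketch of a voice round trip with the OpenAI Python SDK. The model names (whisper-1, gpt-4, tts-1) and the alloy voice are assumptions for illustration, not details confirmed by the article:

```python
# Rough sketch of a transcribe -> reason -> synthesize voice loop.
# Model and voice names are assumptions, not from the announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the user's spoken question to text.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Get a text answer from the chat model.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = answer.choices[0].message.content

# 3. Synthesize the answer as speech with one of the preset voices.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of several preset voices
    input=reply_text,
)
speech.write_to_file("answer.mp3")
```

Whatever the exact endpoints turn out to be, this three-step shape (transcribe, reason, synthesize) is roughly what any voice-assistant loop needs.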

Images

• You'll be able to take a photo or upload images to ChatGPT and ask questions about them.

• Questions can range from why your grill isn't working to what a graph means.

• You'll be able to focus on a specific part of the image and ask questions about it using the drawing tool.

• You will also be able to upload screenshots and documents containing both text and images.

• You can watch a sample video of the feature here.

Both features will be available to ChatGPT Plus and Enterprise users within the next two weeks.

Developers will also be given access in the future. You can get more details on the new features here.
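
The article doesn't describe what the developer API will look like. As a hedged illustration only, here is a minimal sketch of how an image question might be sent through the OpenAI Python SDK, assuming chat completions accept image_url content parts and that a vision-capable model name such as gpt-4-vision-preview is available (both are assumptions, not details from the announcement):

```python
# Rough sketch: asking a question about a local image via the API.
# The image content-part format and model name are assumptions.
import base64

from openai import OpenAI

client = OpenAI()

# Encode a local photo (e.g., of the broken grill) as a data URL.
with open("grill.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Why isn't my grill working?"},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```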
