OpenAI's DALL-E 3 launches in October 2023 and will be integrated into ChatGPT, letting users produce custom images, not just text. OpenAI has unveiled the new version of its DALL-E image generator, which will be incorporated into ChatGPT for paid users starting in October.
The DALL-E 3 text-to-image tool is better at rendering details and more capable of producing images in specific artists' styles. When prompted with an idea, DALL-E 3 automatically generates tailored images that are more accurate, vivid, and impressive than those of earlier generative AI systems. Read Here: What is generative AI, and how does it work?
DALL·E 3 understands significantly more nuance and detail than previous systems, allowing users to easily translate their ideas into exceptionally accurate images. OpenAI has said that "DALL-E 3 is significantly better at being able to grasp a user's intention, especially if the prompt is long and detailed. If a user can't articulate what they want in a way that can maximize the image generator's abilities, then ChatGPT can help them write a comprehensive prompt for it."
OpenAI ChatGPT DALL-E 3: Significant highlights that users must know
About DALL·E 3: It is now in research preview and will be available to ChatGPT Plus and Enterprise customers in October, via the API and in Labs. Modern text-to-image systems tend to ignore words or descriptions, forcing users to learn prompt engineering. DALL·E 3 represents a leap forward in the ability to generate images that exactly adhere to the text you provide.
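Since API access is mentioned, the sketch below shows what a request to an image-generation endpoint might look like. The endpoint path, model identifier, and parameter names here are assumptions modeled on OpenAI's existing Images API conventions, not confirmed details of the DALL-E 3 release; check the official documentation before relying on them.

```python
import json
import os
import urllib.request

# Assumed endpoint, modeled on OpenAI's existing Images API; may differ
# for DALL-E 3 at launch.
API_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Assemble the JSON body for a single image-generation request."""
    return {
        "model": "dall-e-3",  # assumed model identifier
        "prompt": prompt,
        "size": size,
        "n": n,
    }

def generate_image(prompt: str) -> bytes:
    """Send the request. Requires the OPENAI_API_KEY environment variable."""
    payload = json.dumps(build_image_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # JSON response with image URLs or base64 data
```

Because the prompt text drives the whole generation, most of the "prompt engineering" the article describes amounts to refining the `prompt` string passed here, which is exactly the step ChatGPT automates.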
DALL·E 3 in ChatGPT: When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image but it's not quite right, you can ask ChatGPT to make tweaks with just a few words.
A focus on safety: As with previous versions, OpenAI has taken steps to limit DALL·E 3's ability to generate violent, adult, or hateful content.
Preventing harmful generations: DALL·E 3 includes mitigations that decline requests asking for a public figure by name. Working with red teamers (domain experts who stress-test the model), OpenAI has improved safety performance in risk areas such as the generation of public figures and harmful biases related to visual over- and under-representation, informing its risk assessment and mitigation efforts in areas like propaganda and misinformation.
Internal testing: The company is also researching the best ways to help people identify when an image was created with AI. It is experimenting with a "provenance classifier", a new internal tool that can help identify whether or not an image was generated by DALL·E 3, and hopes to use this tool to better understand the ways generated images might be used. OpenAI says it will share more soon.