Summary:
OpenAI’s DevDay set a new benchmark in AI with its latest announcements. The event showcased significant enhancements, including a more capable and cheaper GPT-4 Turbo, a new Assistants API, and new multimodal capabilities spanning vision, image generation, and text-to-speech.
Key Points:
- The GPT-4 Turbo model now boasts a 128K context window and improved affordability.
- GPT-3.5 Turbo has been enhanced to support a 16K context window and better instruction following.
- New multimodal capabilities are introduced, including vision, image creation with DALL·E 3, and text-to-speech features.
- Assistants API is launched to aid developers in creating AI-driven applications.
- Reproducible outputs and log probabilities for more predictable and controlled model interactions.
- New pricing makes these cutting-edge technologies more accessible.
New Pricing:
| Feature | Old Price (per 1,000 tokens) | New Price (per 1,000 tokens) |
| --- | --- | --- |
| GPT-4 Turbo Input | $0.03 | $0.01 |
| GPT-4 Turbo Output | $0.06 | $0.03 |
| GPT-3.5 Turbo Input (New Model) | $0.0015 | $0.001 |
| GPT-3.5 Turbo Output (New Model) | $0.002 | $0.002 |
| Fine-tuned GPT-3.5 Turbo Input | $0.012 | $0.003 |
| Fine-tuned GPT-3.5 Turbo Output | $0.016 | $0.006 |
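To put the new rates in concrete terms, here is a minimal Python sketch that estimates the cost of a single GPT-4 Turbo request. The prices are taken from the table above; the token counts are made-up examples.

```python
# Rough cost estimate at the new GPT-4 Turbo rates (USD per 1,000 tokens).
# Prices come from the pricing table above; the token counts are hypothetical.
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one Chat Completions request."""
    return (prompt_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (completion_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 100K-token prompt (close to the 128K window) with a 1K-token reply.
print(f"${estimate_cost(100_000, 1_000):.2f}")  # -> $1.03
```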
OpenAI’s latest DevDay was a landmark event, unveiling transformative advancements in AI technology. The headline announcement was GPT-4 Turbo, an upgraded version of the already powerful GPT-4 with a 128K context window (equivalent to over 300 pages of text in a single prompt), world knowledge up to April 2023, and a price considerably lower than its predecessor’s.
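As a rough illustration of what that looks like in practice, here is a minimal sketch using the openai Python SDK (v1.x) with the GPT-4 Turbo preview identifier gpt-4-1106-preview; the file name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A long document (up to roughly 300 pages of text) can now fit in one prompt.
long_document = open("annual_report.txt").read()  # placeholder file

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview model
    messages=[
        {"role": "system", "content": "You summarize documents concisely."},
        {"role": "user", "content": f"Summarize the key points:\n\n{long_document}"},
    ],
)
print(response.choices[0].message.content)
```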
The newly announced Assistants API changes how developers build AI-driven applications. It lets them create purpose-built assistants that keep conversation history in persistent threads and can call tools such as Retrieval and Code Interpreter, executing tasks with greater precision and context awareness.
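A minimal sketch of that flow with the beta Assistants API in the openai Python SDK follows; the assistant’s name, instructions, and question are illustrative, and the polling loop is a simplified version of what a production app would do.

```python
import time

from openai import OpenAI

client = OpenAI()

# Create a purpose-built assistant with the Code Interpreter tool enabled.
assistant = client.beta.assistants.create(
    name="Data Helper",  # illustrative name and instructions
    instructions="Answer questions, running Python code when calculations help.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations live in threads; the API stores the message history for you.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the compound annual growth rate from 120 to 310 over 7 years?",
)

# A run executes the assistant against the thread; poll until it completes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages are returned newest first; print the assistant's reply.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```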
Moreover, OpenAI has introduced multimodal capabilities within its platform, including vision, image creation with DALL·E 3, and text-to-speech functionalities. These features are set to redefine user interactions with AI, offering more intuitive and dynamic interfaces.
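The sketch below touches each of the three surfaces, assuming the openai Python SDK and the preview model identifiers (gpt-4-vision-preview, dall-e-3, tts-1); the prompts, image URL, and output file name are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# 1. Vision: pass an image URL alongside text in the Chat Completions API.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(vision.choices[0].message.content)

# 2. Image creation with DALL·E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)
print(image.data[0].url)

# 3. Text-to-speech: render text to an MP3 file.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Welcome to the demo.",
)
speech.stream_to_file("welcome.mp3")
```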
In an effort to democratize access to these advanced AI tools, OpenAI also announced significant price reductions across the board (see the table above). By offering the technology at more affordable prices, the company aims to put it within reach of a wider range of developers and businesses and to make integrating AI into new applications easier to do and to scale.
New Features Summary:
- GPT-4 Turbo: Offers a 128K context window and knowledge up to April 2023, with a significant price reduction.
- Function Calling Updates: Now includes the ability to call multiple functions within a single message (see the function-calling sketch after this list).
- Improved Instruction Following and JSON Mode: Better performance on tasks that require following formats precisely, plus a JSON mode that constrains responses to valid JSON.
- Reproducible Outputs and Log Probabilities: A seed parameter enables largely consistent outputs, and log probabilities can be returned for output tokens (see the JSON mode sketch after this list).
- Updated GPT-3.5 Turbo: Introduces a 16K context window and improved performance on format-following tasks.
- Assistants API, Retrieval, and Code Interpreter: Aims to help developers create agent-like AI experiences with improved capabilities.
- GPT-4 Turbo with Vision: Enables image inputs in the Chat Completions API, catering to various visual-based use cases.
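Below is a minimal sketch of the parallel function calling update, assuming the openai Python SDK (v1.x); the get_weather tool schema and the prompt are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()

# One illustrative tool; the model may request several calls in a single reply.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Compare the weather in Paris and Tokyo."}],
    tools=tools,
)

# With parallel function calling, tool_calls can contain multiple entries,
# e.g. one get_weather call for Paris and one for Tokyo.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```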
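A related sketch combines JSON mode, the seed parameter for reproducible outputs, and log probabilities. Log probabilities were announced as arriving shortly after DevDay, so their availability depends on the model and SDK version; the prompt and seed value are illustrative.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "Reply in JSON."},  # JSON mode requires mentioning JSON
        {"role": "user", "content": "List three primary colors as a JSON array under the key 'colors'."},
    ],
    response_format={"type": "json_object"},  # constrains output to valid JSON
    seed=1234,       # same seed + same parameters -> largely reproducible output
    logprobs=True,   # attach log probabilities to each output token
    top_logprobs=3,  # and the 3 most likely alternatives per position
)

print(response.choices[0].message.content)
print(response.system_fingerprint)  # changes when the backend configuration changes
for token_info in response.choices[0].logprobs.content[:5]:
    print(token_info.token, token_info.logprob)
```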