OpenAI recently announced a significant update to its API: the release of the new GPT-4-0613 and GPT-3.5-Turbo-0613 models, pricing changes, and a new function calling capability that promises to revolutionize the way developers interact with the OpenAI API.
Function Calling for Automation and Connectivity
One of the biggest new features in this update is function calling. The new models allow developers to describe functions to the model, and the model responds with a JSON object containing the arguments for a call to an external tool, API, or database. In practice, GPT-4 can now convert natural-language user inputs and queries into API calls or database queries and hand structured data back to your application.
For example, rather than returning a plain-text answer to a question like “What’s the weather like in Boston?”, GPT-4 can now return a JSON object describing a function call – in this case, a call to a weather function with a “Boston, MA” location argument. Your code runs that function, passes the result back to the model, and the model turns it into a reply for the user. This makes it much easier to connect GPT-4 to external services, allowing your model to produce useful, structured data in response to user queries.
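To make this concrete, here is a minimal sketch of that round trip using the openai Python library (the pre-1.0, ChatCompletion-style interface available at the time of this release). The get_current_weather helper is a hypothetical stand-in for a real weather API, and an OPENAI_API_KEY environment variable is assumed.

```python
import json
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

def get_current_weather(location, unit="fahrenheit"):
    # Hypothetical stand-in: a real app would query a weather service here.
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

# 1. Describe the function to the model alongside the user's question.
question = {"role": "user", "content": "What's the weather like in Boston?"}
response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[question],
    functions=[{
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g. Boston, MA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }],
    function_call="auto",
)
message = response["choices"][0]["message"]

# 2. If the model chose to call the function, its arguments arrive as a JSON string.
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    weather = get_current_weather(**args)

    # 3. Send the function's result back so the model can answer in natural language.
    followup = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=[
            question,
            message,
            {"role": "function", "name": "get_current_weather", "content": weather},
        ],
    )
    print(followup["choices"][0]["message"]["content"])
```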
Function calling is not limited to this specific example. Developers can build bots that answer questions using external tools, convert natural-language queries into function calls, extract structured data from text, and much more. Over time, developers will come up with new and exciting ways to integrate GPT-4 with existing technologies.
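As one illustration of the structured-extraction use case, the sketch below forces the model to “call” a hypothetical extract_people function; no real function needs to exist, since the point is simply to get back JSON matching the schema. The schema and example text are invented for this demo.

```python
import json
import openai

text = "John was born in 1990 and met Sarah, born in 1992, at a conference."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": f"Extract the people mentioned here: {text}"}],
    functions=[{
        "name": "extract_people",
        "description": "Record every person mentioned in the text",
        "parameters": {
            "type": "object",
            "properties": {
                "people": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "birth_year": {"type": "integer"},
                        },
                        "required": ["name"],
                    },
                }
            },
            "required": ["people"],
        },
    }],
    function_call={"name": "extract_people"},  # force a structured response
)

# The "function call" is just schema-shaped JSON we can parse directly.
arguments = response["choices"][0]["message"]["function_call"]["arguments"]
print(json.loads(arguments))
```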
GPT-4 Updates and Models
In addition to function calling, the update includes numerous other changes. OpenAI released gpt-4-0613 and gpt-3.5-turbo-0613, updated versions of gpt-4 and gpt-3.5-turbo. There’s also a new 16k-context version of gpt-3.5-turbo, gpt-3.5-turbo-16k, which can handle roughly four times as much text as the standard gpt-3.5-turbo in a single call, along with a significant 75% cost reduction on the embeddings model.
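If you already call gpt-3.5-turbo, taking advantage of the larger context window is just a model-name change, as in this small sketch (long_report.txt is a hypothetical document of several thousand tokens):

```python
import openai

with open("long_report.txt") as f:   # hypothetical long document
    document = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",       # 16k-token context vs. 4k for gpt-3.5-turbo
    messages=[
        {"role": "system", "content": "Summarize the document in five bullet points."},
        {"role": "user", "content": document},
    ],
)
print(response["choices"][0]["message"]["content"])
```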
All of these updates add up to an unprecedented level of functionality and flexibility in GPT-4. Whether you’re creating a chatbot, a natural language processing system, or any other AI-driven application, GPT-4 gives you the power to take your project to the next level.
Lower Pricing for Developers
OpenAI has also announced pricing reductions, lowering the cost of the embeddings model and of input tokens for gpt-3.5-turbo. GPT-4-32k offers an extended context length for better comprehension of longer texts, and gpt-3.5-turbo-16k is available for $0.003 per 1K input tokens. This means developers can access more cost-effective AI services when building a range of applications.
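As a rough illustration of what that price means in practice, here is a back-of-the-envelope estimate using only the $0.003 per 1K input tokens figure quoted above; output tokens are billed separately and are left out of this sketch.

```python
def estimate_input_cost(num_input_tokens: int, price_per_1k: float = 0.003) -> float:
    """Dollar cost of the input side of a gpt-3.5-turbo-16k call."""
    return num_input_tokens / 1000 * price_per_1k

# A ~12,000-token document costs about $0.036 in input tokens.
print(f"${estimate_input_cost(12_000):.3f}")
```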
Developer Feedback is Key to OpenAI’s Progress
OpenAI values developer feedback and adjusts its models and APIs accordingly. As GPT-4 usage increases, OpenAI will continue to improve its models and capabilities. Developers have a unique opportunity to shape the development of GPT-4 and help OpenAI move closer to its goal of artificial general intelligence.
Getting Started with GPT-4
If you’re interested in GPT-4, you’re not alone. The new model has generated a great deal of excitement amongst developers, and OpenAI’s waitlist is shrinking as more users get access to the model.
The good news is that getting started with GPT-4 is quite simple, especially with the resources available. OpenAI’s own developer documentation is a great starting point: it gives an overview of the features available on GPT-4 and instructions on how to use them.
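If you want something you can run immediately, a first call might look like the sketch below, assuming you have installed the openai Python package and exported your API key as OPENAI_API_KEY.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is already exported

response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "Give me one tip for learning the OpenAI API."}],
)
print(response["choices"][0]["message"]["content"])
```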
There are also several community resources for developers interested in GPT-4, such as the OpenAI developer forum and GitHub, where users can discuss and share code related to GPT-4.
GPT-4: The Next Generation in AI
GPT-4 from OpenAI has brought a new level of power, flexibility and cost efficiency to developers. With its function calling, extended context length, and lower pricing for developers, GPT-4 has opened up a range of possibilities that will undoubtedly revolutionize a wide range of applications.
At the same time, OpenAI’s commitment to continually evolving GPT-4 based on user feedback has earned it the respect and trust of the developer community. With GPT-4, developers have a unique opportunity to shape the future of AI and create the applications of tomorrow. You can read the full post on the OpenAI Blog.