Gemini 1.5 Pro and 1.5 Flash Now Generally Available

Google has announced the stable release of its Gemini 1.5 Flash and 1.5 Pro models, along with a host of API updates and improvements to Google AI Studio. These updates give developers a more efficient and cost-effective way to build and deploy AI applications at scale.

The new Gemini 1.5 Flash model is optimized for speed and efficiency, is highly capable at multimodal reasoning, and features Google's breakthrough long context window.

One of the key highlights is the increased rate limit for Gemini 1.5 Flash, which now supports up to 1000 requests per minute (RPM) with no daily request limit. This change comes in response to developer feedback requesting lower latency and cost for high-volume tasks. The 1.5 Pro rate limit remains unchanged for now, but Google encourages developers to reach out if they need higher limits or have feedback.
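For high-volume workloads running near that limit, the practical pattern is to back off and retry when the per-minute quota is hit. Below is a minimal sketch using the google-generativeai Python SDK; the model name, placeholder API key, and retry parameters are illustrative, and the assumption that quota errors surface as ResourceExhausted exceptions comes from the SDK's underlying Google API client, not from the announcement itself.

```python
import time

import google.generativeai as genai
from google.api_core import exceptions as gexc

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")


def generate_with_backoff(prompt: str, max_retries: int = 5) -> str:
    """Call the model, backing off exponentially if the RPM quota is hit."""
    delay = 1.0
    for _ in range(max_retries):
        try:
            return model.generate_content(prompt).text
        except gexc.ResourceExhausted:
            # 429: per-minute quota exceeded; wait and try again.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Rate limit retries exhausted")
```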

Starting June 17th, Gemini 1.5 Flash will also support model tuning, allowing developers to customize the model for better performance in production. Tuning will be available in both Google AI Studio and the Gemini API; tuning jobs are free of charge, and there are no additional per-token costs for using a tuned model.
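The announcement does not detail the request shape for Flash tuning, but the existing Gemini API tuning surface in the Python SDK gives a sense of what a job could look like. In the sketch below, the source model name, the toy training data, the job id, and the hyperparameters are all placeholder assumptions.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Toy training set: each example pairs an input with the desired output.
training_data = [
    {"text_input": "1", "output": "2"},
    {"text_input": "2", "output": "3"},
    {"text_input": "seven", "output": "eight"},
]

# Kick off a tuning job. The source model name, id, and hyperparameters
# below are illustrative placeholders, not values from the announcement.
operation = genai.create_tuned_model(
    source_model="models/gemini-1.5-flash-001-tuning",
    training_data=training_data,
    id="increment-demo",
    epoch_count=5,
    batch_size=4,
    learning_rate=0.001,
)

tuned = operation.result()  # blocks until the tuning job completes
model = genai.GenerativeModel(model_name=tuned.name)
print(model.generate_content("four").text)
```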

To unlock higher API rate limits, developers can now set up a billing account in Google AI Studio. Pricing information for the Gemini 1.5 models is available on the Google AI pricing page, and developers can seek assistance on the developer forum if they encounter any issues during the billing setup process. For those requiring enterprise-grade features, the same models are also accessible via Vertex AI, Google's enterprise-ready AI platform.
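As a reference point for what the paid tier buys, here is a minimal, hedged example of calling one of the stable models with a key created in Google AI Studio; inspecting the response's usage metadata is one way to sanity-check spend against the per-token pricing. The API key and prompt are placeholders.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key created in Google AI Studio

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Summarize the Gemini 1.5 release in one sentence.")
print(response.text)

# Billing is per token, so the usage metadata is a quick way to
# estimate cost against the published pricing.
usage = response.usage_metadata
print(usage.prompt_token_count, usage.candidates_token_count, usage.total_token_count)
```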

Lastly, Google is introducing JSON schema mode, which lets developers specify the desired JSON schema for model responses. This opens up new possibilities for use cases that require the model to adhere to specific output constraints, such as following a predefined structure or outputting only specific text.
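The post does not spell out the exact request format, but in the Python SDK the JSON controls are exposed through the generation config. The sketch below assumes a response_mime_type of application/json plus a response_schema parameter, with a hypothetical Recipe type standing in for a real schema.

```python
from typing_extensions import TypedDict

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key


class Recipe(TypedDict):
    # Hypothetical schema the model's JSON output should follow.
    recipe_name: str
    ingredients: list[str]


model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "List two simple cookie recipes.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",  # force JSON output
        response_schema=list[Recipe],           # constrain output to the schema
    ),
)
print(response.text)  # a JSON array whose items match Recipe
```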

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
