JetBrains plans to use the Google Cloud Vertex AI development platform to bring Google Gemini AI models into JetBrains’ AI Assistant, the company’s AI-powered programming assistant that integrates with JetBrains IDEs (integrated development environments).
JetBrains announced the plan on June 18. AI Assistant will combine the functionality of OpenAI’s GPT-4o, the Gemini models, and several of JetBrains’ proprietary models, automatically selecting the most suitable LLM for each task, JetBrains said. Gemini 1.5 Pro and Gemini 1.5 Flash on Vertex AI will unlock new use cases for AI Assistant with a long context window and advanced reasoning, the company added. The recently released Gemini 1.5 Flash helps with use cases where cost efficiency at high volume and low latency are paramount. The new models will be rolled out in the coming weeks.
Integrated with JetBrains IDEs, AI Assistant can generate code, suggest fixes, refactor functions, and answer questions with contextual understanding, JetBrains said. Developers can ask questions in a chat within the IDE, and AI Assistant gathers the relevant context to provide an answer. AI Assistant can also generate tests, documentation, and commit messages.
JetBrains previously launched full-line code autocompletion powered by locally run AI models. Because data processing and analysis occur directly on the developer’s device, code completion works offline with minimal latency, JetBrains said.