Tabnine, the “OG” AI-powered coding assistant, offers several powerful features that improve code completion and code generation, including context-aware suggestions, a chat window with a strong array of selectable AI models, and personalization of its models. The Tabnine Protected model supports about 15 popular programming languages at the “excellent” or “good” level, and another 65 or so languages and frameworks at varying levels of support. Tabnine Protected 2 (the latest update, which dropped as I was writing this review) supports more than 600 programming languages and frameworks. Tabnine expects its prompts to be in English, although other languages may work.
The use cases for Tabnine cover the full software development life cycle (SDLC), although the product currently lacks any support for the command-line interface (CLI). Tabnine answers common developer questions and requests, such as “Where in our code base do we …,” “Write a unit test for this code,” “Generate documentation for this function,” and “Explain what this code does.” Its capabilities include generating code from plain language, onboarding developers to new code bases, autonomous generation of tests and documentation, code refactoring, and AI-generated fixes.
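To make the “Write a unit test for this code” request concrete, here is a minimal sketch in Python. The function and the pytest-style tests are my own illustration of the kind of exchange such an assistant handles, not actual Tabnine output.

# Hypothetical function a developer might highlight before asking
# "Write a unit test for this code" in the chat window.
def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

# The sort of pytest tests an AI assistant could plausibly generate in response.
def test_collapses_interior_whitespace():
    assert normalize_whitespace("  hello \t  world  ") == "hello world"

def test_empty_string_stays_empty():
    assert normalize_whitespace("") == ""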
Tabnine competes directly with GitHub Copilot, JetBrains AI Assistant, Sourcegraph Cody, and Amazon Q Developer, and indirectly competes with a number of large language models (LLMs) and small language models (SLMs) that know about code, such as Code Llama, StarCoder, Bard/Gemini Pro, OpenAI Codex, and Mistral Codestral. Because Tabnine currently allows you to choose among seven good AI models for its chat window, it’s hard to take its indirect competition very seriously.