Copilot, GitHub’s AI-powered coding tool, will be free for students

Last June, Microsoft-owned GitHub and OpenAI launched Copilot, a service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Available as a downloadable extension, Copilot is powered by an AI model called Codex that's trained on billions of lines of public code to suggest additional lines and entire functions given the context of existing code. Copilot can also surface an approach or solution in response to a description of what a developer wants to accomplish (e.g., "Say hello world"), drawing on its knowledge base and the current context.
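To illustrate, a prompt like the one above typically takes the form of a comment, which Copilot completes with suggested code. The completion below is a hypothetical example of the kind of suggestion the tool might offer, not actual Copilot output:

```python
# Say hello world
def hello_world():
    print("Hello, world!")
```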

While Copilot was previously available in technical preview, it’ll become generally available starting sometime this summer, Microsoft announced at Build 2022. Copilot will also be available free for students as well as “verified” open-source contributors. On the latter point, GitHub said it’ll share more at a later date.

The Copilot experience won't change much with general availability. As before, developers will be able to cycle through suggestions for Python, JavaScript, TypeScript, Ruby, Go, and dozens of other programming languages and accept, reject, or manually edit them. Copilot will adapt to the edits developers make, matching particular coding styles to autofill boilerplate or repetitive code patterns and recommending unit tests that match implementation code.
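As a sketch of what that last capability might look like in practice, suppose a developer has written a small helper function; a Copilot-style suggestion for a matching unit test could resemble the following (an illustrative mock-up, not actual Copilot output):

```python
# Implementation the developer has already written:
def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    return "-".join(title.lower().split())

# A unit test of the kind Copilot might suggest to match it:
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Mixed CASE Title ") == "mixed-case-title"
```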

Copilot extensions will be available for Neovim and JetBrains IDEs in addition to Visual Studio Code, or in the cloud on GitHub Codespaces.

One new feature coinciding with the general release of Copilot is Copilot Explain, which translates code into natural language descriptions. Described as a research project, the feature is meant to help novice developers and those working with an unfamiliar codebase.

“Earlier this year we launched Copilot Labs, a separate Copilot extension developed as a proving ground for experimental applications of machine learning that improve the developer experience,” Ryan J. Salva, VP of product at GitHub, told TechCrunch in an email interview. “As a part of Copilot Labs, we launched ‘explain this code’ and ‘translate this code.’ This work fits into a category of experimental capabilities that we are testing out that give you a peek into the possibilities and let us explore use cases. Perhaps with ‘explain this code,’ a developer is wading into an unfamiliar codebase and wants to quickly understand what’s happening. This feature lets you highlight a block of code and ask Copilot to explain it in plain language. Again, Copilot Labs is intended to be experimental in nature, so things might break. Labs experiments may or may not progress into permanent features of Copilot.”

Image: Copilot's new feature, Copilot Explain, translates code into natural language explanations.
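To give a sense of the experience, highlighting a short block and asking Copilot to explain it might play out something like this (a hypothetical illustration of the feature, not actual Copilot Explain output):

```python
# Highlighted code:
counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1

# The kind of plain-language explanation the feature aims to produce:
# "This code splits the text into individual words and counts how many
# times each word appears, storing the totals in a dictionary."
```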

Owing to the complicated nature of AI models, Copilot remains an imperfect system. GitHub warns that it can produce insecure coding patterns, bugs, references to outdated APIs, or idioms reflecting the less-than-perfect code in its training data. The code Copilot suggests might not always compile, run, or even make sense, because Copilot doesn't actually test its suggestions. Moreover, in rare instances, Copilot suggestions can include personal data like names and emails verbatim from its training set and, worse still, "biased, discriminatory, abusive, or offensive" text.

GitHub said that it has implemented filters to block email addresses when they appear in standard formats, as well as offensive words, and that it's in the process of building a filter to help detect and suppress code that's repeated from public repositories. “While we are working hard to make Copilot better, code suggested by Copilot should be carefully tested, reviewed, and vetted, like any other code,” the disclaimer on the Copilot website reads.
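GitHub hasn't published how these filters work, but a minimal sketch of the general idea, assuming a simple pattern check over each suggestion before it's shown (the function and regex below are hypothetical), might look like this:

```python
import re

# A common regex for email addresses in standard formats (an assumption;
# GitHub has not disclosed its actual filtering rules).
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def suppress_suggestion(suggestion: str, blocked_words: set[str]) -> bool:
    """Return True if a generated suggestion should be withheld."""
    if EMAIL_PATTERN.search(suggestion):
        return True  # looks like it contains a verbatim email address
    words = set(suggestion.lower().split())
    return bool(words & blocked_words)  # contains a blocked/offensive word
```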

While Copilot has presumably improved since its launch in technical preview last year, it's unclear by how much. The capabilities of the underlying model, Codex, a descendant of OpenAI's GPT-3, have since been matched (or even exceeded) by systems like DeepMind's AlphaCode and the open-source PolyCoder.

“We are seeing progress in Copilot generating better code … We’re using our experience with [other] tools to improve the quality of Copilot suggestions — e.g., by giving extra weight to training data scanned by CodeQL, or analyzing suggestions at runtime,” Salva asserted — “CodeQL” referring to GitHub’s code analysis engine for automating security checks. “We’re committed to helping developers be more productive while also improving code quality and security. In the long term, we believe Copilot will write code that’s more secure than the average programmer.”

The lack of transparency doesn't appear to have dampened enthusiasm for Copilot, which Microsoft said today is suggesting about 35% of the code that technical preview developers write in languages like Java and Python. Tens of thousands of developers have used the tool regularly throughout the preview, the company claims.
