GitHub invites programmers to talk to Copilot directly

In brief GitHub is testing a new feature that will allow developers to instruct its AI-powered programming assistant Copilot to generate code using voice commands.

The legally troubled, license-bothering technology isn’t a plain speech-to-text dictation engine that would require developers to read out their program source line by line. Instead, “Hey, GitHub!” works as a voice interface for Copilot, which automatically suggests code from prompts.

The idea is that coders will be able to describe a function out loud in general terms, and have Microsoft-owned Copilot recommend the source code to fulfill that request.

As usual, developers can decide whether they want to keep or scrap Copilot’s suggestions. Hey, GitHub! is designed to help them program faster using their voice. They can instruct the software to autocomplete boilerplate code, and manually edit any suggested output with their keyboards. The new feature can also be used to move code around or provide summaries to make scripts easier to read and understand.
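As an illustration of the kind of exchange envisioned (this is a hypothetical sketch, not an official "Hey, GitHub!" transcript or documented Copilot output), a spoken description of a function might produce a suggestion along these lines:

```python
# Hypothetical suggestion for the spoken prompt:
# "Hey, GitHub! Write a function that counts the vowels in a string."
def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in text, case-insensitively."""
    return sum(1 for ch in text.lower() if ch in "aeiou")
```

The developer would then accept, tweak, or discard the suggestion with the keyboard, as described above.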

Hey, GitHub! will be made available as part of the $10 subscription fee for Copilot. If you’re interested, you can sign up for the technical preview here.

New Amazon AI robot

Amazon showed off a new robotic arm, named Sparrow, that runs machine-learning algorithms to automatically identify and sort items for packaging. The timing is notable, as Amazon workers try to unionize and complain about working conditions and long hours.

Sparrow was presented on stage at Amazon’s Delivering the Future conference this week. It’s a big L-shaped arm with a gripper at one end; it uses suction cups at the end of the gripper’s fingers to pick objects up and sort them into bins. Jason Messinger, principal technical product manager of robotic manipulation at Amazon Robotics, said Sparrow can successfully grab all manner of objects of various sizes, even those with curved surfaces.

Using computer vision technology, the computer system controlling the robotic arm is capable of object recognition and can reportedly identify around 65 percent of Amazon’s inventory. “This is not just picking the same things up and moving it with high precision, which we’ve seen in previous robots,” Messinger said, according to CNBC.

Amazon is investing in AI robots to perform tedious and repetitive tasks, potentially relieving itself of the need to hire quite so many humans.

Midjourney releases upgraded AI text-to-image tool

Midjourney, best known for its subscription-based text-to-image software with a particularly artsy aesthetic, has released version four of its model.

“V4 is an entirely new codebase and totally new AI architecture,” Midjourney founder David Holz said in the company’s Discord channel. “It’s our first model trained on a new Midjourney AI supercluster and has been in the works for over 9 months.”

Folks over at Ars Technica tested the model and found that, compared to v3, v4 turned text prompts into images with better scene composition and more appropriate sizing of objects relative to one another. The latest version was also better at producing realistic-looking pictures.

Holz previously told The Register he didn’t want Midjourney to get too good at generating images that were realistic enough to pass as fake photographs. “For us, when we were optimizing it, we wanted it to kind of look beautiful, and beautiful doesn’t necessarily mean realistic.

“If anything, actually we do bias it a little bit away from photos. … I know this technology can be used as a deep fake super machine. And I don’t think the world needs more fake photos. I don’t really want to be a source of fake photos in the world.” ®
