Source: VentureBeat, December 2018

Ctrl-labs, a New York startup that’s raised $39 million in funding to develop a device capable of translating electrical muscle impulses into digital signals, this week detailed its forthcoming developer kit at Slush in Helsinki, Finland. Dubbed Ctrl-kit, it’ll begin shipping to developers in Q1 2019.

“With Ctrl-kit, you become the controller. Extracting the meaning of your movements. Taking biological neurons, sending that directly into computational neurons. This way humans can dominate those computational neurons with their own neurons, to leverage things like machine learning,” Ctrl-labs cofounder and CEO Thomas Reardon said during his keynote at Slush.

Much has changed since the team began demoing prototype devices last year, Adam Berenzweig, director of research and development at Ctrl-labs, told VentureBeat in a phone interview. The device is no longer tethered by wires to a Raspberry Pi, as previous incarnations were, though the current form factor emphasizes robustness over sleekness: radios are packed tightly into a wrist-worn enclosure the size of a “large watch,” wired to a component with electrodes that sits further up the arm, as seen in the latest demo videos. Latency is also down, and the algorithms that predict intent are “significantly better” than they were before.

“We’re at the point of the launch where … we want to get it out [to] developers,” Berenzweig said.

On the software side of the equation, the accompanying software development kit is “more mature,” with built-out JavaScript and TypeScript toolchains and new prebuilt demos that give an idea of the hardware’s capabilities. Programming is largely done through WebSockets, which provide a full-duplex communications channel.
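Ctrl-labs hasn’t published full API documentation yet, but a WebSocket-based client might look something like the TypeScript sketch below. The endpoint URL, subscribe command, and message shape are illustrative assumptions, not the actual Ctrl-kit interface.

```typescript
// Hypothetical sketch: streaming EMG frames from a Ctrl-kit-style device
// over a full-duplex WebSocket. URL and message schema are assumed.
interface EmgSample {
  timestamp: number;  // device clock, in milliseconds
  channels: number[]; // one reading per electrode (e.g., 16 values)
}

const socket = new WebSocket("ws://localhost:9999/stream"); // assumed endpoint

socket.onopen = () => {
  // Ask the device to start streaming raw EMG frames (assumed command format).
  socket.send(JSON.stringify({ command: "subscribe", stream: "emg" }));
};

socket.onmessage = (event: MessageEvent) => {
  const sample: EmgSample = JSON.parse(event.data);
  // Hand each frame to application code, e.g., a gesture recognizer.
  console.log(`t=${sample.timestamp}`, sample.channels);
};

socket.onerror = () => console.error("Lost connection to the device");
```

The full-duplex nature of WebSockets matters here: the same connection that streams samples down to the application can carry configuration commands back up, without extra round trips.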

“Part of the launch will be showing how to connect the [device] and do simple operations,” Berenzweig said. “The plan is to [maintain] a continuous pace of rolling out new and more sophisticated demos [and features] in the SDK, and … real working examples [that demonstrate] best practices around how to design around the technology.”

The final version of Ctrl-kit will be a single piece of hardware, but it won’t be an entirely self-contained affair. The developer kit has to be tethered to a PC for some processing, though the goal is to shrink that overhead to the point where everything can run on a wearable system-on-chip.

The underlying technology remains the same. Ctrl-kit leverages differential electromyography (EMG) to translate mental intent into action, specifically by measuring changes in electrical potential caused by impulses traveling from the brain to the hand muscles. Sixteen electrodes monitor the motor neuron signals amplified by the muscle fibers of motor units and, with the help of a machine learning model trained using Google’s TensorFlow, distinguish between the individual pulses of each nerve.
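The article’s description stops short of the model itself, but the general shape of the approach can be sketched: a small classifier that maps a window of multi-channel EMG samples to a discrete intent. In the TensorFlow.js sketch below, the window size, architecture, and class labels are all illustrative assumptions rather than Ctrl-labs’ actual pipeline.

```typescript
import * as tf from "@tensorflow/tfjs";

// Illustrative only: classify a short window of 16-channel EMG readings
// into a handful of intents. Shapes and labels are assumptions.
const WINDOW = 50;   // samples per window (assumed)
const CHANNELS = 16; // one per electrode
const INTENTS = ["rest", "pinch", "swipe", "tap"]; // hypothetical classes

const model = tf.sequential({
  layers: [
    tf.layers.flatten({ inputShape: [WINDOW, CHANNELS] }),
    tf.layers.dense({ units: 64, activation: "relu" }),
    tf.layers.dense({ units: INTENTS.length, activation: "softmax" }),
  ],
});

model.compile({
  optimizer: "adam",
  loss: "categoricalCrossentropy",
  metrics: ["accuracy"],
});

// At inference time, a buffered window of samples yields one prediction.
function classify(window: number[][]): string {
  const input = tf.tensor3d([window]); // shape [1, WINDOW, CHANNELS]
  const scores = model.predict(input) as tf.Tensor;
  const idx = scores.argMax(-1).dataSync()[0];
  input.dispose();
  scores.dispose();
  return INTENTS[idx];
}
```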

The system works independently of muscle movement; generating a signal that Ctrl-labs’ tech can detect requires no more than the firing of a neuron down an axon, or what neuroscientists call an action potential. That puts it a class above wearables that use electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw on the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning model and the snugness of the contacts against the skin.

As for what Ctrl-labs expects its early adopters to build with Ctrl-kit, video games top the list — particularly virtual reality games, which Berenzweig believes are a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.)
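An integration along those lines could be as simple as routing decoded intents into game logic. In this sketch, the event shape, intent names, and confidence threshold are hypothetical.

```typescript
// Hypothetical glue code: route decoded gesture intents into UI actions.
type Intent = "swipe_left" | "swipe_right" | "grab" | "release";

interface GestureEvent {
  intent: Intent;
  confidence: number; // 0..1, from the decoder
}

function handleGesture(event: GestureEvent, inventory: { page: number }): void {
  if (event.confidence < 0.8) return; // ignore low-confidence detections
  switch (event.intent) {
    case "swipe_left":
      inventory.page = Math.max(0, inventory.page - 1);
      break;
    case "swipe_right":
      inventory.page += 1;
      break;
    // "grab" and "release" would map to item pickup in a full game loop.
  }
}
```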

Ctrl-labs is also thinking smaller. Not too long ago, the company demonstrated a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop. And at the 2018 O’Reilly AI conference in New York City, Reardon spoke about text messaging apps for smartphones and smartwatches that let you peck out replies one-handed.
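The virtual keyboard suggests the same event-driven pattern at a finer grain: a decoded finger tap becomes a synthetic keystroke. The event shape and dispatch mechanism below are assumptions for illustration, not how Ctrl-labs’ demo actually works.

```typescript
// Hypothetical sketch: forward a decoded tap to the focused element
// as a synthetic DOM keyboard event.
interface TapEvent {
  finger: "thumb" | "index" | "middle" | "ring" | "pinky";
  key: string; // the key the decoder believes the wearer intended
}

function dispatchTap(tap: TapEvent): void {
  const keyEvent = new KeyboardEvent("keydown", { key: tap.key });
  document.activeElement?.dispatchEvent(keyEvent);
}
```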

It remains to be seen if Ctrl-labs can succeed where others have failed. In October, Amazon-backed wearables company Thalmic Labs killed its gesture- and motion-guided Myo armband, which similarly tapped the electrical activity in arm muscles to control devices. But Berenzweig is convinced that Ctrl-labs’ early momentum, plus the robustness of its developer tools, will help it gain an early lead in the brain-machine interface race.

“We’re enabling experiences we can’t imagine doing with an existing controller,” he said. “It’s real-time control mapped to a high-dimensional space.”
