
Nvidia Inception identifies the top 4 AI startups for autonomous systems

Nvidia created its Inception program to identify the best artificial intelligence startups — those that will change the world and tap the computing power of Nvidia’s graphics processing unit (GPU) and AI processors. To that end, the company created a contest with a $1 million prize pool and entertained pitches from more than 200 companies, with 12 of them presenting last week at its new headquarters in Santa Clara, California.
I attended the Shark Tank-style event, as did Nvidia CEO Jensen Huang, a panel of four judges, and various sponsors. Twelve semifinalists gave eight-minute pitches, six finalists were selected, and the final winners will be picked at the company’s GPU Technology Conference on March 27 in San Jose, California. Successful pitches covered everything from legged robots to machine learning for sound analysis.
“AI is enjoying a revolution: software that writes software, machines that learn by themselves, solving problems that human software engineers had no possibility of addressing until now have finally hit the scene,” said Huang, who cofounded Nvidia in 1993. “I believe one of the great companies of the future is in our presence today.”
The 12 speakers were divided into three categories: AI for autonomous systems, health care, and enterprise. Here are the details of the four autonomous systems pitches.

Kinema Systems

Above: Sachin Chitta, CEO of Kinema Systems.
Image Credit: Dean Takahashi
Sachin Chitta, CEO of Kinema Systems, started his company to get robots to do a task that is very simple from a human perspective: picking up boxes and putting them on conveyor belts. People are good at this kind of intuitive work, but machines are not. Kinema had to develop sophisticated 3D vision systems and deep learning neural networks to create robots smart enough to do it.
Kinema Pick is software designed to pick and “depalletize” boxes for logistics, manufacturing, and shipping uses. It has 3D vision to discern the sizes and shapes of boxes, which come in millions of variations. The software is installed on a PC with a GPU, and it integrates with Kinema’s 3D/2D camera. What’s good about this, Chitta said, is that it is far more flexible than prior robotic systems and lets robots move around their own work areas without being controlled by humans. It also works at high speed.
Above: Kinema Systems can power a warehouse robot.
Image Credit: Kinema Systems
“We do fast, real-time inferencing for locating boxes,” Chitta said. “And we know there are millions of types of boxes in the world.”
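Chitta didn’t go into implementation details, but the inference step he describes (a deep network on a GPU locating boxes in camera frames) can be sketched in a few lines. The Python below is purely illustrative: the off-the-shelf torchvision detector is a stand-in for Kinema’s box-trained network, and the 3D depth data the real system fuses in is omitted.

    # Illustrative only: a generic GPU-accelerated detector standing in
    # for Kinema's proprietary box-locating network.
    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval().to("cuda")

    def locate_boxes(rgb_frame: torch.Tensor, score_threshold: float = 0.8):
        """Return bounding boxes for candidate cartons in one camera frame.

        rgb_frame: float tensor of shape (3, H, W) with values in [0, 1].
        """
        with torch.no_grad():
            detections = model([rgb_frame.to("cuda")])[0]
        keep = detections["scores"] > score_threshold
        return detections["boxes"][keep].cpu()  # (N, 4) pixel coordinates

From there, the 2D detections would presumably be combined with the camera’s depth data to compute grasp poses for the robot arm.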
The company was founded in June 2015 and currently has fewer than five people. To date, it has raised $1.8 million from Formation 8. The founders came from Willow Garage, which developed the open source Robot Operating System (ROS), and SRI, the Silicon Valley research lab that also did pioneering work in AI and robotics. The camera and software sell for about $50,000.
Rivals include Cognex, Omron Adept Technologies, and others. Chitta said his company is very focused on navigating 3D environments, and the software works with off-the-shelf robots.

Morpheus Labs

Above: Kirsty Lloyd-Jukes, managing director of Morpheus Labs.
Image Credit: Dean Takahashi
Kirsty Lloyd-Jukes, managing director at Morpheus Labs in the United Kingdom, said her company is dedicated to deciphering how humans behave on the road. It studies raw footage from publicly available traffic cameras, which shows people acting as pedestrians, cyclists, and motor vehicle drivers.
“The No. 1 cause of death of adults under 44 is road accidents,” Lloyd-Jukes said, adding that “40,000 people died last year in the U.S. in car accidents. Ninety-five percent [of those] are caused by human error.”
Humans don’t follow the rules. We speed. Drive tired. Get distracted. Text our friends. Autonomous cars can solve this, but they have a hard time navigating the road along with other humans because those humans break the rules that the autonomous cars are built to obey. So Morpheus Labs took all of the data from traffic cameras showing real human behavior and fed it into a prediction model and simulator.
“We start with raw video data in traffic cameras and throw it at our computer vision network,” Lloyd-Jukes said. “It identifies behaviors. We then go into learning algorithms. Autonomous [cars] can solve this problem. But the single biggest challenge they face is dealing with other humans. The last piece of the puzzle is prediction. How do you anticipate what a human is going to do? To do that, you need a really good model of human behavior. That is what Morpheus Labs is going to do.”
Now it can do a much better job of predicting how humans will behave in a specific environment, such as a road roundabout, and it can feed that data to the autonomous cars. The cars can then navigate better based on what real humans are likely to do in the same situation. Morpheus-based agents can go through a roundabout without getting in a wreck or hitting a pedestrian.
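Lloyd-Jukes didn’t describe the models themselves, but the prediction stage she outlines (forecasting where a road user will go next, given where they have been) is a standard sequence-modeling problem. Here is a toy sketch in Python, with a generic LSTM standing in for whatever Morpheus Labs actually trains on its traffic-camera data:

    # Toy trajectory predictor: observe an agent's recent (x, y) positions,
    # forecast the next few. The architecture is a generic stand-in, not
    # Morpheus Labs' actual model.
    import torch
    import torch.nn as nn

    class TrajectoryPredictor(nn.Module):
        def __init__(self, hidden_size: int = 64, horizon: int = 10):
            super().__init__()
            self.horizon = horizon
            self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_size,
                                   batch_first=True)
            self.head = nn.Linear(hidden_size, horizon * 2)

        def forward(self, history: torch.Tensor) -> torch.Tensor:
            # history: (batch, timesteps, 2) observed positions per agent
            _, (h, _) = self.encoder(history)
            out = self.head(h[-1])                # (batch, horizon * 2)
            return out.view(-1, self.horizon, 2)  # (x, y) per future step

    model = TrajectoryPredictor()
    observed = torch.randn(8, 30, 2)  # 8 agents, 30 observed timesteps
    forecast = model(observed)        # shape (8, 10, 2)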
The company has fewer than 15 people and was founded in February 2017. It has raised $2 million from Oxford Sciences Innovation and Oxford Capital Partners.
“So many lives are lost on the roads unnecessarily,” Lloyd-Jukes said.

Cochlear.ai

Above: Yoonchang Han, CEO of Cochlear.ai.
Image Credit: Dean Takahashi
Yoonchang Han, CEO of Cochlear.ai, traveled from South Korea for his pitch. He noted that AI experts have long studied machine learning, computer vision, natural language, and speech, and Han himself has spent years on audio systems and machine learning. Cochlear.ai believes AI research has largely ignored another valuable source of data.
“Through this powerful [machine learning] technology, computer vision can detect images, see the humans, and understand what they are doing,” Han said. “We are missing something important here: sound.”
Cochlear.ai uses signal processing and machine learning to capture sound data and makes it available through a cloud application programming interface (API). The data can be used in devices for hearing-impaired people or in self-driving cars, supplying additional information that enables those systems to make better decisions.
If a computer hears the sound of rain, it can suggest that you take an umbrella as you head out the door. If it hears footsteps, it may be able to identify them as coming from high-heeled shoes.
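Han didn’t detail Cochlear.ai’s pipeline, but the combination he names, signal processing followed by machine learning, typically means turning raw audio into a spectrogram and handing it to a trained classifier. Here is a minimal sketch in Python; the feature settings and file name are illustrative, not Cochlear.ai’s:

    # Standard front end for sound-event detection: convert audio into a
    # log-mel spectrogram that a trained classifier can consume.
    import librosa
    import numpy as np

    def log_mel_features(path: str, sr: int = 16000) -> np.ndarray:
        """Load an audio file and return log-mel features for a classifier."""
        audio, _ = librosa.load(path, sr=sr, mono=True)
        mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
        return librosa.power_to_db(mel)  # shape (64, num_frames)

    features = log_mel_features("front_door.wav")  # hypothetical recording
    # A trained network would map these features to labels such as
    # "rain", "footsteps", or "baby crying".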
“Humans use this non-verbal information all the time,” Han said. “Just imagine a robot that can understand the context of a situation rather than wait to be ordered to do something.”
If your AI-driven home security system detects sounds, it may be able to tell you the water is running or a baby is crying. It’s a hard problem to solve because sounds can take up a lot of storage. It also requires a lot of GPUs to process all that material. Han believes that systems such as Amazon Alexa could be much more useful if they listened to sounds and interpreted them. This would allow them to conclude, for example, that someone who is coughing a lot might need medicine. He said a self-driving car that “hears” a car crash can try to figure out if there’s a threat to the car.
“We are focusing on simulating the auditory system,” Han said.
The company was founded in July 2017 and has six employees. It has raised $269,000 from K Cube Ventures.

Ghost Robotics

Above: Jiren Parikh, CEO of Ghost Robotics.
Image Credit: Dean Takahashi
Jiren Parikh, CEO of Ghost Robotics, said that legged robots have been around for a couple of decades. But his Philadelphia-based team took university research on these complex systems and spun out a startup focused on making legged robots that can manipulate doors, climb fences, and accomplish tasks in a very energy-efficient way.
“We took a typical hardware platform, simplified it, made it programmable, made it lightweight, and reduced the cost by 10 times,” Parikh said. “Today’s robots are very complex systems. We believe that software will democratize robots.”
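Parikh’s point that software is what makes the hardware useful is easiest to see with a toy example. The sketch below is not Ghost Robotics code (its controllers are proprietary); it only illustrates the simplest sense in which a legged robot is “programmable”: a gait is a periodic foot trajectory commanded to each leg.

    # Toy trot-gait generator, purely illustrative. Diagonal leg pairs move
    # in antiphase; each foot lifts while swinging forward and stays on the
    # ground while sweeping back to propel the body.
    import math

    def trot_gait(t: float, frequency: float = 2.0,
                  stride: float = 0.08, lift: float = 0.04) -> dict:
        """Return (x, z) foot targets in meters for each leg at time t."""
        phase = 2 * math.pi * frequency * t
        targets = {}
        for leg, offset in [("FL", 0.0), ("RR", 0.0),
                            ("FR", math.pi), ("RL", math.pi)]:
            x = stride * math.sin(phase + offset)          # fore-aft sweep
            z = lift * max(0.0, math.cos(phase + offset))  # lift during swing
            targets[leg] = (x, z)
        return targets

Real controllers layer balance feedback, terrain sensing, and force control on top of something like this, which is where the complexity Parikh describes lives.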
Rivals include Boston Dynamics, which recently showed how one of its dog-like robots could open a door. Ghost Robotics one-upped that when a student programmed one of its robots to climb a chain-link fence; the company hired that student. While other robots take a long time to set up, Parikh said his robots can be up and running in about 20 minutes.
The robots are designed to go over any kind of unstructured terrain, from stairs to rocky paths. The market for these robots could be industrial production or Department of Defense applications.
Each robot sells for about $15,000. The company sold about $300,000 worth of robots last year, and it is shipping a new version this year. It expects a number of enterprise deals to come in late this year.
Ghost Robotics was founded in October 2015. It has raised $800,000 from Brain Robot Capital, Asimov Ventures, CoCoon Ignite, Polis Seed Ventures, 4428 Investments, Prosper Creation, and others.
“We are trying to take something very complex and open it up for innovation,” Parikh said.

The results of judging

The judges included Jeff Herbst, Nvidia’s vice president of business development; Tammy Kiely, head of the semiconductor investment banking practice at Goldman Sachs; Steve Wymer of Fidelity Investments; and Jaimin Rangwalla of Coatue Management. The judges chose Ghost Robotics and Kinema Systems to go forward to the final round at GTC.
In the end, the winners of the three categories will split the $1 million prize, with about $330,000 going to each. All told, there are now 2,800 startups in the Inception program.
Last year’s batch included 14 finalists who had raised $28 million from investors before the Inception event. A full year later, those 14 companies have raised $200 million. Huang said that Inception really turbocharged development for the startups.
