AutoML, an artificial intelligence (AI) system from Google Brain that is capable of generating its own AI, recently created a “child” that outperformed all of its human-made counterparts, Futurism reports.
To achieve this AI offspring, Google researchers “automated the design of machine learning models” through a process called reinforcement learning. “AutoML acts as a controller neural network that develops a child AI network for a specific task,” Futurism reports. “For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.”
From there, AutoML evaluates NASNet’s performance, uses that data to improve its child AI, then repeats the process thousands of times. According to Futurism, NASNet has so far outperformed all comparable computer vision systems on benchmarks including ImageNet image classification and COCO object detection.
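The propose-evaluate-improve loop described above can be sketched as a toy search. Everything here is invented for illustration: the search space, the scoring function (standing in for training a child network and measuring its validation accuracy), and the simple weight-based controller that nudges its sampling preferences toward choices that scored well.

```python
import random

# Hypothetical search space of child-network design choices.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def evaluate(arch):
    # Toy proxy score; in real neural architecture search this is the
    # child network's validation accuracy after training.
    return 1.0 / (1 + abs(arch["layers"] - 4)
                    + abs(arch["filters"] - 32)
                    + abs(arch["kernel"] - 3))

def search(iterations=200, seed=0):
    rng = random.Random(seed)
    # Controller state: one preference weight per design choice.
    prefs = {k: [1.0] * len(v) for k, v in SEARCH_SPACE.items()}
    best, best_score = None, -1.0
    for _ in range(iterations):
        # Controller samples a child architecture from its preferences.
        idx = {k: rng.choices(range(len(v)), weights=prefs[k])[0]
               for k, v in SEARCH_SPACE.items()}
        arch = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
        score = evaluate(arch)
        # Reinforce the sampled choices in proportion to the reward,
        # so good designs become more likely on later iterations.
        for k, i in idx.items():
            prefs[k][i] += score
        if score > best_score:
            best, best_score = arch, score
    return best, best_score
```

This is a deliberately simplified stand-in for the reinforcement learning the researchers used; the real controller is itself a neural network, and each evaluation involves training a full child model.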
Futurism also says that the automation of efficient AI systems like NASNet will eventually do the heavy lifting of designing algorithms and feeding them data, reducing the time and effort it takes for humans to do so and opening up the field of machine learning and AI to non-experts. NASNet’s highly efficient algorithms are “highly sought after” due to their potential real-world applications, such as helping visually impaired people regain their sight or improving the functionality and safety features of autonomous vehicles.
What this means for decision makers:
Decision makers who work in fields that tap into AI, such as medicine or manufacturing, might see gains in workflow efficiency and production. As mentioned above, for example, autonomous-vehicle designers can use computer vision systems like NASNet to help cars detect objects in their paths sooner, react to obstructions faster and avoid crashes.
However, while NASNet’s efficiency is unprecedented and slated to positively impact human life, Futurism suggests that creating AI that can reproduce its own AI is concerning: “For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?” There is also the risk that this type of AI could be used to create autonomous weapons. Fortunately, various governments and AI organizations are working on regulations to make sure the robots don’t take over, and to ensure AI solutions are used for good instead of evil. If decision makers are considering advancing their AI capabilities, it is a good idea to stay abreast of AI news, laws and regulations. Doing so can keep workplaces and employees safe and keep operations running smoothly.