To demonstrate how human brainpower can still beat artificial intelligence (AI) brawn, The Verge turns to the DAWNBench challenge, “an athletics meet” for AI engineers created by Stanford University.
During DAWNBench, teams and individual participants from universities, government departments, and industry “competed to design the best algorithms,” The Verge says. Entrants were judged on multiple metrics that reflect real-world AI demands, including how long it took to train an algorithm and how much a solution cost. Based on the outcome of the competition, The Verge concludes that “raw computing isn’t the be-all and end-all for AI success”; the proof was in how smaller teams and individuals beat big tech companies like Google.
For example, Fast.AI, a nonprofit group that creates learning resources to make AI education available to everyone, snagged the top three spots for the fastest and cheapest algorithms to train. Fast.AI co-founder Jeremy Howard said the team owed its victory to thinking creatively, using basic resources, and applying lesser-known techniques such as “super convergence.”
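Super convergence, a technique described by researcher Leslie Smith, trains a network with a “one-cycle” learning-rate schedule: the rate climbs to an unusually high peak and then anneals back down, which can cut training time substantially. A minimal sketch of that schedule, assuming a piecewise-linear cycle and hypothetical parameter names (real implementations, such as fast.ai’s, add refinements like a final annealing phase and momentum cycling):

```python
def one_cycle_lr(step, total_steps, base_lr, max_lr):
    """Piecewise-linear 'one-cycle' learning-rate schedule.

    The rate rises linearly from base_lr to max_lr over the first half
    of training, then falls linearly back to base_lr over the second
    half -- the core idea behind super convergence.
    """
    half = total_steps / 2
    if step <= half:
        frac = step / half                      # 0 -> 1 on the way up
        return base_lr + frac * (max_lr - base_lr)
    frac = (step - half) / half                 # 0 -> 1 on the way down
    return max_lr - frac * (max_lr - base_lr)


# Example: over 100 steps, the rate peaks mid-training at max_lr.
schedule = [one_cycle_lr(s, 100, 0.04, 1.0) for s in range(101)]
```

The counterintuitive part is the peak: the maximum rate is set far higher than a conventional fixed schedule would tolerate, and the brief high-rate phase acts as a regularizer while speeding up convergence.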
While the small group did well there, Google claimed the top three positions in training time, and first and second in training cost, in other events of the competition.
But that doesn’t necessarily mean Google is king, The Verge says:
“However, Google’s algorithms were all running on the company’s custom AI hardware; chips designed specially for the task known as Tensor Processing Units or TPUs. In fact, for some of the tasks Google used what it calls a TPU ‘pod,’ which is 64 TPU chips running in tandem. By comparison, Fast.AI’s entries used regular Nvidia GPUs running off a single bog-standard PC; hardware that’s more readily available to all.”
The takeaway for decision makers:
The Verge shows that the outcome of DAWNBench is a real-life David and Goliath tale. Google seemed to dominate thanks to its big-business status and customized hardware; however, Fast.AI’s creative thinking and ability to work at lower cost are both more attractive and more applicable to other businesses in the real world.
“The fact that Google has a private infrastructure that can train things easily is interesting but perhaps not completely relevant,” Howard told The Verge. “Whereas, finding out you can do much the same thing with a single machine in three hours for $25 is extremely relevant.”
As a result, decision makers might find value in keeping an eye on developments coming out of smaller AI groups and companies such as Fast.AI. Doing so might yield ideas for more cost-effective solutions, creative uses of basic resources, and new talent that can enhance an AI team.