Originally published on HPE.
As artificial intelligence applications continue to shrink, they’re increasingly appearing in a device near you.
Artificial intelligence has traditionally been confined to the data center, where powerful computers have been tasked with running complex algorithms that are managed by seasoned professionals. In many parts of the enterprise, that is changing, as the power of AI is rapidly making its way to devices on the edge.
To understand why this is happening and how it works, it’s important to first understand what the edge is in today’s enterprise. In the world of AI, there are two fundamental edges where AI is deployed, commonly known as the near edge (which is relatively near the central data center) and the far edge (which is close to where data is generated). The application of AI is different at each of them.
Glyn Bowden, CTO of the AI and data practice at Hewlett Packard Enterprise, explains that the near edge refers to areas that are located outside the data center but have capabilities similar to those in the data center. Near edge locations “could be something like a factory building or a hospital, where you can do computing in a reasonably robust environment,” he says, though such locations may lack the traditional design of a classic server farm. As well, those locations are remotely managed and rarely have trained IT professionals on site—despite having ample computing, networking, and storage capabilities available.
The near edge is defined by fairly typical computing equipment and resources. Model training and predictive analysis can take place here, thanks to the availability of traditional computing power. Conversely, the far edge refers to a different class of devices, generally Internet of Things products such as cameras, industrial sensors, drones, and even personal devices like a user’s smartphone.
The far edge refers to the point where data is actually being captured or where a device is interacting with the end user directly. These devices can’t train an AI model, but they can use one that’s already been developed. AI at the far edge has to be designed to make faster decisions with less available data and is generally hyper-focused on a singular task.
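That division of labor, training in the data center and inference on the device, can be sketched with a toy model. In this illustration, the weights are fitted elsewhere and shipped to the device, which only runs a cheap forward pass; all of the values and sensor meanings here are invented for the example, not drawn from any particular product.

```python
import math

# Weights as they might arrive from the data center or near edge
# (hypothetical values for a tiny two-input classifier).
WEIGHTS = [0.8, -1.2]
BIAS = 0.1

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def infer(features: list[float]) -> bool:
    """The only work the far-edge device does: a cheap forward pass
    producing one hyper-focused yes/no decision."""
    score = BIAS + sum(w * f for w, f in zip(WEIGHTS, features))
    return sigmoid(score) > 0.5

# e.g. two sensor readings -> act / don't act
print(infer([1.0, 0.2]))
```

The point of the sketch is the asymmetry: fitting `WEIGHTS` takes data and compute the device doesn't have, while applying them is a handful of multiplications the device can run in real time.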
Edge-based AI of both varieties is already here. Ian Hughes, an analyst at 451 Research, an S&P Global Market Intelligence company, says a Voice of the Enterprise 2020 survey found that 60 percent of respondents were analyzing and/or storing data at the edge.
The case for the edge
Why does AI at the edge matter? Edge-based AI functions aren’t just becoming useful. In many cases, they’re absolutely essential.
Take the near edge example of a hospital. AI has vast applicability in medical environments, from helping physicians make a proper diagnosis based on a large amount of data to research on treatments and vaccines for all manner of diseases. But while the modern hospital is filled with sensors, scanners, and mountains of patient data, making use of all that data isn’t easy in a traditional computing environment, in part because privacy regulations like HIPAA and GDPR may prevent that data from leaving the confines of the building. “But if you can train your models on that data within the hospital boundaries,” says Bowden, you can still effectively leverage AI tools.
The same is true for environments that are disconnected from the Internet or don’t have reliable or fast enough network service to connect to a corporate data center. Many modern factories are located in developing regions, remote areas, or both, places where high-speed Internet access just isn’t possible today—and if it is possible, it’s extremely costly.
By moving AI functions directly into the factory environment, machine learning can be used to optimize operations, predict equipment failures, and uncover production errors quickly, minimizing the potential for financial loss. Relimetrics offers one example of this technology in action, developing industrial equipment that uses computer vision and machine learning to inspect components as they come off customers’ production lines. Incorporating AI at the machine level in this way has reduced defects for the company’s clients by an estimated 25 percent.
Without an AI algorithm monitoring production, a machine will keep running in a less-than-ideal condition, raising the risk of damage. “A machine can make decisions about whether to stop or not when things are flowing through, rather than going back to a data center and having an operator make that call,” says Bowden. “Being able to make that decision in real time, at the machine, means that you can prevent a lot more loss and operate with a lot more efficiency.”
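The kind of at-the-machine call Bowden describes can be as simple as a locally evaluated rule: the reading is checked on the spot, so no round trip to a data center stands between a bad sensor value and a halt. The thresholds and sensor names below are invented for illustration, not real equipment specifications.

```python
# Hypothetical local stop decision for a production machine.
# Limits are illustrative, not taken from any real tolerance spec.
VIBRATION_LIMIT_MM_S = 7.1
TEMP_LIMIT_C = 85.0

def should_stop(vibration_mm_s: float, temp_c: float) -> bool:
    """Real-time decision made at the machine itself,
    with no network round trip required."""
    return vibration_mm_s > VIBRATION_LIMIT_MM_S or temp_c > TEMP_LIMIT_C

print(should_stop(3.2, 60.0))   # healthy readings: keep running
print(should_stop(9.0, 60.0))   # excessive vibration: halt the line
```

In practice the rule would be a trained model rather than two fixed limits, but the latency argument is the same: the decision loop closes on the machine, not in the data center.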
At the far edge, the uses for AI are more focused. Here we’re talking about individual devices that have nowhere near the processing power of the data center—such as the microchip inside an MRI machine or a robotic arm on the production line—and none of the stores of data that a broader, near edge environment would enjoy.
In these examples, AI has a simpler and more dedicated function, albeit one of equal importance. “Here, we have small models doing very specific things,” says Bowden. That might include a medical imaging machine (which may not be connected to the data center at all) determining whether a patient is holding the correct pose before an image is captured or a single piece of industrial equipment that is constantly analyzing its operating temperature and making minor adjustments to keep production quality as high as possible.
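The temperature-adjustment example amounts to a small feedback loop running on the device. A minimal sketch, with an invented setpoint and gain rather than values from any real machine, might look like a proportional controller:

```python
# Toy proportional controller for a single-task far-edge adjustment.
# Setpoint and gain are illustrative assumptions, not real values.
SETPOINT_C = 72.0   # hypothetical target operating temperature
GAIN = 0.4          # correction strength per degree of error

def adjust(current_temp_c: float) -> float:
    """Return a small corrective nudge, computed locally each cycle."""
    error = SETPOINT_C - current_temp_c
    return GAIN * error

print(adjust(75.0))  # running hot: negative nudge to cool down
print(adjust(72.0))  # on target: no correction
```

A learned model could replace the fixed gain, but the shape is the same: sense, decide, and act entirely on the device, many times per second.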
With medical technology, there’s a strong data privacy use case for edge-based AI as well. “The more decisions that can be handled at the edge where the data is stored, the better it is for the customer from a privacy standpoint,” says Iveta Lohovska, senior data scientist at HPE Pointnext Services. “I think customers are becoming increasingly aware of those issues.”
Ultimately, this is a market that’s undergoing a massive amount of growth. “We’re seeing lots of capabilities at the edge,” says Bowden, “and it’s changing all of the time.”
AI hits the road
Edge-based AI isn’t just improving things in hospitals and factories; it’s also becoming a key tool for mobile applications—namely self-driving vehicles. “It makes autonomous driving possible,” says Bowden. “You wouldn’t want your car having to communicate with a data center before deciding whether to stop in front of the wall or not. You want that decision being made there and then, as you’re in the vehicle.”
As well, consider drones that carry high-resolution cameras. If the drone can perform a real-time analysis on what it’s observing to determine, say, the condition of crops in a field under observation or the legitimacy of a military target, that improves both performance and accuracy. “The drone might not have connectivity where it can push enough data to get a meaningful inference back,” says Bowden. “So it has to be able to make its own inferences on the device.”
For example, some drones are being used to examine tunnels where wireless signals won’t penetrate, often a fleet of them flying at a time. Accidents are common in such operations. “If an accident happens, AI makes it easier for the rest of the fleet to fill in the gaps,” he says.
AI is filling another key role in air travel. Luuk van Dijk, CEO of Daedalean, a maker of autonomous piloting systems, says, “Our applications give aircraft eyes and a visual cortex,” and edge-based AI is a key technology on commercial airplanes. A plane can’t carry a full data center onboard—it would be too large and too heavy, and consume too much power. Edge-based AI tools are now being used to provide navigation, air traffic monitoring, and landing guidance, much like their terrestrial counterparts.
“It’s essential when you have bad connectivity, no remote dispatcher, or no GPS signal and have to rely on your eyes,” van Dijk says. And the key is that all of this needs to happen virtually instantaneously: “A remote AI that receives signals from your cameras, processes them, and sends the situational information back to you is just not an option in flight.”
Edge-based AI is only now becoming possible because of a combination of factors all culminating at once. Naturally, miniaturization is allowing the creation of smaller, lower-power microprocessors and denser storage, so more processing power can be wedged into a smaller space than ever. But perhaps more important is that the industry is getting better at streamlining AI models so they can become truly portable.
“Models are being compressed and made more lightweight,” says Lohovska. Pruning techniques are being used in edge scenarios to decrease the total number of nodes and connections in an AI model, essentially trimming away the parameters that contribute least in order to improve speed and decrease the size of the model.
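Whatever the specific compression scheme, the core idea of pruning is to drop the parameters that matter least. A toy magnitude-pruning pass, written here as a self-contained illustration rather than a depiction of any vendor's tooling, makes the principle concrete:

```python
# Toy magnitude pruning: zero out the smallest-magnitude weights so the
# model can be stored sparsely and evaluated faster on a small device.

def prune(weights: list[float], keep_fraction: float) -> list[float]:
    """Keep only the largest-|w| fraction of weights; zero the rest."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    threshold = sorted(abs(w) for w in weights)[-n_keep]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune(w, 0.5))  # half the weights survive; the rest become zero
```

Real pruning pipelines operate layer by layer and usually retrain the model afterward to recover accuracy, but the payoff is the same one the article describes: a smaller, faster model that fits on edge hardware.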
Smaller AI doesn’t just mean a more efficient AI; it means more AI. “The fact that we’re shrinking them also means that we’re able to use many more of them,” says Lohovska. That proliferation, in turn, is creating a virtuous cycle where more and more processing is able to be pushed to the edge.
“The more you can do at the edge—the more you distribute that workload—the less you have to worry about congestion on the network and available computing power at the data center,” says Bowden. “Edge-based AI is really about the immediacy of data and the ability to do things in a disconnected environment.”
It’s also, of course, about making technology more capable at the point of usage. “Applications that understand us and what we need to do should over time remove some of the complexity of the devices we use,” says Hughes. “Most computing technology has evolved around the needs of the machine, not the people using them … and AI has the potential to help in that area.”