Visual AI at the Edge: Turning Passive Cameras into Active Sensors


In a recent episode of the Over the Edge podcast, host Bill Pfeifer chatted with SparkCognition’s Chief Technology Officer, Sridhar Sudarsan, about deploying visual AI at the edge and at scale. It’s a great interview, chock-full of insights on the state of AI and the capabilities of computer vision-based use cases, and we highly recommend listening to the full podcast episode.

At the outset, Sudarsan recounted his journey in the technology space to date: from his grade school experiments coding in BASIC—“I saw the results of my first program…I was hooked”—to his formative years at IBM building enterprise distributed systems and serving as one of the original architects of the legendary IBM Watson Platform, before joining SparkCognition to continue working with cutting-edge technologies and bring AI solutions into the industrial world.

The following are excerpts from their wide-ranging 40+ minute conversation (lightly edited for readability). You’ll want to listen to the whole thing here.

 

How SparkCognition Visual AI Advisor transforms passive cameras into active sensors

Sudarsan: I think if you look at the world around us, cameras are everywhere, right? It’s almost unthinkable to have a building without cameras, whether it’s a retail store, a gas station, a bank, a school building, an office building, etc. There are probably around a billion or two billion-plus cameras in the world. If you think about what these cameras are doing, they’re very passively recording information. What does that mean? It means that they’re recording information; their eyes are on the ground or in the air or on the location at all times, but it’s almost like we’re seeing things and we’re not [seeing them]. It doesn’t go to the brain. They are—so to speak—dumb from that perspective.

So what are we doing [deploying Visual AI at the edge]? Our whole objective is to convert these passive cameras into active sensors. As in: if you add a brain to that, then all the things that the camera is seeing—just like we react to situations and we process things in certain ways—that’s what the AI models that we deploy [will do] to these live-stream video feeds from the video cameras. And based on the business or the enterprise or the location where those cameras are, we draw from about 150-odd use cases that we have [plus] the ability in our platform to add any number of additional use cases, and then we alert the appropriate people [about what’s happening on the feed]—a safety manager responsible for the safety of that location; a security person responsible for the security of that location; an operations manager responsible for the operations; a customer service manager making sure that the customers are getting served properly.
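The pipeline Sudarsan describes (frames from an existing camera feed, an AI model scoring each frame against per-site use cases, and alerts routed to the right role) can be sketched in a few lines of Python. The sketch below is purely illustrative: the model stub, event names, and routing table are assumptions for the sake of the example, not SparkCognition’s Visual AI Advisor API.

```python
# Minimal sketch of the "passive camera -> active sensor" idea described above.
# The detection stub, event names, and routing table are illustrative placeholders.
import cv2  # pip install opencv-python

# Map detected events to the role that should be notified (illustrative only).
ROUTING = {
    "missing_ppe": "safety_manager",
    "unattended_bag": "security_manager",
    "long_queue": "customer_service_manager",
}

def detect_events(frame):
    """Placeholder for an AI model scoring a single frame.

    A real deployment would run one or more vision models (object detection,
    pose estimation, etc.) and apply per-site use-case rules. Here we return
    an empty list so the loop is runnable end to end.
    """
    return []

def send_alert(role, event, frame_index):
    # Placeholder: a real system might page, email, or push to a dashboard.
    print(f"[frame {frame_index}] alert {event!r} -> {role}")

def monitor(stream_url):
    """Read a live video stream and turn detections into routed alerts."""
    cap = cv2.VideoCapture(stream_url)
    frame_index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for event in detect_events(frame):
            role = ROUTING.get(event, "operations_manager")
            send_alert(role, event, frame_index)
        frame_index += 1
    cap.release()

if __name__ == "__main__":
    monitor("rtsp://example.local/camera1")  # existing camera, no new hardware
```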

 

How Visual AI Advisor leverages customers’ existing camera infrastructure

Pfeifer: It sounds like, by and large, you at least have the potential to use existing camera infrastructure rather than installing something net-new just to do this.

Sudarsan: Absolutely. In fact, that was one of the foundational principles when we started building this technology—to use existing cameras. What we don’t want to do is go to a customer who has cameras and say, ‘Okay, the first thing you need to do is rip and replace all your cameras,’ because that’s a cost that they have to incur. It’s a lot less expensive to process things in software than to rip and replace hardware, wiring, and networking. So we support, you know, many, many types of cameras across multiple generations, including older and analog cameras. And you’ll be surprised at how many of those are out there.

Pfeifer: Well, if they’re not broken, then why not? They’re already hanging there. Just leave them on. […] It’s a much less expensive, more cost-effective solution if you can just use what’s already there. That’s fantastic. I love it.

Sudarsan: And it’s a faster time to value, right? […] The other thing is also how quickly you can deploy it and how large your deployment is. Having deployed 140,000-plus cameras across about 16 countries, we’ve learned a lot of things along the way. And I think that’s what the software is now designed for: scale.

 

How to measure the economic value of Visual AI

Pfeifer: How does a typical company develop an economic model that supports edge deployments with AI? How do they quantify the value that’s generated—the return on investment?

Sudarsan: It’s a great question because that’s what it ultimately boils down to: ‘Do I need this? Do I need this now? And how quickly am I going to start seeing the return on my investment?’ […] You know, if we take safety: there’s a cost of safety, depending on what location you’re in. Right? I mean, you cannot put a price on a life. But, you know, when there are industrial environments […] and you have incidents […], even if one of those could be avoided through a proactive alert, it is a valuable investment. Even beyond that, if you look at slips, trips, and falls, they make up about 70-odd percent of workplace claims, right?

And so being able to prevent and avoid those translates into cost [avoidance] for the [organization], and that’s one of the areas where they’re seeing value. Another example would be around top-line growth. For example, I mentioned earlier the case of [hot food items] in a C-store. Those are some of the highest-margin products, along with beer. And seeing that you are not converting those customers is money lost.

So that’s a top-line sort of impact. And we’ve actually run some tests, and we’ve seen at C-stores, for example, a 14 to 18% increase in sales just based on one item being monitored, as long as people are responding to the alerts and refilling the [stock].
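To make the arithmetic concrete, here is a back-of-the-envelope sketch using the 14 to 18% uplift quoted above; the baseline sales, margin, and monitoring cost figures are illustrative assumptions, not numbers from the conversation.

```python
# Back-of-the-envelope ROI sketch using the 14-18% uplift quoted above.
# Baseline sales, margin, and cost figures are illustrative assumptions only.
baseline_monthly_sales = 10_000.0   # hypothetical hot-food sales per store, per month ($)
gross_margin = 0.50                 # hypothetical margin on the monitored item
monthly_cost = 300.0                # hypothetical per-store cost of monitoring

for uplift in (0.14, 0.18):
    added_profit = baseline_monthly_sales * uplift * gross_margin
    roi_multiple = added_profit / monthly_cost
    print(f"uplift {uplift:.0%}: +${added_profit:,.0f}/month gross profit, "
          f"~{roi_multiple:.1f}x the monthly monitoring cost")
```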

Pfeifer: That becomes a great ROI by itself. That’s fantastic.

Sudarsan: Exactly. 

 

Thanks again to the Over the Edge podcast for inviting us to talk about deploying visual AI at the edge. To learn more, check out our upcoming webinars on Visual AI for HSE, school safety, and more.

 

