This week’s demo of Google Lens points to what’s really coming: the rise of the all-purpose super-sensor, fueled by software-based A.I.
Google dazzled developers this week with a new feature called Google Lens.
Appearing first in Google Assistant and Google Photos, Google Lens uses artificial intelligence (A.I.) to identify the specific things in the frame of a smartphone camera.
In Google’s demo, Google Lens identified not only a flower, but the species of flower. The demo also showed automatic login to a wireless router when Google Lens was pointed at the router’s barcode. And finally, Google Lens was shown identifying businesses by sight, popping up Google Maps cards for each establishment.
Google Lens is shiny and fun. But from the resulting media commentary, it was clear that the real implications were generally lost.
The common reaction was: “Oooh, look! Another toy for our smartphones! Isn’t A.I. amazing!” In reality, Google showed us a glimpse of the future of general-purpose sensing. Thanks to machine learning, it’s now possible to create a million different sensors in software using only one actual sensor — the camera.
In Google’s demo, it’s clear that the camera functions as a “super-sensor.” Instead of a flower-identification sensor, a barcode reader and a retail-business identifier, Google Lens is just one all-purpose super-sensor with A.I.-fueled “virtual sensors” built in software, running either locally or in the cloud.
When the Internet of Things (IoT) entered the conversation four years ago, the phrase “trillion-sensor world” came into vogue in IT circles. Futurists vaguely imagined a trillion tiny devices with a trillion antennas and a trillion batteries (that had to be changed a trillion times a year).
In this future, we would be covered in wearable sensors. All merchandise and machinery would be tagged with RFID chips that would alert mounted readers to their locations. Special-purpose sensors would pervade our homes, offices and workplaces.
We were so innocent then — mostly about the promise and coming ubiquity of A.I. and machine learning.
In the past four years, another revolution has disrupted the expected “trillion-sensor world,” namely the rise of cloud A.I., which changes everything. Instead of different, single-purpose sensors installed all over every vehicle, person, wall, machine and road, we’ll have general-purpose super-sensors, and their data will feed software-based virtual sensors.
Researchers at Carnegie Mellon University (CMU) this month unveiled their “super sensor” technology, which they also call a “synthetic sensor,” and which can detect just about anything happening in a room or on a factory floor.
Despite the name, there are real sensors involved. The researchers have developed a board containing a small range of sensors commonly used in enterprise and commercial environments. The encased board functions like a single black-box sensor that plugs into a wall or USB power source and connects via Wi-Fi.
In other words, one small device functions as an all-purpose super sensor, which you can plug in and deploy for any sensing application. These sensors can detect sound, vibration, light, electromagnetic activity and temperature. The boards do not include regular cameras, largely to address concerns about user or employee privacy. You could also imagine a more powerful version that does include a camera.
As events occur near the sensor board, data is generated in specific, uniquely identifying patterns, which are processed by machine learning algorithms to enable the creation of a “synthetic sensor” in software.
Here’s a simplified version of how such a sensor might work in a warehouse setting. You plug in one or a few super sensors. Then somebody uses a forklift. The resulting vibration, sound, heat and movement detected by the super sensor generate patterns of data that are fed into the system. You can identify this as “forklift in operation.” (Further tweaking might determine not only when a forklift is in use, but where it is, how fast it’s moving, how much weight it’s carrying and other data.)
You can then program next-level applications that turn on a warning light when forklifts are moving, calculate wear and tear on forklift equipment or detect unauthorized operation of forklifts.
The output from these “synthetic sensors” can be used by developers to create any kind of application necessary, and applied to semantic systems for monitoring just about anything.
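In simplified form, the kind of virtual sensor described above is a pattern classifier over raw super-sensor features. Here is a minimal, hypothetical sketch in Python — the feature values, labels and nearest-centroid matching are all invented for illustration and stand in for the real machine-learning pipeline:

```python
import math

# Hypothetical labeled feature vectors from the super sensor:
# (sound level, vibration, temperature rise) captured during known events.
TRAINING_DATA = {
    "forklift in operation": [(0.8, 0.9, 0.3), (0.7, 0.85, 0.35)],
    "idle floor":            [(0.1, 0.05, 0.0), (0.15, 0.1, 0.05)],
}

def centroid(vectors):
    """Average the feature vectors for one event class."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING_DATA.items()}

def classify(reading):
    """Label a new sensor reading with the nearest known event class."""
    return min(
        CENTROIDS,
        key=lambda label: math.dist(reading, CENTROIDS[label]),
    )

print(classify((0.75, 0.9, 0.4)))  # → forklift in operation
```

Once a classifier like this exists, adding a new “synthetic sensor” is just a matter of adding another labeled class — no new hardware required.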
The best part is that you could go in and create another “synthetic sensor” — or 10 or 100 — that detect all movement, activity, inventory, hazards and other things — without any additional sensors.
A video produced by the CMU researchers showed applications in factories, offices, homes and bathrooms. For example, in the bathroom, it kept track of how many paper towels were used, all based on the sound produced by the paper towel dispenser. It could also monitor the total amount of water used there.
Again, the revolution here is not the ability to monitor everything. The revolution is to install a super sensor once, after which all future sensing (and the actions based on that sensing) becomes a software problem, with no new devices, no battery changes and none of the other inflexibilities imagined in the “trillion-sensor world.”
Imagine being able to buy cheap hardware that plugs into a wall, then from that point on all monitoring of equipment, safety, inventory, personnel and so on is done entirely through software. Going forward, there would be no need to upgrade sensors or IoT devices.
And get this: CMU’s research is funded mostly by… Google!
These two projects — Google Lens and the Google-funded CMU “synthetic sensor” project — represent the application of A.I. to enable both far fewer physical sensors and far better sensing.
A.I. has always been about getting machines to replicate or simulate human mental abilities. But the fact is, A.I. is already better than human workers in some areas.
Think about a traditional office building lobby. You’ve got a security guard at a desk reading a magazine. He hears the revolving doors turn and looks up to see a man approaching the desk. He doesn’t recognize the man. So he asks the visitor to sign in, then lets him proceed to the elevator.
Now let’s consider the A.I. version. The sound of the revolving door signals another person entering the building (the system keeps an exact count of how many people are inside). As the man approaches, cameras scan his face and gait, identifying him positively and eliminating the need for antiquated “signing in.” The microphone also processes the full range of subtle sounds he makes while walking; combined with thermal, chemical and other readings, this lets the system conclude that he is unarmed, eliminating the need to process him through a metal detector. By the time he reaches the door, the A.I. sends a command to unlock it and allow him into the building. The record of his visit is written not in ink on paper, but in a searchable electronic form.
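The lobby scenario boils down to an event-driven rule pipeline over virtual-sensor output. The sketch below is purely illustrative — the event names, fields and rules are invented for this example and do not reflect any real CMU or Google API:

```python
# Hypothetical lobby controller fed by virtual-sensor events.
occupancy = 0      # running head count, replacing the guard's glance
access_log = []    # searchable visit record, replacing the sign-in book

def handle_event(event):
    """React to one virtual-sensor event and update building state."""
    global occupancy
    if event["type"] == "door_revolution":
        occupancy += 1  # the sound of the door means one more person inside
    elif event["type"] == "person_identified":
        # Unlock only for recognized, unarmed visitors.
        if event["recognized"] and event["unarmed"]:
            access_log.append(event["name"])
            return "unlock_inner_door"
    return None

handle_event({"type": "door_revolution"})
action = handle_event({
    "type": "person_identified",
    "name": "visitor-042",
    "recognized": True,
    "unarmed": True,
})
print(occupancy, action)  # → 1 unlock_inner_door
```

The point of the sketch is that every behavior here is a rule over sensor-derived events, so changing what the lobby “does” means editing software, not wiring.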
The best part is, any number of new applications could be written to detect different things, all without changing the physical sensors. Those same sensors in the lobby could replace lighting controls, smoke detectors and thermostat controls. They could alert maintenance when the windows need cleaning or the trash needs emptying.
It’s easy to imagine applying this kind of all-purpose sensor-plus-A.I. system beyond the lobby to the boardroom, offices, factories, warehouses and shipping systems. Even as the cloud A.I. rapidly learns and increases its capabilities, companies will be able to build custom virtual sensors as the need for them arises.
Cameras can be deployed where appropriate, such as in the lobby, with camera-free sensors deployed where they aren’t, such as in the building’s bathrooms.
Best of all, the super-sensor revolution will be widely distributed. The physical sensors are very inexpensive, and the A.I. is available through the cloud as a service, not just from Google but from a wide range of providers.
The rise of cloud-based A.I. as a service has been obvious for a couple of years now. But this month, we’ve seen one of the most profound changes this revolution will usher in. With a few inexpensive cameras, microphones and other sensors, we’ll be able to create just about any sensor in software on the fly at low cost.
The old model of the “trillion-sensor” IoT is dead, killed by A.I. and the rise of the super sensor.
© Source: http://www.computerworld.com/article/3197685/internet-of-things/google-a-i-and-the-rise-of-the-super-sensor.html