Cheap sensors, fast networks, and distributed computing

A nice wrap-up article that is easy to read.

The trifecta of cheap sensors, fast networks, and distributed computing is changing how we work with data. But making sense of all that data takes help, which is arriving in the form of machine learning.

They spend the first half of the article talking about how ‘computing’ has swung like a pendulum from centralized (think mainframe computers) to distributed (think PCs talking to a central server).

They argue that we are now fully at the edge. Cell phones, tablets, and PCs have all gotten lighter and thus moved further from the central servers than ever before.
This is largely due to the ubiquity of Wi-Fi. Most places we go have network connectivity. Our cell phones and smart devices can ‘phone home’ almost anywhere we go.
Each app can reach its mother ship and do its job with very little intervention from the user.

There’s a renewed interest in computing at the edges — Cisco calls it “fog computing”: small, local clouds that combine tiny sensors with more powerful local computing — and this may move much of the work out to the device or the local network again.

Systems architects understand well the tension between putting everything at the core, and making the edges more important. Centralization gives us power, makes managing changes consistent and easy, and cuts down on costly latency and networking; distribution gives us more compelling user experiences, better protection against central outages or catastrophic failures, and a tiered hierarchy of processing that can scale better.

It’s this edge computing or ‘fog computing’ that I am really interested in at the moment, both for new sensors and for existing ones.
There is just such a huge installed base of sensors and data that is not tapped into, and that is not really getting much attention in the whole IoT hype.
This existing base of data is ripe for the picking. There is no need to spend a lot of time or money tapping into it either: with ‘just a few lines of code’ we can expose this data to AI and deeper analysis.
It can come with very little work, since it’s just a matter of exposing it.
I feel that pushing this edge data to a central server can be done securely (since the device pushes the data outward, no outside code, i.e. a hacker, can reach in and affect the process) and without impacting the smooth running of the existing process.
As a first step, let’s get the data uploaded. Get it stored and start the AI process. Once clear trends or trouble spots are identified, we can then look at how best to send the correction back down to the process.
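To make the push-only idea concrete, here is a minimal sketch in Python. The sensor read, the sensor ID, and the collector URL are all hypothetical placeholders (an installed system would tap its real data source, e.g. a Modbus register or a log file); the point is simply that the device opens the connection outward, so nothing on the central side reaches back into the process.

```python
# Minimal push-only sketch: read an existing sensor value, wrap it in a
# timestamped JSON payload, and POST it outward to a central collector.
# COLLECTOR_URL and read_sensor() are hypothetical stand-ins.
import json
import time
import urllib.request

COLLECTOR_URL = "https://central.example.com/ingest"  # hypothetical endpoint


def read_sensor() -> float:
    """Stand-in for tapping the existing sensor (e.g. a register read)."""
    return 21.7  # placeholder reading


def build_payload(sensor_id: str, value: float) -> bytes:
    """Wrap a reading in a JSON record the central store can index."""
    record = {
        "sensor_id": sensor_id,
        "value": value,
        "timestamp": time.time(),
    }
    return json.dumps(record).encode("utf-8")


def push_reading(sensor_id: str) -> None:
    """Push-only upload: the device initiates the connection outward,
    so no outside code can reach in and affect the running process."""
    payload = build_payload(sensor_id, read_sensor())
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()  # fire-and-forget; log resp.status in production
```

Because the upload is one-directional, the existing control loop never waits on the network, and step one (get the data stored centrally) is decoupled from any later step of sending corrections back down.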

I’m really excited about the cheaper, faster, lighter aspects of new IoT devices, but I hope we don’t forget about the installations that have been running for 10+ years.