• Category Archives Computers
  • Imagine a life with no computers……ahhhh……bliss…..

  • See that flicker out of the corner of your eye?

    Li-Fi.
    Communications over light.
    It’s not new, it’s been around for years.
    What’s new is that they’ve refined it and say they can move massive amounts of data to lots of people.
    (Think spamming an entire supermarket of people at once).

    http://purelifi.com/what_is_li-fi/

    The term Li-Fi was coined by pureLiFi’s CSO, Professor Harald Haas, and refers to visible light communications (VLC) technology that delivers high-speed, bidirectional, networked mobile communications in a similar manner to Wi-Fi.

    With VLC, data is transmitted by modulating the intensity of the light, which is then received by a photo-sensitive detector, and the light signal is demodulated into electronic form. This modulation is performed in such a way that it is not perceptible to the human eye.

    Your cell phone picks up the change in intensity and converts it to info.
    Be it an ad or a message from your other half.
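    To make the modulate/demodulate idea above concrete, here is a minimal sketch using simple on-off keying, where each bit becomes an LED intensity level. Real Li-Fi uses much faster and subtler modulation (and the function names here are my own invention), but the principle is the same: bits become intensity changes, and a photodiode reads them back.

```python
# Sketch of VLC via on-off keying (OOK): bits -> LED intensity levels
# and back. Real Li-Fi modulation is far more sophisticated; this just
# illustrates the round trip described above.

def modulate(data: bytes) -> list[int]:
    """Turn bytes into a stream of LED intensity levels (1 = on, 0 = off)."""
    levels = []
    for byte in data:
        for bit in range(7, -1, -1):          # most significant bit first
            levels.append((byte >> bit) & 1)
    return levels

def demodulate(levels: list[int]) -> bytes:
    """Turn intensity readings from a photodiode back into bytes."""
    out = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for level in levels[i:i + 8]:
            byte = (byte << 1) | level
        out.append(byte)
    return bytes(out)

message = b"ad"
assert demodulate(modulate(message)) == message
```

    The modulation the eye can’t perceive is exactly this, done millions of times per second.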

    Once again, I am struggling to think where this could be used instead of Wi-Fi, which everyone has and understands.
    (Just because you can do something does not mean it will make you a market leader (or rich).)

    I guess the real use case is security. If you can’t see the light, you cannot get access to the information carried on it.

    All that aside, I blog about it because I think it’s worth keeping an eye on tech like this. It might be a low-power way for a sensor to upload data.
    If there is less overhead in flashing an LED than in joining a Wi-Fi access point, they could really be onto something.


  • Crowdsourced tracking

    A bunch of people are trying this one, so many in fact that I sadly have to declare them all DOA. Dead On Arrival.

    http://betanews.com/2014/11/26/cloud-based-tracking-creates-an-internet-of-lost-things/

    I mean no ill will to any of them; I just picked this one to blog about because it popped up in my feed. There are just too many of them to list.

    Electronic tags to help stop you losing stuff are nothing new. But usually they rely on Bluetooth or similar to sound an alarm when an object goes out of range.

    A new solution from Canadian company Linquet mixes the cloud and the sharing economy to track tagged devices in a kind of internet of lost things.

    “We’ve all been there. We always lose or misplace our phones, keys, wallets, laptops, pretty much everything,” says Pooya Kazerouni, Linquet’s president. “Now, we have a product that is much more than a great anti-loss solution. Not only does Linquet prevent your valuables from getting lost in the first place, but it also allows for smart sharing and connecting of important items with guests, acquaintances and customers”.

    [Note, there is nothing ‘new’ about this, nothing special, nothing exciting, it’s just the same old thing with a press release crafted to entice your money from your wallet].

    So the idea is that you have a small Bluetooth ‘tag’. It’s like an iBeacon.
    It attaches to the thing you want to track. A handbag, a wallet, a bike, a dog, whatever.
    It transmits a serial number every second or so. If your phone is within range (around 50 feet / 15m), it’s all good. Go out of range, and your phone beeps.

    But. What happens if you don’t hear your phone beep and you walk off, get in your car and get home, then realize the object is not in range?
    You are out of luck?
    Enter crowdsourced tracking. If another person running the same app as you is within range of the tag, the cloud knows that the tag belongs to you, so it alerts your phone, not the phone that is within range.
    Thus you can rush to the location and try and get your item back.

    Notice the glitch?
    The challenge is that your tag only works with the app that you buy. If the other person has a different tag-tracking app, you’re out of luck.
    Since there are so many companies that think they have something unique and special (greed) to offer, the chances of someone being close to your tag and having your app are almost zero.

    What should happen is that all the app companies share a central tag database. That way, no matter who made the tag and no matter which app spotted it, if any participating app is within range, you get a notification.

    The problem is a human one, not a technical one. No one wants to share or play nice with others.
    The result is you lose the item with the tag attached.

    The end result is that because of greed, we all lose.


  • Chips or software for IoT?

    Pretty interesting read here;

    http://electronics360.globalspec.com/article/4733/chips-to-build-the-iot-on

    I have been thinking about this for a while….
    In my (small) mind, the IoT will be built on hardware for the most part.
    Just stop and think about it…. they are talking 20 billion sensors (or ‘Things’). That’s not 20 billion lines of code or websites; no, they are things, real physical things.
    Each and every thing will need a battery (power source), a transmitter, and a sensor at the very very very least.
    So, before you get to AI or user interfaces or mobile apps, you have the thing which is made up of chips doing their function, getting data and sending it up (to some other chip).

    Yes. Each sensor will need what we call firmware, or software that the user does not really mess with; it’s buried deep inside the device (for example, even your most basic digital watch (or the clock on the microwave) is running firmware).
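    The whole of that firmware is usually just one loop. Real firmware would be C on a microcontroller, and `read_sensor`/`transmit` here are stand-ins for hardware drivers, but the shape sketched below is the point: wake, sense, send, sleep — no user interface anywhere.

```python
# The shape of a typical sensor-node firmware loop, sketched in Python.
# read_sensor() and transmit() are stand-ins for real hardware drivers.

import random
import time

def read_sensor() -> float:
    return 20.0 + random.random()        # pretend temperature reading

def transmit(reading: float) -> None:
    print(f"uplink: {reading:.2f}")      # pretend radio packet

def run(cycles: int, interval_s: float = 0.0) -> list[float]:
    """The whole firmware: a loop the user never sees."""
    readings = []
    for _ in range(cycles):
        reading = read_sensor()          # wake and sense...
        transmit(reading)                # ...send it up...
        readings.append(reading)
        time.sleep(interval_s)           # ...then sleep to save battery
    return readings

run(cycles=3)
```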

    My point is, I think this article has it backwards.
    At the very end they conclude with;

    “Most people first decide what their software is going to be, and then they look around for hardware that will support it”.

    Should you not rather start with what, in the real world, you are going to be sensing and/or controlling? Start where the rubber meets the road?
    No point in building the control interface if you don’t have the raw data from the real world?
    I get that you will need to start with the end in mind. If you are going to build a wearable, you will need to define exactly what data points you want to present to the user, and thus the sensors (chips) you will need to do that, but starting at the software and then figuring out what hardware will support that seems wrong.
    Is this why we have so many wearables that have terrible battery life or sensor accuracy? Because most of these have been built from the top down?
    They have forced inefficient software onto underpowered chips that have to run at full power to get the job done?

    Am I all wet here in thinking that’s weirdly wrong?
    Perhaps I am too much of a hardware guy, a tech, and electronics geek (that rides a skateboard and drives a smart car)?

    Dunno.

    Going to have to think about this one some more.


  • Physical and virtual

    Great thought piece here.

    http://radar.oreilly.com/2014/12/physical-virtual-hardware-software-manufacturing-iot.html

    Real and virtual are crashing together. On one side is hardware that acts like software: IP-addressable, programmable with high-level procedural languages and APIs, able to be stitched into loosely coupled systems — the mashups of a new era. On the other is software that’s newly capable of dealing with the complex subtleties of the physical world — ingesting huge amounts of data, learning from it, and making decisions in real time.

    It’s so very true, and that’s the goal of IoT. To mash up your real world with computers and AI to the point where you don’t even know what’s acting and reacting to what any more.
    The whole point of technology is to disappear. When it does that, it’s working like it’s supposed to.
    When you don’t think about interacting with a thing, when it just does its job with very little interaction from you, someone somewhere is smiling at a job well done.
    Case in point: a smart lightbulb. When it turns on as you walk into a room, turns off when the last person walks out, and does not turn off if you sit still, things are working as they should.
    Sounds easy right?
    It ain’t. Trust me on this one.
    Take some time to think about this at your house. It’s what’s called an ‘easy hard’ problem. Sounds easy, but is actually really hard.
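    Here is why it’s easy-hard, in code. A minimal sketch (all numbers and names invented): the obvious rule is “turn the light off N seconds after the last motion event”, and it fails the moment an occupant sits still to read.

```python
# A naive occupancy controller for the smart lightbulb: keep the light
# on only if motion was sensed recently. Timestamps are in seconds.

TIMEOUT = 300                             # 5 minutes without motion -> off

def light_on(motion_events: list[int], now: int) -> bool:
    """Naive rule: light stays on only while motion is recent."""
    if not motion_events:
        return False
    return now - max(motion_events) < TIMEOUT

# Someone walks in at t=0, settles down to read, and never trips the
# motion sensor again. The naive rule plunges them into darkness:
assert light_on([0], now=60) is True      # fine shortly after entry
assert light_on([0], now=400) is False    # off while the room is occupied!
```

    Fixing that last assertion without also leaving the light burning in empty rooms is where the “hard” part lives.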

    Connectivity options are now so cheap that some companies are building them into their products even if they don’t need them.
    Computing power is now so cheap that every product has some sort of ability to run code of some kind. Gone are the days of fixed function devices.
    It’s a light bulb, you turn it on, you turn it off. Not any more. It can now sense your smartphone. It can now talk to the Internet and know when you leave work, when you get close to your house, when you leave your home.

    It can be updated with no intervention from you, and now it knows about the weather. It can flash out a warning when rain is predicted for your neighbourhood. It did not have that function when you bought it, but hey, it’s a useful feature (at least to your wife it is).

    It’s a physical light, but the virtual connections and functions it has are way past just a simple light switch.

    Pretty interesting stuff.


  • Eyelid movies

    My mate Dan has a saying for the end of the day that goes something like this….”time to go and see what’s on at the eyelid movie theater”….

    In other words, time to go to sleep and dream…..

    Sorry mate, I’m here to shatter that for ya….

    http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity

    UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.

    I just can’t believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—”this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.”

    Indeed, it’s mindblowing. I’m simultaneously excited and terrified. This is how it works:

    They used three different subjects for the experiments—incidentally, they were part of the research team because it requires being inside a functional Magnetic Resonance Imaging system for hours at a time. The subjects were exposed to two different groups of Hollywood movie trailers as the fMRI system recorded the brain’s blood flow through their brains’ visual cortex.

    The readings were fed into a computer program in which they were divided into three-dimensional pixels units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.
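    A toy sketch of the matching step the quoted passage describes: learn which voxel pattern goes with which clip, then guess which clip produced a new brain response by finding the closest stored pattern. The real study fits far richer encoding models over thousands of voxels; the clip names and numbers below are invented.

```python
# Toy 'identification' from voxel patterns: nearest-neighbour match
# between a new fMRI response and per-clip learned responses.

def distance(a: list[float], b: list[float]) -> float:
    """Squared Euclidean distance between two voxel-response vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Training phase: average voxel response recorded per trailer.
learned = {
    "car_chase":  [0.9, 0.1, 0.4],
    "slow_drama": [0.2, 0.8, 0.3],
}

def identify(new_response: list[float]) -> str:
    """Pick the trailer whose learned pattern best matches the response."""
    return min(learned, key=lambda clip: distance(learned[clip], new_response))

# A fresh reading that looks a lot like the car-chase pattern:
print(identify([0.85, 0.15, 0.35]))      # -> car_chase
```

    Swap “which trailer” for “which dream imagery” and you can see why the researchers call this a window into the movies in our minds.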

    Yeah, we are getting closer to making a movie of your dreams.
    Real close.
    Scarily close.