I came across an article about energy companies trying to wrangle the great influx of data they have been receiving since they started deploying smart meters and thermostats in homes and businesses. The devices assist with power distribution management by collecting information about each customer's usage, and they are considered part of the ambient intelligence involved with Ubiquitous Computing (UBICOMP). There are currently more of these free-range, semi-intelligent RFID and GPS data-collecting devices transmitting their data home to the mothership via the interwebs than there are humans on Earth. What is more “Big Data” than that? Here is installment one of a two-part series on Big Data in Ubiquitous Computing.
Ubiquitous Computing Part I – UBICOMP
Back in September of this year I attended a Research Colloquium at the UT iSchool. The distinguished speaker, Dr. Niklas Elmqvist, Assistant Professor of Electrical and Computer Engineering at Purdue University, spoke to the group about his recent work at Purdue in Ubiquitous Computing. He has a long and respectable publication history in the Human-Computer Interaction (HCI) field. Not surprisingly, a large body of his research is in display technology; however, he also studies motion-capture interaction hardware (such as the Kinect) and its associated software, analytic middleware for collocated environments (“ubilytics”), distributed network control software, networked display capabilities, application sharing, and revision and version control software. His projects include new hardware configurations such as tabletop touch-interactive displays that you can manipulate from inches above the surface, linked multi-display walls in which many computers each serve a small fraction of a complete picture to be integrated into one large-screen display, networked displays in which every computer in an area can share a single common display, and software configurations that facilitate interaction between shared networked computers using “branch-control-merge” revision control.
Figure 1 shows a simplified visualization of this process. Branching and merging is a commonly used technique in software development, but Dr. Elmqvist has extended it to allow multiple network threads to access a file simultaneously, so that user control can switch easily and with no data loss.
The green boxes [1, 4, 9] represent the Trunk of the software build. The Trunk is the main body of the project code that has been determined to be the one definitive working version of a project’s program code, also known as the production version. When developers need to add new functions to the code, they “check out” a copy of the project code on which to make their changes. These new functions are developed on Branches [2, 3, 5–8] in the code. A branch goes through a development life of testing and may be used for the specific projects or functions it was designed to perform, but the primary project code does not change until the new branch has been approved. When that happens, the branch can be reintegrated with the Trunk through a Merge. The blue Tag boxes in the figure represent places where a Trunk merge has been approved and a new version of the final production software has been released. Once a branch has gone through a merge and its functions have been folded into the production code, it is typically no longer maintained. The purple box [10] represents a development branch whose functions were abandoned during the course of the project and never released in the production software.
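To make the workflow concrete, here is a minimal Python sketch of the trunk/branch/merge model described above. It is a toy under invented names (Repository, Revision, and their methods are all hypothetical), not Dr. Elmqvist’s implementation:

```python
# Toy model of the trunk/branch/merge history shown in Figure 1.
class Revision:
    """One node in the history graph (a box in Figure 1)."""
    def __init__(self, label, parents=()):
        self.label = label
        self.parents = list(parents)  # a merge node has two parents

class Repository:
    def __init__(self):
        self.trunk = Revision("trunk-1")  # box [1], the production code
        self.branches = {}

    def branch(self, name):
        """Check out a copy of the trunk to develop new functions on."""
        self.branches[name] = Revision(name, parents=[self.trunk])
        return self.branches[name]

    def merge(self, name, tag):
        """Reintegrate an approved branch; the result becomes the new
        production trunk and is tagged as a release."""
        branch = self.branches.pop(name)  # the merged branch is retired
        self.trunk = Revision(tag, parents=[self.trunk, branch])
        return self.trunk

repo = Repository()
repo.branch("feature-A")         # a development branch, boxes [2, 3]
repo.branch("feature-B")         # box [10]: created but never merged
repo.merge("feature-A", "v1.0")  # approved merge, marked by a blue Tag
```

Version control systems like Subversion and Git implement exactly this pattern for source files on disk; as described above, Dr. Elmqvist extends the same pattern to live, multi-user network sessions.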
Dr. Elmqvist’s grand vision, as he described it to us that day, is to walk into a room with a group of other people, all of whom (including yourself) have multiple computational and communication devices on their person, and share a networked application for a collaborative, interactive computing experience. That might mean designing a piece of software, operating an application to produce some collaborative output, or watching a movie on a big screen. All the devices would not only be connected through the wireless network but also be AWARE of each other, in such a way that applications and output files could be shared and collaboratively edited while revision and version control concerns are handled flawlessly and automatically behind the scenes. All computing devices, whether desktop, tabletop, tablet, smartphone, or laptop, would be persistently available, connected to each other, and capable of sharing applications as soon as they came within range of one another.
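The “aware of each other” part of that vision is, at bottom, a service-discovery problem. As a rough illustration (not anything Dr. Elmqvist presented), here is a minimal Python sketch in which devices announce themselves over a local network via UDP broadcast; the port number and message format are invented, and real systems would use a richer protocol such as mDNS/Bonjour:

```python
# Minimal device-discovery sketch: each device broadcasts its name on the
# local network, and peers listen so they know who is currently in range.
# Run announce() and listen() in separate processes or threads.
import socket
import time

PORT = 50000  # hypothetical discovery port

def announce(device_name: str) -> None:
    """Broadcast this device's presence once per second."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        sock.sendto(device_name.encode(), ("<broadcast>", PORT))
        time.sleep(1)

def listen() -> None:
    """Print each device heard on the discovery port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        print(f"discovered {data.decode()} at {addr[0]}")
```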
This idea, that information processing should be an integrated function of all the devices we use to communicate and interact with our world (from phones and computers to lamps, refrigerators, thermostats, and cars) in such a way as to be seamless, intuitive, natural, and ultimately unconscious, has come to be known as Ubiquitous Computing, or “Ubicomp”. Wikipedia describes it as “machines that fit the human environment instead of forcing humans to enter theirs.” It is a natural extension of Sir Tim Berners-Lee’s vision of the Semantic Web, a utopian environment in which computing and communication devices and all the services they support are interconnected and intelligent enough to process semantic, conversational language as their command input. The invention of the internet and the hypertext protocol was only the first step in this interconnectedness of things. Recently, the UBICOMP conference organizers have recognized at least two primary subdivisions of the broader topic: pervasive computing and physical computing.
Look for Ubiquitous Computing Part II – “The Internet of Things” to learn more about the realm of physical computing research. You might be surprised.