This is the third and final installment of our Big Data blog series focusing on audio and visual portrayals of information. To quickly sum up, the first two installments focused on how humans communicate Big Data within a business technology Eco-System: the first blog presented a formal, hierarchical reading of information, while the second blog discussed a less formal portrayal using a round-table visualization. This week focuses on communicating Big Data through the ears and the eyes of a computer; yes, a computer, without its incidental human accessories at either end.
Remember Kristanna Loken in Terminator 3: Rise of the Machines? She was sent back to eliminate her target, John Connor. Early on, she dials up Connor's old high school to get an address. The phone on the other end rings, it gets answered, and the next thing you hear is that harsh fax-machine sound we all dread hearing because we dialed one digit off. But do you remember what Cyborg Loken did? Without hesitation, she started speaking 'fax' over the phone. That is how machines talk; it is what they sound like. Click the play button to hear the fax at 10 KB/sec.
Truly, machine language in the cloud exists. Humans are the endpoints, but machines are the intermediaries. Humans use dialects at the endpoints, but what about machine dialects? And why should we care? Because we are becoming a well-connected worldwide system, and machines embody sentiment at different levels and at different frequencies, i.e., the Striatal Beats of neuroscience.
Below, one operator is sending a webpage and another is receiving it. The wire in the cloud transmits the machine language. Click the play button to hear the machines talk back and forth at 10 MB/sec; that's fax on steroids!
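For readers who want to play with the idea behind that recording, here is a minimal Python sketch of the same trick: dump raw packet bytes straight into an audio file so the traffic becomes audible, much as a fax modem makes data audible. The packet bytes and file name below are stand-ins for illustration, not the data behind our recording.

```python
import wave

SAMPLE_RATE = 8000  # telephone-grade audio, roughly what a fax line carries

def bytes_to_audio(payload: bytes, out_path: str = "traffic.wav") -> None:
    """Write a packet stream out as raw 8-bit audio so the traffic is audible."""
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(1)           # 8-bit: each byte becomes one sample
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(payload)      # the data stream *is* the waveform

# Stand-in for a real capture; in practice the bytes would come from a
# packet sniffer (e.g., scapy) or a .pcap file.
fake_packets = bytes(range(256)) * 40
bytes_to_audio(fake_packets)
```

Play the resulting file and you are hearing the wire, not a human narration of it.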
Now, the picture in the middle, with all the blue and red dots, represents the written language of what the machines are saying. Like humans, computers have their own set of phonemes to go with their particular brand of dialect. In any event, what we are hearing and seeing is an audio/visual evaluation of Big Data via machine-to-machine IP communication. The catchy acronym for this is AVIPE.
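How such a written form might be produced is easy to sketch. The following is a hedged illustration, not the code behind the figure above: each byte of a two-way exchange becomes a dot, colored by direction (blue one way, red the other), so the machine 'phonemes' can be read as well as heard. The traffic arrays are made up for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented two-way traffic; real input would be the send/receive byte streams.
request  = np.frombuffer(bytes(range(0, 250, 2)) * 8, dtype=np.uint8)
response = np.frombuffer(bytes(range(1, 251, 2)) * 8, dtype=np.uint8)

fig, ax = plt.subplots(figsize=(8, 3))
for stream, color, base in [(request, "blue", 260), (response, "red", 0)]:
    x = np.arange(len(stream))
    ax.scatter(x, stream.astype(int) + base, s=2, c=color)  # stack the streams

ax.set_xlabel("byte position in stream")
ax.set_ylabel("byte value (offset per direction)")
ax.set_title("Two-way machine traffic as written 'phonemes'")
plt.show()
```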
Now the natural question is: can this AVIPE technique be applied to all signal types? Pretty much, yes. The trick is to find the type of phoneme transform that makes sense. Returning to the first blog, recall that the raw Data Source was pictured as a pure blue rectangle, and next to it a rectangle with obvious vertical segmentation strips (see figure below). In audio terms, the former rectangle would be a monotone, while the latter would sound off with something much less trivial, as you can imagine. A small sketch of such a transform follows.
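To make the phoneme-transform idea concrete, here is a hedged Python sketch. The mapping from segment value to pitch is an assumption chosen for illustration, not the transform used in our demos: a uniform block yields one sustained tone, while a segmented block yields a run of distinct tones.

```python
import numpy as np

SAMPLE_RATE = 8000

def tone(freq_hz: float, seconds: float = 0.25) -> np.ndarray:
    """A pure sine tone: the audio 'phoneme' for one data segment."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def sonify(segments: list[float]) -> np.ndarray:
    """Map each segment's value to a pitch: one phoneme per segment."""
    return np.concatenate([tone(200.0 + 20.0 * v) for v in segments])

# A featureless source: every segment identical, so the audio is a monotone,
# the acoustic twin of the pure blue rectangle.
uniform_block = sonify([5.0] * 12)

# A segmented source: varying values, so the audio carries real structure,
# like the rectangle with vertical segmentation strips.
segmented_block = sonify([1.0, 5.5, 3.0, 8.0, 2.2, 6.4, 4.1, 9.0, 1.5, 7.0])
```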
So this is the end of our three-part series on the Big Data Eco-System, told from the perspectives of 'The Formal Script', 'The Round-Table Discussion', and 'The Comp's Sense of A/V'. We hope that as you read through the three blogs you picked up on the somewhat unorthodox way in which the data was visualized; not your standard dashboard displays. Dashboards are great tools for displaying the after-processing results, while most of our visualization focused on the pre-processing/investigation stage, where all the pieces are put on display to enable the round-table discussion. The techniques employed are a testament to yet another aspect of the 'Data Scientist', and that is the 'Signal Processor'. A future blog will compare and contrast the roles of Data Scientist and Signal Processor. (Note: Reddit tackled the difference between data scientist and statistician, which might serve as a good starting point.)
Have a question for our experts? Leave us a comment below, check out our Big Data page, or contact our team directly at [email protected].