The TruthTable is a large multitouch surface for accessing and aggregating web searches in an exploratory way. It uses social knowledge from the web to create links and associations between different ideas, and then displays media relating to them, all in a playful and accessible manner. It was created as a project with [http://www.inf.ed.ac.uk/ Informatics] at Edinburgh University, in particular [http://homepages.inf.ed.ac.uk/jon/ Jon Oberlander] and [http://www.mimetics.com/ Richard Brown]. A refined version (shown left), created with [http://www.eca.ac.uk/staff_profiles/view/lisa-mackenzie/ Lisa Mackenzie] and Sam Booth, was displayed as part of the Senses of Place exhibition at [http://www.thelighthouse.co.uk/ The Lighthouse].
== Experience ==
The table presents an empty space, with virtual keyboards scattered around it. The keyboards are used to type in ideas of interest (or to select from some previously entered possibilities). The words typed then appear in bubbles, which bounce around in the space. If a participant reaches out to touch a bubble, it starts a web search, which produces images relating to that word, along with other related words. Images appear as objects on the table, which can be rotated, scaled, and slid across the table to show others, all with physics simulation, so they feel like real objects. Drawing a line between two bubbles asks the table to search for words and images which relate to both words, creating linkages between different ideas.
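The rotate-and-scale manipulation described above is typically driven by tracking how a pair of touch points moves between frames. This is not the project's own code, but a minimal sketch of that calculation: given the old and new positions of two fingers, it recovers the rotation and uniform scale to apply to the object.

```python
import math

def two_finger_transform(p0, p1, q0, q1):
    """Given two touch points moving from (p0, p1) to (q0, q1),
    return the rotation (radians) and uniform scale that map the
    old finger pair onto the new one - the core of a rotate/scale
    gesture on a multitouch surface."""
    vx, vy = p1[0] - p0[0], p1[1] - p0[1]   # old finger-to-finger vector
    wx, wy = q1[0] - q0[0], q1[1] - q0[1]   # new finger-to-finger vector
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    return rotation, scale
```

For example, fingers at (0, 0) and (1, 0) moving to (0, 0) and (0, 2) yield a quarter-turn rotation and a scale of 2. Translation can be handled separately by tracking the midpoint of the two touches.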
Since the table can produce a lot of new information in a very short time, everything on it has a limited lifespan - anything not interacted with gradually falls down within the virtual space until it disappears. This ensures that users are not swamped by pictures and words, while giving them a window of opportunity to reclaim interesting concepts as they see them falling away.
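One way to model that behaviour is to give each object an age that a touch resets, with the object sinking faster as it ages and expiring once its lifespan runs out. The class and parameter names below are hypothetical, a sketch of the idea rather than the project's implementation:

```python
class TableObject:
    """Sketch of the limited-lifespan behaviour: untouched objects
    sink within the virtual space and eventually expire; touching
    one resets its countdown, reclaiming it."""

    def __init__(self, y=0.0, lifespan=10.0):
        self.y = y                # vertical position in the virtual space
        self.age = 0.0            # seconds since last interaction
        self.lifespan = lifespan  # seconds until removal (assumed value)
        self.alive = True

    def touch(self):
        self.age = 0.0            # interaction rescues the object

    def update(self, dt):
        if not self.alive:
            return
        self.age += dt
        # Fall faster the longer the object goes untouched.
        self.y -= dt * (self.age / self.lifespan)
        if self.age >= self.lifespan:
            self.alive = False    # drop it from the table entirely
```

The falling motion itself gives users the visual cue: an object drifting downward is one they are about to lose.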
== Using the social web ==
In general, creating links between concepts is a difficult task - words can have many meanings, and chains of links between words can be obscure. Projects such as [http://conceptnet.media.mit.edu/ Concept Net] attempt to formalise this, in order to allow computers to reason about these links. This was trialled in this system, but found to produce uninteresting links between words - both obtuse and repetitive. Instead, folksonomies were used, in particular [http://delicious.com/ del.icio.us], which allows users to tag websites. If you look up a particular word, or tag, on del.icio.us, it will give you a list of tags which were also applied to pages with that tag - a set of related concepts. Similarly, if you look up pages tagged with two tags, it will return tags which were applied to pages with both of those tags - concepts which link the two original tags. Finally, [http://flickr.com flickr] can be used to find photos and images which the community has tagged with a word, providing related images.
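The two lookups described above are both tag co-occurrence queries. The sketch below demonstrates them against a tiny in-memory collection of tagged "pages" rather than the del.icio.us web service (the page data is invented purely for illustration):

```python
# Each "page" is just its set of tags, standing in for a tagged
# bookmark in a folksonomy such as del.icio.us.
PAGES = [
    {"edinburgh", "castle", "scotland"},
    {"edinburgh", "festival", "art"},
    {"scotland", "castle", "history"},
    {"art", "festival", "music"},
]

def related_tags(tag):
    """Tags that co-occur with `tag` on any page - a set of
    related concepts."""
    related = set()
    for tags in PAGES:
        if tag in tags:
            related |= tags - {tag}
    return related

def linking_tags(a, b):
    """Tags applied to pages carrying both `a` and `b` - concepts
    which link the two original tags."""
    linking = set()
    for tags in PAGES:
        if a in tags and b in tags:
            linking |= tags - {a, b}
    return linking
```

With this toy data, `related_tags("edinburgh")` surfaces both the history-flavoured and festival-flavoured neighbours of the tag, while `linking_tags("edinburgh", "castle")` narrows down to the concepts the two share.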
== Hardware ==
The table is a rear-projection, FTIR (frustrated total internal reflection) computer-vision-based multitouch surface. This means:
* a projector inside a box creates the image on the box's translucent top surface
* a system of infrared LEDs and acrylic is used to create an infrared glow whenever anyone touches the top of the box
* a computer inside the box uses a camera and computer vision techniques to decode these glows into finger touches
== Software ==
The software consists of three parts:
* a blob detection system, which translates the camera images into individual points of touch, and communicates these using the [http://www.tuio.org/ TUIO protocol]
* a 3D framework, built using [http://www.jmonkeyengine.com/ JME], which supports common multitouch operations and easy access to media and web content
* final applications, such as the TruthTable "Bubbleworld" demo.
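The blob-detection step in the first layer amounts to finding connected regions of bright pixels in the thresholded camera frame and reporting one centroid per region - the finger positions that a TUIO message would then carry. This is a simplified sketch (flood fill on a binary frame), not the project's own detector:

```python
def detect_blobs(frame):
    """Find connected groups of bright pixels in a thresholded
    camera frame and return one (x, y) centroid per group.
    `frame` is a 2D list of 0/1 values; real detectors work on
    greyscale frames with background subtraction and thresholding."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] and not seen[y][x]:
                # Flood-fill this blob, collecting its pixels.
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and frame[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # The centroid of the blob's pixels is the touch point.
                mean_y = sum(p[0] for p in pixels) / len(pixels)
                mean_x = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((mean_x, mean_y))
    return blobs
```

Each centroid would then be assigned a session ID and broadcast over the TUIO protocol, so the 3D framework can track touches from frame to frame.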