Started off in the Social Networks and Multi-Agent Systems room again, a talk by Davide Donetto, about "The emergence of shared social representations in complex networks". He talked about the difference between "reified" roles, which are selected by rules according to competences, and "consensual" roles, which are freely chosen by the agents. His model involved a social network of agents, grown by preferential attachment (which gives the network small-world properties): when a new node is added, the probability of it linking to an existing node is proportional to that node's share of the existing links, i.e. p(i) = k_i / Σ_j k_j, where k_i is the degree of node i.
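The growth rule he described can be sketched like this (my own toy implementation, not Donetto's code):

```python
import random

def grow_network(n_nodes, seed=0):
    """Grow a network where each new node links to one existing node,
    chosen with probability proportional to that node's current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # start with two linked nodes
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # pick a target weighted by its share of the existing links
        targets, weights = zip(*degree.items())
        target = rng.choices(targets, weights=weights)[0]
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return edges, degree
```

Because well-connected nodes keep attracting new links, a few hubs emerge and the degree distribution ends up heavy-tailed.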
He then generated a set of agents, with links, and a 30-dimensional Griewank fitness function. The agents swapped parts of their fitness functions with their neighbours, to see if coherence emerged.
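For reference, the Griewank function he used as a fitness landscape looks like this (the standard definition; the exact variant used in the talk may differ):

```python
import math

def griewank(x):
    """Standard Griewank test function; global minimum of 0 at x = (0, ..., 0)."""
    sum_term = sum(xi * xi for xi in x) / 4000.0
    prod_term = math.prod(math.cos(xi / math.sqrt(i))
                          for i, xi in enumerate(x, start=1))
    return 1.0 + sum_term - prod_term
```

Its many regularly spaced local minima make it a popular benchmark for testing whether a search (or, here, agents swapping solution fragments) can avoid getting stuck.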
Next up, a bit of "Mental States, Emotions and their Embodiment". Lots of mentions of the [[wp:Uncanny Valley]]!
Ariel Beck talked about using virtual characters to train medical students in dealing with patients. Actors performed emotions in motion-capture suits, and participants placed the performances of the actors and of the characters animated from their motion capture on the [http://www.unige.ch/fapse/emotion/ Geneva Emotion Wheel]. The results need more discussion than I'm going to give here, but there was definitely something missing from the animated versions.
Marina Fridin talked about "Computational model and the human perception of emotional body language (EBL)".
She developed a feature-based [[wp:Mutual Information]] classifier for classifying the emotion portrayed in photographs of people. She also presented an eye-tracking experiment showing which body parts people focused on for photos of different emotions (e.g. when looking at fearful photos, people tended to focus on the hands). The other interesting result was that responses given quicker were more likely to be correct [1].
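I don't have the details of her classifier, but the mutual-information quantity that a feature-based approach like this would rank features by can be computed as follows (a generic sketch for discrete features, names my own):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits between two discrete sequences,
    e.g. a quantised body-part feature X and an emotion label Y."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        # p(x,y) / (p(x) p(y)) simplifies to count * n / (count_x * count_y)
        mi += (count / n) * math.log2(count * n / (px[x] * py[y]))
    return mi
```

A feature that perfectly predicts the label carries the label's full entropy; an independent feature scores zero, so ranking features by this score picks out the informative ones.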
Paula Igareda talked about "Audio Description of Emotions in Films using Eye tracking". I'd not really heard of [[wp:Audio Description]] before, but it's a track put over films to describe what's happening for visually impaired people. She had an interesting hypothesis: if you track sighted people's eye movements when watching a film with and without audio description, the better the audio description is, the less the eye movements change.
Back in the SNAMAS room, Jeremy Pitt talked about "Micro-Social Systems: Interleaving Agents, Norms and Social Networks". Ad-hoc networks come with a number of inherent issues.
So, borrowing solutions from social sciences, we look at this as a Micro Social System, and analyse the intelligence in each node, and the network linking them. We can then add a stack of protocols, where each level is concerned with changing the way the behaviour at the lower level works. For example, the lowest level might be the security schema, where there are several options with different security/power usage tradeoffs. The next level could then provide a mechanism for all the nodes in the network to vote on which schema was useful in any given situation.
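As a toy illustration of that layering (entirely hypothetical, not Pitt's actual protocol stack), the upper layer could collect votes from nodes and switch the lower layer's security schema by plurality:

```python
from collections import Counter

# Hypothetical security schemas with different security/power-cost trade-offs
SCHEMAS = {"none": (0, 1), "light": (1, 2), "heavy": (3, 5)}

def preferred_schema(battery_level):
    """Toy node-level policy: low-battery nodes prefer cheaper schemas."""
    if battery_level < 0.2:
        return "none"
    if battery_level < 0.6:
        return "light"
    return "heavy"

def elect_schema(battery_levels):
    """Upper protocol layer: every node votes, and the plurality choice
    reconfigures the security schema used at the lower layer."""
    votes = Counter(preferred_schema(b) for b in battery_levels)
    return votes.most_common(1)[0][0]
```

The point of the layering is that the voting mechanism never needs to know the details of the schemas; it only changes which one the level below runs.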
The day finished with a talk on the [[wp:Secretary Problem]], which I'd never heard of. Long story short, if you're in a situation where:
* you are presented with a stream of possible options to choose from
* you can either choose each option, or pass it up forever
* once you choose an option, that's the end
* you know there are n options in total
then you should:
* evaluate (and pass up) the first n/e options (where e ≈ 2.718 is the base of the natural logarithm)
* pick the next one which is better than every option in that sample (if none is, you end up with the last option).
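The 1/e rule is easy to simulate (my own sketch):

```python
import math
import random

def secretary_choice(options):
    """Apply the 1/e rule: pass up the first n/e options, remembering the
    best of them, then take the first later option that beats it
    (falling back to the last option if nothing does)."""
    n = len(options)
    cutoff = int(n / math.e)
    best_seen = max(options[:cutoff], default=float("-inf"))
    for value in options[cutoff:]:
        if value > best_seen:
            return value
    return options[-1]

def success_rate(n=100, trials=5000, seed=0):
    """Fraction of random trials in which the rule picks the overall best."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        options = [rng.random() for _ in range(n)]
        if secretary_choice(options) == max(options):
            wins += 1
    return wins / trials
```

Running `success_rate()` comes out close to 1/e ≈ 0.37, which is the best any strategy can do in this setting.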
And that was it for me at AISB09.
- [1] cf. B. de Gelder, N. Hadjikhani. Non-conscious recognition of emotional body language. Neuroreport, 17(6): 583-586 (2006).