After some research into the things that came up in the first meeting, Maria and I met up again and took forward our ideas on how technology could be used in actor training.

A common approach to these ideas is now beginning to emerge:

  • The teacher might have time to set things up before a class, but during the class itself they would not be able to operate any technology – so during that phase the system needs to be fully automatic
  • There is a blurred line between using these systems for teaching and for performance – a system might initially be used in a purely training context, but over time students might take it and develop their own piece with it that eventually becomes a performance
  • There are many actor training methodologies – these solutions should be methodology-neutral and adaptable to several different approaches

We also now have more of an idea of the kinds of outcomes we might expect from using these technologies in a training context:

  • To get students to think about the image and sound they are projecting
  • To get them to start looking more closely at what other performers are doing
  • To get them out into the real world and interacting with it more
  • To allow a period of reflection by producing a digital artifact (e.g. a video) that can be posted and then commented on by peers and tutors
  • To allow a group of students to take a system and use it for self-directed learning or as part of a MOOC

We have discussed many different possible uses of technology during this process, but we have now settled on 5 systems that we think are the most interesting to explore further:

  • A system to automatically generate a multi-camera video. The smartphones of several users capture shots of the performance from different angles, and the app randomly cuts between these camera feeds to create a single video output. Camera operators get advance notice via icons on their screens that their shot is next, and then that their shot is currently live. Optionally a “sync” track of video/audio/powerpoint can be mixed into the output, and effects such as freeze-frame and slow motion can also be turned on. (A sketch of the cutting logic appears after this list.)
  • A system to project powerpoint slides down onto the studio surface. The teacher can prepare “slides” in advance that are projected for the students to react to. Slides could contain blocks of colour, grids, body outlines, or simple lines or text across the space, depending on what the teacher requires. Slides could auto-advance or be triggered by a standard presentation remote control.
  • A system to move a number of sounds through a 3D space defined by 4 speakers at the corners of a room. Using their smartphones, users can change aspects of the sound (tempo, amplitude, filtering, pitch, vibrato etc.) by moving the phone in different ways through space. Alternatively, moving the smartphone can be used to “scroll” through a sample – perhaps of some spoken text. (A sketch of the motion-to-sound mapping appears below.)
  • An app that provides a set of users with a random set of “situations”. The situations are set up by the tutor beforehand and then randomly given to participants – with the option to have some situations depend on where participants are or who they are with.
  • An app that tracks the GPS location of a set of participants. The location and trajectory of those participants can be displayed on a projected screen (optionally with a map or aerial photograph overlay). Text instructions can be sent to participants’ phones.
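
To make the multi-camera idea concrete, here is a minimal sketch in Python of the random cutting logic. The camera names, timings and the notify() stub are all illustrative assumptions rather than a real app API – a real implementation would push these statuses to each operator’s phone over the network.

```python
import random
import time

# Illustrative sketch only: camera names, timings and notify() are
# assumptions, not part of any real app.
CAMERAS = ["phone-A", "phone-B", "phone-C"]
NOTICE = 2.0       # seconds of advance warning before a cut
SHOT_LENGTH = 6.0  # how long each shot stays live

def notify(camera, message):
    """Stand-in for pushing an icon/status to an operator's screen."""
    print(f"[{camera}] {message}")

def run_cut_sequence(duration=30.0):
    current = random.choice(CAMERAS)
    notify(current, "LIVE - your shot is being used")
    elapsed = 0.0
    while elapsed < duration:
        # Always cut away: pick the next shot from the other cameras.
        nxt = random.choice([c for c in CAMERAS if c != current])
        time.sleep(SHOT_LENGTH - NOTICE)
        notify(nxt, "NEXT - your shot is up shortly")
        time.sleep(NOTICE)
        notify(current, "CLEAR - no longer live")
        notify(nxt, "LIVE - your shot is being used")
        current = nxt
        elapsed += SHOT_LENGTH

if __name__ == "__main__":
    run_cut_sequence()
```

In the real app the same schedule would drive both the video switcher and the operators’ on-screen icons.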

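Similarly, here is a rough sketch of how the sound system might map smartphone motion onto audio parameters. The sensor inputs and parameter ranges below are assumptions for illustration; an actual app would read the phone’s accelerometer/gyroscope and drive a real audio engine.

```python
import math

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def map_motion_to_sound(pitch_deg, roll_deg, shake_magnitude):
    """Translate device orientation/shake into audio parameters.

    pitch_deg / roll_deg: tilt of the phone in degrees (-90..90)
    shake_magnitude: acceleration beyond gravity, in m/s^2
    (All ranges are illustrative assumptions.)
    """
    # Tilting forward/back sweeps a low-pass filter cutoff (200 Hz - 8 kHz).
    cutoff_hz = 200 * math.pow(40, clamp(pitch_deg, -90, 90) / 180 + 0.5)
    # Tilting left/right maps to amplitude between 0 and 1.
    amplitude = clamp((roll_deg + 90) / 180, 0.0, 1.0)
    # Shaking adds vibrato depth, capped at 50 cents.
    vibrato_cents = clamp(shake_magnitude * 10, 0, 50)
    return {"cutoff_hz": round(cutoff_hz),
            "amplitude": round(amplitude, 2),
            "vibrato_cents": round(vibrato_cents, 1)}

# Example: phone tilted forward 30 degrees, level roll, gentle shake.
print(map_motion_to_sound(pitch_deg=30.0, roll_deg=0.0, shake_magnitude=1.5))
```
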
Our plan now is to take these 5 systems and create a 2-page exec summary for each one, which can then be used as the starting point for developing them further by raising funding and/or working with external partners.