Leeds Creative Labs

Collaborations for Academics & Creative Innovators


Reflections on the Summer 2014 edition

Following the final presentations from our Summer 2014 cohort, we asked each team to reflect on their experiences and how their process and perspectives were affected by participating in the Labs.

Here are their thoughts on last summer’s work…

Christine Farion and Mark Taylor-Batty


Jo Merrygold, Tim Waters and Dr. Seán McLoughlin


Dr. Lisa Blackmore and Ian Pringle


Jane Wood, Dr. Kia Ng and Shay Moradi

Sue Hayton


Sue is Business Development Manager in the Faculty of Arts’ Arts Engaged programme.

Sue develops partnerships outside the University that help to promote the impact of research. She joined Leeds from the Arts Council in February 2012 and brings over 30 years’ experience in the cultural and creative sector. Her career started in design, working for a major publishing house. Her work in the cultural sector has spanned dance, festivals, galleries, museums and theatre as a director, producer and trustee.

As a consultant, through Hayton Associates, Sue was commissioned by organisations such as The National Trust, Arts Council England, HLF, Youth Music and The National Portrait Gallery. She devised creative methods for the evaluation of inter-disciplinary partnerships and cultural engagement which have impacted on policy and practice. Her work with HLF informed the distribution of £5 million in funding for young people’s heritage projects through the Young Roots programme.

Sue is also on the Board of the South Square Arts Centre in Bradford, a committee member of the Bronte Birthplace Trust and a member of the Chartered Management Institute.

Find out more about Sue at leeds.ac.uk/arts/profile/125100/1088/sue_hayton or find her on Twitter as @iggyfisk or follow the CCI Exchange (@ccileeds).

Jo Merrygold


Jo Merrygold is a Masters student in Theology and Religious Studies at the University of Leeds. Throughout her undergraduate degree she held an Undergraduate Research and Leadership Scholarship in the Community Religions Project, where she continues to work as Project Assistant. She is particularly interested in the relationship between religion and particular spaces, and has researched the location and significance of Bibles in Leeds city centre.

Throughout summer 2014 she will be working with Seán McLoughlin to develop the new Hajj website and will participate in Leeds Creative Labs with Seán and Tim Waters.

Prior to starting her undergraduate degree, Jo undertook a range of jobs, from car sales to running a Christian commune, and she enjoys the challenges of gaining new skills and sharing new ideas – something very much at the heart of Leeds Creative Labs.

Jo is on Twitter as @Jo_Merrygold and can be emailed at j.merrygold@leeds.ac.uk

Seeing Sound: A Sonic Journey

The project I’ve been working on with my two fantastic collaborators, Kia Ng and Jane Wood, is Seeing Sound. It has grown organically over the past few weeks and, as you’d expect, has taken some interesting twists and turns. This is my account of it as a collaborator; you can read Jane’s here.

Seeing Sound Team: Shay, Jane and Kia

For the most part, Seeing Sound has been concerned with, and guided by, the following areas:

  • How we see sound visually. This is obviously not a new question in itself (the interactive digital media projects, sound installations, software plugins and so on in this area are numerous, and we’re not trying to reinvent the wheel), but sometimes the best way to understand something is to repeat other experiments. I think you can’t probe deeper without exploring precedents.
  • The question of how you meaningfully arrive at a raw language for visualising sounds and representing a time-based event in a visual form is one that definitely bears, and benefits from, reinterpretation.
  • The question of how we can arrive at a raw musical perception with a tool or medium that utilises colour, shape and motion effectively and interactively. (We decided that one great way is to collaborate with kids, whose perceptions and ability to express themselves are pretty unbridled and less laden with years of cultural influence.)
  • Ultimately, looking at the steps required to design a tool for technology-enhanced learning and performance, utilising what we’ve learned from the visualisation research above and the findings of the project.

How we started and what we did
Our initial introduction to the area came when we were paired up at the first meeting. This involved Kia telling us about his, quite frankly, eclectic yet interrelated areas of research. I think it would be a challenge not to be interested in, or jealous of, all the cool stuff he’s doing.

One area which caught my attention was the use of haptics and visual feedback in learning a musical instrument. I’m actually not the most adept at playing musical instruments (I’ve tried, honest) but I have a lot of fun with music, so the idea that I could find an optimal way for myself to learn was appealing.

From tour to coffee to meetings huddled over sketchbooks

We also had a lovely and extensive tour of Kia’s lab at the Interdisciplinary Centre for Scientific Research in Music (ICSRiM, www.leeds.ac.uk/icsrim) – gotta love those acronyms! We decided we’d meet and discuss things quite regularly.

In early meetings and conversations, Jane and I fed the results of our open-ended research and mini experiments back to Kia; we discussed them together for an hour or two and went away with a redefinition of the area we wanted to look at.


Serendipity and interrelatedness

Very often something Jane found on Pinterest would lead me down a particular path, which I would then do something with, such as a code experiment.
I would like to highlight the massive power of open-ended visual research during the project. Someone once told me that great creative research resembles a flow-like structure, with various funnels, expansions and contractions connected and feeding into each other.

Visualisation of our process

I also decided to start logging some of the things I’m seeing on a Tumblr blog, with a bit of commentary.

Shay’s Sonic Journey: See, Feel, Interact

Non-digital sketchbook: I took time in this project to dabble in a non-digital sketchbook and reclaim the analogue starting phase as part of my process.
It’s something I used to do but have done less of recently.

Bonus video of Spinning Top Drawing tool.

For me this included using watercolour and ink when I was originally exploring how music could be visualised. I later migrated to a tool I’ve recently fallen in love with, Construct2, to create mini digital prototypes of the kind of things we were discussing. These were constructed by turning brushes into particle effects, which I later turned into an audio-reactive drawing tool.
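
To give a flavour of what “audio reactive” means in practice, here is a minimal Python sketch of the kind of mapping involved: an amplitude value drives the particle brush’s size, opacity and colour. The parameter names are hypothetical, and Construct2 itself expresses this through its event system rather than code like this.

```python
def amplitude_to_brush(amplitude, hue_base=0.6):
    """Map a normalised audio amplitude (0.0-1.0) to brush-particle
    parameters. The parameter names (size, opacity, hue) are
    illustrative, not Construct2's actual API."""
    amplitude = max(0.0, min(1.0, amplitude))  # clamp noisy input
    return {
        "size": 4 + amplitude * 28,        # louder -> bigger particles
        "opacity": 0.2 + amplitude * 0.8,  # louder -> more opaque
        "hue": (hue_base + amplitude * 0.15) % 1.0,  # slight hue shift
    }
```

Called once per audio frame, a mapping like this is enough to make a static brush feel alive, because every parameter breathes with the sound.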

Triple Colour Brush Animation

Experiment: Three Colour Brush (Video on Vimeo)


Experiment: Simulating watercolours (Video on Vimeo)

Thoughts so far

Working on a collaborative project without any expectation of an end outcome has genuinely been a rare indulgence, like biting into your favourite Quality Street chocolate and discovering the rest of the tin has double the usual amount of your favourites (mine is the purple one).

In my humble opinion it forces you to re-evaluate your working practices, your usual assumptions and your technical repertoire, and gives you a real chance to do the following:

  1. Show off what you know
  2. Refine a dormant skill, or learn what you don’t know
  3. Enjoy the spark of creation in a condensed span of time

Outcomes so far

  • Collaboratively discussed and amassed a fair few interesting and current references and starting points for visualising sound and music
  • Explored the constraints present in creating musical visualisations
  • Conducted practical code experiments
  • Considered how we would give structure to an area that could potentially attract research funding, and came up with a multi-phased approach
  • Refined the central questions at the heart of the research area, in order to ask smarter questions
  • A personal outcome for me: Kia has expressed an interest in using the visual engine I’ve created that simulates the watercolour effect for the next installation he’s working on at the Centre for Life in Newcastle
  • Created a prototype of an audio-reactive drawing tool, with a part that enables kids to explore their visualisations of sound

Manifesto Project Overview

This document is also available as a PDF file.


The Manifesto project looks at how technology can be used as a resource for actor training.

In particular the project focuses on how smartphones can be used in this context. There are several researchers looking at how young people interact with their smartphones and how they can be used to create more civic engagement. This project aims to create new uses for smartphones that develop civic engagement through performance training and then subsequently in other disciplines.

We have proposed five applications to explore the possibilities of using existing and new technologies to develop a new form of theatre training. This new form of training is aimed at collapsing the existing boundaries between acting/devising/applied theatre that currently dictate the identities of students, theatre practitioners and degree programmes.

The five applications are:

  • Multi Camera Video – a system for producing a multi camera video of a performance
  • 3D Sound – a system for using movement to change sounds in a 3D space
  • Shape Projection – projecting images onto the floor for training
  • Situationist App – presenting participants with situations depending on a set of rules
  • Location Tracking – tracking participants locations on a map


The project is founded in principles of critical pedagogy that seek to challenge an educational model according to which an expert possesses knowledge which is then passed on to those that do not have knowledge. It aims to reconfigure training, and theatre training in particular, as a mutually generative process that takes place amongst the students and in between the students and the social and civic spaces they occupy. In this respect the forms of training proposed here aim to transcend the dichotomy between skill acquisition, which is supposed to be an individual affair, taking place in the studio and oblivious to the wider social and cultural frameworks within which these skills are eventually supposed to be put in use, and social engagement, which the student is expected to develop in other courses that deal with community or applied drama.

It also aims to develop in the trainee a sense of emancipation and responsibility. In Theatre and Performance degrees currently offered by HE institutions across the country, critical reflection and inquiry are expected to be cultivated through the more theoretical classes. In conservatoire training, this dimension is often overlooked altogether. Equipping students with the skills required by the cultural industries ensures their employability within those industries. While this is desirable, it also propagates existing aesthetic and artistic norms, and does not equip future performers with models of resistance and the ability to think of themselves and their work outside existing paradigms.

Finally, the use of technology aims to bridge the gap between the training systems in use today, some of which date from the beginning of the 20th century, and the way in which various technologies have shaped the experience and learning mode of the ‘millennial generation’. The findings of this research project aim to feed back into the wider field of social media and civic engagement, in order to propose ways in which new technologies, especially mobile technologies, may be used to enhance civic participation amongst the young generation.


As we explored the potential applications for technology in this context a common approach to these ideas began to emerge:

  • The tutor might have time to set up things before a class – but during the class they would not be able to operate any technology – so during that phase the system needs to be totally automatic.
  • There is a blurred line between using these for teaching and performance – a system might be used initially in a purely training context but over time students might take that and develop their own piece with it that gets turned into a performance.
  • There are many actor training regimes – these solutions should be neutral to these and adaptable to several different methodologies.


The kinds of outcomes we might expect from using these technologies in a training context are:

  • To get students to think about the image and sound they are projecting
  • To get them to start looking more closely at what other performers are doing
  • To get them out into the real world, interacting with it more
  • To allow a period of reflection by producing a digital asset (e.g. a video) that can be posted and then commented on by peers and tutors
  • To produce materials that can be shared on social media, allowing them to develop the skills required to build up a social media presence
  • To produce a digital asset that can be used for submissions or as part of a performer’s CV
  • To allow a group of students to take a system and use it for self-directed learning
  • To allow the tutors to use the system as part of a MOOC
  • Performances created using these systems
  • Publications about the process of training or developing performances using these systems


Future Developments

The technology platforms we have proposed are all neutral to specific performance types – so we see them as potentially of use in other disciplines:

  • Dance and choreography
  • Social and political sciences
  • Media and communication studies

Manifesto Project: Multi Camera Video

This document is also available as a PDF file.


The requirement is for a system to automatically generate a multi-camera video. The smartphones of several participants are used to capture shots of the performance from different angles.

The system will randomly cut between these camera shots to create the final video output, which is recorded to a video file and can also be output for presentation.

The camera operators get advance notice with icons on their screen that their shot is next, and then that their shot is currently being used.

Optionally a sync track of video/audio/powerpoint can be mixed in to the final output. The sync track can also be output for presentation.

Effects such as freezing a frame and slow motion can also optionally be turned on.

Manifesto Multi Camera Video Diagram

System diagram


This application aims to foster a form of training that moves away from an understanding of training as an individual and individualized affair that takes place between the student and the teacher and rather facilitates an opportunity to experience training as a collective process that happens between individuals and between the individuals and the space. Rather than bringing attention to the skills and choices of each individual, this application aims to foreground the ‘field of forces’ that shapes and is shaped by the training event throughout the training process.

For the participants who are filming, this application can cultivate an ability to become and remain present, watching and witnessing another person’s process. The use of the camera will also alert the students to the in-the-moment choices made during a filming process. These choices can be initially guided by the system, and/or the tutor, by offering the choice to film up close or from afar, to concentrate on one body part etc.

For those being filmed, it is expected that the application will enable them to move in relation to something/someone and thus replace self-consciousness with hyper-consciousness.

It is expected therefore that this application will enable the development of both compositional and movement skills.

The recordings might be viewed at the end of the exercise by the whole group and this can enable them to think about the creation of ‘meaning as “lived change” rather than interpretation’ or intention (Cull 2012: 159).

This application will act as a self-organising system (similar to self-organising organisms, such as slime mould, which have informed both scientific and philosophical advances) that diffuses control and allows the creative process to develop a logic of its own.

A development of the possibilities offered by the system can be inspired by the use of chance operations in theatre and dance practices and the use of ‘oblique strategies’ in music composition.

Technology Solution

A camera operator application will be installed on several smartphones. This application will take live video and present it on the smartphone screen together with two “tally” lights: a red one showing that the shot being taken is currently in use, and a yellow one showing that the next shot used will be from this camera.

These applications will connect to a central desktop or laptop running the mixing application via Wi-Fi. The mixing application will randomly cut between camera shots. Optionally it will also cut in sections from a sync track. Both the final output and/or the sync track will be output to monitor outputs.

The mixing application will have settings to set:

  • How often to cut in the sync track vs. camera inputs
  • Approximately how often to make the cuts or if the cuts should be made on a key-press
  • If camera shots should be randomly frozen or slowed down
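
As a sketch of the cutting logic those settings control (the sync-track probability and the randomised shot length are our own illustrative choices, not finalised parameters), the mixer’s decision step might look like this in Python:

```python
import random

def next_cut(camera_ids, sync_probability=0.2, mean_shot_seconds=4.0,
             rng=random):
    """Pick the next source and shot length for the automatic mixer.

    Returns (source, duration): source is "sync" with probability
    sync_probability, otherwise a randomly chosen camera id; duration
    varies around mean_shot_seconds. A sketch of the random cutting
    logic described above, not a real video pipeline."""
    if rng.random() < sync_probability:
        source = "sync"
    else:
        source = rng.choice(camera_ids)
    # vary shot length +/-50% around the mean so cuts feel less mechanical
    duration = mean_shot_seconds * rng.uniform(0.5, 1.5)
    return source, duration
```

Knowing the next source a shot-length in advance is also what lets the system light the yellow “you’re next” tally on the right camera phone.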

Training and Performance Applications

This application can be used in training for both performance and dance and can accommodate the needs of different levels.

It can also be used to generate material for performance either live fed or pre-recorded.

Work Breakdown

  • Mock up the system using commercially available multi-camera applications – these require a traditional vision mixer operator so would not be automatic like our proposed system – however this would give a good sense of the kind of output that could be created using this system
  • iOS and Android camera application
  • Mac and/or PC mixer application

Budget Estimate

A mock-up of this application could be built by using a smartphone app like RecoLive with a technical operator randomly switching between camera views. The application costs are low (£2.99) so the only other costs would be time from a technical person to set-up and run the system during the mocked-up training sessions.

To develop a fully functional version of this application on both iOS and Android would cost in the region of £30,000 to £40,000. A more cost effective route might be to license bespoke versions of applications like RecoLive from their developers.

Further Development

A development of the system would be to allow the output videos to be uploaded to a central system where others can leave comments and feedback at certain points of the video (for example using Vimeo).

Manifesto Project: 3D Sound

This document is also available as a PDF file.


The requirement is to move a number of sounds through a 3D space of 4 speakers placed in the corners of a room.

Participants’ movements would change aspects of the sound (location in space, tempo, amplitude, filtering, pitch, vibrato etc.). Alternatively, their movement can be used to “scroll” through a sample – perhaps of some spoken text.

Manifesto 3D Sound Diagram

System diagram


This application is inspired by performers, such as Laurie Anderson and Imogen Heap, who have explored the physical/spatial dimension of auditory processes in performance.

To begin with this application aims to enhance the students’ awareness of their movement and movement qualities. It is expected that involving the auditory sense will broaden their repertoire of movement and give them one more register through which they can engage with movement as a three-dimensional phenomenon.

This application can be further developed to include lines from a piece of text. This further development is inspired by actor trainers, such as David Zinder and Lorna Marshall, who approach text as an entity that is not only spoken aloud but embodied in the space.

Technology Solution

One solution is for a very simple client application to be installed on participants’ smartphones to gather data from the various phone sensors. Modern smartphones typically have three-axis accelerometers and three-axis gyroscopes; when fused, these sensors can provide accurate tracking of the phone’s movements through space.
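
One common fusion technique, shown here purely to illustrate the principle rather than as this app’s actual algorithm, is a complementary filter, which blends the integrated gyroscope rate (smooth but drifting) with the accelerometer’s tilt estimate (noisy but drift-free):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a complementary filter fusing gyroscope and
    accelerometer readings into a stable tilt angle.

    angle: previous estimate (degrees); gyro_rate: angular velocity
    (deg/s); accel_angle: tilt computed from the accelerometer;
    dt: timestep (s). alpha weights the gyro integral against the
    accelerometer correction."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Run at each sensor update, the small accelerometer term continually pulls the estimate back towards the true tilt, cancelling gyro drift without the jitter of using the accelerometer alone.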

These smartphones would then connect directly via Wi-Fi to a desktop or laptop computer, which would use the phone movement inputs to alter parameters on a sound generation system like Max/MSP or Native Instruments Reaktor. These systems contain whole libraries of sounds that can be used as the basis for the sounds produced.

The system could be programmed to track phone movements such as:

  • Changes in the ‘attitude’ of the phone – so twisting the phone from one side to the other for example
  • Moving the phone through an arc with the participant’s arm
  • Walking in a straight line with the phone – the direction and speed the participant is moving in
  • Repetitive movements of a phone up and down or from side to side
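
As a rough sketch of how the last of these might be detected (the threshold and reversal count are illustrative guesses, not tuned values), the client could count direction reversals in one accelerometer axis:

```python
def is_repetitive_motion(samples, threshold=2.0, min_reversals=4):
    """Crude detector for repetitive up-and-down or side-to-side
    movement: counts direction reversals along one accelerometer axis
    that exceed a magnitude threshold. samples is a list of
    acceleration values along that axis (gravity removed)."""
    reversals = 0
    prev_sign = 0
    for value in samples:
        if abs(value) < threshold:
            continue  # ignore small jitters below the threshold
        sign = 1 if value > 0 else -1
        if prev_sign and sign != prev_sign:
            reversals += 1
        prev_sign = sign
    return reversals >= min_reversals
```

A detector this simple would run comfortably on the phone itself, so only the resulting gesture events need to cross the Wi-Fi link to the sound computer.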

The smartphone could be held by participants or strapped to their bodies (a whole variety of systems exist for strapping smartphones to people when they are doing exercise).

Training and Performance Applications

This application can be used for training in both movement and text.

As this system produces only audio output, it does not itself produce useful digital assets for reflection. However, the Multi Camera Video system could be used to record a session for reflection in online forums.

Work Breakdown

  • iOS and Android client app to track phone movement and send to central application
  • Mac or PC application to take phone movements and apply to sound generation system to drive a quadraphonic sound output

Budget Estimate

A mock-up of this application could be built by using a smartphone app like GyrOSC to send OSC controls to a set of sound generators built in Native Instruments Reaktor or similar. While this might not detect all the gestures the full system would, it should be good enough to test the principle. The application costs are low (£0.99), so the only other costs would be time from a technical person to set up and run the system during the mocked-up training sessions. We would estimate two days of set-up would be required to configure this system, plus whatever time was required to run the session itself.

To develop a fully functional version of this application on both iOS and Android would cost in the region of £5,000 to £10,000. A more cost effective route might be to license bespoke versions of applications like GyrOSC.

Further Development

One future development of the system would be to add some movement tracking technology (e.g. Kinect) to allow this to also affect sound output – but it would need some investigation as to how well this would work with multiple participants.

Manifesto Project: Shape Projection

This document is also available as a PDF file.


The requirement is for shapes in various colours to be projected on to the floor of a performance space such that exercises involving spatial and physical awareness can be undertaken with students. The projections could contain blocks of colour, grids, body outlines, simple lines or text, depending on what the tutor requires.

Manifesto Shape Projection Mockup     Manifesto Shape Projection Mockup

Two possible slides to be projected on the floor


The use of space is an important aspect of performer/theatre making training. In fact, space may be seen as a ‘performer’ in and of itself, determining the performance event as much, and in some cases even more than, the human participants. The aim of this application is to bring the space alive and enable the students to think of the space as an organism that reacts and responds to the choices they make during exercises in movement, improvisation and devising.

Furthermore, this application aims to familiarize the students with the use of light and colour within a training rather than performance situation. More often than not, students begin to consider the use of light as a composition tool once they enter the process of creating a performance and physically enter the performance space. In some cases, the students may be introduced to operating a light console and/or they might have a member of staff doing it for them. In either case though, and even if the students acquire some technical competence in operating lights, their understanding of the aesthetic and compositional dimension of light is rudimentary.

Another aim of this application is to use colour in order to stimulate the students’ affective responses. This is based on Michael Chekhov’s use of colour as a creative resource for the actor.

Technology Solution

In this case the solution is quite simple. A video projector is mounted on the ceiling, pointing down, so that the image it produces is projected onto the floor of the performance space.

The images required can be produced in advance by the tutor in a presentation program (e.g. PowerPoint or Keynote) and presented from a computer connected to the projector. The slides can be set to auto-advance or advanced by standard presentation remote controllers.

Both PowerPoint and Keynote allow the tutor to create slides with a wide range of shapes in any kind of colour combination required. If required more advanced animations involving moving the shapes on these slides can be produced.

An enhancement to this system would be to have three or four projectors mounted in the ceiling, positioned towards the four corners of the space, with the keystone of each adjusted so that the images from all the projectors line up – this would eliminate most of the shadowing that a single projector might produce.

We considered using DMX controlled lights as the solution for this system – but these vary between performance spaces, are much harder to control and cannot project things like grids or text without custom gobos.

Training and Performance Applications

Depending on the material produced in advance by the tutor, this application can be used with Level I undergraduate students as an introductory exercise in Movement Training; in more advanced classes, where students begin to work with the projection of lines of text, it can be used as a rehearsal technique.

It can also serve as a devising tool, with students using the application to develop short devised performances, perhaps in relation to plays that have no spoken text, just stage instructions, such as The Water Station or The Hour We Knew Nothing of Each Other.

The slides can be made available for the students to use when reflecting on a training exercise in online forums afterwards.

Work Breakdown

There is no extra work required in setting up the technology solution for this system.

Budget Estimate

Assuming each performance space already has a projector and a computer to present from there are no other budget requirements unless more than one projector is required.

Further Development

One future development of the system would be to add some movement tracking technology. This could come from one or more of the following sources:

  • Kinect body tracking (may be limited to a small number of participants)
  • Infra-red camera (tracking location of participant in the space)
  • Using the smartphone gesture capture described in the 3D Sound project

This would allow the system to respond to certain cues (someone being in one area of the room, a certain body shape) to trigger the advance to the next slide or some animation within the slides. This would make the system able to automatically respond to certain movements from the trainees or performers.
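
A minimal sketch of that cue logic, assuming rectangular trigger zones in floor coordinates (the zone format and action names are our own illustration, not a specified design):

```python
def check_cues(position, cues):
    """Return the actions triggered by a tracked position.

    position is (x, y) in floor coordinates; cues maps an action name
    (e.g. "advance_slide") to a rectangular trigger region given as
    (x_min, y_min, x_max, y_max)."""
    x, y = position
    triggered = []
    for action, (x0, y0, x1, y1) in cues.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            triggered.append(action)
    return triggered
```

Whichever tracking source feeds it (Kinect, infra-red camera or phone sensors), the same zone check can drive slide advances or in-slide animations without the tutor touching the computer mid-class.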

Manifesto Project: Location Tracking

This document is also available as a PDF file.


The requirement is for the outdoor location of participants to be tracked so that their current location and/or trajectory over time can be shown on a screen. Optionally this could be overlaid on a map or aerial view of the area. There also needs to be a facility to send text-based instructions to the participants.

Manifesto Location Tracking Mockup

Mock-up of Location Tracking Output


This application aims to blur the boundaries between inside and outside both in a literal sense but also in a metaphorical sense, i.e. it aims to enable the students to think of and practice training as a social and civic activity.

It is expected that the GPS tracking will enable the students to gain an understanding of the way the body and social activities are moulded by the existing space and the trajectories that a certain space permits or prohibits.

It is also expected that the GPS tracking system will enable communication between two groups of students, one inside the studio and one outside.

A development of the application needs to consider the situations, actions, scores that the outside group will be asked to engage in and the way this will relate to the activities taking place inside the studio.

The work of artists that have developed perambulatory performances will be looked at as well as theoretical perspectives on space.

Technology Solution

One solution is to have a simple client application installed on participants’ smartphones. This will simply gather their current GPS location and send it back to a central server. The client application will also be able to receive messages from the central server.

A desktop computer application will communicate with the central server to show the locations of the participants, overlaid on map backgrounds if required. The application will need to scale the map to the bounds of where participants have travelled during the session, and will be able to show each participant’s current location or the journey they have taken over the session.
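
The scaling step described above can be sketched as a simple linear fit of the session’s bounds to the display. This treats longitude and latitude as flat coordinates, which is adequate over the small area a session covers; a real map overlay would need a proper projection.

```python
def fit_to_screen(points, width, height, margin=20):
    """Scale (lon, lat) participant positions to pixel coordinates so
    the whole session fits a width x height display.

    y is flipped because screen coordinates grow downwards; one scale
    factor is used for both axes to preserve the aspect ratio."""
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    lon0, lat0 = min(lons), min(lats)
    lon_span = (max(lons) - lon0) or 1e-9  # avoid dividing by zero
    lat_span = (max(lats) - lat0) or 1e-9  # when everyone stands still
    scale = min((width - 2 * margin) / lon_span,
                (height - 2 * margin) / lat_span)
    return [(margin + (lon - lon0) * scale,
             height - margin - (lat - lat0) * scale)
            for lon, lat in points]
```

Recomputing this fit as new positions arrive gives the auto-zooming behaviour the display application needs.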

Information from a location or a route queried from Google Maps could be displayed. Additionally students could be asked to add an interesting fact about a certain point or route for inclusion in the display attached to a marker pin.

This application will also have an interface to allow messages to be sent to one or all participants.

Training and Performance Applications

This app can be used with undergraduate and postgraduate students within a devising as well as an applied theatre context.

It can be used in particular to alert students to cultural heritage, wider civic spaces and civic engagement.

The output will be a web page that can be used for reflection in online forums.

Work Breakdown

  • iOS and Android client app to track location and show messages
  • Server-side system for client and display app to talk to – could be based on Amazon Web Services or similar system for ease of development
  • Mac and/or PC application to gather current client locations and display

Budget Estimate

A mock-up of this application could be made using the existing FollowMee app (www.followmee.com) together with text messages to the participants for instructions. The application costs are low (£5.49) so the only other costs would be time from a technical person to set-up and run the system during the mocked-up training sessions.

To develop a fully functional version of this application on both iOS and Android would cost in the region of £15,000 to £20,000. A more cost-effective alternative might be to commission a bespoke version from FollowMee or one of their competitors with the extra functionality required.

Further Development

A development of the system for performance could be to open this up to the audience or the public to co-create work.

Manifesto Project: Situationist



The requirement is for one of a random set of “situations” to be presented to participants, suggesting a way for them to react to the current situation they find themselves in.

The aim is to make mundane situations extraordinary. The act of drinking a glass of water is ordinary by itself – but drinking it on a bus makes it extraordinary.

The situations need to be configurable by the tutor beforehand.

The situation presented might be random, or might depend on the participant’s location, or on whether they are with other participants at the time.

Mock-up of the application screen


This application is inspired by an iOS application called Situationist that is no longer available on the App Store. As the name suggests, this app was inspired by the Situationist movement and the more recent revival of its ethos, seen in practices of activism as well as in contemporary theatre, such as the work of the company Wrights and Sites.

The aim of this app is to blur the boundaries between training (in the studio at a predetermined time within a predetermined social order) and life, and thus test and invite the students to consider how training operates: How many hours are you training for? For what are you training? When are you in a training mode? What does it mean to ‘train’?

The development of the text and situations with which the students will be invited to interact will take as its basis the practices of Fluxus, the Situationists and Wrights and Sites, and it will also consider the ethical implications.

The development of this app will also consider the relationship between the studio based sessions and the impromptu training situations that may take place outside the studio space.

Technology Solution

One solution would be a client application installed on participants’ smartphones. This application would sync with a central server at intervals, downloading the current set of situations as well as the locations of other participants. Situations would then be provided to the participants depending on sets of rules defined on the server:

  • How many situations a day are to be given to a participant
  • Locations where one or more of the situations can be delivered
  • Whether one or more other participants need to be in the same location

In addition, a simple admin system would be required to manage the current set of situations and any associated rules.
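The three server-side rules listed above might be modelled as in the sketch below. This is an assumption about how the rules could be represented, not a specification; the class and field names are illustrative.

```python
# Sketch: deciding whether a situation may be delivered to a
# participant right now, implementing the three rules listed above.
from dataclasses import dataclass, field

@dataclass
class Rule:
    max_per_day: int = 3                  # how many situations per day
    allowed_locations: list = field(default_factory=list)  # (lat, lon, radius_m)
    requires_company: bool = False        # must another participant be nearby?

def within(loc, zone, metres_per_degree=111_000):
    """Planar distance check against a (lat, lon, radius_m) zone."""
    lat, lon = loc
    zlat, zlon, radius = zone
    dist = ((lat - zlat) ** 2 + (lon - zlon) ** 2) ** 0.5 * metres_per_degree
    return dist <= radius

def may_deliver(rule, delivered_today, loc, others_nearby):
    """Apply the daily-count, location, and company rules in turn."""
    if delivered_today >= rule.max_per_day:
        return False
    if rule.allowed_locations and not any(within(loc, z) for z in rule.allowed_locations):
        return False
    if rule.requires_company and not others_nearby:
        return False
    return True
```

The server would evaluate these checks at each sync against the participant's reported location and the locations of the other participants.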

Training and Performance Applications

This app can be used with undergraduate students in both skills-based and performance-oriented modules. It can thus be used both to develop students’ skill in observing daily life with a more detached and critical gaze, and as a tool to generate material for performance.

Examples of situations that could be programmed into the system:

  • When physically close to another participant, the system would prompt one participant to say something mundane and the other participant to laugh as if it were the funniest thing they have ever heard
  • Give a participant a task such as revisiting the neighbourhood they grew up in “and see how those places have got on without them”[???ref]
  • Say lines from a text they are currently working on out loud to the nearest person
  • Observe for 10 minutes what is happening in front of you

The system is only as good as the situations being programmed into it. With use it will become clear what situations produce useful results and these can be refined to produce sets of situations to be used with students at various stages of training.

The system will capture which situations happened where and when, and these can be presented for reflection in online forums afterwards.
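The record the server keeps for this reflection output could be as simple as the sketch below; the field names are assumptions for illustration.

```python
# Sketch of the "situation delivered" record kept for the
# reflection web page (field names are assumptions).
from datetime import datetime, timezone

def log_situation_event(log, participant, situation, lat, lon):
    """Append one delivery record to `log`, a plain list of dicts."""
    log.append({
        "participant": participant,
        "situation": situation,
        "lat": lat,
        "lon": lon,
        "when": datetime.now(timezone.utc).isoformat(),
    })
    return log
```

A web page for the forums would then only need to group these records by participant or by situation.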

Students could use the system in self-directed work – setting situations for each other in groups.

Work Breakdown

  • iOS and Android client app to sync with server, track location and show situations
  • Server-side system for client to sync with – this could be based on Amazon Web Services or similar system for ease of development
  • Web based admin tool to edit situations and rules
  • Web based output of when and where situations were presented to each user

Budget Estimate

This application can be mocked up by simply sending text messages to participants with situations to use, although this mock-up would not be able to make use of the participants’ locations at the time. Since this would all use standard text-messaging technology, there should be no other costs for mocking it up.

To develop a fully functional version of this application on both iOS and Android would cost in the region of £10,000 to £15,000.

Further Development

As the system is developed, the need for new types of rules will emerge, and these will need to be added.

© 2018 Leeds Creative Labs
