While Yoshi handled the programming, my main job was the research and the presentation. I researched our concept and where it came from, what projects are similar to ours, and what has already been done out in the world. We also looked into theories based on studies of real-world reaction times of people in general (drivers) and applied them to our concept.
A product very similar to what we wanted to achieve is the EyeToy, which I found quite interesting. It has a similar setup: a camera hooked up to the PS2 tracks the user's motion and sends that information to the console, and games made for the EyeToy let users interact with objects on the screen. The picture below shows a girl using the EyeToy (the USB camera) to play a game displayed on the screen (TV). This is very similar to our project. We use colour tracking to detect someone's movement: the player wears a glove (our colour tracker) and uses it to hit certain objects on the screen.
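To give a rough idea of how this kind of colour tracking can work (this is just an illustrative sketch, not our actual project code, which Yoshi is writing), here is a minimal example assuming Python with the OpenCV library (cv2), a webcam, and a placeholder HSV colour range standing in for whatever glove colour is being tracked:

# Illustrative colour-tracking sketch (not our project code).
# Assumes OpenCV (cv2), a webcam, and a placeholder HSV range for the glove.
import cv2
import numpy as np

LOWER = np.array([100, 150, 50])   # example lower HSV bound (blue-ish glove)
UPPER = np.array([130, 255, 255])  # example upper HSV bound

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)  # keep only glove-coloured pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        glove = max(contours, key=cv2.contourArea)  # largest blob = the glove
        x, y, w, h = cv2.boundingRect(glove)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # (x + w // 2, y + h // 2) is the glove's centre; a game would test
        # this point against on-screen objects to register a "hit".
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

The basic idea is the same as what our glove does: filter the camera image down to one colour, find where that colour is, and use its position to interact with objects in the game.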

For the presentation, I included everything that was required: concept research, theories, programming logic, pseudocode, a flow chart, schematics, etc. I worked with Yoshi on the slides that covered the coding, making sure that what I put on them matched what Yoshi was doing and what our project's programming actually consisted of.
We went to class prepared to show what we had. We didn't achieve everything we wanted to, but we hope to update it for our real presentation next week. This week we gave Jinsil our finished presentation and showed Greg our work-in-progress programming. We got good feedback and hope to improve our Sketch Two for next week.