
So far when designing the HUB application, little consideration had been paid to the camera functionality of the iPhone platform itself, and to the role it can play not only in submitting user information to the HUB system, but in showing information to the user. I've returned to that concept of the camera as an eye, and considered its role in the process of uploading safety hazards. Users in HUB can now upload a photo to accompany their comment, giving the Department of Transportation vital visual data on what the issue was and where exactly it was located, should the user have misplaced the point of the issue. A photo also gives that user's comment a bit more authority over comments posted without one: it shows the extra effort of a user who wants change in the bike safety network, and gives other users viewing the safety hazard posting a bit more insight.
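As a rough sketch of what that photo-plus-comment submission could look like on the iPhone side, the snippet below packages a hazard posting into a multipart upload. This is a minimal sketch, assuming a hypothetical HazardPosting model, endpoint URL, and form field names; HUB's actual submission API isn't specified here.

```swift
import UIKit
import CoreLocation

// Hypothetical model for a hazard posting with an optional photo attachment.
struct HazardPosting {
    let comment: String
    let coordinate: CLLocationCoordinate2D
    let photo: UIImage?      // nil when the user submits text only
}

// Minimal sketch of packaging a posting for upload. The endpoint URL and
// field names below are assumptions, not HUB's documented API.
func makeUploadRequest(for posting: HazardPosting) -> URLRequest? {
    guard let url = URL(string: "https://hub.example.com/api/hazards") else { return nil }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"

    let boundary = "Boundary-\(UUID().uuidString)"
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")

    var body = Data()
    func appendField(_ name: String, _ value: String) {
        body.append("--\(boundary)\r\n".data(using: .utf8)!)
        body.append("Content-Disposition: form-data; name=\"\(name)\"\r\n\r\n".data(using: .utf8)!)
        body.append("\(value)\r\n".data(using: .utf8)!)
    }
    appendField("comment", posting.comment)
    appendField("lat", String(posting.coordinate.latitude))
    appendField("lon", String(posting.coordinate.longitude))

    // Attach the JPEG only when the user chose to include a photo.
    if let photo = posting.photo, let jpeg = photo.jpegData(compressionQuality: 0.8) {
        body.append("--\(boundary)\r\n".data(using: .utf8)!)
        body.append("Content-Disposition: form-data; name=\"photo\"; filename=\"hazard.jpg\"\r\n".data(using: .utf8)!)
        body.append("Content-Type: image/jpeg\r\n\r\n".data(using: .utf8)!)
        body.append(jpeg)
        body.append("\r\n".data(using: .utf8)!)
    }
    body.append("--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body
    return request
}
```

Keeping the photo optional mirrors the design above: comments without photos still go through, they just carry less visual weight.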
Photos, however, aren't the only form of visual data at the user's disposal. Augmented reality and recorded video are a great way to place posted user info in the physical space around the user. More active users can use their smartphone's camera to see user content at street level, rather than panning around a two-dimensional map. It puts the user postings in context and anchors the data at the exact place they are standing; in a way, the user is literally putting themselves in the HUB network. The view can be toggled between safety postings and bike lanes at the bottom right of the screen. Postings can be touched to open them, showing their safety ratings and their distance from the user's current location. Bike lane postings also have an option to track a route to that lane, should the user want to ride on a road with lanes installed.
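As a rough sketch of how that camera view might decide what to draw, the snippet below filters postings by the current toggle (safety postings vs. bike lanes) and sorts them by distance from the user via CoreLocation. The Posting type, its rating scale, and the function names are assumptions for illustration; only CLLocation.distance(from:) is standard API.

```swift
import CoreLocation

// Hypothetical record for a posting shown in the camera/AR view.
struct Posting {
    enum Kind { case safetyHazard, bikeLane }
    let kind: Kind
    let title: String
    let safetyRating: Int          // assumed scale, e.g. 1 (worst) to 5 (best)
    let coordinate: CLLocationCoordinate2D
}

// Distance in meters from the user's current position to a posting.
// CLLocation.distance(from:) returns the great-circle distance.
func distance(from user: CLLocation, to posting: Posting) -> CLLocationDistance {
    let target = CLLocation(latitude: posting.coordinate.latitude,
                            longitude: posting.coordinate.longitude)
    return user.distance(from: target)
}

// Apply the bottom-right toggle (safety postings vs. bike lanes) and sort
// by proximity so the nearest postings are drawn first in the overlay.
func visiblePostings(_ all: [Posting], showing kind: Posting.Kind,
                     near user: CLLocation) -> [Posting] {
    all.filter { $0.kind == kind }
       .sorted { distance(from: user, to: $0) < distance(from: user, to: $1) }
}
```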