Mon, 29 Sep 2014 15:56:19 GMT
Another Tumblr repost. More thoughts on things multi-contributor VR could be used for.
Dynamic news reconstruction
You have some users managing a stand-alone world with a one-way viewing “plane” for observers to watch what’s going on and submit contributions to the editors as a newsworthy event is happening.
Let’s use the first Ferguson protests as an example.
The room is black and of indeterminate size. There are a series of glassy squares in a circle overhead, as might be arranged in a hospital surgical gallery. There are multiple channel-rooms viewing from those virtual windows, but no rooms physically behind them in the main reconstruction room. They are one-way portals for imagery.
The room has no distinct light sources yet and the avatars of those present are rendered in flat shadowless colour.
First someone pulls the most recent satellite image of the area they can get, probably from Google Maps if not a separate public repository. Someone quickly maps that to a geographic height map to get the lay of the land. The observers get an overview showing items being event-tagged into it, but individual editors are already subjectively zooming back and forth through the document timeline, scaling and applying incoming photos and videos to the physical model.
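The height-map step could be sketched very simply: each elevation sample becomes a vertex, and the satellite image is draped over the grid via UV coordinates. This is an illustrative toy (the grid size, heights and function name are all made up, not any particular engine's API):

```python
def heightmap_to_mesh(heights, cell_size=1.0):
    """heights: 2D list of elevation samples (metres).
    Returns (vertices, uvs) for a renderer to texture with the photo."""
    rows, cols = len(heights), len(heights[0])
    vertices, uvs = [], []
    for r in range(rows):
        for c in range(cols):
            # x/z from the grid position, y from the elevation sample.
            vertices.append((c * cell_size, heights[r][c], r * cell_size))
            # UVs in [0, 1] stretch the full satellite image across the grid.
            uvs.append((c / (cols - 1), r / (rows - 1)))
    return vertices, uvs

verts, uvs = heightmap_to_mesh([[0, 1], [2, 3]])
```

A real pipeline would also emit triangle indices and handle holes in the elevation data, but the principle is just this: geometry from the height map, texture from the photo.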
Nearby buildings are crudely extruded up out of the ground and detailed with projections taken from the hundreds of photos and snippets of footage. Locations in 3D space are calculated from converging pieces of footage; offsets between individual time-stamps are noted and corrected for. What starts out as a crude mishmash quickly becomes a four-dimensional scrapbook of stills, video and other data.
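The time-stamp correction above boils down to estimating one clip's clock offset against another from events visible in both (a flash, a shout, a door slamming). A minimal hedged sketch, with illustrative names and numbers:

```python
from statistics import median

def clock_offset(events_a, events_b):
    """Estimate clip B's clock offset relative to clip A.

    events_a, events_b: timestamps (seconds) of the same real-world
    events as recorded by each clip's own clock, in matching order.
    The median difference is robust to a few badly matched pairs.
    """
    return median(b - a for a, b in zip(events_a, events_b))

# Clip B's camera clock runs ~2.5 s ahead of clip A's.
offset = clock_offset([10.0, 42.3, 61.1], [12.5, 44.8, 63.7])
# Subtracting the offset maps clip B's timestamps onto clip A's timeline.
```

Chain enough of these pairwise offsets together and every piece of footage lands on one shared event timeline.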
An extra wave of specialist editors join the reconstruction as word of the story spreads, looking at faces and outfits and noting who is where at each stage of the event. Police vehicles are identified, their VR renders tagged with information on the makes, models and public histories. Multiple footage angles on the rifle-wielding police allow a targeting overlay to be added, calculating the line of sight down each gun barrel (in the form of an extrapolated probability cone). The arcs of tear-gas canisters are calculated back, using the in-world physics engine, to their probable launch points and the officers at those locations.
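The canister back-calculation is ordinary ballistics: two sighted positions on the arc (times taken from the synced footage timeline) pin down the parabola, which can then be run backwards to the launch height. A drag-free sketch with made-up sample figures:

```python
import math

G = 9.81  # m/s^2

def launch_point(p1, p2, launch_height=1.5):
    """p1, p2: (t, x, y) observations of the projectile in flight.
    Returns the horizontal position where the arc passed through
    launch_height on the way up (the probable launch point)."""
    (t1, x1, y1), (t2, x2, y2) = p1, p2
    vx = (x2 - x1) / (t2 - t1)
    # Fit y = y0 + vy*t - 0.5*G*t^2 through both observations:
    vy = ((y2 - y1) + 0.5 * G * (t2**2 - t1**2)) / (t2 - t1)
    y0 = y1 - vy * t1 + 0.5 * G * t1**2
    x0 = x1 - vx * t1
    # Solve y(t) = launch_height; the earlier root is the launch moment.
    a, b, c = -0.5 * G, vy, y0 - launch_height
    t_launch = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return x0 + vx * t_launch

# Two sightings of a canister seen 1 s and 2 s into the shared timeline.
x_launch = launch_point((1.0, 10.0, 16.595), (2.0, 20.0, 21.88))
```

Real canisters have significant drag, so the result is a region rather than a point, but even the naive parabola narrows the search to a stretch of the police line.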
Someone tries to add information about guerrilla warfare tactics to the map, but it’s spotted and wiped off quickly as irrelevant and dangerous. Someone petulantly starts a #conspiracy channel for viewers to add optionally.
As the confrontation moves, the piecemeal map grows in different directions. Some people dedicate themselves to sitting on the livestreams, matching each one’s broadcast location and viewing angle as best they can as it moves. Others follow them, popping up crude representations of the buildings to fit and making minor positional adjustments to this smear of imagery through virtual space-time. Others follow using events in the most uninterrupted footage as a solid backbone to synchronise their own finds to.
Partly a multimedia record unlike any other, partly a vicarious experience of the event itself as it happens, in a depth never before achieved. With time the world-document is refined more and more as new pieces of record are submitted and patched in. False evidence shows up easily when it can’t be matched with the events in dozens of other synchronised and overlapping pieces of record.
If it seems extreme, think of this: making a cheap VR headset from a phone and some cardboard is now readily possible. You can use these to watch 3D videos.
Some day very soon someone will be livestreaming 3D video from inside an assaulted protest like at Ferguson, or inside the next Gaza bombardment, or at the crash site of a freshly downed 747. It will no longer merely be video from the perspective of someone on the scene, it will be their perspective absolute. Your head will be jerked to one side as theirs turns from the splinters of flying concrete, your eyes will fall to the ground as that citizen coughs in tear-gas, you will stagger jelly-legged with them in a field full of fresh corpses. You will be in their shoes, riding helpless in their body through these events.
The 2-dimensional, individually experienced, static internet is not capable of doing that justice.
What might you the viewer get from this? Well, if you were in one of the viewing channels conceptually overlapping each other in the gallery, it would probably depend on the channel topic. One might have a channel moderator, zooming the viewing angle back and forth and looking for human rights abuses. Another might be a general chat channel filled with the same disgusted and shocked reactions found on any other social media, its viewing windows scaled huge to accommodate the heaving mass of avatars. Another might be doomsday preppers, engaging in fantasies of perceived SS tactics or false-flag rationales behind anonymised white-noise personas. More channels might be proactive, hunting down new sources of information and filtering it for submission to the document. Some humanist channels might just be filled with the others who need somewhere to break down and weep at the sight of it all.
In short, this is another possible way an interactive and unrestrained virtual environment could be used in a way no other medium currently allows, to achieve results faster and in more detail than before. And it would stand, for the future, as an immersive analytical document instead of a collection of disparate images and messages.
Mirrored from The blog-hub for Peter "Sci" Turpin.