R3 – Virtual Reality for Data Visualization
VR for the web

In the current rapid expansion of virtual reality in the consumer market, a brief survey of the technologies available for bringing web content to VR devices suggests that we are in the competing-standards era of this emerging technology. Individual developers and software/web companies are each creating their own ways of delivering wraparound images, movies, and interactive experiences to consumers via the web, and at the moment there is no clear winner. Some examples of standards presently available, any of which may be useful in our effort to bring material to consumer devices:

Mozilla's "MozVR" platform can deliver VR imagery to iOS and Android devices via a Cardboard viewer or similar, and also to the Oculus Rift. Operation on iOS and Android is quite simple: load a VR-enabled webpage, tap the "VR" button on the page, and load the phone into the viewer. Oculus support requires installing the latest nightly build of Firefox or Chromium (the open-source browser project that Google Chrome is built on), and in Firefox additionally requires a WebVR enabler add-on. Examples of wraparound images for VR using this standard are available at this panorama viewer website. (For disambiguation: this standard also seems to be referred to as WebVR, described on its site as "an experimental JavaScript API.")

A second platform, this one from Google, is designed for Chrome; I honestly cannot tell whether it differs meaningfully from Mozilla's in technical terms, though it has a different name and a different website. It uses WebGL and Three.js, and can be previewed with a Cardboard or similar device at vr.chromeexperiments.com. It is described with good technical specificity in this 2014 blog post by Brandon Jones, a Google Chrome developer, though I cannot find any more recent discussion than that.
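To give a concrete sense of the "experimental JavaScript API" side of this, a page would typically feature-detect VR support before offering its "VR" button. The sketch below follows the display-enumeration call from the later WebVR drafts (navigator.getVRDisplays()); earlier nightly builds used different names, so treat this as illustrative rather than authoritative.

```javascript
// Hedged sketch: feature-detect WebVR display support before showing a
// "VR" button. The navigator-like object is passed in as a parameter so
// the logic can also be exercised outside a browser.
function findVRDisplays(nav) {
  // Later WebVR drafts exposed navigator.getVRDisplays(), which returns
  // a Promise resolving to an array of available VRDisplay objects.
  if (nav && typeof nav.getVRDisplays === 'function') {
    return nav.getVRDisplays();
  }
  // No WebVR support: resolve to an empty list so callers can fall back
  // to an ordinary 2D rendering of the page.
  return Promise.resolve([]);
}
```

In a real page one would call findVRDisplays(navigator) and, if any display comes back, reveal the "VR" button that enters presentation mode.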
(Also, the latest nightly builds appear to date from mid-2015.) Jones reports test data indicating that latency for the Oculus Rift with WebVR in Chrome was 64 milliseconds, when about 20 milliseconds is apparently the gold standard for making a VR experience feel real. (Contrarily, a 1998 study by Watson et al. finds that 82 milliseconds is the relevant threshold here; Jones does not cite his source, but it is likely to be more recent.)

A third standard, which appears to be under more active public development, is Janus VR. It is designed to turn individual websites into "rooms" in a vast connected network, with "portals" linking them to each other, virtualizing the web-browsing experience in a very particular metaphorical fashion. In addition to displaying regular webpages as rooms, it also supports various 3D web content as well as wraparound VR imagery, via an HTML tag called FireBoxRoom. The standard allows for video and also for shaders, so that surfaces can take advantage of the user's graphics card to process imagery quickly.

Research in data visualization for virtual reality

An interesting ambiguity emerges when searching for academic writing about virtual reality data visualization: for a long time, "virtual reality" referred to any system displaying simulated 3D imagery, including simple things like cubes rotating on flat computer screens! (More recently, the term refers to systems using headsets or the like, which are designed to respond to the user's head movements.)

There is good evidence that the use of virtual reality systems increases "situational awareness" of visualized data, as one would expect. This is borne out by a series of studies showing, among other findings, that the immediacy of the virtual reality environment allows for a greater sense of presence in the data itself and a much deeper ability to explore that data (see Lin 1999, Bayyari 2006, and Laha 2012).
Note that these studies generally refer to volumetric data, for instance showing the extent of geological formations or paleontological findings. They are still relevant for our purposes, though, because at least for the larger datasets we have looked at, such as CREATE Lab's EVA, the data takes on a volumetric form.

My biggest finding in searching for information about data visualization in virtual reality environments, though, is the deafening silence one comes across: after reviewing roughly thirty papers, I did not find any example, in either the academic literature or the broader web, that addressed visualization of datasets other than large, volumetric ones. I found no example of a virtual environment used to tell a small-scale story about one neighborhood or community, for instance, nor really any data exploration that was not fundamentally scientific in motivation. This most likely indicates that more artful storytelling modes have mostly not yet been explored. A wonderful opportunity for us to break new ground!

The nose knows

Finally, an interesting tidbit. VR users frequently complain of nausea, and it is known anecdotally that having a "cockpit" or other stable feature in the VR image can help allay the problem. Of course, many environments, such as a panoramic image, are not especially amenable to any fixed feature in the landscape. An undergraduate working on this research problem came up with a very smart solution: draw a virtual nose right where we usually expect our noses to be. (This acts as a stable object, and certainly one we are well attuned to seeing.) With the virtual nose in place, a 2015 Purdue study found, users took longer to become nauseated while experiencing certain types of virtual environments. Clever! And maybe we should look into incorporating it ourselves as we build our own systems.
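Mechanically, the nose trick works because the object is fixed in head space rather than world space, so its position in the user's view never changes as the head turns. A minimal sketch of that distinction, reduced to yaw only, with hypothetical function and variable names:

```javascript
// Where does an object appear in the view, given the user's head yaw?
// World-fixed scenery shifts across the view as the head rotates; a
// head-fixed object (like the virtual nose) stays put.
function apparentYaw(objectYaw, headYaw, isHeadFixed) {
  if (isHeadFixed) {
    return objectYaw; // rendered relative to the head: rock-stable
  }
  return objectYaw - headYaw; // world-fixed: moves opposite the head turn
}
```

However much headYaw changes from frame to frame, a head-fixed object's apparent position stays constant, which is exactly the kind of stable visual anchor that seems to delay nausea.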
References:

Bayyari, Ahmed, and M. Eduard Tudoreanu. "The impact of immersive virtual reality displays on the understanding of data visualization." Proc. of the ACM Symposium on Virtual Reality Software and Technology (VRST 2006), pp. 368–371. http://dx.doi.org/10.1145/1180495.1180570

Gould, Duncan K. "Stimulating a beneficial human response by using visualization of medical scan data to achieve psychoneuroimmunological virtual reality." US patent 5546943 A, 1996. Google Patents: https://www.google.com/patents/US5546943

Laha, Bireswar, et al. "Effects of Immersion on Visual Analysis of Volume Data." IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 4 (2012), pp. 597–606. http://dx.doi.org/10.1109/TVCG.2012.42

Lin, Ching-Rong. "Applications of Virtual Reality in the Geosciences." Order No. 9917207, University of Houston, 1998. Ann Arbor: ProQuest. http://search.proquest.com/docview/304436371

Watson, Benjamin, et al. "Effects of variation in system responsiveness on user performance in virtual environments." Human Factors, vol. 40, no. 3 (1998), p. 403. http://dx.doi.org/10.1518/001872098779591287