Featured

  • VR/AR

    WebAR: AR interaction that a web front end can build

    Contents: 1. Project experience; 2. Technical implementation; 3. Compatibility; 4. Problems encountered; 5. Closing remarks

    1. Project experience

    Until now, AR has only been something you could experience inside a dedicated app, for example Pokémon GO or the QQ AR torch-relay campaign. While building up our technical reserves, our team found that on Android devices WeChat and Mobile QQ support getUserMedia(), so a web page can open the camera and pass the data stream to a video element via createObjectURL to create a live-video effect. According to recent statistics, Android holds 86.2% of the global smartphone OS market (http://digi.tech.qq.com/a/20160822/034526.htm?t=1471877263208). In other words, we can build web-based AR interaction on top of WeChat and Mobile QQ. Driven by a real project requirement, we developed a WebAR H5 page: players who open it can enjoy AR without installing any extra app, while browsers that cannot provide live video fall back to a 3D panorama version. Following the on-page prompts completes the WebAR mini-game and jumps to the landing page.

    2. Technical implementation

    2.1 WebRTC, the foundation of WebAR

    Whether it is app-based AR or WebAR, the most basic feature to implement is live video. WebRTC (Web Real-Time Communication) is an API that lets web browsers hold real-time voice or video conversations. getUserMedia() is the WebRTC API that lets a web page open the camera; the camera's data stream is presented on the page through a <video> element. That gives us a very useful property: we can overlay any content and interaction we need on top of the video, and so create the WebAR effect. Key code: (http://blog.csdn.net/journey191/article/details/40744015)
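    As a rough sketch of the approach just described (an illustration, not the project's original code): request the camera with getUserMedia() and render the stream in a <video> element. The #camera element id is an assumed placeholder; current browsers attach the stream via srcObject, while the createObjectURL(stream) route mentioned above is the older, now-deprecated path.

```typescript
// Minimal sketch: open the camera and show the live stream in a <video> element.
// Assumes the page contains <video id="camera" autoplay playsinline></video>.
const video = document.getElementById('camera') as HTMLVideoElement;

navigator.mediaDevices
  .getUserMedia({ video: true, audio: false })   // ask for video only
  .then((stream) => {
    // Modern path: attach the MediaStream directly.
    video.srcObject = stream;
    // Legacy path from the era of this article:
    // video.src = window.URL.createObjectURL(stream);
    return video.play();
  })
  .catch((err) => {
    // Unsupported browser or the user declined camera access:
    // fall back to the 3D panorama version of the page.
    console.warn('getUserMedia failed, falling back to panorama:', err);
  });
```

    From here, the three.js canvas that draws the 3D model can simply be positioned above the <video> element, which is the overlay effect described above.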
    2.2 3D models: WebGL, three.js, 3ds Max

    One of the coolest parts of this WebAR project is its 3D model display. To show a 3D model on a web page, the model first has to be pre-processed in 3ds Max (a process full of bugs and difficulties), exported to a JS file, and then built and animated on the page with three.js (also full of bugs and difficulties). This part was mainly handled by our colleague 四姑娘; look forward to her more detailed, and more painful, write-up later.

    2.3 3D panorama: three.js, spherical panorama

    There are many ways to build a 3D panorama: CSS3, Flash, Krpano and so on. Since the 3D model animation was already built on the page with three.js, we decided to build the 3D panorama with three.js as well.

    3. Compatibility

    3.1 getUserMedia()

    Because of Apple's security policy, no browser on iOS supports getUserMedia(). Our final data showed that on Android, 99.45% of devices support getUserMedia() in WeChat and 98.05% support it in Mobile QQ. Among the devices we had tested earlier, the stock browser and QQ Browser also support getUserMedia() to varying degrees. Before the end of 2015, that is, before Chrome 47, Chrome allowed HTTP pages to open the camera; for security reasons, Chrome 47 and later only allow HTTPS pages to do so. (http://caniuse.mojijs.com/Home/Html/item/key/stream/index.html)

    3.2 3D models & 3D panorama

    WebGL is a JavaScript API for rendering interactive 3D and 2D graphics in any compatible web browser without plug-ins, and it is widely supported in modern browsers. Compatibility for 3D models in mobile browsers is good: among the devices we tested, 92% supported in-browser 3D modeling and animation. (http://caniuse.mojijs.com/Home/Html/item/key/webgl/index.html)

    4. Problems encountered

    4.1 getUserMedia()

    4.1.1 Front vs. rear camera

    Problem: when getUserMedia() opens the camera, it opens the front camera by default, but for a WebAR effect we obviously need the rear camera.

    Solution: when the device source IDs are enumerated, the front camera is listed before the rear camera, so without explicit selection the first source ID (the front camera's) is used. A little pre-processing fixes this. Key code:

    4.1.2 Full-screen camera

    Problem: the camera feed would not fill the screen perfectly; there was always blank space at the top and bottom. We tried several native settings to size it, hoping to cover the full screen, without success.

    Solution: with CSS we gave the element a red background and sized it to fill the screen; the <video> element itself rendered as expected, but the captured content still did not stretch sensibly to full screen. Think of it like an ordinary video with a fixed aspect ratio: when the browser's aspect ratio differs and the whole frame must stay visible, you get blank bars at the top/bottom or left/right. Testing showed that if the <video> element is given no explicit size, it defaults to filling the screen and overflowing, so wrapping it in an outer div sized to the browser viewport with overflow:hidden simulates a full-screen effect. Key code:

    4.1.3 Taps on the model cannot be detected

    Problem: the page was originally designed so that the user taps the 3D model to interact, but we found there was no way to hit-test the model on its own, so the interaction could not proceed.

    Solution: we changed the interaction to an aiming check based on coordinates. An aiming zone sits at the center of the page; the user moves the phone until the 3D model is inside the zone, which counts as success and advances to the next step, so no tap is needed. The idea: both the 3D model and the camera have their own 3D coordinates; get both with new THREE.Vector3, use a.angleTo(b) to get the angle between them, and if the angle is below a set threshold the model counts as aimed at. Because the original requirement was a tip telling the user to turn left or right, we used new THREE.Vector2 to get both positions projected onto the Y plane and a.angle() - b.angle() to compute the angle between them in that plane, which tells whether the camera is to the model's left or right. The new requirement adds an up/down check: new THREE.Vector2 for the X-plane positions, again a.angle() - b.angle() for the X-plane angle. When both the Y-plane and X-plane angles are within the thresholds, the model counts as aimed at. Key code:

    4.2 3D panorama

    4.2.1 Inconsistent initial orientation between iOS and Android

    Problem: on iOS, whatever compass direction you face when opening the 3D panorama, it always starts looking in one fixed direction; on Android, it starts looking in different directions depending on which way you face. To lower the interaction difficulty and keep the 3D model on the "stage" (where the backdrop looks best), this page needs the panorama to face one fixed direction regardless of the user's heading.

    Solution: when the page has loaded, initialize the camera and take the vector the camera looks along (which does not change with viewing angle), then use lookAt(target) to rotate the 3D panorama toward that position, so that every time the page opens it faces a fixed direction; finally rotate on each axis for the last adjustments. Key code:

    4.2.2 Initial positioning

    Problem: the phone can be held at all sorts of angles when the user enters the page; they might scan the QR code with the phone upright and then lay it flat while waiting for loading to finish. If the page's initial orientation differs too much from the orientation it is finally viewed at, the rendered panorama and 3D model can end up badly offset.

    Solution: capture the camera's look vector when the page is entered, capture it again when loading completes, and if they have drifted apart, re-initialize the position and angles of the panorama, the 3D model, and so on. Key code:

    4.3 Ruling out unsupported cases

    4.3.1 getUserMedia(), three.js, gyroscope

    1) Check whether getUserMedia() is supported; 2) use Detector.js to check whether WebGL is supported; 3) handle errors thrown when using three.js; 4) handle devices without a gyroscope.

    4.3.2 Unknown error on iOS 8

    Problem: a remote colleague with an iPhone 5S found that loading reached 100% but the loading screen never disappeared and the page hung.

    Solution: the page was set up so that, to avoid showing a blank screen if the 3D panorama and model had not finished rendering when the loading screen was hidden, the loading screen is only hidden once modeling and rendering complete. We borrowed an iPhone 5S from a colleague nearby and it entered the 3D panorama AR screen fine, so it was not a device problem. The remote colleague's WeChat was up to date but the OS was iOS 8, while the working iPhone 5S ran iOS 9, so we suspected the OS version. With the device not at hand we could not rule out possibilities one by one; iOS 8's share was only 2.8% and iOS devices were less than 30% of this game's audience, so we decided to give up on the page experience for these users and send them straight to the landing page. Key code:

    5. Closing remarks

    Researching and applying new technology means running into pitfalls with no precedent, digging through API docs, and solving problems by roundabout means (sometimes front-end work really does feel like a trade that needs a bit of cunning). Many thanks to the colleagues at the game's official WeChat account for their support in getting the page launched and pushed to players; watching the numbers climb made the effort feel worthwhile. The final page data also showed that some users' devices supported opening the camera but they declined to allow it, probably out of security concerns. How to protect user privacy and earn user trust is something we need to think about in follow-up work. All in all, this was a fun and rewarding round of research and practice with a new technology, and the learning process itself was a joy.

  • VR/AR

    《子弹之殇》 (Bullet Sorrow): let the bullet fly a little longer, a shooter with a complete story

    Shooters make up half of today's VR games, and after seeing so many of them a certain fatigue sets in. Against that backdrop we nevertheless found an FPS that is far from ordinary: 《子弹之殇》 (Bullet Sorrow).

    Our reasons for reviewing it are simple. We were first drawn in by the refined, detailed visuals it showed off, and after actually playing it we found it is one of the few domestic FPS titles with a complete storyline that also performs excellently in every other respect. Across the whole playthrough, the game is not only beautifully rendered with great gun feel, but also rich in content, with a wide range of weapons, each with its own motion-control handling and recoil.

    Highlights

    The game has many highlights. The polished visuals and audio need no further comment; anyone who has watched the videos or played the demo will have felt that already. So what else is there? Let's go through it point by point.

    Gun feel

    Gun feel is the heart of any shooter, and while Bullet Sorrow's is not quite best in class, everything that should be there is there. Each weapon has its own recoil, modeled on its real counterpart, and the bullets themselves have solid physics. In "bullet time" especially, you can clearly see a bullet's flight path and the instant it hits an enemy, which feels very special. The controller vibrates as you fire; the blood mist on hits is a little exaggerated, but the clear hit-stun and well-judged recovery time are satisfying. There is a headshot system: torso, limb and head hits deal different damage, and at the end of a round the game tallies your headshot rate and scores the round by accuracy. (Screenshot: the results screen) This feeds into replayability: a game with only three levels becomes much more replayable, since players can hone their skill, try to land every shot, and chase a higher ranking.

    Realism and VR fit

    To get as close to reality as possible, the developers built a huge map for an immersive experience. Beyond ballistics and recoil, guns even eject shell casings when fired, and casings landing on a metal crate clatter audibly, which is very convincing. There is also a reload design modeled on real guns. Unlike other shooters, though, once a magazine is empty the game swaps in a fresh one automatically, which is friendly to newcomers and eases the learning curve. You can pick up an energy shield as a defensive item; it is not especially realistic, since your shots pass through it while enemy fire is blocked, and it does not protect you forever: it gradually cracks under enemy fire and finally shatters.

    Bullet time

    The most distinctive and eye-catching feature is still the time-slowing system called "bullet time". We have seen such systems before, but a design that turns it into a usable tactic is still quite special. When bullet time is active, the world around you seems to play in slow motion: your bullets trace long arcs toward the enemy, and you can clearly see incoming rounds and dodge them. We even worked out a rather shameless tactic: hide in a corner, wait for bullet time to come off cooldown, then pop out and shoot, which minimizes the damage you take.

    Boss fights

    Finally, a special mention for the boss fights, which use the weak-point-break mechanic familiar from RPGs. Bosses are huge and slow; you shoot their weak points while they attack, and only once every weak point is destroyed does the boss take damage and finally fall. (Screenshot: breaking a weak point)

    Flaws and minor bugs

    Of course, nothing is perfect, and this game is not beyond criticism either. It has some small flaws and bugs, which we list here in the hope the developers will improve them.

    Flickering on the loading screen

    The loading screen at the very start stutters badly and flickers constantly. At first we thought it was a one-off, but it happened on every loading screen at game start, confirming the problem. The flicker is very hard on the eyes; fortunately it only appears at the start, never once you are in the game, so it does not affect the experience much.

    Item design

    As mentioned, the weapon roster is very rich: besides common machine guns and pistols there are even rocket launchers, laser cannons and so on. But we do not recommend trying them lightly before you understand how they work, because in pursuit of realistic motion controls the developers made the reload procedures extremely lifelike, and players who have never handled similar guns in real life or in other games suffer for it. (Screenshot: realistically simulated shell casings) We suggest the developers add a dedicated firearms training level that teaches players how to use these weapons and learn their characteristics, perhaps with targets designed as mini-games to add some fun.

    A gameplay bug

    During our playthrough, we once wanted to take cover, so right after shooting the "next level" icon we quickly teleported behind a crate. It turned out the first wave of that section spawned behind us; after we quickly killed the first enemy, no further enemies ever spawned, and no amount of moving or shooting helped, so we had to restart the game. This bug only occurred once for us, so it is fairly harmless; strange bugs crop up in every game. We raise it in the hope that the game can take another step forward, and we look forward to an even more polished release.

    Verdict

    Bullet Sorrow sells for 88 RMB on Steam, which is not cheap, but it is a rare conscientious effort among domestic games: polished visuals, varied gameplay, and some RPG elements mixed in. Fans of shooters should consider picking it up. (Screenshot: varied gameplay) Score: 7.6, from the China International VR Game Review Alliance. This review was written by 83830 review editor "墨焰涟"; reposts must retain the source and author.


GAD Translations

VR/AR: Unreal Engine 4, an introduction to VR development
  • [Translated] Unity's UI system in VR

    Translator: 赵菁菁 (轩语轩缘); Reviewer: 李笑达 (DDBC4747)

    Unity's UI system makes it easy for developers to create user interfaces, but can we use it for VR applications? Fortunately, the answer is yes. At the end of this post you will find a link to a sample project containing everything you need to use Unity's UI system in VR, including scripts and assets for converting existing Unity UI into VR-ready UI.

    VR applications commonly use two input schemes, and the project's components support both. The first is a gaze pointer: the user steers the pointer with their head and interacts with objects or UI elements much as with a mouse pointer, while the "click" can come from a gamepad button or a tap on the Gear VR touchpad. The second is a pointer similar to a regular mouse pointer that moves across a world-space plane, such as a UI panel floating in the air or a simulated computer screen inside the virtual world.

    Let's start with a brief look at how the Unity UI system works.

    Unity's GUI system

    Unity's UI system is built from a few important pieces:
    – EventSystem
    – InputModules
    – Raycasters
    – Graphic components: buttons, toggles, sliders, and so on.

    The EventSystem is the core through which all event flow runs. It works closely with several other components, including the InputModule, which is the main source of events handled by the EventSystem; only one InputModule is active in a scene at a time. The built-in mouse and touch input modules handle the state of their pointing source (the mouse or touches) and are responsible for detecting intersections between these pointer events and GUI components. The actual detection is implemented in raycaster classes such as GraphicRaycaster. Each raycaster is responsible for its own set of components: when an InputModule processes pointer movement or a touch, it polls every raycaster it knows about, and each raycaster checks whether the event hits any of its components. Unity's UI system has two built-in raycasters: GraphicRaycaster (for Canvases) and PhysicsRaycaster (for physics objects).

    In a mouse- or touch-driven application, the user touches or clicks points on the screen that correspond to points in the application's viewport. From the camera's point of view, a point in the viewport corresponds to a ray in space. Because the user may have intended to click any object along that ray, the InputModule is responsible for choosing the closest result from all the ray intersections found by the various raycasters in the scene.

    So why doesn't this work directly in VR? The short answer is that in VR there is no screen, and therefore no visible surface for a mouse to move across. One way to provide GUI control in VR is to create a virtual screen in world space for the mouse pointer to traverse; in this scheme head movement does not control the pointer, and the pointer moves across the virtual screen according to mouse movement. Note that this is different from Unity's world-space UI, where clicks and touches still originate from the camera that shows the user the image they are clicking on, even though the UI itself is in world space.

    Another common tool for interacting with VR applications is a gaze pointer, which always sits in the user's field of view and is controlled by head movement. A gaze pointer also works as a raycast, but the ray originates from between your eyes, not from the camera the UI system expects. If you have a tracked input device, the ray might even come from a pointer held in your hand. Unlike mouse pointing and touches, these pointers cannot be described as rays originating from the camera, while Unity's UI system is largely built around screen positions.

    A silver lining

    Fortunately, it is not too hard to modify the UI system so that the event system works with rays in world space rather than with screen positions tied to a camera. In the future Unity's UI system may work with rays at a deeper level, but for now our approach is to use rays in the raycasting code and then convert back to screen positions for compatibility with the rest of Unity's UI system.

    If you open the sample project linked at the end of this post, you will find that we have added several classes derived from the built-in Unity UI classes, among them OVRInputModule, OVRRaycaster, and OVRPhysicsRaycaster. We will examine the code in each of them later; before going further, now is a good time to open the sample project.

    Running the sample project

    The project at the end of this post was built with Unity 5.2, so we recommend opening it with that version, though all the code also runs on Unity 4.6 and Unity 5.1. You will also need the latest version of the Oculus Unity Utilities, which can be downloaded here. Once downloaded, import the package into the project as usual.

    In the Scenes directory you will find two scenes, Pointers and VRPointers. Pointers uses normal Unity UI canvases and an ordinary camera; VRPointers is the same scene, but with an OVRCameraRig and the setup needed for the UI to work in VR. Feel free to try these scenes before continuing, but remember to turn the "Virtual Reality Supported" option off or on respectively when running them; you will find it in the Player settings.

    Now let's walk through how to go from the non-VR version to the VR version using OVRInputModule, OVRRaycaster, and OVRPhysicsRaycaster (plus a few other helper classes). Once we have been through that process, we will take a deeper look at how these classes work.

    Step-by-step UI conversion

    Open the Pointers scene and press Play to run it in the editor. Note that it behaves like a regular non-VR application: you move the mouse around the screen and use it to drag sliders and click checkboxes. Now let's see how to convert this scene to VR. Make sure you exit Play mode before continuing.

    Step 1: Replace the camera with an OVRCameraRig

    Delete the camera from the scene and replace it with the OVRCameraRig prefab from the OVR->Prefabs directory. (If you do not see OVRCameraRig listed in your Project view, make sure you have imported the Oculus integration package.) You can place it where the camera used to be, or anywhere else that gives a good viewpoint on the UI. If you are using Unity 5.1 or later, at this point you should also make sure "Virtual Reality Supported" is enabled in the standalone Player settings.

    Step 2: Change the InputModule

    Select the EventSystem in the Hierarchy view. In the Inspector, note that it has a StandaloneInputModule component; this is the input module for normal mouse input. Remove this component (right-click and choose Remove Component) and add the new OVRInputModule from the Assets->Scripts directory. OVRInputModule handles ray-based pointing, so we need to set its Ray Transform property. Do this by dragging the CenterEyeAnchor from the OVRCameraRig onto the slot; this means you will point with your head. To enable gamepad support, you should also add the OVRGamepadController component to the EventSystem object.

    Step 3: Add a gaze pointer

    We want a visual pointer in the world that moves with your gaze. Find the GazePointerRing prefab in the Assets->Prefabs directory and drop it into the scene. The OVRInputModule will find it automatically and move it around your view as you look about. Note that this prefab carries some extra scripts for particle effects; all of that is optional. The only part required for it to work with OVRInputModule is the OVRGazePointer component. Drag the OVRCameraRig object onto the CameraRig slot so that the OVRGazePointer component knows about the camera rig.

    Step 4: Set up the canvases

    In VR, any world-space Canvas object can be made to work with a few changes. This scene has three canvases, so you will need to repeat this step for each of them. First, find and select the JointsCanvas object under the Computer object.

    4a: The first thing you will notice is that the Canvas component no longer has an Event Camera reference, because we deleted that camera. Add a reference to one of the cameras in the OVRCameraRig instead: in Unity 4.6 you can choose the LeftEyeAnchor or RightEyeAnchor camera; in 5.1 and later the only choice is CenterEyeAnchor.

    4b: In the Inspector you will find a GraphicRaycaster component on the canvas, which detects when your mouse intersects GUI components. Remove it and replace it with the OVRRaycaster component (in the Scripts directory), which does the same job but works with rays instead of a mouse position.

    4c: On the new OVRRaycaster component, change the Blocking
Objects drop-down to All. This ensures our gaze can be blocked by objects in the scene such as the lever.

    Gaze pointing at UI is ready!

    At this point you should be able to run the scene and interact with GUI elements using your gaze and the space bar. You can change the gaze "click" key from space to any other key in the OVRInputModule's Inspector panel, and you can also configure a gamepad button to act as the gaze "click". Remember: if you have only completed step 4 for JointsCanvas, then for now you will only be able to gaze-click on that canvas (the pink one with the vertical sliders).

    Step 5: Interact with physics objects

    Add the OVRPhysicsRaycaster component to the OVRCameraRig. This new component looks very similar to Unity's built-in PhysicsRaycaster. In the Inspector you will notice it has an Event Mask property; this filter specifies which objects in the scene the raycaster will detect. Set it to the "Gazable" layer; the scene is already set up so that all interactive components are on that layer. Run the scene again and try gaze-clicking the lever in the middle of the scene.

    Step 6: World-space mouse pointer

    Now let's add a world-space pointer to the scene. This pointer acts as if it were a mouse on a virtual monitor, and it only becomes active when your gaze pointer is on the canvas. It is a nice way to provide a familiar input device in VR. Follow these steps for any canvas you want to add a mouse pointer to; for now, select JointsCanvas.

    6a: Find the CanvasPointer prefab and instantiate it as a child of the canvas. There are no fancy scripts on this object; it exists purely as a visual representation of the pointer, and you can swap it for any 2D or 3D pointer representation you like.

    6b: Drag the newly added pointer onto the Pointer reference slot of the canvas's OVRRaycaster so it knows to use this object as its pointer.

    6c: Add OVRMousePointer to the canvas object; it is responsible for moving the pointer around the canvas.

    That's it! If you run the scene now, you will find you can still interact with the UI using the gaze pointer, and that while you are looking at a canvas you can also use the mouse to drive a virtual mouse pointer on it. It is worth noting that the initial scene contained only standard Unity UI components, so you can carry out exactly the same process on any existing scene you want to enable for VR.

    So how does it all work?

    In this section we will look at how the scripts used above let you convert existing Unity UI into VR UI. The scripts are already written and, by following the steps above, can be used in your own project without understanding their internals, but this section is for those interested in the technical details. To make the magic happen, we extended a few core Unity UI classes. Let's start with the extension to the core class that handles our input.

    OVRInputModule

    Unity's StandaloneInputModule contains a lot of pointer-interaction and GUI-navigation code and would have been a good class to inherit from, but unfortunately its core functionality is private rather than protected, so we could not reuse all of that in a subclass. Given that, we had three options:

    1. Fork Unity's UI system and build our own version of it. For a private project this might have been the best choice, but since we want you to be able to use this with as little fuss as possible and did not want to make you install a new UI DLL, we rejected it.
    2. Ask Unity to change these functions to protected. This takes time, but it is what we are really after, and thanks to Unity's cooperation these changes will happen; in the future, the extensions discussed in this tutorial will become simpler.
    3. Inherit from a class further up the hierarchy instead, and simply copy-paste the code we need from StandaloneInputModule into our own class. Because we wanted you to be able to run this sample as soon as possible and on as many Unity versions as possible, this is the option we chose.

    Organization

    If you look at OVRInputModule.cs, you will see the class inherits from PointerInputModule, and that there are two regions in the code where we placed StandaloneInputModule code:

    #region StandaloneInputModule code
    #region Modified StandaloneInputModule methods

    The first contains code moved over verbatim, functions we would simply have inherited had they not been private; the second contains the functions we have modified. Overall, the changes in OVRInputModule are straightforward extensions of StandaloneInputModule, and the best way to understand them is to read the code. Still, here is a summary of the key changes:

    Handling gaze and world-space pointers

    Two new functions, GetGazePointerData() and GetCanvasPointerData(), do what GetMousePointerEventData() does in PointerInputModule, but for our new pointer types. These are where we handle the state of pointer input, for example treating the space bar as the gaze pointer's "click" and using the assigned ray transform for the pointer's direction. These functions also call OVRRaycaster/OVRPhysicsRaycaster to find GUI/physics intersections, but we have changed the way we talk to the raycasters:

    Pointing with rays

    An important change we made is to subclass PointerEventData to produce OVRRayPointerEventData. This new class has one extra member variable:

    public Ray worldSpaceRay;

    Because it inherits from PointerEventData, the entire existing UI system, including the EventSystem, can treat it like any other PointerEventData. The important thing is that our new raycasting objects know about this new member and use it to correctly find world-space ray intersections.

    Pointing with a world-space pointer

    OVRInputModule has the following member variable:

    public OVRRaycaster activeGraphicRaycaster;

    It keeps track of which raycaster it considers "active". You can adopt various schemes for deciding which raycaster is active (there is no reason you need only one), but in this sample an OVRRaycaster component declares itself active when the gaze pointer enters it. Which OVRRaycaster is currently active matters because the OVRInputModule only allows the active one to detect intersections between that canvas's world-space pointer and its GUI elements. You can see this behavior in the sample: when you make a canvas active with the gaze pointer, you can move the mouse pointer on that canvas.

    Bringing it back to screen space

    Perhaps OVRInputModule's most important job is to hide from the rest of the GUI system the fact that we are using VR-style pointers. Buttons, toggles, sliders and input fields contain a lot of logic that relies on the screen position of pointer events. Our pointers are world-space based, but fortunately we can easily convert these world positions into screen positions relative to one of the VR cameras (in the sample we arbitrarily chose the left eye). The constraint this strategy imposes is that you cannot interact with UI elements that are not in front of the camera, which hardly seems unreasonable. Thanks to this conversion, and to the fact that OVRRayPointerEventData is just a subclass of PointerEventData, the rest of the UI system can work with PointerEventData objects without needing to know whether they came from a mouse pointer, a gaze pointer, or any other kind of pointer.
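    As a hedged illustration of that conversion (the sample's actual code is not quoted in this translation): a world-space pointer position $\mathbf{p}_{\text{world}}$ seen by the chosen eye camera, with view matrix $V$, projection matrix $P$, and a $w \times h$ pixel viewport, maps to a screen position roughly as

$$\mathbf{p}_{\text{clip}} = P\,V\,\mathbf{p}_{\text{world}},\qquad (x_n,\, y_n) = \left(\frac{p_{\text{clip},x}}{p_{\text{clip},w}},\ \frac{p_{\text{clip},y}}{p_{\text{clip},w}}\right),\qquad (x_s,\, y_s) = \left(\frac{x_n+1}{2}\,w,\ \frac{y_n+1}{2}\,h\right)$$

    which is essentially what Unity's Camera.WorldToScreenPoint computes for that camera.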
    Keeping the gaze pointer updated

    The changes above are technically enough to make gaze pointing work in VR, but it would not be very intuitive without a visual representation of your gaze in the scene. OVRGazePointer is the singleton component that takes care of this, and OVRInputModule is responsible for keeping it in the right place and facing the right way. Position is simple enough: the world position returned by the raycast is forwarded to OVRGazePointer. Orientation is slightly more involved. A naive approach is to orient the gaze pointer so that it always faces the user (by pointing it at the camera rig's CenterEyeAnchor), and this is in fact what OVRGazePointer does when no intersection is detected. But when there is an intersection, that is, when your gaze pointer is actually on an object or a UI canvas, OVRInputModule looks up the normal of the GUI component or physics object and uses it to align the gaze pointer with the surface, which makes the pointer feel much more attached to that surface. (Image: the gaze cursor aligned with the UI surface)

    Wrap-up

    The result is that, with a handful of new classes extending the Unity UI system, it is possible to make UI fully usable in VR. The example here is only a starting point; there are many ways to extend it to build new ways of interacting with UI in VR. The gaze pointer could be replaced by a pointer driven by a tracked controller; the world-space pointer could be moved with a gamepad; the world-space pointer could even directly track the position of a tracked input controller. The list goes on. As Unity adds more and more VR-specific features, some of the code in this sample may become redundant, but this is where things stand today. We hope this article helps you start using Unity's great UI system in VR right now.

    Click here to download the project. [Copyright notice] The original author made no rights statement; the work is treated as shared intellectual property entering the public domain, with permission granted automatically.

  • [To be translated] Minority Report science advisor builds the most awesome conference room

    Above: Tom Cruise in Minority Report inspired lots of tech companies. Image Credit: 20th Century Fox. John Underkoffler was the science advisor for the landmark 2002 film Minority Report, and he designed the gesture-controlled user interface that Tom Cruise used in the film to solve crimes in the sci-fi story. In 2006, Underkoffler started Oblong Industries to build the next generation of computing interfaces, and in 2012, the company began selling commercial versions of the Minority Report interface. These are, famously, gesture-based systems where you can use a wand to make things happen on a big monitor. But the interfaces do a lot more than that. They are spatial, networked, multi-user, multi-screen, multi-device computing environments. Architects can use them to zoom in on a drawing on the wall, allowing everybody in the room or those watching via video conference to see what’s being discussed. I watched Oblong’s Mezzanine in action at one of the company’s clients, the architectural firm Gensler, which among other things designed the new Nvidia headquarters building in Silicon Valley. It was by far the coolest work room I’ve been in. I picked up conferencing windows and moved them around the screens in the room as if they were Lego pieces. Oblong has sold hundreds of these systems to Fortune 500 companies, and it raised $65 million to bring these computing interfaces to the masses. I talked with Underkoffler at Gensler in San Francisco about his futuristic interface, as well as the accelerating inspiration cycle of science fiction, technology, and video games. This subject is the theme of our upcoming GamesBeat Summit 2017 event this spring. Here’s an edited transcript of our conversation. Above: John Underkoffler, CEO of Oblong, envisioned the gesture controls in Minority Report. Image Credit: Dean Takahashi. John Underkoffler: Claire is in a room very much like this one. Three screens at the front, some displays on the side. Part of what you’re about to see is that the visual collaborative experience we’ve built, this architectural computer you’re sitting in called Mezzanine, is actually shared. There is shared control in the room. Rather than being a one-person computer, like every other computer in our lives, it’s a multi-person computer. Anyone in the room can simultaneously, democratically inject content and move it around. The pixels are owned by everyone. These are the people’s pixels. That’s true not just for us in this room, but for all the rooms we connect to. Anything we can do, Claire can do equally. She can grab control and move things around, contribute to the hyper-visual conversation if you will. The point here is to give you a sense of what we’re doing. I’ll grab Claire there with the spatial wand, the conceptual legacy of the work I did on Minority Report with gestural computing, and we can move through a bunch of content like this. We can use the true spatial nature of Oblong’s software to push the entire set of slides, the content, back and scroll through this way. We can grab any individual piece and move it around the room – really around the room. VentureBeat: You guys designed the wand? Underkoffler: Yeah, the spatial pointing wand. It’s next door to the Minority Report gloves, which we’ve also built and deployed for more domain-specific situations. The glove-based gestural work is more sophisticated, more precise in some sense, but it’s also less general. There’s a bigger vocabulary. 
It’s easy, in a generic computing collaboration context like this, for anyone to pick up the wand and start moving things around the room.If you are game to type one-handed for a second, I’ll give you the wand. If you just point in the middle of that image, find the cursor there, click and hold, and now you can start swinging it around. If you push or pull you can resize the image. You can do both of those things at the same time. When you have true six degrees of freedom spatial tracking, you can do things you couldn’t do with any other UI, like simultaneously move and resize.This truly is a collaborative computer, which means that anyone can reach in, even while you’re working, and work alongside you. If you let go for a second, there’s Claire. She’s just grabbed the whole VTC feed and she’s moving it around. Gone is the artificial digital construct that only one person is ever doing something at a time. Which would be like a bunch of folks standing around on stage while one blowhard actor is just talking. We’re replacing that with a dialogue. Dialogue can finally happen in, rather than despite, a digital context.VB: This works in conference rooms, then?Underkoffler: It works in any setting for which people need to come together and get some work done. The set of verticals–the Fortune 1000, Forbes Global 3000 companies that we predominantly sell to, occupy almost any vertical you can think of, whether it’s oil and gas or commercial infrastructure or architecture like Gensler. Commercial real estate. Hardcore manufacturing. Social media. Name a vertical, a human discipline, and we’re serving it.The intent of the system itself is universal. People always need to work together. People are inherently visual creatures. If we can take work, take the stuff we care about, and deploy it in this hyper-visual space, you can get new kinds of things done.VB: Hyper-visual?Underkoffler: It’s how it feels to me. It should be as visual as the rest of the world. When you walk around the world, you’re not just seeing a singular rectangle a couple of feet from your face. You have the full richness and complexity of the world around you.Even if you imagine human work spaces before the digital era—take an example like Gensler here, a commercial architecture and interior design space. Everyone knows what that style of work is. If, at the one o’clock session, we’ll work on new Nvidia building, we’ll come into a room with physical models. We walk around them and look at them from different points of view. You’ve brought some landscape design stuff. You unroll it on the table. We’re using the physical space to our advantage. It’s the memory palace idea all over again, but it’s very literal.For the longest time – essentially for their whole history – computers and the digital experience has not subscribed to that super-powerful mode of working and thinking spatially. Mezzanine gives the world a computer that’s spatial. It lets us work in a digital form the way that we’ve always worked spatially.Everyone knows the experience of walking up to a physical corkboard, grabbing an image, and untacking it from one place to move it over next to something else. That simple gesture, the move from one place to another, the fact that two ideas sit next to each other, contains information. It makes a new idea. 
We’ve just made that experience very literal for the first time in a digital context.Although the result is, in a sense, a new technology and a new product, it’s not new for human beings, because everyone knows how to do that already. That’s an advantage for us and for our customers. Everyone knows how to use this room because everyone is already an expert at using physical space.VB: What kind of platform is it? Is it sitting on top of Windows, or is it its own operating system?Underkoffler: At the moment it’s a whole bunch of specialized software sitting on top of a stripped-down Linux. It runs on a relatively powerful but still commodity hardware platform, with a bit of specialized hardware for doing spatial tracking. That’s one of our unique offerings.VB: Are the cameras more generic, or are they–Underkoffler: Completely. Right now that’s a Cisco camera with a Cisco VTC. We’re equally at home with Polycom and other manufacturers. We come in and wrap around that infrastructure. A lot of our customers have already made a big investment in Cisco telepresence or Polycom videoconferencing. We’re not saying there’s anything wrong with that. We’re saying you need to balance the face-to-face human communication with the rest of the work – the documents and applications, the data, the stuff we care about. Although it’s nice to see people’s faces from time to time, especially at the beginning and end of a meeting, most of the time what we want is to dig in and get to the real work, the digital stuff, in whatever form that takes. From there you start injecting more and more live content, whatever that may be.One of the experiences is browser-based. There’s a little tiny app you can download onto your Android or iOS platform, smartphone or tablet. A big part of our philosophy is that we want people to bring the tools they’re already most comfortable with as the way they interact with this experience. Anything I can do with the wand, I can also do with the browser. It’s very WYSIWYG. You drag stuff around.If you like, you can take out your phone. The phone makes you a super-powerful contributor and user of the system as well. Anything you know how to do already in a smartphone context is recapitulated and amplified, so you’re controlling the entire room. You can grab that and move it around the space, dump it over there. You can upload content with the add button at the bottom.That moment right there is indicative of what makes this way of working so powerful. If we were locked into a traditional PowerPoint meeting, there’d be no space, no way that anyone could inject a new idea, inject new content. Whereas here, in under three seconds, if we needed this bit of analog pixels stuck up there—you did the same thing simultaneously.VB: So phones are the way to get a lot of analog stuff into the screens?Underkoffler: Yeah. And we can plug additional live video feeds in. One thing that happens there is that we’re—again, we’re very excited about analog pixels. We’re not fully digital obsessives. We can do live side-by-side whiteboarding, even though we’re embedded in this more generic, more powerful digital context.Then the pixels start to become recombinant. Let’s cut out that little bit and we can augment Claire’s idea with our crummy idea here. Then we can make one composite idea that’s both brilliant and crummy, just like that. That now becomes part of the meeting record. Everything we do is captured down here in the portfolio. 
Claire, on a tablet on that end, if she were inclined to correct our mistakes, could reach in and annotate on top of that.In a way, what we’ve discovered is that the real value in computation is always localized for us humans in the pixels. Whatever else is happening behind the scenes, no matter how powerful, at the end of the day the information there is transduced through the pixels. By supercharging the pixels, by making them completely fluid and interoperable whatever the source may be – a PDF, the live feed from my laptop, the whiteboard, whatever – by making all the pixels interoperable we’ve exposed that inherent value. We make it accessible to everyone. Claire just used a tablet to annotate on top of the thing we’ve been working on.Above: Mezzanine lets you visualize complex projects at a glance.Image Credit: OblongVB: Is there some kind of recognition software that’s taking over and saying, “I recognize that’s being written on a whiteboard. Now I can turn that into pixels”?Underkoffler: There really isn’t. Down here, in the livestream bin, are all the sources that are presently connected. When we started that whole bit of the dialogue, I simply instantiated the live version there. In a way, there’s an appealing literalness to all of this. We can plug in AI. We can plug in machine vision and recognition, machine learning algorithms.VB: The camera is seeing that as an image, though? It’s not trying to figure out what you’re saying.Underkoffler: Right. But the opportunity always exists to simply plug in additional peripherals, if you will, which is part of what our technical architecture makes easy and appealing. We just built a prototype a month ago using a popular off-the-shelf voice recognition system where you could control the whole work space just by talking to it.The multi-modal piece is important, because it gives you the opportunity to use the tool you want to control a space. The thing that’s most natural or most urgent for you—I want to talk to the room, point at the room, annotate or run with an iPad or smartphone. You use whatever utensil you want.VB: How did you get here from Minority Report?Above: Tom Cruise in Minority Report uses John Underkoffler’s computing interface.Image Credit: 20th Century FoxUnderkoffler: By about 2003, 2004, in the wake of the film, I was getting a lot of phone calls from big companies asking if the stuff I designed, the way it’s shown in the film, was real. If it was real, could they buy it? If it wasn’t real, could I build it and make it work? It didn’t take many of those before I realized that now was the moment to push this stuff into commercial reality.We founded the company and started building out—it was literally building out Minority Report. Our very first product was a system that tracked gloves, just like the ones in the film. It allowed the gloves to drive and navigate around a huge universe of pixels. Early customers like Boeing and GE and Saudi Aramco purchased the technology and engaged us to develop very specific applications on top of it that would let their designers and engineers and analysts fly through massive data spaces.Part of the recognition here is that–with our recent fascination with AI and powerful backend technologies, we’re more and more implicitly saying the human is out of the loop. Our proposition is that the most powerful computer in the room is still in the skulls of the people who occupy the room. Therefore, the humans should be able to be more in the loop. 
By building user interfaces which we prototyped in Minority Report, by offering them to people, by letting people control the digital stuff that makes up the world, you get the humans in the loop. You get the smart computers in the loop. You enable people to make decisions and pursue synthetic work that isn’t possible any other way.For Saudi Aramco and Boeing and GE, this was revelatory. Those teams had the correct suspicion that what they needed all along was not more compute power or a bigger database. They needed better UI. What happened next was we took a look at all this very domain-specific stuff we’d been building for big companies and realized that there was a through line. There was one common thread, which was that all of these widely disparate systems allowed multiple people in those rooms pursuing those tasks to work at the same time. It’s not one spreadsheet up, take that down to look at the PowerPoint, take that down to look at the social media analytics. You put them all up at the same time and bring in other feeds and other people.The idea of Mezzanine crystallized around that. It’s a generic version of that. All you do is make it so everyone can get their stuff into the human visual space, heads-up rather than heads-down, and that solves the first major chunk of what’s missing in modern work.Above: You can use smartphones or laptops to control things in Mezzanine.Image Credit: OblongVB: What kind of timeline did this take place on?Underkoffler: We incorporated in 2006. I built the first working prototypes of Oblong’s Minority Report system, called G-Speak, in December of 2004. By 2009, 2010, we’d seen enough to start designing Mezzanine. It first went live in late 2012. We’re four years into the product and we’re excited about how it’s matured, how broad the adoption and the set of use cases are. It’s in use on six continents currently.VB: How many systems are in place?Underkoffler: Hundreds.VB: What is the pricing like?Underkoffler: At the moment, the average sale price is about half of what you’d pay for high-end telepresence, but with 10 times the actual functional value. We sell with or without different hardware. Like I say, sometimes customers have already invested in a big VTC infrastructure, which is great. Then we come in and augment it. We make the VTC feed, as you’ve seen with Claire, just one of the visual components in the dialogue.But again, the point is always that it’s—we can have the full telepresence experience, which looks like this, and at certain moments in the workflow might be critical. Then, at other times, Claire just needs to see us and know we haven’t left the room. We’re still working on the same thing she is. At that point we’re pursuing the real work.VB: It’s taken a while, obviously. What’s been the hard part?Underkoffler: Making it beautiful, making it perfect, making it powerful without seeming complex. The hard part is design. It always is. You want it to be super fluid and responsive, doing what you expect it to do. A spatial operating environment where you’re taking advantage of the architecture all the way around you, to be flexible for all different kinds of architecture, all of that’s the hard part on our end.On the other side of the table, the hard part is we’re rolling off 10 years where the whole story was that you don’t even need a whole computer. You just need a smartphone. The version of the future we see is not either/or, but both. The power of this is that it’s portable. 
The liability is that it’s not enough pixels, not enough UI. If we have to solve a city-scale problem, we’re not going to do it on this tiny screen. We’re going to do it in space like this, because there’s enough display and interaction to support the work flow, the level of complexity that the work entails.It’s taken a while, as well, for the mindset of even the enterprise customers that we work with to turn the ship, in a way. To say, “Okay, it’s true. We’re not getting enough done like this, where everyone is heads-down.” If we’re going to remain relevant in the 21st century, the tools have to be this big. There’s a very interesting, and very primitive, scale argument. How big is your problem? That maps, in fact, to actual world scale.VB: How far and wide do you think Minority Report inspired things here?Underkoffler: All of the technologies that were implied by the film?VB: Mostly the gesture computing I’ve seen, in particular.Underkoffler: There’s a great virtuous feedback loop that’s at work there. Minority Report was kind of a Cambrian explosion of ideas like that. We packed so much into the film. But it arrived at just the right time, before the touch-based smartphone burst on the market. We can attribute part of the willingness of the world to switch from—what did we call phones before that? Small keyboard flip-phones and stuff like that? And then the more general-purpose experience of the smartphone. We can attribute part of that to Minority Report, to a depiction of new kinds of UI that made it seem accessible, not scary, part of everyday life. It goes around and around. Then we put new ideas back into films and they come out in the form of technology and products.Above: Evan Rachel Wood as Dolores and Ed Harris as the Gunslinger in Westworld.Image Credit: HBOVB: I feel like Westworld is another one of those points, but I’m not sure what it’s going to lead to. It’s being talked about so much that there has to be something in there.Underkoffler: I think so. In one sense the ideas are cerebral more than visual, which is great. I hope what Westworld leads to is a renewal of interest in consciousness, the science of cognition and consciousness, which is fascinating stuff. Understanding how the wetware itself works. Westworld is definitely unearthing a lot of that.To pick back up on Minority Report, as we were working on it, as I was designing the gestural system and that whole UI, I was consciously aware that there was an opportunity to go back to Philip K. Dick and do a thing that had happened in Blade Runner. In Blade Runner you remember the holographic navigation computer that Harrison Ford uses to find the snake scale, the woman in the mirror. It’s really appealing and gritty and grimy, part of this dense texture of already aged tech that fills up that film.But for me as a nerd and a designer and a technologist, those moments in science fiction are a little frustrating. I want to understand how it works. What would it be like to use that? He’s drunk in that scene already and barking out these weird numerical commands and it doesn’t have any correlation to what’s going on. I knew we could show the world a UI where it’s actually legible. From frame one you know what John Anderson is doing. Viscerally, you know how he’s moving stuff around and what effect it has and what it would feel like to introduce that in your own life.It was a great opportunity. The feedback, as you’ve been saying, is really immense. 
The echo that has continued down the last decade because of that is remarkable.The other piece I wanted to show was a collaborative user interface. Of course Tom Cruise is at the center of those scenes, but if you go back and watch again, there’s a small team, this group of experts who’ve assembled in this specialized environment to solve a really hard time-critical problem. Someone is going to die if they don’t put the clues together in six minutes. They shuttle data back and forth, stuff flying across the room. That was a unique view of a user interface, at least in fiction, that allowed people to work together.We’ve built literally that. In a sense this is that Minority Report experience. We have lots of pixels all over the room. We can be in the room together and work with everything that all of us bring here.VB: Do you think you’re close to being there? Or do you think you’re going to be doing improvements on this for years? Are there things in the film that you still can’t quite do?Underkoffler: I think it’s fair to say we’ve already exceeded everything we put in the film and that I’d imagined behind the scenes. But our road map at Oblong, what we have, will occupy the next 10 years. We have enough new stuff – not incremental improvements, but genuinely new things – to keep us busy for a decade.Above: Telling Siri to send money through PayPal.Image Credit: PayPalVB: I don’t know if I would guess, but putting computer vision to work, or voice commands?Underkoffler: There’s a huge set of projects around integration with other modalities like what you’re discussing. Our view, our philosophy, is very clear. We never want that stuff to replace the human as the most important participant. But if machine vision can find business cards in the room, or documents we lay down on the table, and automatically import them into the digital work space, absolutely. If we can make speaker-independent voice recognition work flawlessly in an extended environment like this, where it can even respond to multiple people speaking at the same time, that would be immensely powerful. Then we have a room where all existing human modalities are welcome and amplified. That’s one of the vectors we’re pursuing.VB: You mentioned that it costs about half as much as a telepresence room. How much do those cost? What order of magnitude are we talking about for one room? Or I guess you have to have two rooms.Underkoffler: You don’t, actually, and that’s important. Even when there’s one room, the idea that a bunch of people can come in and work with each other, rather than one at a time or separated by these, is transformative. We do as much work internally in a single room, not connected to any other room, as we do when the rooms are connected together. Unlike the telephone or the fax machine, it’s fine if you have only one.Pricing really depends on the size of your space, how many screens, what kind of screen, what size. Do you already own the screens? Do you want to buy screens? Typically, like we say, it’s half the price of telepresence. Telepresence is really about high fidelity voice and video delivery of people talking to each other, the in-room personal piece. This layers on infopresence, all the content and information everyone brings to the table to discuss. You have the opportunity to surround the entire team. We have two walls here with screens, but it could be three or even four. 
You can take the notion of immersion as a group to a whole different level that you can’t do with any other kind of technology.VB: How long before you get to medium and smaller businesses being able to use this?Underkoffler: Do you remember in Star Trek II, where they decide to not speak openly on an unencrypted channel? [laughs] The answer to your question is: sooner than you might guess. We did just expand our war chest, so to speak, to fuel some wonderful growth and development. There are lots of good things coming. We’ll be introducing at least two major iterations on the product in 2017.VB: Digressing a bit from Oblong, back to Minority Report, what benefit do you see in attaching a science advisor to a sci-fi film, convincing people that what they’re going to watch is plausible? When did that become a common thing?Underkoffler: I’d have to nominate Andromeda Strain, for which Michael Crichton undoubtedly did his own science advising. Robert Wise directed, but I remember seeing it and being blown away, because every element of it – all the technology, the dialogue, even the passion that the scientist characters infused into this fictional world – is real. It may be the world’s oddest instances of product placement. I don’t suppose they actually bought telemanipulators for the scenes where they pull the top off the crashed space probe and yank out the synthetic life form and the rest of that. But it’s all real. The excitement of the film derives from the fact that there’s no question it’s real.The dialogue Paddy Chayefsky wrote in Altered States stands as the single best depiction of how scientists actually sound when they’re excited, and in some cases drunk. That’s cool. After that, one thing that’s interesting to look at is a 1984 film by Martha Coolidge called Real Genius, which was Val Kilmer in a comic mode at a lightly fictionalized Cal Tech. But all of the stuff, including the student experience and the political interactions and the alliances with DARPA and other government funding agencies, all of it was shockingly real and authentic. It’s because the production hired a guy named Dave Marvit, who has since become a friend. He was a recent Cal Tech graduate.If you remember back in those days, there was Weird Science and some other movies, and they were all – with the exception of Real Genius – kind of in the same mold. Someone decided that they would have a very sketchy picture of what it’s like to conduct science, to be an engineer, to be creative in those worlds. Then you hang the presumably ribald comedy on that, where at the end of the day it’s about seeing other naked human teens. But with Real Genius the effect is completely different, because you’re immersed. There’s that word again. You’re immersed in a world that you can relate to. It’s only because of the authenticity of every little detail.Minority Report was one of the next steps. The film made an investment in that kind of immersion, made an investment in that verisimilitude. It came from the top, from Spielberg, who said he wanted a completely real, completely believable 2054.Above: HAL from 2001: Space OdysseyImage Credit: Flickr/Alberto RacatumbaVB: An example I think of is 2001, where you have the computer that goes rogue, and the HAL initials are a nod to IBM. Had they not put that in—the film is still good, but it’s somehow deliberately reminding you of reality.Underkoffler: That was part of Kubrick’s genius. 
Now we’re going backwards from Andromeda Strain, and I should have started with 2001, but part of his genius was that he cared about the tiniest details. He went at it until he personally understood all of it. You have the really interesting implications, and it’s not about product placement. It’s about verisimilitude. Bell Telephone is in business and now has this different form factor, because you’re making a call from the space station. I forget what the other recognizable western brands are, but there’s a hotel and others. It was about showing how the world will remain commercial, even when we have everyday space travel.Then you bolt that on to all the NASA-connected research he did. You have Marvin Minsky designing these multi-end effector claw arms for the little pod they fly around in. Everything is considered. It’s not just icing. It’s the cake, in a sense. It’s how you end up in that world, caring about it.VB: I wonder whether these walls are coming down faster, or that the connections are getting stronger. The thing I point to—I had a conversation with the CEO of Softbank, and I asked him, “What are you going to do with the $100 billion you’re raising from the Saudis?” He says, “We’re investing for the singularity. This is something we know is going to happen in the next 30 years and we’re going to be ready for it.”Underkoffler: So he’s predicting singularity in 30 years? I’ll bet five dollars that’s not true. Tell him that. I’ll see him in 30 years.VB: I asked about what Elon Musk has been saying, and everything science fiction has predicted about the dangers of AI. He says, “Fire was not necessarily good for humans either. But it does a lot of good for us. Someone’s going to do this.”Underkoffler: If science fiction becomes part of the standard discourse, if everyone expects to see a lot of that on TV, that’s good, because it leaves room for—let’s call it social science fiction. Pieces that aren’t just about technology, but about the kind of social, political, and philosophical consequences of technology. That’s why Philip Dick was so fascinating as a writer. It’s why Westworld is interesting.How about Black Mirror? Black Mirror is specifically about that. That’s way more interesting, way more exciting than just seeing a bunch of people flying around. For my money, that’s when stuff gets really good. That’s when humanity is actually talking to itself about decisions — what matters, and what happens if we do or don’t decide to pursue this or that technology. It’s probably a dangerous thing to say we’re just going to pursue technology. Did Prometheus think what might happen when he gave us fire? Or was he just like, “I’m pissed at the gods, here you go”? That ladder feels to me like a lot of what’s happening. Let’s pursue technology because we’re technologists.VB: That’s why I thought the story in the new Deus Ex game was interesting. Human augmentation is the theme, so they predicted that it would divide humanity into two groups. The augmented humans are cutting off their arms to have better arms and things like that. Then the natural humans are saying this isn’t right, that it’s going too far. Terrorism happens and one group blames the other, tries to marginalize and ghettoize the other. The notion is that division in society is an inevitable consequence.Underkoffler: I think that’s smart, and that’s right. There’s always haves and have-nots. It’s important to go back to the earlier days of sci-fi, too. 
There’s an amazing short story by Ray Bradbury called “The Watchful Poker Chip of H. Matisse.” This guy who nobody’s ever paid attention to, because he’s really boring, loses an eye in an accident. He’s able somehow to commission Henri Matisse to paint a poker chip for him and he puts it in his eye socket. Suddenly people are really interested in him, and the rest of the story is about him having these willful accidents. He loses a leg and builds this gold bird cage where his thigh would be. He becomes an increasingly artificial assemblage walking around like a curiosity shop. But there’s a social currency to these alterations.Are you seeing anything really great coming to light in the game world these days around collaborative gameplay? Not multiplayer, but collaborative.Above: Mezzanine is being used by researchers to help cure cancer.Image Credit: OblongVB: There was one attempt by the people who made Journey, a very popular game. It was only about four hours long, but they had a multiplayer mode, where you could go on your adventure with somebody else. That other person was a complete stranger. You couldn’t talk to each other. But there’s a lot of puzzle-solving in the game, and you could work together on that and progress. It’s a very different kind of cooperation than you’d expect.Underkoffler: I have a great friend who was a large-scale team lead on Planetside. I used to study them playing, because they’re all separated by distance, but wearing headsets, so there’s communication through one channel that allows them to operate as a conjoined unit. That was interesting.VB: There’s almost always cooperative modes in games these days, where you’re shooting things together or whatever. But collaboration—it makes me think of some of those alternate-reality games, like the I Love Bees campaign for Halo 2. I don’t know if you ever read about that. They had hidden clues scattered around the web and the real world to market the game. They made 50,000 pay phones ring at the same time, and people had to go to as many of these could and answer them. They recorded what they heard, and it all patched together into this six-hour broadcast of the invasion of Earth, like War of the Worlds.Underkoffler: The crossover with the real world is really fun there.Above: Ilovebees was an ARG for Halo 2.Image Credit: 42 EntertainmentVB: Alternate reality games became a popular thing later on, although not on as large a scale. They’re very hard to do. Only a couple of companies were doing them. A guy named Jordan Weisman ran one of them. And Elan Lee, but he’s off making board games now. Getting the masses to collaborate, crowdsourcing, is pretty interesting.I wonder what you get when you put these people together. You take the AI experts and the science advisors and the video game storytellers and moviemakers all together. Something good has to happen.Underkoffler: I think so. It’s always worth studying, studying games in particular. Before Oblong got started, some of the best work in next-generation UI was happening in the gaming world. People didn’t want to pay attention. SIGGRAPH or the ACM people didn’t want to hear about games, because you need this academic thing and all the rest of it. But the fact is, before anyone else was thinking about it, game designers were figuring out how to do incredibly complex things with a simple controller. A reward system was in place to make it worth learning how to pilot a craft around with six degrees of freedom using a game pad. It bears studying. 
As you say, once you start colliding these different disciplines, interesting stuff is going to come out.Above: In ilovebees, 42 Entertainment made 50K pay phones ring at once. Players recorded the calls and put together an hours-long broadcast on the Covenant invasion.Image Credit: 42 EntertainmentVB: VR seems like an interesting frontier right now. People are inventing new ways to control your hands in it and so on.Underkoffler: It’s pretty primitive. A lot of the foundational technology isn’t even there yet. How do you really want to move around that world? Mostly people have been building the output piece, the headsets. There’s been less work on the UI. But that’s what we’re interested in.VB: Games are teleporting you from place to place because you get sick if you try to walk there. What would you say is the road map going forward, then? What’s going to happen?Underkoffler: We’re going to make the kind of computing you’re looking at now – architectural, spatial, collaborative computing – more and more the norm. It’ll be a layer on top of the computing that you expect and understand today on your laptops and tablets and smartphones. As you suggested earlier in the hour, that’ll start permeating through various layers – small and medium business, all kinds of organizations at different levels.And at the end we get to actual ubiquity. When you sit down in front of your laptop, it’s not just you and your laptop. It’s also the opportunity to communicate and collaborate with anyone else in the universe. We’re going to give the world a multitude of collaboration machines.

News

  • VR/AR

    Where is the way forward for VR music games?

    There are many kinds of VR games, and music games are one branch of them; after all, plenty of players enjoy hitting the right button in time with the beat. So where is the way forward for VR music games? It is a question many VR game developers and studios are surely asking, so let's take a look.

    Music VR games have been arriving on the various platforms: Happy Drummer VR (开心鼓神), Beats Fever (超级节拍), 《音姬》, 《Carry Me VR》, 《宇宙迪斯科》, WalkPlay (《漫步》), and the earlier 《音盾》.

    As a branch of casual games, music games usually manage to spin varied fun out of a limited set of mechanics, so their user engagement stays consistently high. In VR, small games often find it easier to overtake on the bend, and music games have exactly that potential.

    The pleasure of a music game comes from the satisfying feel of its human-machine interaction. A music game is built from pleasant music plus demanding input. Ordinary games feed your goals back through visuals and body sensation; music games go one layer deeper by adding auditory feedback, so the player gets more out of the game and the sense of satisfaction is amplified. VR music games are also comparatively easy to design, and compared with traditional music games, VR adds even more of that game feel.

    Looking across the market, almost all of the current music games are straight ports from PC, and compared side by side none of them innovates on gameplay. 《劲舞团》 and 《QQ炫舞》 took off because they actually added a social element. The author once logged into 《QQ炫舞》 and was astonished to see someone singing live on camera in a room, essentially an early form of livestreaming. VR is different: one run through 《音盾》 is physically tiring, since it is a game that works both arms.

    The music-game market is genuinely hard, but done well it is also among the easiest to monetize. Because of copyright constraints, payment can take several forms: charging to unlock original music, or, if the game itself is excellent, getting players to pay for story or characters. At the moment, though, most VR music games appear as music-and-dance games, and the unlock points (which could later become payment points) can only be placed on character outfits.

    If a VR game wants to make money, it can look to traditional music games, which rely on the same few payment models:
    1. Traffic and advertising;
    2. In-game song packs. Rayark's Deemo, for example, offers a free trial download, a 12 RMB upgrade to the full version after you have tried it, and extra tracks through paid song packs;
    3. Paid in-game items (such as sound effects, themes and story chapters).

    Music games are casual games, and most people play them to fill fragments of spare time, so getting money out of players' pockets requires long-term exposure and traffic funnelling, which small and mid-sized teams simply cannot afford. Yet the cost side of music games is also heavy: beyond development costs, licensing burns money.

    Domestic music games generally obtain rights in a few ways:
    1. Deep-pocketed companies: giants like NetEase and Tencent have QQ Music, NetEase Cloud Music and similar businesses behind them, so rights are naturally easy to sort out;
    2. Small and mid-sized companies: those with some money partner with record labels and buy rights, but the amount of music obtained this way is small. Some small teams now also partner with grassroots celebrities or niche musicians, using each one's influence to bring in users.

    But a public used to getting things for free is largely unwilling to pay for music, so even a giant like Tencent is reluctant to spend on music rights. At the 2016 VR CORE awards, the domestic game 《音姬》 won an award, but its big weakness is that it currently includes only six songs and does not let players use their own tracks, which is the complaint players raise most often. The VR game 《音盾》, by contrast, is unusual in that it can read local music: whatever songs you manage to download, it can turn them into a level and play them. That shifts the infringement risk onto the player, and whether this exploits a copyright loophole is unclear. For now, getting VR games and big-name stars to collaborate is pure fantasy; the options are signing grassroots celebrities, writing original music, or re-recording covers.

    Can a breakout VR music game appear? What such a game has to deliver is strong competitiveness and strong watchability. The most basic element of a music game is rhythm, and it is through expressing that rhythm that players feel the game's pleasure. Music games are easy to pick up and hard to master, which keeps the entry barrier low and makes them well suited to competition. Mobile audiences are also more mainstream and more familiar with mobile music games; at the same time, since the core mechanics are simple, casual spectators are curious what new tricks a game can still pull off. That is why online multiplayer and rankings matter so much.

  • VR/AR

    The display technology of future VR/AR: light field displays

    When you walk past a window, what do you notice? Perhaps the colorful scenery outside, perhaps a gentle breeze, but most importantly, what you see from straight on and what you see from the left or right are very different. Now, what if what you just walked past was not a window but a television?

    Today's televisions cannot deliver that experience, because they are simply flat 2D surfaces. For a television to genuinely feel like a window, it will have to rely on the "light field" display technology introduced here, which may well also be the display technology of future VR and AR.

    What is a "light field"?

    As everyone knows, a displayed image is made up of pixels, and pixel density determines how fine the image looks. If each pixel can show more than one color, you can produce an effect where the color you see depends on the angle you view it from: that is the most basic idea behind holographic imagery, and it is also how a light field handles moving images. It is called a light "field" because a camera that captures a light field must record, for every pixel, the light arriving at that pixel from every direction, not just the light coming straight ahead.

    A light field display is likely to be limited in the range of angles it covers, so the more realistic goal is probably something like the window-like television described at the start of this article. But then resolution has to increase several-fold. Imagine delivering full resolution no matter how widely the viewing angles are spread: that consumes enormous video bandwidth to support different images for the two eyes at different angles, across a 180-degree field of view, at typical viewing distances. We are nowhere near providing bandwidth of that order today; it could take hundreds or even thousands of times what we use now.

    What do conventional VR displays do?

    VR headsets such as the Oculus Rift or HTC Vive construct an artificial image, but through visual cues they give a sense of depth. For example:

    Binocular disparity (stereopsis): the left and right eyes are shown slightly different images, so when the brain fuses the two it perceives depth.

    Motion parallax: as the user moves their head from side to side, objects closer to the viewer shift laterally faster than distant ones. This is how the headset tricks the brain into believing the scene has "near" and "far".

    Binocular occlusion: objects in the foreground, and objects in front of other objects, appear closer, giving a simple ordering of relative distance; if each eye sees slightly different occlusion, the brain reads that as depth.

    Vergence: when you stare at something, the closer it is, the more the eyes must turn inward to keep it centered in the field of view; if it is far away, effectively at infinity, the eyes diverge. This movement gives the brain the data it needs to estimate distance.

    (A minimal three.js sketch of the two-view rendering behind the first two cues appears at the end of this article.)

    Most headsets provide no focus cues at all: the entire scene is always in focus, because it is displayed on a flat screen at a fixed distance from the user's eyes. Yet the eyes still converge and diverge toward whatever part of the changing stereo image draws their attention. The brain does not like this mismatch; the process, known as the vergence-accommodation conflict, causes "visual discomfort, loss of image quality, dizziness, headaches and eye strain".

    So who is working on this?

    Nvidia is collaborating with Stanford University on a new type of display. The technique, called the "light field stereoscope", uses a dual display made of two LCD panels 5 mm apart. The headset uses a microlens array to turn each image into a bundle of separate rays, tracking and reproducing where each ray comes from and where it goes. That makes it much easier for the human eye to lock onto focus cues at different depths. The other company is Magic Leap, which claims it will let the user's eyes process light from the real world alongside the virtual image.
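    As promised above, here is a minimal three.js sketch of the conventional two-view rendering that produces binocular disparity and, once the camera pose is driven by head tracking, motion parallax. It is an illustration only, not code from any project mentioned in the article; the eyeSep value and scene contents are assumptions.

```typescript
// Two slightly offset renders of the same scene: the only depth cues a
// conventional flat-panel VR display provides. Focus cues are absent, because
// everything is drawn on one fixed-focus plane.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial()));

const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const stereo = new THREE.StereoCamera();
stereo.eyeSep = 0.064; // roughly a 64 mm interpupillary distance (assumed value)

function render() {
  requestAnimationFrame(render);
  // In a real headset the camera pose comes from head tracking;
  // moving the head is what produces motion parallax.
  stereo.update(camera);

  const w = window.innerWidth / 2;
  const h = window.innerHeight;
  renderer.setScissorTest(true);

  renderer.setViewport(0, 0, w, h);
  renderer.setScissor(0, 0, w, h);
  renderer.render(scene, stereo.cameraL); // left-eye view

  renderer.setViewport(w, 0, w, h);
  renderer.setScissor(w, 0, w, h);
  renderer.render(scene, stereo.cameraR); // right-eye view, offset by eyeSep

  renderer.setScissorTest(false);
}
render();
```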

  • VR/AR

    A Taobao Buy+ tutorial: how to shop in VR on Taobao

    Many people have no doubt already tried VR shopping with Taobao Buy+, but some still have not. Below is an introduction to how to shop in VR with Taobao Buy+; let's take a look.

  • VR/AR

    Why the Moto Z drops frames in Google Daydream VR

    Google has confirmed that Lenovo's Moto Z and Moto Z Force are Daydream VR-ready phones, yet many users see dropped frames when running Google Daydream VR on the Moto Z. Why does this happen? Let's analyze the possible causes.

    Owners of these two phones only need to buy the 79-dollar Daydream View headset to enjoy Daydream content. Moto Mods expansion modules are the Moto Z's headline feature, and compared with other phones, an add-on battery module should give you a longer Daydream VR session.

    You may also wonder how the Moto Z's Daydream VR performance compares with Google's own Pixel and Pixel XL. With that question in mind, we put the Moto Z through a test. After placing the Moto Z in the Daydream View headset, you need to slide the phone back and forth slightly to line up with the NFC tag, which launches the Daydream app in Daydream mode. If you would rather not use NFC, you can start the Daydream app by pairing the Daydream controller over Bluetooth.

    We found that any VR app or game the Pixel can run, the Moto Z can run as well, such as Action Bowl, Danger Goat and Fantastic Beasts. One of the detailed requirements for Google's Daydream-ready certification is that the device must sustain a 60 Hz refresh rate in VR mode. In most cases the Moto Z holds its own against the Pixel, but in somewhat heavier games such as Drift it falls behind. Drift is a VR shooter in which the player dodges enemy fire, collects blue crystals and hits a final target, with difficulty rising level by level.

    In short, playing Drift on the Moto Z drops frames and cannot hold the 60 fps standard. (The original article includes a chart of the Moto Z's frame rate over a given period.) Keeping the frame rate pinned at 60 fps matters a great deal, because dropped frames are enough to make some users uncomfortable or even nauseous.

    To explain the difference, we looked at a few possible causes of the frame drops (readers with other theories are welcome to discuss them in the comments):

    Device temperature: normally, a stretch of Daydream VR inevitably warms the phone up, which can cause frame drops. But we saw frames dropping right at the start of the game, so temperature is an unlikely culprit.

    Processor: the Google Pixel uses a Qualcomm Snapdragon 821, while the Moto Z uses a Snapdragon 820. During testing we disabled every non-essential app on the phones. Comparing CPU usage (the original article includes a chart), while playing Drift the figure was 18-19% on the Pixel XL versus 14-15% on the Moto Z.

    The game itself: Drift may simply not have been fully optimized or updated. Many Daydream apps have been updated repeatedly since first release, with bug fixes, new features or support for the latest Google Daydream SDK. A Motorola engineer told us that a Daydream app needs to be compiled against Google Daydream SDK 1.0.3 or later to perform well on the Moto Z, and many apps were originally compiled with older SDKs, though they are now gradually being updated.

  • VR/AR

    Magic Leap's new patent: an MR headset that could treat color blindness

    The United States Patent and Trademark Office recently granted a patent application from the startup Magic Leap relating to treating color blindness with augmented reality. For color-blind VR users this is tremendous news: they may eventually be able to enjoy normal colors too.

    Color blindness is a reduced or completely absent ability to distinguish colors. How do people perceive a colorful world in the first place? The retina contains a type of photoreceptor, the cone cell, carrying three photopigments sensitive to red, green and blue light. Each pigment responds most strongly to one primary color and to varying degrees to the other two. If one pigment is missing, perception of that color is impaired, which shows up as color blindness or color weakness.

    Magic Leap's patent describes an augmented reality device that can be used to diagnose and treat eye problems, color blindness in particular. The claimed device is an augmented reality head-mounted ophthalmic system comprising a wearable augmented reality display platform; the display platform includes a display with at least one light source, and the wearable device is configured to selectively modify light from the world according to the user's color-recognition deficiency.
