The hottest Internet keyword worldwide in 2011: Augmented Reality (AR)
This technology overlays virtual images, text, sound effects, and other digital content onto real-life scenes via the network, giving users a sensory experience that goes beyond reality.
For example, in big cities such as Paris, Barcelona, and London, tourists can spot the nearest metro station at a glance; point a phone camera at a passer-by, and you can see the profile he or she has made public on social networking sites; sweep it across the Louvre, and photos of it, exhibition information, nearby dining suggestions, and more are all gathered in one go.
Or take another example: "pick up" text and charts from a printed book with your fingers, drop them onto a plain sheet of paper, and edit, modify, and print them as you would on an iPad; afterwards, you can gently "pick up" the edited material again with a finger and put it back into a conventional computer.
Sounds magical, doesn't it? In fact, many such applications are already available. The way we interact with the digital world is being completely redefined.
Biographical Sketch
Born in India in 1981, Pranav Mistry is now a research assistant and PhD candidate at the MIT Media Lab. Before joining the lab, he was a researcher at Microsoft.
In 2000, Pranav took apart his own mouse and one borrowed from a friend, salvaged the four rollers, and, using sliding belts and springs, built an interface device that detects hand movements. It cost only two US dollars, and through it, movements in the physical world could be reflected in the digital world. Pranav has also invented digital sticky notes that sync what is written on a paper sticky note to a computer, a pen that draws in 3D, and a physical-world Google Maps that works through gesture recognition alone, with no keywords to type.
In 2009, Pranav demonstrated his inventions at TED. In the Q&A session that followed, he announced that he would open the source code behind "SixthSense computing," extending its possibilities without limit so that more ordinary people could benefit from it.
An edited excerpt of Pranav's talk at the TED (Technology, Entertainment, Design) conference follows.
Pranav Mistry: The thrilling potential of SixthSense technology
We grew up interacting with the physical objects around us. There is an enormous number of them that we use every day. Unlike most of our computing devices, these objects are much more fun to use. When you talk about objects, one other thing automatically comes attached to that thing, and that is gestures: how we manipulate these objects, how we use these objects in everyday life. We use gestures not only to interact with these objects, but we also use them to interact with each other.
(For reasons of space, Pranav's introduction to his four earlier inventions is omitted here; that passage can be found in the interactive section of the magazine's website. He then changed direction, and the "SixthSense" technology was born.)
Fig. 1 A head-mounted projector
Pixels are actually, right now, confined in these rectangular devices that fit in our pockets. Why can I not remove this confinement and take that to my everyday objects, my everyday life, so that I don't need to learn a new language for interacting with those pixels?
So, in order to realize this dream, I actually thought of putting a big-size projector on my head. I think that's why this is called a head-mounted projector (see Fig. 1),
Fig. 2 The SixthSense device
isn't it? I took it very literally: I took my bike helmet and put a little cut over there so that the projector actually fits nicely. So now, what I can do — I can augment the world around me with this digital information.
But later, I realized I wanted to interact with those digital pixels. So I put a small camera over there that acts as a digital eye. Later, we moved to a much better, consumer-oriented pendant version of that, which many of you now know as the SixthSense device (see Fig. 2).
But the most interesting thing about this particular technology is that you can carry your digital world with you wherever you go. You can start using any surface, any wall around you, as an interface. The camera is actually tracking all your gestures. Whatever you're doing with your hands, it's understanding that gesture. And, actually, if you see, there are some color markers that, in the beginning version, we are using with it. You can start painting on any wall. You stop by a wall, and start painting on that wall.
Fig. 3 Take a photo by just doing the gesture
But we are not only tracking one finger. We are giving you the freedom of using both of your hands, so you can actually use both of your hands to zoom into or zoom out of a map just by pinching. The camera is actually just taking all the images and doing the edge recognition and the color recognition, and so many other small algorithms are going on inside. So, technically, it's a little bit complex, but it gives you an output which is more intuitive to use, in some sense. But I'm more excited that you can actually take it outside. Rather than getting your camera out of your pocket, you can just do the gesture of taking a photo, and it takes a photo for you (see Fig. 3).
(Applause) Thank you.
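The talk does not reproduce the SixthSense source code, but the color-marker tracking and pinch-to-zoom idea described above can be sketched in a few lines. The following is a minimal, numpy-only illustration under assumed conditions (saturated marker colors, a single blob per marker); the function names `find_marker` and `zoom_factor` are hypothetical, not part of any released SixthSense API.

```python
import numpy as np

def find_marker(img, color, tol=30):
    """Return the (row, col) centroid of pixels within `tol` of `color`,
    or None if the marker is not visible in this frame."""
    mask = np.all(np.abs(img.astype(int) - np.array(color)) <= tol, axis=-1)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def zoom_factor(p1_old, p2_old, p1_new, p2_new):
    """Ratio of fingertip separation between two frames: > 1 means
    the fingers moved apart, i.e. a zoom-in pinch gesture."""
    dist = lambda a, b: np.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_new, p2_new) / dist(p1_old, p2_old)

# Synthetic 100x100 RGB frame standing in for a camera image,
# with a red and a blue marker blob on the two fingertips.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[10:14, 10:14] = (255, 0, 0)   # red marker
frame[10:14, 50:54] = (0, 0, 255)   # blue marker

red = find_marker(frame, (255, 0, 0))
blue = find_marker(frame, (0, 0, 255))
```

A real implementation would run this per frame on live video (e.g. via OpenCV capture) and feed the zoom factor into the map renderer; the per-frame logic stays this simple.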
And later I can find a wall, anywhere, and start browsing those photos or maybe, "OK, I want to modify this photo a little bit and send it as an email to a friend." So, we are looking at an era where computing will actually merge with the physical world.
Fig. 4 Making a call
And, of course, if you don't have any surface, you can start using your palm for simple operations. Here, I'm dialing a phone number just using my hand (see Fig. 4). The camera is not only understanding your hand movements but, interestingly, is also able to understand what objects you are holding in your hand. For example, in this case, the book cover is matched against so many thousands, or maybe millions, of books online, checking out which book it is. Once it has that information, it finds more reviews about it, or maybe the New York Times has a sound review on it, so you can actually hear, on a physical book, a review as sound. That was Obama's visit last week to MIT. So, I was watching the live video of his talk, outside, on just a newspaper. Your newspaper will show you live weather information rather than having it updated — like, you have to check your computer in order to do that, right? (Applause) When I'm going back, I can just use my boarding pass to check how much my flight has been delayed, because at that particular time I'm not feeling like opening my iPhone and checking out a particular icon. And I think this technology will not only change the way — yes. It will change the way we interact with people, also, not only the physical world.
Fig. 5 The original Pong game from the 1970s. Today your two feet can act as the paddles and interact with the ball.
You can start using your palm for simple operations. The fun part is, I'm going to the Boston metro, and playing a pong game (see Fig. 5) inside the train, on the ground, right? And I think imagination is the only limit of what you can think of when this kind of technology merges with real life. But many of you will argue that all of our work is not only about physical objects.
We actually do lots of accounting and paper editing and all those kinds of things; what about that? And many of you are excited about the next generation of tablet computers to come out on the market. So, rather than waiting for that, I actually made my own, just using a piece of paper. What I did here was remove the camera — all webcam cameras have a microphone inside. I removed the microphone, and then just pinched that — like I just made a clip out of the microphone — and clipped it to a piece of paper, any paper you find around you.
Fig. 6 Browsing
So now the sound of the touch tells me exactly when I'm touching the paper, and the camera is actually tracking where my fingers are moving. You can of course watch movies. And you can of course play games. Here, the camera is actually understanding how you're holding the paper and playing a car-racing game. (Applause)
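The trick described here, using the clipped-on microphone to detect the instant of a touch, amounts to onset detection on the audio signal. Below is a minimal sketch under assumed parameters (an amplitude threshold and a short refractory window so one tap registers as one event); `touch_onsets` is a hypothetical name, not code from the talk.

```python
import numpy as np

def touch_onsets(samples, rate, threshold=0.5, refractory=0.05):
    """Return the times (in seconds) where |amplitude| first crosses
    `threshold`, ignoring re-crossings within `refractory` seconds so
    that a single tap produces a single event."""
    onsets = []
    last = float("-inf")
    for i, s in enumerate(samples):
        t = i / rate
        if abs(s) >= threshold and t - last >= refractory:
            onsets.append(t)
            last = t
    return onsets

rate = 1000                 # samples per second
signal = np.zeros(1000)     # one second of silence...
signal[200:210] = 0.9       # ...with a tap at t = 0.2 s
signal[700:705] = 0.8       # and another at t = 0.7 s

taps = touch_onsets(signal, rate)  # → [0.2, 0.7]
```

Each detected onset would then be paired with the fingertip position the camera reports at that instant, turning sound into the "touch down" event and vision into the "touch where."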
Many of you must already have thought, OK, you can browse (see Fig. 6). Yes. Of course. You can browse any website, or you can do all sorts of computing on a piece of paper wherever you need it. More interestingly, I'm interested in how we can take that in a more dynamic way. When I come back to my desk, I can just pinch that information back to my desktop so I can use my full-size computer. (Applause)
And why only computers? We can just play with papers. The paper world is interesting to play with. So here, I'm taking a part of a document, and putting over here a second part from a second place, and I'm actually modifying the information that I have over there. Yeah. And I say, "OK, this looks nice, let me print that thing out." So I have a printout of that thing, and now the workflow is more intuitive — the way we used to do it maybe 20 years back, rather than now switching between these two worlds.
So, as a last thought, I think that integrating information into everyday objects will not only help us get rid of the digital divide, the gap between these two worlds, but will also help us, in some way, to stay human, to be more connected to our physical world. And it will help us, actually, not be machines sitting in front of other machines.
That’s all. Thank you. (Applause)