If you work on a computer most of the day, it takes the brain a moment to accept that it can’t just search the real world, too. But why shouldn’t it be able to?
Like the previous post, this is a higher-fidelity take on an earlier post. While it still makes sense, I think you should be able to speak to it as well. And I guess next time I’m close to somewhere I’ve searched a realtime photo before, it should just present it to me so I don’t have to walk over there physically. Although that has a certain charm to it.
Privacy concerns briefly suspended, the app could submit all searches and make the info available to people who aren’t at the actual location.
I was reminded of this old idea and decided to step up the fidelity to see if I would learn more about it.
I have a feeling most highlights will be full sentences, so it could be as simple as tapping a sentence and being done. Which reminds me: it’s odd that Kindle’s highlighter doesn’t default to sentences.
It would also be cool if this one could remember pages and re-add the highlights when it sees them again.
It should probably also save everything it sees as an image, so you could get the context of a highlight.
Taking your eyes off the road is one of the most common causes of traffic accidents. So why don’t we change the in-car display to an on-road display? And while we’re at it, why not place a depth camera (a Kinect, basically) next to the projector and simply not show anything in dangerous situations? And while we’re still at it, let’s put a projector on the back of the car showing the drivers behind you where you’re going, so they can turn off their navigation and just follow you.