A visit to Intel Labs is always an interesting experience — this is where researchers are given free rein to imagine how technology can change the way we interact with our devices.
The research going on currently ranges from a new kind of car infotainment system to a kitchen helper that recognises and tracks vegetables and other objects on a table.
Mobile augmented reality
Though not the first of its kind, the augmented reality system demonstrated at Intel Labs automatically recognises landmarks.
The device, which runs on an Intel Atom processor, uses a combination of location information from a GPS chip and a visual recognition engine which uses the built-in camera to determine just what the user is looking at.
This information is then fed to a server which houses about 500,000 photos of famous locations around the world together with the location information and Wikipedia entries on those landmarks.
From there, information is relayed back to the smartphone and displayed as an augmented reality overlay giving information on the landmark.
This sophisticated next-generation object recognition greatly enhances the accuracy of the results compared with just using GPS location information alone.
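The two-stage lookup described above can be sketched in code: GPS narrows the candidate set to nearby landmarks, and the visual recogniser's match scores pick the winner. This is a minimal illustration, not Intel's implementation; the landmark coordinates are real but the similarity scores and function names are invented for the example.

```python
import math

# Toy landmark database standing in for the server's 500,000-photo collection.
LANDMARKS = [
    {"name": "Eiffel Tower", "lat": 48.8584, "lon": 2.2945},
    {"name": "Arc de Triomphe", "lat": 48.8738, "lon": 2.2950},
    {"name": "Notre-Dame", "lat": 48.8530, "lon": 2.3499},
]

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two coordinates.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def identify_landmark(lat, lon, visual_scores, radius_km=2.0):
    """Filter landmarks by GPS proximity, then rank by visual-match score."""
    nearby = [lm for lm in LANDMARKS
              if distance_km(lat, lon, lm["lat"], lm["lon"]) <= radius_km]
    if not nearby:
        return None
    return max(nearby, key=lambda lm: visual_scores.get(lm["name"], 0.0))

# The phone is near two towers; GPS alone is ambiguous, but the camera's
# recognition scores (invented here) disambiguate them.
scores = {"Eiffel Tower": 0.92, "Arc de Triomphe": 0.31}
match = identify_landmark(48.8580, 2.2950, scores)
```

GPS alone would accept either Paris monument within range; combining it with the visual score is what lifts the accuracy, as the article notes.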
Cars with built-in cameras for navigation and sensing are not new, but until now most of them have had the cameras pointed forwards, towards oncoming traffic.
However, Vu Nguyen, a researcher from the Intel Labs in Hillsboro, showed off a context-aware system which has cameras pointed inside the cabin directly at the driver, in addition to the cameras facing oncoming traffic.
These inward-facing cameras use face detection techniques to continually check on the status of the driver, including whether he or she is looking sideways or facing forwards.
Using this information, the face recognition software and context engine are able to determine whether the driver is paying attention to the road or is, say, looking sideways and having a conversation with a passenger.
At the same time, the system uses cameras to sense oncoming traffic and, if it detects a possible collision, plays an alert sound.
A more strident alert will sound if the system detects that the user is not looking forwards and thus may not be aware of the oncoming danger.
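The escalating alert behaviour described above amounts to a small decision rule: a collision threat with an attentive driver gets a standard warning, while the same threat with a distracted driver gets a stronger one. The sketch below is a guess at the logic, not Intel's code; the function and alert names are invented.

```python
def choose_alert(collision_risk, driver_facing_forward):
    """Pick an alert level from the collision sensor and face-detection state.

    collision_risk: True if the forward cameras detect a possible collision.
    driver_facing_forward: True if the inward camera sees the driver's face
    oriented towards the road.
    """
    if not collision_risk:
        return "none"
    # Escalate when the driver may not be aware of the oncoming danger.
    return "standard" if driver_facing_forward else "strident"

# Driver is chatting with a passenger when a hazard appears ahead.
alert = choose_alert(collision_risk=True, driver_facing_forward=False)
```

Here `alert` comes back as `"strident"`: the inward-facing camera's judgement is what decides how loudly the system interrupts.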
Smart computing islands
Another spin on context aware computing is how sensing can be used to help in performing one’s kitchen duties.
Oasis (Object-Aware Situated Interactive System) uses an RGB 3D camera and a real-time vision algorithm to recognise and track everyday objects and gestures.
A micro-projection system is also used to project menus and information right onto the working surface.
For example, the user can put a bell pepper on the table and the system will immediately recognise it as such and project a label next to the bell pepper.
As the user puts other ingredients on the table, the system will identify them and then pop up a number of possible recipes using those ingredients.
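The recipe-suggestion step can be pictured as a simple set match: once the vision system has labelled the ingredients on the table, any recipe whose ingredient list is fully covered becomes a candidate. A minimal sketch, with made-up recipe data and no claim to match how Oasis actually works:

```python
# Toy recipe book mapping each dish to the ingredients it requires.
RECIPES = {
    "stuffed peppers": {"bell pepper", "rice", "onion"},
    "stir fry": {"bell pepper", "onion", "soy sauce"},
    "tomato soup": {"tomato", "onion"},
}

def suggest_recipes(on_table):
    """Return recipes whose required ingredients are all on the table."""
    present = set(on_table)
    # A recipe qualifies when its ingredient set is a subset of what
    # the camera has recognised on the work surface.
    return sorted(name for name, needed in RECIPES.items()
                  if needed <= present)

# The user has placed four recognised ingredients on the table.
suggestions = suggest_recipes(["bell pepper", "onion", "rice", "soy sauce"])
```

With those four ingredients recognised, both pepper dishes qualify while the tomato soup does not, mirroring how the projected menu narrows as ingredients appear.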
The system also recognises the position of hands and fingers within the area so the user can directly touch and interact with the menus projected on the table. — TAN KIT HOONG
Related Stories:
IDF 2010: Intel banks on smart computing
IDF 2010: Get in touch with your devices