Issue of February 2004
Future Computing

Computing in the 21st Century

What computing technologies can we expect in the near future? A recent science fair held by Microsoft Research Asia in Beijing unveiled some possibilities. By Mark Feldman

In early November, Beijing-based Microsoft Research Asia (MSRA) celebrated its fifth anniversary with a science fair and conference entitled "Computing in the 21st Century". At the same time, Microsoft announced the founding of the Beijing Advanced Technology Center (ATC).

The Beijing ATC, which will have 80 engineers in its first year, will accelerate technology transfer from MSRA to Microsoft product groups. Zhang Ya-Qin, managing director of MSRA, says: "By streamlining technology transfer with the ATC, we will enable researchers to remain focused on solving the hard problems in computer science that the industry is faced with."

The ATC will also work to make human-computer interaction more natural for Asian users in their native languages.

In fulfillment of an MoU signed by Microsoft CEO Steve Ballmer and Vice-Premier Zeng Peiyan in 2002, the ATC will also license some of its technologies to local Chinese partners to help build the local software ecosystem.

Products of Tomorrow

The science fair demonstrated many research projects that can analyze image data and interact intelligently with users.

MSRA technology used faces as biometric login passwords, identified individuals in photographs, classified photographs according to type of scene, and automatically edited home movies.

Researchers even showed off software that could learn about different types of sports and automatically create highlights of sporting events for television news programs.

Other research focused on maintaining connectivity in a heterogeneous wireless Internet of varying speeds and coverage.

The SMART (Scalable Media Adaptation and Robust Transport) and ProFIT (Pro-active Federated Intelligent Sameness) projects claimed improvements to the technical foundations of roaming-session and quality-of-service management.

These improvements allow a user with a single wireless device to move seamlessly between existing CDMA, GPRS and Wi-Fi networks while watching streaming video. The quality of the video adapts to the characteristics of the network, and even yields bandwidth to other activities that require fast response, such as interactive text chatting.
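
Neither project's internals were disclosed at the fair, but the kind of decision such adaptation involves can be sketched in a few lines: pick a video bitrate that fits the bandwidth currently measured on whichever network the device is attached to, while reserving headroom for latency-sensitive traffic such as text chat. The code below is purely illustrative; the bitrates, the reserve figure and the function names are invented, not Microsoft's.

```python
# Illustrative sketch only -- not the SMART/ProFIT implementation.
# Choose a streaming bitrate that fits the current network while leaving
# headroom for interactive traffic such as text chat.

VIDEO_BITRATES_KBPS = [64, 128, 256, 512, 1024]   # hypothetical encodings

def pick_bitrate(measured_bandwidth_kbps: float,
                 interactive_reserve_kbps: float = 32.0) -> int:
    """Return the highest bitrate that fits under the measured bandwidth
    after reserving capacity for latency-sensitive traffic."""
    budget = measured_bandwidth_kbps - interactive_reserve_kbps
    usable = [b for b in VIDEO_BITRATES_KBPS if b <= budget]
    return usable[-1] if usable else VIDEO_BITRATES_KBPS[0]

# Moving from Wi-Fi to GPRS, the stream steps down instead of stalling.
for network, bandwidth in [("Wi-Fi", 2000.0), ("CDMA", 300.0), ("GPRS", 80.0)]:
    print(network, pick_bitrate(bandwidth), "kbps")
```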

The unified wireless device of the future might also feature new Microsoft technologies that segment large Web pages into smaller "visually related" semantic blocks which can be more easily viewed on a small screen.
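
How that segmentation works was not spelled out, and the real research reportedly relies on visual layout cues rather than markup alone. As a crude stand-in, the sketch below uses Python's standard html.parser to split a page into its top-level block elements, each of which could then be shown separately on a small screen; the tag list and class names are invented for the example.

```python
# Crude stand-in for "semantic block" segmentation: split an HTML page into
# its top-level block-level elements so each can be shown on its own screen.
from html.parser import HTMLParser

BLOCK_TAGS = {"div", "table", "p", "ul", "ol", "h1", "h2", "h3"}

class BlockSplitter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0        # nesting depth inside a block element
        self.blocks = []      # collected text of each top-level block
        self.current = []

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_TAGS:
            if self.depth == 0:
                self.current = []
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in BLOCK_TAGS and self.depth > 0:
            self.depth -= 1
            if self.depth == 0:
                text = " ".join(self.current).strip()
                if text:
                    self.blocks.append(text)

    def handle_data(self, data):
        if self.depth > 0 and data.strip():
            self.current.append(data.strip())

splitter = BlockSplitter()
splitter.feed("<div><h1>News</h1><p>Story one.</p></div><div>Advertisement</div>")
print(splitter.blocks)   # ['News Story one.', 'Advertisement']
```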

A mobile phone on display translated spoken phrases between Mandarin and English, aimed at future visitors to China, and could translate Web pages as well.

A Comeback For Paper?

Microsoft's Universal Pen (uPen) project is attempting to bring the convenience of tablet computing to paper. The prototype uPen captures a digital version of whatever you write on a piece of paper. It can save handwritten notes and drawings, or it can recognize clicks and menu selections to control an overhead PowerPoint presentation from a printed copy.

The key to this is a special pen incorporating a tiny camera, a positioning sensor and a Bluetooth transmitter. In addition, encoded location data is transparently printed onto the paper, like a watermark, allowing the uPen to determine where on the page it is writing.
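
Microsoft has not published the uPen's encoding scheme, but the general idea of a positional watermark can be illustrated simply: nudge each printed dot slightly from its grid position so that it carries two bits, and let a small patch of dots spell out the coordinates the pen camera is looking at. Everything below (the bit widths, the offset code, the layout) is invented for illustration.

```python
# Illustration only -- the uPen's real encoding has not been published.
# Each printed dot is nudged in one of four directions from its nominal
# grid position, carrying two bits; a row of eight dots therefore encodes
# a 16-bit (x, y) page position.

OFFSETS = {(0, -1): 0, (1, 0): 1, (0, 1): 2, (-1, 0): 3}   # nudge -> 2 bits
DIRECTIONS = {bits: nudge for nudge, bits in OFFSETS.items()}

def encode_position(x: int, y: int, bits: int = 8) -> list:
    """Encode (x, y), each in [0, 2**bits), as a sequence of dot nudges."""
    value = (x << bits) | y
    return [DIRECTIONS[(value >> (2 * i)) & 0b11] for i in range(bits)]

def decode_position(dots: list, bits: int = 8) -> tuple:
    """Recover (x, y) from the dot nudges seen by the pen camera."""
    value = 0
    for i, nudge in enumerate(dots):
        value |= OFFSETS[nudge] << (2 * i)
    return (value >> bits) & ((1 << bits) - 1), value & ((1 << bits) - 1)

print(decode_position(encode_position(137, 42)))   # (137, 42)
```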

One possible use is for filling in pre-printed forms when a computer terminal is not practical or not available. As the user writes, the content could be automatically recognized and stored in an online database, eliminating a manual data entry task later.
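
As a toy illustration of that last step, the snippet below writes a set of already-recognised form fields straight into a local SQLite database; the table layout and field names are invented, and it assumes the handwriting recognition and field mapping have already happened upstream.

```python
# Toy example: recognised form fields go straight into a database,
# eliminating the later manual data-entry pass. Field names are invented.
import sqlite3

def store_form(db_path: str, fields: dict) -> None:
    """Insert one filled-in form (field name -> recognised text) as a row."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS forms (name TEXT, address TEXT, phone TEXT)")
    conn.execute("INSERT INTO forms VALUES (:name, :address, :phone)", fields)
    conn.commit()
    conn.close()

store_form("forms.db", {"name": "Li Wei", "address": "Beijing", "phone": "555-0100"})
```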

Books galore

Technologies discussed on the second day of the conference were not limited to Microsoft's own, but each promised to dramatically alter our lives.

In particular, Professor Raj Reddy, 1994 Turing Award winner and professor of computer science at Carnegie Mellon University, highlighted trends in disk storage. He explains that by 2013, "profound changes will become possible as a result of the fact that we will be able to have a terabyte of storage capacity for one dollar—which is less than the earnings capacity per day of humans in some of the poorest countries."

The Million Book Digital Library Project is a first step to having instant access to all human knowledge online, most of which you will soon have the capacity to store on your own PC.

There are an estimated 100 million books, in many languages, currently in existence. But, "if you wanted to digitise just a million books, each of about 400 pages, and scan in one page every second, it would take 100 years," says Reddy.

With support from various governments, his project is dividing the task among 15 centers in India, 14 in China and one in Egypt. By the end of 2003 they will have scanned 100,000 books, and will have the capacity to scan a million pages per day thereafter.
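
Putting the project's own figures together (roughly 400 pages per book, and a claimed capacity of a million scanned pages per day), the remaining effort works out to a little over a year of scanning:

```latex
% Back-of-the-envelope calculation using the figures quoted above.
\[
  10^{6}\ \text{books} \times 400\ \tfrac{\text{pages}}{\text{book}}
  = 4\times10^{8}\ \text{pages},
  \qquad
  \frac{4\times10^{8}\ \text{pages}}{10^{6}\ \text{pages/day}}
  \approx 400\ \text{days}.
\]
```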

Among the daunting issues are selecting which books to scan first, shipping books to scanning centers, working with the mere 99 percent accuracy of current OCR technology, providing royalties for copyrighted works, and making the result available to millions of users in different languages.

HAL, Are You There Yet?

In the final presentation, Dr Lee Kai-Fu, vice-president of the Natural Interactive Services Division at Microsoft, reminds us that since 1950, there have been repeated predictions that conversational computers were just 10 years away. Yet HAL, the intelligent computer of 2001: A Space Odyssey, is still nowhere in sight.

According to Lee, past predictions have suffered from immature technology, oversold Hollywood expectations, and underestimated difficulty in developing the technology. But he says that we have learned some lessons.

First, conversational computers need technologies in Speech Recognition, Natural Language Recognition, and Text-To-Speech Conversion, each of which currently works well only within a limited domain. It is therefore practical to "change the world one domain at a time".

Second, algorithms need more data to improve, yet on data alone error rates halve only approximately every seven years; Moore's Law helps those improvements arrive faster. Taken together, speech recognition error rates halve roughly every 60 months.
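
That combined figure implies a simple exponential decay in the error rate, which is the kind of extrapolation behind the dates Lee goes on to quote; here $E_0$ stands for today's error rate and the five-year halving time comes from the 60-month figure above:

```latex
% Halving model implied by the 60-month figure above.
\[
  E(t) \approx E_0 \cdot 2^{-t/5}, \qquad t \ \text{measured in years}.
\]
```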

Finally, you cannot extrapolate from one data point. The realm of Natural Language Recognition is still beyond predictability. But we now have enough data points that Lee predicts that speech generation will approach human naturalness in 2010, and that speech recognition will approach the human error rate in 2011.

This article first appeared in Network Computing Asia

 
     
