Friday, April 27, 2007

Graphical User Interfaces – Moving to Gesture Recognition

As the user base shifts away from the PC to handheld and embedded devices, the user interface will shift with it. The expert user is no longer the primary user of computational technologies. As non-experts come into the picture, the user interface must accommodate them -- witness the rise of the graphical user interface in the 1980s, when computers came into mainstream use in the form of the personal computer.

I’ve maintained for a long time now that the video game industry is the one to watch. As the old line goes, good engineers develop, but great engineers steal. We should be stealing from the video game industry. With its high-resolution graphics and 3D techniques, it appears to be the place to shop for new technology. But there are other technologies to consider, such as gesture recognition.

Gesture recognition is not new. In fact, it goes back to 1964, when ARPA funded the RAND tablet. If you can remember the RAND Corporation, you’re really dating yourself. Since it’s been around for so long, why hasn’t it penetrated further into usage? Like most emerging technologies, it’s not the technology itself that defines its success but rather how well it works with other technologies and how many other technologies it requires to run well.

One application of gesture recognition is demonstrated in a paper from MIT, in which the researchers used kitchen surfaces as display space and overlaid digital information on them for the user to manipulate. They used multiple projectors and actually moved the projection whenever the object it was cast on, such as a table, moved. The image also changed based on the task the user was performing. Applications include showing the contents of the refrigerator on the outside of its door, noting the items that need to be purchased; a display on the dishwasher that shows the state of its contents, dirty or clean; and an application called Heatsink, which measures the temperature of water coming from the tap and projects a color onto the water stream to indicate the temperature to the user.
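At its core, the Heatsink idea reduces to a simple mapping from a sensor reading to a projected color. Here is a minimal Python sketch of that mapping; the temperature range and blue-to-red color scheme are my own assumptions, not the calibration from the MIT paper.

```python
# Hypothetical sketch of the Heatsink idea: map a measured water
# temperature to a projected color (blue = cold, red = hot).
# The range endpoints below are illustrative assumptions.

def temperature_to_rgb(temp_c, cold_c=10.0, hot_c=50.0):
    """Linearly blend from blue (cold) to red (hot)."""
    # Clamp the reading into the expected range, then normalize to 0..1.
    t = max(cold_c, min(hot_c, temp_c))
    frac = (t - cold_c) / (hot_c - cold_c)
    red = int(255 * frac)
    blue = int(255 * (1.0 - frac))
    return (red, 0, blue)

# Example: lukewarm tap water projects as a purple tint.
print(temperature_to_rgb(30.0))   # -> (127, 0, 127)
```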

The MIT kitchen case study makes clear that a whole range of supporting technologies is required to make gesture recognition work, including image processing, temperature sensing, proximity sensing, object tracking, and more.

While MIT may be instrumenting kitchens to test out the technology, others are actively installing gesture recognition systems in your local mall. The one in the mall nearest my home is called Reactrix. It consists of a ceiling-mounted projector that displays vendor brands, along with games, on the floor of the mall. Kids are enticed to participate in the various games and questionnaires. The system “reads” the kid’s movements over the projected image and reacts to them. For example, a soccer field is displayed with a ball in the middle. As the kid tries to kick the image of the ball, Reactrix sends the ball floating across the floor, encouraging the kid to pursue it. To see this in action for yourself, visit the Reactrix web site, which shows a map of the USA with over 160 locations.

As with all emerging technologies, there are bugs to work out of the system. For gesture recognition, it comes down to interpreting the gestures the user makes. A recent article describes how a number of companies use heuristics and other algorithms to interpret a user’s movements.
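To make the interpretation problem concrete, here is a minimal sketch of one common heuristic, frame differencing: count how many pixels changed inside a region of interest and call it a gesture if enough did. The frame capture, region, and thresholds are illustrative assumptions; production systems like Reactrix layer far more filtering and tracking on top.

```python
import numpy as np

# Minimal frame-differencing heuristic: if enough pixels change inside
# a region of interest (say, where the soccer ball is projected),
# treat it as a "kick". Camera capture is abstracted away.

def motion_in_region(prev_frame, curr_frame, region,
                     threshold=30, min_pixels=500):
    """Return True if the region shows enough pixel change to count as a gesture."""
    x0, y0, x1, y1 = region
    # Work in a wider integer type so the subtraction can't overflow uint8.
    prev = prev_frame[y0:y1, x0:x1].astype(np.int16)
    curr = curr_frame[y0:y1, x0:x1].astype(np.int16)
    changed = np.abs(curr - prev) > threshold   # per-pixel change mask
    return int(changed.sum()) >= min_pixels
```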

Wouldn’t it be cool to take a LabVIEW front panel, project it on the wall, and then let the user physically turn the knob, toggle the switch, or even tap twice on a VI to open its front panel?

Best regards,
Hall T.

Friday, April 20, 2007

Quantum Logic Devices – Nanotechnology

The field of nanotechnology has received a substantial amount of investment over the past five years. It offers the promise of better materials, better tests, and better results. In this space, an interesting company to watch is Quantum Logic Devices (QLD), which developed a novel nanotechnology-based technique, the single-electron transistor (SET), that combines silicon with biology to perform ultra-sensitive chemical detection. Traditional microarrays require fluorescent dyes that attach to the target chemical and are then read by optical devices, which need large numbers of molecules to be present before anything can be seen.

QLD’s SET system consists of a disposable assay cartridge and an electronic data capture device. The test cartridge is a self-contained bioassay platform containing all the reagents except the sample. The targets are captured and recorded by QLD's nanoelectronic device arrays.

QLD’s founder, Dr. Louis Brousseau, recently wrote a paper for the American Chemical Society describing proof-of-concept experiments that demonstrate that hybridization of 36-base DNA sequences can be detected electronically with a single-electron transistor. The experiment detected target samples at concentrations down to the femtomolar range, which equates to only about 12,000 DNA molecules per 20-microliter drop.
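That figure is easy to sanity-check from the concentration and the drop volume:

```python
# Back-of-the-envelope check of the detection limit quoted above:
# 1 femtomolar target concentration in a 20-microliter drop.
AVOGADRO = 6.022e23          # molecules per mole
conc_mol_per_l = 1e-15       # 1 fM
volume_l = 20e-6             # 20 uL
molecules = conc_mol_per_l * volume_l * AVOGADRO
print(round(molecules))      # ~12,000 molecules
```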

They built a preamp that connects the SET device to a data acquisition board, from which they read the data. Once it’s collected, they use LabVIEW to crunch the data. An increasingly familiar theme among biotech companies is the large volume of data their applications generate. An application easily reaches into the gigabytes because of the number of data points collected, and then the even larger number of data points calculated from them.
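Gigabyte-scale captures generally force chunked, streaming processing rather than loading everything into memory at once. Here is a minimal Python sketch of that pattern; the raw float32 file layout and the particular summary statistics are my own assumptions, not QLD’s actual format (their processing is done in LabVIEW).

```python
import numpy as np

# Reduce a multi-gigabyte capture in fixed-size chunks instead of
# loading it all at once. File layout is assumed: raw float32 samples.

CHUNK = 1_000_000  # samples per chunk

def summarize_capture(path):
    """Stream raw float32 samples and accumulate summary statistics."""
    count, total, peak = 0, 0.0, float("-inf")
    with open(path, "rb") as f:
        while True:
            chunk = np.fromfile(f, dtype=np.float32, count=CHUNK)
            if chunk.size == 0:
                break
            count += chunk.size
            total += float(chunk.sum())
            peak = max(peak, float(chunk.max()))
    mean = total / count if count else 0.0
    return {"samples": count, "mean": mean, "peak": peak}
```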

As an aside, there doesn’t appear to be a leader in technical data management in the biotech area. Most groups use NIH or NSF funding to create their own tools. Since this work is done primarily in academia, there’s little support (it’s mostly shareware), and since no one is out promoting or marketing the tool to others, it doesn’t get very far from the lab that created it. What’s needed is a general-purpose tool that can handle any kind of data. In the life sciences the range is wide, from gene sequencing to gene expression to protein data, as well as other biological data types.

Emerging technologies take time to mature. It’s interesting to watch QLD as they mature theirs.

Best regards,
Hall T.

Friday, April 13, 2007

Next Generation Battery Power – Nanogenerator

Battery power for handheld devices continues to attract novel and innovative methods. Last year, at the Rice Business Plan Competition, the University of Arizona team showed off a technology that used a piezoelectric crystal stimulated with ultrasound waves to recharge a device’s battery. In concept it looked great, but in reality the transmitter had to be within about 18 inches of the device for it to work. Splashpower uses inductive coupling to recharge devices: you lay your portable device on a pad and it recharges without plugs or cables. Inductive coupling works well at low power, which is why it’s used to recharge electric toothbrushes and the like.

At the Rice Business Plan Competition this year, MIT demonstrated the PowerPad, which uses a non-radiative energy field to recharge devices. Essentially, a field oscillating at an object’s resonant frequency can set it vibrating, transferring energy to it without radiating the energy away.

At the Idea2Product Competition held at the University of Texas, the winning team in 2005 was Micro Dynamo, which developed a battery that could be recharged by human motion. It was designed with the military in mind: a soldier could recharge the battery as he walked. It uses ultracapacitors with a rare-earth-magnet dynamo to capture human motion and then slowly recharge the battery, which can hold the charge for a longer period of time.

Now comes the nanogenerator, which uses the mechanical motion of nanoscale hairs to generate electricity. Zhong Lin Wang of Georgia Tech and his graduate students created zinc oxide nanowires and used an atomic force microscope tip to bend the wires and thus create an electrical charge. Zinc oxide wires are piezoelectric and thus produce direct current. Because the device generates electricity from mechanical motion, the researchers can scale it to almost any size and drive it with almost any moving force, such as human motion, water movement, or air flow. And because it doesn’t require chemicals like those found in batteries, it could also be used within the human body to power biosensors. The researchers built a prototype device and were able to produce 0.5 nanoamperes of current for more than an hour.
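A quick back-of-the-envelope calculation puts that output in perspective:

```python
# Total charge delivered by 0.5 nA sustained for one hour.
current_a = 0.5e-9           # 0.5 nanoamperes
seconds = 3600               # one hour
charge_c = current_a * seconds
print(charge_c)              # 1.8e-06 C, i.e. about 1.8 microcoulombs
```

Tiny by battery standards, but for an implanted biosensor sipping power, scaling up arrays of these wires is the interesting prospect.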

Best regards,
Hall T.

Friday, April 06, 2007

Mobile phone to PC Interfacing – Twitter

I remember the early days of the National Instruments user conference. It was actually called a user conference in the first few years, until it was renamed NI Week. I guess you have to meet for the better part of a week before you can call it that. The first few user conferences lasted two days and were held in a hotel with conference rooms. If you came across a user with a topic of interest to someone else at NI, all you had to do was walk down the hallway, and you’d find the person you were looking for in ten to fifteen minutes. After a few years, the event moved from the hotel to the Austin Convention Center, and this was no longer possible. If you didn’t set up a meeting with a person well in advance of the conference, chances were good that you would never even see them until after the conference, much less meet with them.

During the recent SxSW conference held in Austin, a new phone-to-web networking tool gained some notice. It’s called Twitter. It combines short-form blogging and text messaging with social networking. Basically, it lets you send a text message from a phone to a web site, announcing your location and what you’re doing. I could see this as a solution to the “where are you and what are you doing right now?” problem that I face at NI Week. People who are interested in keeping up with you can add your link to their page. It certainly worked at SxSW, where two large screens at the Austin Convention Center displayed the flow of messages.

Twitter was developed by Obvious Corp, the company that created Odeo (podcasting tools), and it brings the same minimalist style first used by Google. The sparse interface quickly focuses your attention on what the software does.

One gauge of an emerging technology is the variations people build from it. There appears to be a robust collection of mashups based on the Twitter concept. One of them is Twittervision.com, which shows a map of the world and displays, in a constant feed, the comments of Twitter users. While the current user base of early adopters gives a random smattering of data points, a cohesive group of users (i.e., the sales, marketing, and R&D staff of a particular product) could paint a more cogent picture of what the organization is doing.

If you want to see what Twitter activity is going on in your neighborhood, try Twittermaps. To change the location, type “L:” followed by your city, street, etc.

While Twitter is another step in the social networking movement, I can see applications in Virtual Instrumentation. Instead of pushing your location and activity to a web site, one could push measurements to it. One could implement a mobile data collection system in which cell-phone-connected devices gather data and send the results to a map for monitoring. It’s just a thought.
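As a sketch of that idea, here is roughly what pushing a measurement to a web endpoint might look like in Python. The URL, field names, and JSON payload are all hypothetical; this is not Twitter’s actual API or any NI interface, just the shape of the pattern.

```python
import json
import urllib.request

# Push a measurement to a web endpoint the way Twitter pushes a status
# update. The URL and field names below are made up for illustration.

def post_measurement(station_id, value, units,
                     url="http://example.com/measurements"):
    payload = json.dumps({
        "station": station_id,   # which field unit sent the reading
        "value": value,          # the measurement itself
        "units": units,
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: a temperature reading from a phone-connected sensor.
# post_measurement("unit-42", 21.7, "degC")
```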

Best regards,
Hall T.