Intelligent Image Processing / Edition 1

by Steve Mann

ISBN-10: 0471406376
ISBN-13: 9780471406372
Pub. Date: 12/03/2001
Publisher: Wiley


Overview

Intelligent Image Processing describes the EyeTap technology that allows non-invasive tapping into the human eye through devices built into eyeglass frames. This isn't merely about a computer screen inside eyeglasses, but rather the ability to have a shared telepathic experience among viewers. Written by the developer of the EyeTap principle, this work explores the practical application and far-reaching implications this new technology has for human telecommunications.

Product Details

ISBN-13: 9780471406372
Publisher: Wiley
Publication date: 12/03/2001
Series: Adaptive and Cognitive Dynamic Systems: Signal Processing, Learning, Communications and Control, #27
Pages: 368
Product dimensions: 6.38(w) x 9.35(h) x 0.85(d)

About the Author

STEVE MANN is Professor in the Department of Electrical and Computer Engineering at the University of Toronto.

Read an Excerpt

Intelligent Image Processing


By Steve Mann

John Wiley & Sons

ISBN: 0-471-40637-6


Chapter One

HUMANISTIC INTELLIGENCE AS A BASIS FOR INTELLIGENT IMAGE PROCESSING

Personal imaging is a methodology that integrates personal technologies, personal communicators, and mobile multimedia. In particular, personal imaging devices are characterized by an "always ready" usage model, and comprise a device or devices that are typically carried or worn so that they are always with us.

An important theoretical development in the field of personal imaging is that of humanistic intelligence (HI). HI is a new information-processing framework in which the processing apparatus is inextricably intertwined with the natural capabilities of our human body and intelligence. Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications, within the domain of personal imaging, that can make use of this excellent but often overlooked processor that we already have attached to our bodies. Devices that embody HI are worn (or carried) continuously during all facets of ordinary day-to-day living. Through long-term adaptation they begin to function as a true extension of the mind and body.

1.1 HUMANISTIC INTELLIGENCE

HI is a new form of "intelligence." Its goal is to not only work in extremely close synergy with the human user, rather than as a separate entity, but, more important, to arise, in part, because of the very existence of the human user. This close synergy is achieved through an intelligent user-interface to signal-processing hardware that is both in close physical proximity to the user and is constant.

There are two kinds of constancy: one is called operational constancy, and the other is called interactional constancy. Operational constancy refers to an always ready-to-run condition, in the sense that although the apparatus may have power-saving ("sleep") modes, it is never completely "dead" or shut down or in a temporarily inoperable state that would require noticeable time to be "awakened" from.

The other kind of constancy, called interactional constancy, refers to a constancy of user-interface. It is the constancy of user-interface that separates systems embodying a personal imaging architecture from other personal devices, such as pocket calculators, personal digital assistants (PDAs), and other imaging devices, such as handheld video cameras.

For example, a handheld calculator left turned on but carried in a shirt pocket lacks interactional constancy, since it is not always ready to be interacted with (e.g., there is a noticeable delay in taking it out of the pocket and getting ready to interact with it). Similarly, a handheld camera that is either left turned on or is designed such that it responds instantly still lacks interactional constancy, because it takes time to bring the viewfinder up to the eye in order to look through it. In order for it to have interactional constancy, it would need to always be held up to the eye, even when not in use. Only if one were to walk around holding the camera viewfinder up to the eye during every waking moment could we say it has true interactional constancy at all times.

By interactionally constant, what is meant is that the inputs and outputs of the device are always potentially active. Interactional constancy implies operational constancy, but operational constancy does not necessarily imply interactional constancy. The examples above of a pocket calculator worn in a shirt pocket and left on all the time, or of a handheld camera even if turned on all the time, are said to lack interactional constancy because they cannot be used in this state (e.g., one still has to pull the calculator out of the pocket or hold the camera viewfinder up to the eye to see the display, enter numbers, or compose a picture). A wristwatch is a borderline case: although it operates constantly in order to keep proper time, and it is wearable, one must make some degree of conscious effort to orient it within one's field of vision in order to interact with it.
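The relationship between the two kinds of constancy can be captured in a small sketch (Python; the device names and fields are illustrative, not from the book): interactional constancy implies operational constancy, but not conversely.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Toy model of the two kinds of constancy."""
    name: str
    operationally_constant: bool      # always powered and ready to run
    ready_without_user_action: bool   # inputs/outputs usable with no setup step

    @property
    def interactionally_constant(self) -> bool:
        # Interactional constancy requires operational constancy,
        # plus inputs/outputs that are always potentially active.
        return self.operationally_constant and self.ready_without_user_action

calculator = Device("pocket calculator (left on, in shirt pocket)", True, False)
eyetap = Device("eyeglass-mounted display", True, True)

print(calculator.interactionally_constant)  # False: must be taken out of the pocket
print(eyetap.interactionally_constant)      # True
```

The calculator is operationally constant but fails the second condition, matching the pocket-calculator example above.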

1.1.1 Why Humanistic Intelligence

It is not, at first, obvious why one might want devices such as cameras to be operationally constant. However, we will later see why it is desirable to have certain personal electronics devices, such as cameras and signal-processing hardware, be on constantly, for example, to facilitate new forms of intelligence that assist the user in new ways.

Devices embodying HI are not merely intelligent signal processors that a user might wear or carry in close proximity to the body but are devices that turn the user into part of an intelligent control system where the user becomes an integral part of the feedback loop.

1.1.2 Humanistic Intelligence Does Not Necessarily Mean "User-Friendly"

Devices embodying HI often require that the user learn a new skill set. Such devices are therefore not necessarily easy to adapt to. Just as it takes a young child many years to become proficient at using his or her hands, some of the devices that implement HI have taken years of use before they began to truly behave as if they were natural extensions of the mind and body. Thus in terms of human-computer interaction, the goal is not just to construct a device that can model (and learn from) the user but, more important, to construct a device in which the user also must learn from the device. Therefore, in order to facilitate the latter, devices embodying HI should provide a constant user-interface: one that is not so sophisticated and intelligent that it confuses the user.

Although the HI device may implement very sophisticated signal-processing algorithms, the cause-and-effect relationship of this processing to its input (typically from the environment or the user's actions) should be clearly and continuously visible to the user, even when the user is not directly and intentionally interacting with the apparatus. Accordingly the most successful examples of HI afford the user a very tight feedback loop of system observability (ability to perceive how the signal processing hardware is responding to the environment and the user), even when the controllability of the device is not engaged (e.g., at times when the user is not issuing direct commands to the apparatus). A simple example is the viewfinder of a wearable camera system, which provides framing and a photographic point of view, and gives the user a general awareness of the visual effects of the camera's own image processing algorithms, even when pictures are not being taken. Thus a camera embodying HI puts the human operator in the feedback loop of the imaging process, even when the operator only wishes to take pictures occasionally. A more sophisticated example of HI is a biofeedback-controlled wearable camera system, in which the biofeedback process happens continuously, whether or not a picture is actually being taken. In this sense the user becomes one with the machine, over a long period of time, even if the machine is only directly used (e.g., to actually take a picture) occasionally.
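The observability/controllability split can be illustrated with a toy loop (a hypothetical Python sketch, not the book's implementation): processed viewfinder output is produced on every frame, while capture is engaged only on the frames the wearer chooses.

```python
import random

def process_frame(frame):
    """Stand-in for the camera's own image processing (here, a simple gain)."""
    return [min(1.0, p * 1.2) for p in frame]

def run(frames, capture_on):
    """Observability: the processed view is computed (and would be shown)
    on every frame. Controllability: a picture is stored only when the
    wearer issues a capture command."""
    stored = []
    for i, frame in enumerate(frames):
        view = process_frame(frame)   # constant feedback to the wearer
        if i in capture_on:           # only occasionally engaged
            stored.append(view)
    return stored

frames = [[random.random() for _ in range(4)] for _ in range(10)]
pictures = run(frames, capture_on={3, 7})
print(len(pictures))  # 2: feedback ran on all 10 frames, capture on only 2
```

The point of the sketch is that the feedback path runs unconditionally; the capture path is the only part gated by user commands.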

Humanistic intelligence attempts to both build upon, as well as re-contextualize, concepts in intelligent signal processing, and related concepts such as neural networks, fuzzy logic, and artificial intelligence. Humanistic intelligence also suggests a new goal for signal processing hardware, that is, in a truly personal way, to directly assist rather than replace or emulate human intelligence. What is needed to facilitate this vision is a simple and truly personal computational image-processing framework that empowers the human intellect. It should be noted that this framework, which arose in the 1970s and early 1980s, is in many ways similar to Doug Engelbart's vision that arose in the 1940s while he was a radar engineer, but that there are also some important differences. Engelbart, while seeing images on a radar screen, envisioned that the cathode ray screen could also display letters of the alphabet, as well as computer-generated pictures and graphical content, and thus envisioned computing as an interactive experience for manipulating words and pictures. Engelbart envisioned the mainframe computer as a tool for augmented intelligence and augmented communication, in which a number of people in a large amphitheatre could interact with one another using a large mainframe computer. While Engelbart himself did not seem to understand the significance of the personal computer, his ideas are certainly embodied in modern personal computing.

What is now described is a means of realizing a similar vision, but with the computational resources re-situated in a different context, namely the truly personal space of the user. The idea here is to move the tools of augmented intelligence, augmented communication, computationally mediated visual communication, and imaging technologies directly onto the body. This will give rise to not only a new genre of truly personal image computing but to some new capabilities and affordances arising from direct physical contact between the computational imaging apparatus and the human mind and body. Most notably, a new family of applications arises categorized as "personal imaging," in which the body-worn apparatus facilitates an augmenting and computational mediating of the human sensory capabilities, namely vision. Thus the augmenting of human memory translates directly to a visual associative memory in which the apparatus might, for example, play previously recorded video back into the wearer's eyeglass mounted display, in the manner of a visual thesaurus or visual memory prosthetic.

1.2 "WEARCOMP" AS MEANS OF REALIZING HUMANISTIC INTELLIGENCE

WearComp is now proposed as an apparatus upon which a practical realization of HI can be built as well as a research tool for new studies in intelligent image processing.

1.2.1 Basic Principles of WearComp

WearComp will now be defined in terms of its three basic modes of operation.

Operational Modes of WearComp

The three operational modes in this new interaction between human and computer, as illustrated in Figure 1.1, are:

Constancy: The computer runs continuously, and is "always ready" to interact with the user. Unlike a handheld device, laptop computer, or PDA, it does not need to be opened up and turned on prior to use. The signal flow from human to computer, and computer to human, depicted in Figure 1.1a runs continuously to provide a constant user-interface.

Augmentation: Traditional computing paradigms are based on the notion that computing is the primary task. WearComp, however, is based on the notion that computing is not the primary task. The assumption of WearComp is that the user will be doing something else at the same time as doing the computing. Thus the computer should serve to augment the intellect, or augment the senses. The signal flow between human and computer, in the augmentational mode of operation, is depicted in Figure 1.1b.

Mediation: Unlike handheld devices, laptop computers, and PDAs, WearComp can encapsulate the user (Figure 1.1c). It does not necessarily need to completely enclose us, but the basic concept of mediation allows for whatever degree of encapsulation might be desired, since it affords us the possibility of a greater degree of encapsulation than traditional portable computers. Moreover there are two aspects to this encapsulation, one or both of which may be implemented in varying degrees, as desired:

Solitude: The ability of WearComp to mediate our perception will allow it to function as an information filter, and allow us to block out material we might not wish to experience, whether it be offensive advertising or simply a desire to replace existing media with different media. In less extreme manifestations, it may simply allow us to alter aspects of our perception of reality in a moderate way rather than completely blocking out certain material. Moreover, in addition to providing means for blocking or attenuation of undesired input, there is a facility to amplify or enhance desired inputs. This control over the input space is one of the important contributors to the most fundamental issue in this new framework, namely that of user empowerment.

Privacy: Mediation allows us to block or modify information leaving our encapsulated space. In the same way that ordinary clothing prevents others from seeing our naked bodies, WearComp may, for example, serve as an intermediary for interacting with untrusted systems, such as third party implementations of digital anonymous cash or other electronic transactions with untrusted parties. In the same way that martial artists, especially stick fighters, wear a long black robe that comes right down to the ground in order to hide the placement of their feet from their opponent, WearComp can also be used to clothe our otherwise transparent movements in cyberspace. Although other technologies, like desktop computers, can, to a limited degree, help us protect our privacy with programs like Pretty Good Privacy (PGP), the primary weakness of these systems is the space between them and their user. It is generally far easier for an attacker to compromise the link between the human and the computer (perhaps through a so-called Trojan horse or other planted virus) when they are separate entities. Thus a personal information system owned, operated, and controlled by the wearer can be used to create a new level of personal privacy because it can be made much more personal, for example, so that it is always worn, except perhaps during showering, and therefore less likely to fall prey to attacks upon the hardware itself. Moreover the close synergy between the human and computers makes it harder to attack directly, for example, as one might look over a person's shoulder while they are typing or hide a video camera in the ceiling above their keyboard.
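The solitude and privacy aspects of mediation can be sketched as a pair of filters on the wearer's input and output spaces (a hypothetical Python sketch; the labels, levels, and gain values are invented for illustration):

```python
def solitude(incoming, block=frozenset(), gain=None):
    """Filter the input space: drop blocked sources entirely,
    attenuate or amplify the rest via per-label gains."""
    gain = gain or {}
    return [(label, level * gain.get(label, 1.0))
            for label, level in incoming
            if label not in block]

def privacy(outgoing, redact=frozenset()):
    """Filter the output space: withhold redacted information
    leaving the encapsulated space."""
    return [(label, value) for label, value in outgoing
            if label not in redact]

world = [("advertising", 0.9), ("speech", 0.5), ("warning_sign", 0.7)]
seen = solitude(world, block={"advertising"}, gain={"warning_sign": 2.0})
print(seen)  # advertising blocked; warning sign amplified

sent = privacy([("location", "lat/long"), ("payment_token", "xyz")],
               redact={"location"})
print(sent)  # location withheld
```

Solitude filters what reaches the wearer (blocking and amplification, as described above); privacy filters what leaves the wearer's space. Both are degrees of the same encapsulation.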

Because of its ability to encapsulate us, such as in embodiments of WearComp that are actually articles of clothing in direct contact with our flesh, it may also be able to make measurements of various physiological quantities. Thus the signal flow depicted in Figure 1.1a is also enhanced by the encapsulation as depicted in Figure 1.1c. To make this signal flow more explicit, Figure 1.1c has been redrawn, in Figure 1.1d, where the computer and human are depicted as two separate entities within an optional protective shell that may be opened or partially opened if a mixture of augmented and mediated interaction is desired.

Note that these three basic modes of operation are not mutuallyexclusive in the sense that the first is embodied in both of the other two. These other two are also not necessarily meant to be implemented in isolation. Actual embodiments of WearComp typically incorporate aspects of both augmented and mediated modes of operation. Thus WearComp is a framework for enabling and combining various aspects of each of these three basic modes of operation. Collectively, the space of possible signal flows giving rise to this entire space of possibilities is depicted in Figure 1.2. The signal paths typically comprise vector quantities. Thus multiple parallel signal paths are depicted in this figure to remind the reader of this vector nature of the signals.

1.2.2 The Six Basic Signal Flow Paths of WearComp

There are six informational flow paths associated with this new human-machine symbiosis. These signal flow paths each define one of the basic underlying principles of WearComp, and are each described, in what follows, from the human's point of view. Implicit in these six properties is that the computer system is also operationally constant and personal (inextricably intertwined with the user). The six signal flow paths are:

1. Unmonopolizing of the user's attention: It does not necessarily cut one off from the outside world like a virtual reality game does. One can attend to other matters while using the apparatus. It is built with the assumption that computing will be a secondary activity rather than a primary focus of attention. Ideally it will provide enhanced sensory capabilities. It may, however, facilitate mediation (augmenting, altering, or deliberately diminishing) of these sensory capabilities.

2. Unrestrictive to the user: Ambulatory, mobile, roving - one can do other things while using it. For example, one can type while jogging or running down stairs.

(Continues...)



Excerpted from Intelligent Image Processing by Steve Mann. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Preface

1 Humanistic Intelligence as a Basis for Intelligent Image Processing

1.1 Humanistic Intelligence

1.2 "WearComp" as Means of Realizing Humanistic Intelligence

1.3 Practical Embodiments of Humanistic Intelligence

2 Where on the Body is the Best Place for a Personal Imaging System?

2.1 Portable Imaging Systems

2.2 Personal Handheld Systems

2.3 Concomitant Cover Activities and the Videoclips Camera System

2.4 The Wristwatch Videophone: A Fully Functional "Always Ready" Prototype

2.5 Telepointer: Wearable Hands-Free Completely Self-Contained Visual Augmented Reality

2.6 Portable Personal Pulse Doppler Radar Vision System Based on Time-Frequency Analysis and q-Chirplet Transform

2.7 When Both Camera and Display are Headworn: Personal Imaging and Mediated Reality

2.8 Partially Mediated Reality

2.9 Seeing "Eye-to-Eye"

2.10 Exercises, Problem Sets, and Homework

3 The EyeTap Principle: Effectively Locating the Camera Inside the Eye as an Alternative to Wearable Camera Systems

3.1 A Personal Imaging System for Lifelong Video Capture

3.2 The EyeTap Principle

3.3 Practical Embodiments of EyeTap

3.4 Problems with Previously Known Camera Viewfinders

3.5 The Aremac

3.6 The Foveated Personal Imaging System

3.7 Teaching the EyeTap Principle

3.8 Calibration of EyeTap Systems

3.9 Using the Device as a Reality Mediator

3.10 User Studies

3.11 Summary and Conclusions

3.12 Exercises, Problem Sets, and Homework

4 Comparametric Equations, Quantigraphic Image Processing, and Comparagraphic Rendering

4.1 Historical Background

4.2 The Wyckoff Principle and the Range of Light

4.3 Comparametric Image Processing: Comparing Differently Exposed Images of the Same Subject Matter

4.4 The Comparagram: Practical Implementations of Comparanalysis

4.5 Spatiotonal Photoquantigraphic Filters

4.6 Glossary of Functions

4.7 Exercises, Problem Sets, and Homework

5 Lightspace and Antihomomorphic Vector Spaces

5.1 Lightspace

5.2 The Lightspace Analysis Function

5.3 The "Spotflash" Primitive

5.4 LAF×LSF Imaging ("Lightspace")

5.5 Lightspace Subspaces

5.6 "Lightvector" Subspace

5.7 Painting with Lightvectors: Photographic/Videographic Origins and Applications of WearComp-Based Mediated Reality

5.8 Collaborative Mediated Reality Field Trials

5.9 Conclusions

5.10 Exercises, Problem Sets, and Homework

6 VideoOrbits: The Projective Geometry Renaissance

6.1 VideoOrbits

6.2 Background

6.3 Framework: Motion Parameter Estimation and Optical Flow

6.4 Multiscale Implementations in 2-D

6.5 Performance and Applications

6.6 AGC and the Range of Light

6.7 Joint Estimation of Both Domain and Range Coordinate Transformations

6.8 The Big Picture

6.9 Reality Window Manager

6.10 Application of Orbits: The Photonic Firewall

6.11 All the World's a Skinner Box

6.12 Blocking Spam with a Photonic Filter

6.13 Exercises, Problem Sets, and Homework

Appendix A: Safety First!

Appendix B: Multiambic Keyer for Use While Engaged in Other Activities

B.1 Introduction

B.2 Background and Terminology on Keyers

B.3 Optimal Keyer Design: The Conformal Keyer

B.4 The Seven Stages of a Keypress

B.5 The Pentakeyer

B.6 Redundancy

B.7 Ordinally Conditional Modifiers

B.8 Rollover

B.8.1 Example of Rollover on a Cybernetic Keyer

B.9 Further Increasing the Chordic Redundancy Factor: A More Expressive Keyer

B.10 Including One Time Constant

B.11 Making a Conformal Multiambic Keyer

B.12 Comparison to Related Work

B.13 Conclusion

B.14 Acknowledgments

Appendix C: WearCam GNUX Howto

C.1 Installing GNUX on WearComps

C.2 Getting Started

C.3 Stop the Virus from Running

C.4 Making Room for an Operating System

C.5 Other Needed Files

C.6 Defrag

C.7 Fips

C.8 Starting Up in GNUX with Ramdisk

Appendix D: How to Build a Covert Computer Imaging System into Ordinary Looking Sunglasses

D.1 The Move from Sixth-Generation WearComp to Seventh-Generation

D.2 Label the Wires!

D.3 Soldering Wires Directly to the Kopin CyberDisplay

D.4 Completing the Computershades

Bibliography

Index