Gaze-operated point designation on a 3D object or scene
A method for controlling the video display of a virtual 3D object or scene on a 2D display device is provided. A virtual video camera, controlled by a virtual-video-camera state variable consisting of camera control and location parameters, generates the 2D video of the object or scene. A target virtual camera state, representing an optimal view of a given surface point, is generated for each model surface point. A 2D coordinate of the image display is received from a user, either by looking at a point or by selecting it with a mouse click. A corresponding 3D designated object point on the surface of the object is calculated from the received 2D display coordinate. The virtual camera is controlled to move its view toward the 3D designated object point with dynamics that allow the user to easily follow the motion of the designated object point while watching the video.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 7/70 - Determining position or orientation of objects or cameras
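The method reduces to two computable steps: casting a ray from the received 2D display coordinate onto the model surface to find the designated 3D point, and moving the virtual camera toward that point slowly enough to follow visually. The sketch below is illustrative, not the patented implementation: it assumes a unit-length ray already unprojected from the display coordinate, stands in a sphere for the model surface, and uses a first-order approach with an assumed gain.

```python
import numpy as np

def designated_point(ray_origin, ray_dir, center, radius):
    """Ray-cast the unprojected selection ray onto the model surface
    (a sphere stands in for the model here); returns the nearest
    intersection point, or None if the ray misses."""
    oc = ray_origin - center
    b = 2.0 * np.dot(ray_dir, oc)          # ray_dir assumed unit length
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return ray_origin + t * ray_dir if t > 0.0 else None

def step_camera(view_point, designated, gain=0.15):
    """One video frame of camera motion: a first-order approach that
    moves the current view point a fixed fraction of the remaining
    distance toward the designated point."""
    return view_point + gain * (designated - view_point)
```

Called once per frame, step_camera yields an exponential approach whose speed is set by the (assumed) gain, the kind of smooth dynamics a viewer can track.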
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer hardware and recorded software for measuring eye movement; providing temporary use of non-downloadable computer software for measuring eye movement
A device for performing eyetracking on a handheld device includes an eyetracking camera and an eyetracking camera boom mount. The eyetracking camera boom mount physically and electrically connects a handheld device and the eyetracking camera. The eyetracking camera boom mount includes an extension boom that positions the eyetracking camera behind the user's hands. The extension boom provides the eyetracking camera with a view of the user's eyes that is unobstructed by the user's hands. The device can further include an operating scene camera for monitoring a person's hand operations on the handheld device. The operating scene camera can be mounted on the same extension boom as the eyetracking camera or on a separate extension boom.
An asymmetric aperture device for a camera is provided that improves light-gathering properties by increasing both the light-gathering opening of the aperture and the number of light sources placed on the aperture. An asymmetric aperture design is provided that utilizes a significantly larger portion of the camera lens. The tradeoff between the competing objectives of maximizing camera depth of field and maximizing the production of useful focus-condition information within the camera image is optimized. More illumination is provided without significantly increasing the lateral size of the illuminator pattern.
G03B 15/06 - Special arrangements of screening, diffusing, or reflecting devices, e.g. in studio
G03B 15/14 - Special procedures for taking photographs; Apparatus therefor for taking photographs during medical operations
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions, for determining or recording eye movement
G03B 15/16 - Special procedures for taking photographs; Apparatus therefor for photographing the track of moving objects
G03B 15/03 - Combinations of cameras with lighting apparatus; Flash units
A system and method are disclosed for using a camera image of a user's eye as a visual stimulus for the calibration point in an eyetracking calibration system. A camera image of a user's eye is generated on a user display using an eyetracker. The camera image of the user's eye is used as a visual stimulus of a calibration point. In an embodiment, the center of the pupil of the camera image of the user's eye represents the coordinates of the calibration point on the user display. In another embodiment, the center of the corneal reflection of the camera image of the user's eye represents the coordinates of the calibration point on the user display.
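The placement arithmetic implied by the abstract is simple: draw the live eye image offset so that the chosen feature center lands exactly on the calibration-point coordinates. A minimal sketch, assuming an external detector supplies the pupil (or corneal-reflection) center in image coordinates; the function name is illustrative.

```python
def eye_image_origin(cal_xy, pupil_xy):
    """Top-left display coordinates at which to draw the live eye-camera
    image so that its detected pupil center (or corneal-reflection
    center) coincides with the calibration point."""
    return cal_xy[0] - pupil_xy[0], cal_xy[1] - pupil_xy[1]
```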
A miniature eye tracking system is disclosed that includes a camera, a microelectromechanical systems (MEMS) device, and a processor. The camera images an eye. The MEMS device controls the view direction of the camera. The processor receives an image of the eye from the camera, determines the location of the eye within the camera image, and controls the MEMS device to keep the camera pointed at the eye. In another embodiment, the MEMS device controls an adjustable focus of the camera. The processor determines the focus condition of the eye image and controls the MEMS device to maintain a desired focus condition of the camera on the eye. In another embodiment, the MEMS device controls an adjustable camera zoom. The processor determines the size of the eye image within the overall camera image and controls the MEMS device to maintain a desired size of the eye image within the overall camera image.
G03B 29/00 - Combinations of cameras, projectors or photographic printing apparatus with non-photographic non-optical apparatus, e.g. clocks or weapons; Cameras having the shape of other objects
A61B 3/14 - Arrangements specially adapted for eye photography
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions, for determining or recording eye movement
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
B81B 7/02 - Microstructural systems containing distinct electrical or optical devices of particular relevance for their function, e.g. microelectro-mechanical systems [MEMS]
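Each embodiment is a feedback loop: measure a property of the eye image, then command the MEMS device to drive that property toward a setpoint. A minimal proportional-control sketch of the pointing and zoom loops; gains, units, and function names are assumptions, not the patent's control law.

```python
def pointing_step(eye_xy, frame_center, k_p=0.05):
    """Pointing loop: proportional pan/tilt command that drives the
    detected eye location toward the center of the camera frame."""
    return (k_p * (frame_center[0] - eye_xy[0]),
            k_p * (frame_center[1] - eye_xy[1]))

def zoom_step(eye_size_px, target_size_px, k_z=0.01):
    """Zoom loop: proportional magnification command that holds the eye
    image at a desired size within the overall camera image."""
    return k_z * (target_size_px - eye_size_px)
```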
Systems and methods are provided for wirelessly controlling a client computer system from a host computer system. A HID class command, generated by an application executing on the host computer system in order to control the client computer system, is received using a first wireless transceiver device that connects to a USB port of the host computer system. The HID class command is transmitted across a wireless channel using the first wireless transceiver device. The HID class command is received from the wireless channel using a second wireless transceiver device that is connected to a USB port of the client computer system and is configured by the client computer system as a HID. The HID class command is sent to the client computer system by the second wireless transceiver device in order to control the client computer system.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
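Because the second transceiver enumerates as a standard HID on the client, the relay only has to move raw HID report bytes across the channel unchanged. A hedged sketch, with a UDP socket standing in for the paired wireless transceivers; the address and function name are illustrative assumptions.

```python
import socket

def relay_hid_report(report: bytes, channel=("192.0.2.1", 9999)):
    """Forward a raw HID report, byte-for-byte, across the channel.
    A UDP socket stands in for the paired wireless transceivers; the
    client-side transceiver, enumerated as a HID, injects the bytes
    unchanged."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(report, channel)

if __name__ == "__main__":
    # Standard 8-byte keyboard report for the letter 'a':
    # modifier, reserved, then six key-code slots.
    relay_hid_report(bytes([0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00]))
```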
Effective patient-centered care in a hospital relies heavily on the ability of patients to communicate their physical needs to caregivers. Patients who are unable to speak have limited means of communicating at a time when they need it most. The embodiments presented here, generally referred to as EyeVoice, include unobtrusive eye-operated communication systems for locked-in hospital patients who cannot speak or gesture. EyeVoice provides an alternate means of communication, allowing hospital patients to communicate with their caregivers using their eyes in place of their voices. Simply by looking at images and cells displayed on a computer screen placed in front of them, patients are able to answer questions posed by caregivers; specify locations, types, and degrees of pain and discomfort; request specific forms of assistance; ask or answer care-related questions; and help direct their own care.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
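The abstract does not specify the selection mechanism, but gaze-operated grids of this kind are commonly driven by dwell time: a cell is selected once the gazepoint rests on it long enough. A sketch of that assumed mechanism, not the patent's stated design:

```python
import time

class DwellSelector:
    """Fires a selection once the gazepoint has rested on one display
    cell for `dwell_s` seconds, then re-arms when the gaze moves."""

    def __init__(self, dwell_s=0.8):
        self.dwell_s = dwell_s
        self.cell = None      # cell currently under the gazepoint
        self.since = None     # when the gaze arrived on that cell

    def update(self, cell, now=None):
        now = time.monotonic() if now is None else now
        if cell != self.cell:                 # gaze moved: restart timer
            self.cell, self.since = cell, now
            return None
        if self.since is not None and now - self.since >= self.dwell_s:
            self.since = None                 # fire once per fixation
            return cell
        return None
```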
17.
Systems and methods for aiding traffic controllers and/or pilots
Gaze-based systems and methods are used to aid traffic controllers and/or pilots. A gaze line of an eye of the user viewing the display is tracked using an eyetracker. An intersection of the gaze line of the eye with the display is calculated to provide continuous feedback as to where on the display the user is looking. A trace of the gaze line of the eye is correlated with elements of a situation. The user's awareness of the situation is inferred by verifying that the user has looked at the elements of the situation. In an embodiment, the user is notified of the situation when it is determined that the user has not looked at the elements of the situation for a predetermined period of time. The notification is automatically removed once it is determined that the user has looked at the elements of the situation.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G01C 23/00 - Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
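Two of the described computations are easy to make concrete: intersecting the gaze line with the display plane, and flagging situation elements that have gone unfixated past a timeout. A minimal sketch; the timeout value and function names are assumptions.

```python
import time

def gaze_display_intersection(origin, direction, plane_pt, plane_n):
    """Point where the gaze line origin + t*direction crosses the
    display plane, or None if the line is parallel to the plane."""
    denom = sum(d * n for d, n in zip(direction, plane_n))
    if abs(denom) < 1e-9:
        return None
    t = sum((p - o) * n for p, o, n in zip(plane_pt, origin, plane_n)) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))

def unseen_elements(last_fixated, elements, timeout_s=10.0, now=None):
    """Situation elements the user has not looked at within the timeout,
    i.e. the candidates for a notification."""
    now = time.monotonic() if now is None else now
    return [e for e in elements
            if now - last_fixated.get(e, float("-inf")) > timeout_s]
```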
A target is imaged in a three-dimensional real space using two or more video cameras. A three-dimensional image space, combined from two of the video cameras, is displayed to a user using a stereoscopic display. A right eye and a left eye of the user are imaged as the user observes the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker. A real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and orientations of the two video cameras using a processor.
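Two measured gaze lines rarely intersect exactly, so the "intersection" is normally computed as the midpoint of the shortest segment between the two 3D lines. A sketch of that standard computation, offered as an assumption about (not a quote of) the patent's method:

```python
import numpy as np

def binocular_gazepoint(p_r, d_r, p_l, d_l):
    """Midpoint of the shortest segment between the right and left gaze
    lines (each given by a point p and a unit direction d)."""
    w = p_r - p_l
    a, b, c = d_r @ d_r, d_r @ d_l, d_l @ d_l
    d, e = d_r @ w, d_l @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:        # gaze lines (nearly) parallel
        return None
    t_r = (b * e - c * d) / denom
    t_l = (a * e - b * d) / denom
    return 0.5 * ((p_r + t_r * d_r) + (p_l + t_l * d_l))
```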
A system for determining a three-dimensional location and orientation of an eye within a camera frame of reference includes a camera, an illuminator, and a processor. The camera captures an image of the eye. The illuminator generates a reflection off of a corneal surface of the eye. The processor computes a first two-dimensional location of a pupil reflection image and of a corneal reflection image from the image of the eye. The processor predicts a second two-dimensional location of the pupil reflection image and the corneal reflection image as a function of a set of three-dimensional position and orientation parameters of the eye within the camera frame of reference. The processor iteratively adjusts the set until the first two-dimensional location is substantially the same as the second two-dimensional location. The adjusted set is the three-dimensional location and orientation of the eye.
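The predict-compare-adjust loop is a nonlinear least-squares fit over the eye's pose parameters. A sketch using a generic solver as a stand-in for the patent's iterative adjustment; the optical model predict_2d and the NumPy-array inputs are assumptions supplied by the caller.

```python
from scipy.optimize import least_squares

def fit_eye_pose(observed_2d, predict_2d, pose0):
    """Iteratively adjust the eye's 3D position/orientation parameters
    until the predicted pupil and corneal-reflection image locations
    match the observed ones. `predict_2d(pose)` is the caller-supplied
    eye/camera optical model returning predicted 2D locations as an
    array the same shape as `observed_2d`."""
    return least_squares(lambda pose: predict_2d(pose) - observed_2d,
                         pose0).x
```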
An embodiment of the present invention provides a system for measuring and modifying at least one model parameter of an object in an image in order to distinguish the object from noise in the image. The system includes a perceived-image generator, an image-match function, and a parameter-adjustment function. The perceived-image generator produces a first perceived image of the object based on the at least one model parameter. The image-match function compares the first perceived image with a real image of the object. The parameter-adjustment function adjusts the at least one model parameter so that the perceived-image generator produces a second perceived image of the object that more closely matches the real image than the first perceived image.
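One plausible concrete reading, not the patent's implementation: the image-match function scores pixel agreement, and the parameter-adjustment function nudges a parameter toward the better-scoring rendering.

```python
import numpy as np

def image_match(perceived, real):
    """Image-match function: higher is better (negative mean squared
    pixel difference between perceived and real images)."""
    return -float(np.mean((perceived.astype(float) - real.astype(float)) ** 2))

def adjust_parameter(render, real, p, step=0.1):
    """Parameter-adjustment step: keep whichever neighboring value of
    the model parameter renders a perceived image that better matches
    the real image. `render(p)` is the perceived-image generator."""
    return max((p - step, p, p + step),
               key=lambda q: image_match(render(q), real))
```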
One embodiment of the present invention is a method for computing a first gaze axis of an eye in a first coordinate system. A camera is focused on the eye and moved to maintain the focus on the eye as the eye moves in the first coordinate system. A first location of the camera in the first coordinate system is measured. A second location of the eye and a gaze direction of the eye within a second coordinate system are measured. A second gaze axis within the second coordinate system is computed from the second location and the gaze direction. The first gaze axis is computed from the second gaze axis and the first location using a first coordinate transformation.
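The final step is a rigid-body change of coordinates applied to a ray. A sketch, assuming the camera pose measurement yields a rotation matrix R and translation t mapping the second coordinate system into the first:

```python
import numpy as np

def transform_gaze_axis(origin2, dir2, R, t):
    """Map a gaze axis (origin point, unit direction) from the second
    coordinate system into the first, given the measured camera pose
    as a 3x3 rotation R and a translation t."""
    return R @ origin2 + t, R @ dir2   # directions rotate, never translate
```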
An embodiment of the present invention is a system for identifying a user by observing irregularities on the surface of the user's eyeball. The system includes a topography system and a gaze tracking system. The topography system obtains one or more discernable features of the eyeball and stores the one or more discernable features. The gaze tracking system observes the irregularities, compares the irregularities to the one or more discernable features, and identifies the user if the irregularities and the one or more discernable features match.
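A minimal sketch of the comparison step, assuming the discernable features have already been reduced to fixed-length vectors; the distance metric and threshold are illustrative assumptions, not the patent's matching criterion.

```python
import numpy as np

def identify_user(observed, enrolled, max_dist=0.5):
    """Nearest-template match of observed eyeball-surface irregularities
    against stored discernable features; returns the best-matching user
    id, or None if no template is close enough."""
    best_id, best_d = None, float("inf")
    for user_id, template in enrolled.items():
        d = float(np.linalg.norm(observed - template))
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id if best_d <= max_dist else None
```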
Embodiments of the present invention relate to systems and methods for minimizing motion clutter in image-generation devices. Temporally-interleaved image-subtraction reduces the magnitude of motion clutter and has no adverse effect on the desired ambient-light cancellation of static images. Embodiments of image-generation devices employing temporally-interleaved image-subtraction include single, double, triple, and series accumulator configurations. All four embodiments allow synchronization with scene illuminators and may be implemented on a single electronic chip. Temporally-interleaved image-subtraction is particularly well suited for use in video eyetracking applications where ambient light and scene motion can cause significant problems.
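One way to read temporally-interleaved image-subtraction, offered as an assumption rather than the patent's exact accumulator design: interleave each illuminated frame between two ambient frames and subtract their average, which cancels static ambient light exactly and cancels linearly moving background clutter to first order.

```python
import numpy as np

def interleaved_subtraction(off_before, on, off_after):
    """Subtract the average of the two flanking ambient (illuminator-off)
    frames from the interleaved illuminated frame: static ambient light
    cancels exactly, and linearly moving clutter cancels to first order."""
    ambient = (off_before.astype(float) + off_after.astype(float)) / 2.0
    return np.clip(on.astype(float) - ambient, 0.0, None)
```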