Systems and methods are provided for object extraction from images influenced by depth of field settings. Through control circuitry, an image subjected to a prior segmentation operation is acquired. A subsequent segmentation operation is performed, modulating the depth of field setting to its extreme values, producing two distinct segmented images. From these, an in-focus object is derived, forming delineated representations. A similarity index between representations is computed. If this index exceeds a specified threshold, the in-focus object is extracted from the original image using the control circuitry.
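The similarity index is not specified in the abstract above; intersection-over-union (IoU) between the two delineated masks is one common choice. A minimal sketch, assuming boolean masks and an illustrative threshold value:

```python
import numpy as np

def similarity_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two boolean object masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

# Delineated representations from the two extreme depth-of-field settings
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True

THRESHOLD = 0.5  # illustrative; the abstract leaves the threshold unspecified
if similarity_index(a, b) > THRESHOLD:
    extracted = a  # extract the in-focus object from the original image
```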
The present disclosure is directed to systems and methods for enhancing the creation of artificial intelligence (AI) generated content items, such as images, text, video, and sounds, using a text prompt or other suitable prompt, such as voice input. The disclosed systems and methods provide streamlined content generation with, e.g., reduced processing power and computing time. In an embodiment, the systems and methods receive a prompt for generating a first content item using a generative AI model and retrieve, based on the prompt, a collection of matching content items. The systems and methods may then receive input selecting one of the content items from the collection and identify a prompt used to generate the selected content item. The systems and methods may then merge, using a trained natural language processing model, the received prompt with the prompt of the selected content item to create a third prompt. In an embodiment, the systems and methods may modify the third prompt based on additional input and, based on the modified third prompt, generate a second content item.
Systems and methods are described for inputting text input to a trained machine learning model; generating, using the trained machine learning model and based on the text input, a single-layer image comprising a plurality of objects; segmenting the single-layer image to generate a plurality of images, each image of the plurality of images comprising a depiction of a respective object of the plurality of objects of the single-layer image; extracting, from the text input, a portion of the text input describing a background portion of the single-layer image; generating, using the trained machine learning model and based on the extracted portion of the text input, a background image; and generating the multi-layer image based on the plurality of images and the background image.
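The segmentation step above can be illustrated by cutting each object out into its own transparent layer. A sketch, assuming the object masks are already available (the mask source and RGBA layout are not from the abstract):

```python
import numpy as np

def split_layers(image: np.ndarray, masks: list) -> list:
    """Cut each segmented object into its own RGBA layer; pixels outside
    the object's mask stay fully transparent."""
    layers = []
    for mask in masks:
        layer = np.zeros(image.shape[:2] + (4,), dtype=np.uint8)
        layer[..., :3][mask] = image[mask]  # copy object pixels
        layer[..., 3][mask] = 255           # opaque inside the mask
        layers.append(layer)
    return layers

image = np.full((2, 2, 3), 200, dtype=np.uint8)       # single-layer image
mask = np.array([[True, False], [False, False]])      # one object's mask
layers = split_layers(image, [mask])
```

The background image generated from the extracted text portion would then sit below these layers in the multi-layer composite.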
System and method are provided for capturing images based on dominant eye characteristics of a user. The system detects, by a wearable device, a hand gesture of the user indicating an image boundary for capturing an image from one or more cameras of the wearable device. The system generates, by the wearable device, the image based on (a) a dominant eye characteristic of the user, (b) the image boundary indicated by the hand gesture of the user, and (c) one or more images captured from the one or more cameras of the wearable device. The system stores, by the wearable device, the generated image to memory.
Systems and methods are described for generating, using the first trained machine learning model and based on text input, a single-layer image comprising a plurality of objects; generating a plurality of masks associated with the plurality of objects; determining a plurality of attributes associated with the plurality of objects; generating, using a second trained machine learning model, a plurality of textual descriptions respectively corresponding to the plurality of objects; inputting the plurality of textual descriptions, and the plurality of attributes, to the first trained machine learning model; generating, using the first trained machine learning model, a plurality of images respectively corresponding to the plurality of textual descriptions; and generating the multi-layer image by combining the plurality of images and by using the plurality of masks, wherein the plurality of images respectively correspond to a plurality of layers of the multi-layer image.
Systems, methods, and apparatuses are described for capturing panoramic images and positioning virtual objects on a device screen, using a device having a static camera and an adjustable camera. To generate a panoramic image, the device moves the field of view of the adjustable camera by moving a corresponding MEMS mirror. The device then captures a first image using the static camera, and a second image using the adjustable camera, and generates a panoramic image by combining the first and second images. To position a virtual object, the device captures a first image using the static camera, and determines that there are insufficient visual features in the first image for positioning. The device moves the field of view of the adjustable camera by moving the corresponding MEMS mirror, and captures a second image. Visual features from the second image are then used to position the virtual object.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
H04N 23/695 - Control of camera direction for changing the field of view, e.g. panning, tilting or according to tracked objects
H04N 23/698 - Control of cameras or camera modules to achieve an enlarged field of view, e.g. panoramic image capture
7.
SYSTEMS AND METHODS FOR AUTOMATED IMAGE CAPTURE ASSISTANCE AND DUAL CAMERA MODE
Systems and methods are provided for enabling improved image capture at a computing device comprising a plurality of cameras. First and second capture streams, from respective first and second cameras of a computing device, are received at the computing device, wherein the first and second cameras face in different directions. A region of the first capture stream to include as an overlay over a portion of the second capture stream is identified. It is determined that a combined frame, comprising a frame from the second capture stream with an overlay from the region of the first capture stream, meets a threshold criterion based on image component analysis, and, in response to the determining, an image based on the combined frame is stored in a non-transitory memory.
H04N 23/60 - Control of cameras or camera modules
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over or changing the character of the image, or for other special effects
H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay or outlay
H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
H04N 23/62 - Control of parameters via user interfaces
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
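The combined-frame step of the entry above can be sketched as a simple picture-in-picture paste; the threshold criterion and image component analysis are omitted, and the overlay position is an assumed parameter:

```python
import numpy as np

def combine_frames(back_frame: np.ndarray, front_region: np.ndarray,
                   top_left: tuple) -> np.ndarray:
    """Overlay a region of the first capture stream onto a frame from the
    second capture stream."""
    out = back_frame.copy()
    y, x = top_left
    h, w = front_region.shape[:2]
    out[y:y + h, x:x + w] = front_region
    return out

back = np.zeros((4, 6), dtype=np.uint8)       # frame from second stream
front = np.full((2, 2), 255, dtype=np.uint8)  # region of first stream
combined = combine_frames(back, front, (1, 3))
```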
8.
SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE
Systems and methods are provided for improving image item editing. An image item is selected at a computing device using an editing application, and a preferred editing option to apply to the image item is identified via a user profile. The preferred editing option is determined based on historic editing actions for a plurality of different image items. An icon for applying the preferred editing option to the image item is generated for display in a user interface of the editing application. User input associated with the icon is received, and the preferred editing option is applied to the image item.
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or transforming a displayed object, image or text element, or setting a parameter value or selecting a range of values, for image transformation, e.g. dragging, rotation, enlargement or colour change
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor changing its behaviour or appearance, using icons
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
Systems, methods and apparatuses are described herein for accessing image data that comprises a plurality of macropixels, wherein the image data may be generated using a device comprising a lenslet array. The image data may be decomposed into a plurality of components using Kronecker product singular value decomposition (KP-SVD). Each component of the plurality of components may be encoded. Each encoded component of the plurality of components may be transmitted to cause display of reconstructed image data based on decoding each encoded component of the plurality of components.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating scene graphs of the coded video stream, involving video-signal reformatting operations for distribution or for compliance with end-user requests or end-user device requirements
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to scene graphs of the coded video stream, involving video-signal reformatting operations for home redistribution, storage or real-time display
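The KP-SVD named in the entry above is not detailed in the abstract; the classical Van Loan-Pitsianis construction rearranges fixed-size blocks (here standing in for macropixels) into rows of a matrix, takes an ordinary SVD, and reshapes each singular pair into a Kronecker factor. A sketch, with assumed block shapes:

```python
import numpy as np

def kp_svd(A: np.ndarray, bshape: tuple, cshape: tuple) -> list:
    """Decompose A into a sum of Kronecker products kron(B_k, C_k) via the
    Van Loan-Pitsianis rearrangement followed by a plain SVD."""
    m1, n1 = bshape
    m2, n2 = cshape
    # One row of R per (m2 x n2) block of A, blocks taken in column-major order
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i] = block.reshape(-1, order="F")
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    components = []
    for k in range(len(s)):
        B = (np.sqrt(s[k]) * U[:, k]).reshape(m1, n1, order="F")
        C = (np.sqrt(s[k]) * Vt[k]).reshape(m2, n2, order="F")
        components.append((B, C))
    return components

def reconstruct(components: list, rank: int) -> np.ndarray:
    """Rebuild image data from the leading components (the decoder side)."""
    return sum(np.kron(B, C) for B, C in components[:rank])

# Example: a true Kronecker product is recovered by a single component
rng = np.random.default_rng(0)
A = np.kron(rng.standard_normal((2, 3)), rng.standard_normal((4, 5)))
components = kp_svd(A, (2, 3), (4, 5))
approx = reconstruct(components, 1)
```

Encoding each component separately, as the abstract describes, lets the receiver reconstruct progressively better approximations as components arrive.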
A method and system for detecting facial expressions in digital images, and applications therefor, are disclosed. Analysis of a digital image determines whether a smile and/or blink is present on a person's face. Face recognition, and/or a pose or illumination condition determination, permits application of a specific, relatively small classifier cascade.
H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
H04N 23/61 - Control of cameras or camera modules based on recognised objects
11.
SYSTEM AND METHODS FOR LENSLESS UNDER DISPLAY CAMERA
Systems and methods are described for enabling a lensless camera having an image sensor and a mask to be positioned behind a display screen of a device, which allows for the device to have an increased screen-to-body ratio. The image sensor captures an image based on the light that travels through the display screen and the mask. The display screen may include portions between pixel elements that allow light to pass through. The mask may include a pattern, such as an opaque material with portions that allow light to pass through from the portions of the display layer to the image sensor. The image captured by the image sensor may be indiscernible to humans. The system may utilize a trained machine learning model to reconstruct the image, using data about the pattern of the mask, so humans may visually recognize features in the image.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presenting an assembly of a number of characters, e.g. a page, by composing the assembly from a matrix of individual elements, by controlling light from an independent source
H04M 1/02 - Constructional features of telephone sets
H04N 23/955 - Computational photography systems, e.g. light-field imaging systems, for lensless imaging
13.
System and Methods for Calibration of an Array Camera
Systems and methods for calibrating an array camera are disclosed. Systems and methods for calibrating an array camera in accordance with embodiments of this invention include capturing an image of a test pattern with the array camera such that each imaging component in the array camera captures an image of the test pattern. The image of the test pattern captured by a reference imaging component is then used to derive calibration information for the reference component. A corrected image of the test pattern for the reference component is then generated from the calibration information and the image of the test pattern captured by the reference imaging component. The corrected image is then used with the images captured by the imaging components associated with the reference component to generate calibration information for those associated imaging components.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
H04N 17/00 - Diagnosis, testing or measuring for television systems, or their details
H04N 17/02 - Diagnosis, testing or measuring for television systems, or their details, for colour television signals
H04N 23/667 - Changing camera operating mode, e.g. between still and video, sport and normal, or high- and low-resolution modes
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
14.
Capturing and Processing of Images Including Occlusions Focused on an Image Sensor by a Lens Stack Array
Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
H04N 13/128 - Adjusting depth or disparity
H04N 13/239 - Image signal generators using stereoscopic image cameras, using two 2D image sensors whose relative position equals or corresponds to the interocular distance
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths, for generating image signals from visible and infrared light wavelengths
H04N 23/13 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths, with multiple sensors
H04N 23/16 - Optical arrangements associated with the sensors, e.g. for beam splitting or colour correction
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors of different type or operating in different modes, e.g. a CMOS sensor for moving images combined with a charge-coupled device [CCD] for still images
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, or deviation or focusing coils
H04N 23/60 - Control of cameras or camera modules
H04N 23/69 - Control of means for changing the angle of the field of view, e.g. optical zoom objectives or electronic zooming
H04N 23/698 - Control of cameras or camera modules to achieve an enlarged field of view, e.g. panoramic image capture
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour-temperature control
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
H04N 25/131 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing infrared wavelengths
H04N 25/133 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing panchromatic light, e.g. filters passing white light
H04N 25/40 - Extracting pixel data from image sensors by acting on the scanning circuits, e.g. by changing the number of pixels sampled or to be sampled
H04N 25/48 - Increasing resolution by shifting the sensor relative to the scene
H04N 25/581 - Control of the dynamic range involving two or more exposures acquired simultaneously
H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
H04N 25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to fixed-pattern noise, e.g. non-uniformity of response
H04N 25/705 - Pixels for depth measurement, e.g. RGBZ
H04N 25/79 - Arrangements of circuitry distributed over different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
15.
Systems and Methods for Hybrid Depth Regularization
Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
G06T 7/136 - Segmentation; Edge detection involving thresholding
G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
G06T 7/44 - Texture analysis based on the statistical description of texture using image operators, e.g. filters, edge-density measures or local histograms
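The composite-depth-map step of the entry above can be sketched as a per-pixel selection driven by the confidence map (the confidence threshold is an assumption; the abstract does not fix one):

```python
import numpy as np

def composite_depth(raw: np.ndarray, secondary: np.ndarray,
                    confidence: np.ndarray,
                    threshold: float = 0.8) -> np.ndarray:
    """Keep the raw depth estimate where confidence is high; fall back to
    the secondary estimate elsewhere."""
    return np.where(confidence >= threshold, raw, secondary)

raw = np.array([[1.0, 5.0], [2.0, 9.0]])        # first estimation process
secondary = np.array([[1.1, 2.0], [2.1, 3.0]])  # second, different process
confidence = np.array([[0.9, 0.1], [0.95, 0.2]])
regularized = composite_depth(raw, secondary, confidence)
```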
16.
Systems and Methods for Estimating Depth and Visibility from a Reference Viewpoint for Pixels in a Set of Images Captured from Different Viewpoints
Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
H04N 13/128 - Adjusting depth or disparity
H04N 13/232 - Image signal generators using stereoscopic image cameras, using a single 2D image sensor with fly-eye lenses, e.g. arrangements of circular lenses
H04N 13/243 - Image signal generators using stereoscopic image cameras, using three or more 2D image sensors
H04N 23/16 - Optical arrangements associated with the sensors, e.g. for beam splitting or colour correction
17.
Systems and Methods for Decoding Image Files Containing Depth Maps Stored as Metadata
Systems and methods in accordance with embodiments of the invention are configured to decode images containing an image of a scene and a corresponding depth map. A depth-based effect is applied to the image to generate a synthetic image of the scene. The synthetic image can be encoded into a new image file that contains metadata associated with the depth-based effect. In many embodiments, the original decoded image has a different depth-based effect applied to it with respect to the synthetic image.
H04N 13/178 - Metadata, e.g. disparity information
G06T 3/4007 - Scaling of whole images or parts thereof, e.g. enlargement or reduction, based on interpolation, e.g. bilinear interpolation
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. enlargement or reduction, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
G06T 7/50 - Depth or shape recovery
G06T 7/593 - Depth or shape recovery from multiple images, from stereo images
H04N 13/128 - Adjusting depth or disparity
H04N 13/161 - Encoding, multiplexing or demultiplexing different image-signal components
H04N 13/243 - Image signal generators using stereoscopic image cameras, using three or more 2D image sensors
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth or disparity maps
H04N 19/136 - Characteristics or properties of the incoming video signal
H04N 19/597 - Methods or arrangements for encoding, decoding, compressing or decompressing digital video signals, using predictive coding specially adapted for encoding multi-view video sequences
H04N 19/625 - Methods or arrangements for encoding, decoding, compressing or decompressing digital video signals, using transform coding with a discrete cosine transform
H04N 19/85 - Methods or arrangements for encoding, decoding, compressing or decompressing digital video signals, using pre-processing or post-processing specially adapted for video compression
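One example of a depth-based effect as described in the entry above is a synthetic depth-of-field blur driven by the decoded depth map. A toy sketch, where the 3x3 box blur and the depth tolerance are assumptions:

```python
import numpy as np

def depth_based_blur(image: np.ndarray, depth: np.ndarray,
                     focal_depth: float, tol: float = 0.5) -> np.ndarray:
    """Keep pixels near the focal plane sharp; 3x3 box-blur the rest."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    in_focus = np.abs(depth - focal_depth) <= tol
    return np.where(in_focus, image, blurred)

image = np.arange(16, dtype=float).reshape(4, 4)
depth = np.full((4, 4), 5.0)
depth[2, 2] = 1.0                        # one pixel on the focal plane
out = depth_based_blur(image, depth, focal_depth=1.0)
```

Re-encoding `out` with the effect's parameters stored as metadata, as the abstract describes, would let a later decoder apply a different depth-based effect to the original image.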
18.
AUTOMATED RADIAL BLURRING BASED ON SALIENCY AND CO-SALIENCY
Systems and methods are described for performing automated radial blurring. A plurality of images may be accessed, and saliency parameters may be determined based on at least one of the plurality of images. Co-saliency parameters may be determined based on the plurality of images. A region of interest (ROI) in the at least one of the plurality of images may be determined based on the saliency parameters and the co-saliency parameters. Automated radial blurring of the at least one of the plurality of images may be performed based on the determined ROI.
Systems, methods and apparatuses are described for determining an image that corresponds to a received input instruction. Input may be received comprising an instruction for an image sensor to capture at least one image of a subject, the instruction comprising at least one criterion for the at least one image of the subject. An image sensor may capture, based on the instruction, captured images of the subject. An instruction vector may be determined based on the instruction, and a captured image vector may be determined for each of the captured images of the subject. At least one captured image vector of the captured images may be compared with the instruction vector to determine a corresponding image from the captured images, and the corresponding image may be provided.
H04N 23/617 - Upgrading or updating of programs or applications for camera control
H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
H04N 23/60 - Control of cameras or camera modules
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
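The vector comparison in the entry above is not specified; cosine similarity between embedding vectors is a common choice. A sketch with illustrative two-dimensional vectors:

```python
import numpy as np

def best_match(instruction_vec: np.ndarray, image_vecs: np.ndarray) -> int:
    """Index of the captured-image vector with the highest cosine
    similarity to the instruction vector."""
    q = instruction_vec / np.linalg.norm(instruction_vec)
    M = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    return int(np.argmax(M @ q))

instruction = np.array([1.0, 0.0])       # embedding of the criterion
captured = np.array([[0.0, 1.0],         # one vector per captured image
                     [0.9, 0.1],
                     [0.5, 0.5]])
idx = best_match(instruction, captured)  # index of the corresponding image
```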
Systems, methods, and apparatuses are provided herein for changing the positions and/or shapes of microlenses of a light field camera to generate light field images with enhanced depth of field and/or dynamic range. This may be accomplished by a light field camera determining a plurality of focus measurements for a plurality of microlenses, wherein one or more of the plurality of microlenses vary in distance from a main lens of the light field camera. The light field camera may use the plurality of focus measurements to determine a microlens of the plurality of microlenses that captures information that is the most focused. The light field camera can then determine defocus functions for the microlenses that are not capturing information that is the most focused. The light field camera can then generate a light field image using the determined defocus functions and the information captured by the plurality of microlenses.
A hand-held digital camera has a touch-sensitive display screen (“touch screen”) for image preview and user control of the camera, and a user-selectable panorama mode. Upon entering panorama mode the camera superimposes upon the touch screen a horizontal rectangular bar whose width and/or height are user-adjustable by interaction with the touch screen to select a desired horizontal sweep angle. After the sweep angle is set the camera automatically captures successive horizontally overlapping images during a sweep of the device through the selected sweep angle. Subsequently the camera synthesises a panoramic image from the successively captured images, the panoramic image having a width corresponding to the selected sweep angle.
Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to use information from a plurality of low resolution (LR) images captured by an array camera to produce a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image. In addition, each forward imaging transformation corresponds to the manner in which each imager in the imaging array generates the input images, and the high resolution image synthesized by the microprocessor has a resolution that is greater than any of the input images.
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
G06T 3/4007 - Scaling of whole images or parts thereof, e.g. enlargement or reduction, based on interpolation, e.g. bilinear interpolation
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. enlargement or reduction, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
G06T 3/4076 - Scaling of whole images or parts thereof, e.g. enlargement or reduction, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution, using the original low-resolution images to iteratively correct the high-resolution images
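The forward-transformation matching described in the entry above can be illustrated with iterative back-projection on a single image, using a toy box-average as the forward imaging transformation (the actual transformation, and the use of multiple input images, are camera-specific):

```python
import numpy as np

def forward(hr: np.ndarray, s: int) -> np.ndarray:
    """Toy forward imaging transformation: s x s box average."""
    h, w = hr.shape
    return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(lr: np.ndarray, s: int) -> np.ndarray:
    """Nearest-neighbour upsampling used as the back-projection step."""
    return np.kron(lr, np.ones((s, s)))

def super_resolve(lr: np.ndarray, s: int, iters: int = 20) -> np.ndarray:
    """Refine a high-resolution estimate until its forward projection
    matches the observed low-resolution image (iterative back-projection)."""
    hr = np.zeros((lr.shape[0] * s, lr.shape[1] * s))  # initial estimate
    for _ in range(iters):
        residual = lr - forward(hr, s)   # mismatch against the observation
        hr = hr + upsample(residual, s)  # back-project the residual
    return hr

lr = np.array([[0.0, 1.0], [2.0, 3.0]])  # a low-resolution observation
hr = super_resolve(lr, 2)
```

With several shifted LR observations, as in an array camera, the same loop would accumulate back-projected residuals from every imager's own forward transformation.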
In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, derive the depth of the key feature points from the determined disparities, and generate a 3D model of the face using the depth of the key feature points.
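The disparity-to-depth step can be illustrated with the standard pinhole stereo relation, depth = focal length × baseline / disparity. The focal length, baseline, and disparity values below are illustrative assumptions, not values from the disclosure.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth is inversely proportional to disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A key feature point (e.g. an eye corner) shifted 20 px between two views
# captured by cameras 5 cm apart, with a 1000 px focal length:
print(depth_from_disparity(1000, 0.05, 20))  # 2.5 (metres)
```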
Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
H04N 13/128 - Adjusting depth or disparity
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths, for generating image signals from visible and infrared light wavelengths
H04N 23/13 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
H04N 23/16 - Optical arrangements associated with the sensor, e.g. for beam-splitting or colour correction
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deflection or focusing coils
H04N 23/60 - Control of cameras or camera modules
H04N 23/69 - Control of means for changing the angle of the field of view, e.g. optical zoom objectives or electronic zooming
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
H04N 23/951 - Digital photography systems, e.g. light-field imaging systems, using multiple images to influence resolution, frame rate or aspect ratio
H04N 25/40 - Extracting pixel data from an image sensor by controlling the scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
H04N 25/48 - Increasing resolution by shifting the sensor relative to the scene
H04N 25/581 - Control of the dynamic range involving several exposures acquired simultaneously
H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
H04N 25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to fixed-pattern noise, e.g. non-uniformity of response
H04N 25/705 - Pixels for depth measurement, e.g. RGBZ
H04N 25/79 - Arrangements of circuitry divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
H04N 25/131 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing infrared wavelengths
H04N 25/133 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing panchromatic light, e.g. filters passing white light
27.
Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
H04N 13/128 - Adjusting depth or disparity
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths, for generating image signals from visible and infrared light wavelengths
H04N 23/13 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
H04N 23/16 - Optical arrangements associated with the sensor, e.g. for beam-splitting or colour correction
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deflection or focusing coils
H04N 23/60 - Control of cameras or camera modules
H04N 23/69 - Control of means for changing the angle of the field of view, e.g. optical zoom objectives or electronic zooming
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
H04N 23/951 - Digital photography systems, e.g. light-field imaging systems, using multiple images to influence resolution, frame rate or aspect ratio
H04N 25/40 - Extracting pixel data from an image sensor by controlling the scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
H04N 25/48 - Increasing resolution by shifting the sensor relative to the scene
H04N 25/581 - Control of the dynamic range involving several exposures acquired simultaneously
H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
H04N 25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to fixed-pattern noise, e.g. non-uniformity of response
H04N 25/705 - Pixels for depth measurement, e.g. RGBZ
H04N 25/79 - Arrangements of circuitry divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
H04N 25/131 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing infrared wavelengths
H04N 25/133 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing panchromatic light, e.g. filters passing white light
A method and system for detecting facial expressions in digital images, and applications therefor, are disclosed. Analysis of a digital image determines whether a smile and/or blink is present on a person's face. Face recognition, and/or a pose or illumination condition determination, permits application of a specific, relatively small classifier cascade.
H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
H04N 23/61 - Control of cameras or camera modules based on recognised objects
29.
System and methods for calibration of an array camera
Systems and methods for calibrating an array camera are disclosed. Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the capturing of an image of a test pattern with the array camera such that each imaging component in the array camera captures an image of the test pattern. The image of the test pattern captured by a reference imaging component is then used to derive calibration information for the reference component. A corrected image of the test pattern for the reference component is then generated from the calibration information and the image of the test pattern captured by the reference imaging component. The corrected image is then used, together with the images captured by each of the imaging components associated with the reference component, to generate calibration information for those associate imaging components.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
H04N 17/02 - Diagnosis, testing or measuring of television systems, or details thereof, for colour television signals
H04N 23/667 - Changing camera operation mode, e.g. between still and video, sport and normal, or high- and low-resolution modes
H04N 23/951 - Digital photography systems, e.g. light-field imaging systems, using multiple images to influence resolution, frame rate or aspect ratio
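The reference-then-associate calibration flow can be sketched with point correspondences on the test pattern: offsets derived for the reference imager produce a corrected pattern, which then serves as the target for the associate imagers. The per-point offset model is a deliberate simplification of real calibration data, and all names and values are illustrative assumptions.

```python
import numpy as np

def derive_correction(ideal_pts, captured_pts):
    """Calibration information for one imager: per-point offsets mapping its
    captured test-pattern points onto the ideal pattern positions."""
    return ideal_pts - captured_pts

def corrected_image_points(captured_pts, correction):
    """Apply the derived correction to captured test-pattern points."""
    return captured_pts + correction

ideal = np.array([[0.0, 0.0], [10.0, 0.0]])          # ideal test-pattern points
ref_captured = np.array([[0.5, -0.2], [10.3, 0.1]])  # as seen by the reference imager
corr = derive_correction(ideal, ref_captured)
# The corrected reference pattern recovers the ideal positions and becomes
# the target against which each associate imager is then calibrated:
print(corrected_image_points(ref_captured, corr))
```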
30.
Systems and methods for depth estimation using generative models
Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images, based upon a set of generative model parameters, using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the generated target image using the processing system configured by the image processing application.
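The idea of identifying depth by synthesizing a target view and comparing it against observation can be sketched as a 1-D analysis-by-synthesis search. The integer shift warp and the sum-of-squared-differences error are illustrative stand-ins for the generative model; all values are assumptions.

```python
import numpy as np

def shift(signal, d):
    """Warp a 1-D 'source view' by integer disparity d (zero-padded)."""
    out = np.zeros_like(signal)
    if d == 0:
        out[:] = signal
    elif d > 0:
        out[d:] = signal[:-d]
    else:
        out[:d] = signal[-d:]
    return out

def best_disparity(source, target, candidates):
    """Analysis-by-synthesis: the disparity whose synthesized target view
    best matches the observed/generated target wins."""
    errs = {d: float(np.sum((shift(source, d) - target) ** 2)) for d in candidates}
    return min(errs, key=errs.get)

src = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
tgt = shift(src, 2)                            # target view: scene shifted by 2 px
print(best_disparity(src, tgt, range(-2, 3)))  # 2
```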
Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select, from amongst a number of different sets of geometric calibration data, the set that best fits the current geometry of the camera array.
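The residual-vector step can be sketched as follows: after rectification with the current geometric calibration data, a corresponding feature's observed position is compared with its expected position, and the difference updates a calibration data field. The nearest-grid update rule and all values are illustrative assumptions.

```python
import numpy as np

def residual_vectors(expected_pts, observed_pts):
    """Residual = where a rectified feature actually lands minus where the
    current geometric calibration data says it should land."""
    return observed_pts - expected_pts

def update_calibration(calib_field, feature_pts, residuals):
    """Apply each residual as a local correction to the calibration field
    at the nearest grid location (nearest-neighbour, for brevity)."""
    field = calib_field.copy()
    for (x, y), r in zip(feature_pts.astype(int), residuals):
        field[y, x] += r
    return field

calib = np.zeros((4, 4, 2))              # per-location (dx, dy) corrections
pts = np.array([[1.0, 2.0]])             # image location of the feature
expected = np.array([[10.0, 20.0]])
observed = np.array([[10.5, 19.0]])
res = residual_vectors(expected, observed)
calib = update_calibration(calib, pts, res)
print(calib[2, 1])  # the field now carries the (0.5, -1.0) correction
```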
Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
G06T 7/136 - Segmentation; Edge detection involving thresholding
G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
G06T 7/44 - Texture analysis based on statistical description of texture using image operators, e.g. filters, edge-density measures or local histograms
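The composite-depth-map step above — selecting per pixel between the raw and secondary estimates based on the confidence map — can be sketched in a few lines. The confidence threshold and values are illustrative assumptions.

```python
import numpy as np

def composite_depth(raw, secondary, confidence, threshold=0.8):
    """Keep the raw (first-pass) depth estimate where confidence is high;
    fall back to the secondary (regularized) estimate elsewhere."""
    return np.where(confidence >= threshold, raw, secondary)

raw = np.array([2.0, 5.0, 1.0])        # first depth estimation process
secondary = np.array([2.1, 3.0, 1.1])  # second, different process
conf = np.array([0.9, 0.2, 0.95])      # confidence map for the raw estimates
print(composite_depth(raw, secondary, conf))  # [2. 3. 1.]
```

The middle pixel's low confidence causes its raw estimate (5.0) to be replaced by the secondary estimate (3.0).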
33.
Systems and Methods for Estimating Depth from Projected Texture using Camera Arrays
Systems and methods for estimating depth from projected texture using camera arrays are described. A camera array includes a conventional camera and at least one two-dimensional array of cameras, where the conventional camera has a higher resolution than the cameras in the at least one two-dimensional array of cameras, and an illumination system configured to illuminate a scene with a projected texture. An image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture, capture a set of images of the scene illuminated with the projected texture, and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
G01B 11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, onto the object
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
G06T 7/593 - Depth or shape recovery from multiple images, from stereo images
G06T 7/557 - Depth or shape recovery from multiple images, from light fields, e.g. from plenoptic cameras
Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination light source for use in computing depth maps.
G06T 7/593 - Depth or shape recovery from multiple images, from stereo images
H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths, for generating image signals from visible and infrared light wavelengths
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
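Why the baseline split matters can be illustrated with the standard stereo quantisation-error relation, dz ≈ z² · Δd / (f · B): at a fixed depth, a wider baseline reduces the depth error caused by one-pixel disparity quantisation. This relation and the values below are textbook illustrations, not taken from the disclosure.

```python
def depth_error(z_m, focal_px, baseline_m, disparity_step_px=1.0):
    """Depth uncertainty from disparity quantisation: dz ~ z^2 * Δd / (f * B)."""
    return z_m ** 2 * disparity_step_px / (focal_px * baseline_m)

# At 2 m, doubling the baseline halves the quantisation depth error:
print(round(depth_error(2.0, 1000, 0.04), 3))  # 0.1  (4 cm baseline)
print(round(depth_error(2.0, 1000, 0.08), 3))  # 0.05 (8 cm baseline)
```

Conversely, very close objects produce large disparities on a wide baseline (and more occlusion), which is why a shorter-baseline sub-array is assigned to near-field depth.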
35.
Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to fuse information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion, using the initial estimate of at least a portion of the high resolution image. In addition, each forward imaging transformation corresponds to the manner in which each imager in the imaging array generates the input images, and the high resolution image synthesized by the microprocessor has a resolution that is greater than any of the input images.
G06T 3/40 - Scaling of whole images or parts thereof, e.g. enlarging or reducing
H04N 23/951 - Digital photography systems, e.g. light-field imaging systems, using multiple images to influence resolution, frame rate or aspect ratio
Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
H04N 13/128 - Adjusting depth or disparity
G06T 7/557 - Depth or shape recovery from multiple images, from light fields, e.g. from plenoptic cameras
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
G06T 7/50 - Depth or shape recovery
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths, for generating image signals from visible and infrared light wavelengths
H04N 23/13 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
H04N 23/16 - Optical arrangements associated with the sensor, e.g. for beam-splitting or colour correction
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deflection or focusing coils
H04N 23/60 - Control of cameras or camera modules
H04N 23/69 - Control of means for changing the angle of the field of view, e.g. optical zoom objectives or electronic zooming
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
H04N 23/951 - Digital photography systems, e.g. light-field imaging systems, using multiple images to influence resolution, frame rate or aspect ratio
H04N 25/40 - Extracting pixel data from an image sensor by controlling the scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
H04N 25/48 - Increasing resolution by shifting the sensor relative to the scene
H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
H04N 25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to fixed-pattern noise, e.g. non-uniformity of response
H04N 25/79 - Arrangements of circuitry divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
H04N 25/581 - Control of the dynamic range involving several exposures acquired simultaneously
H04N 25/705 - Pixels for depth measurement, e.g. RGBZ
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, changing the character of the image, or for other special effects
H04N 25/131 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing infrared wavelengths
H04N 25/133 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements, including elements passing panchromatic light, e.g. filters passing white light
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
An approach for iris liveness detection is provided. A plurality of image pairs is acquired using one or more image sensors of a mobile device. A particular image pair is selected from the plurality of image pairs, and a hyperspectral image is generated for the particular image pair. Based at least in part on the hyperspectral image, a particular feature vector for the eye-iris region depicted in the particular image pair is generated, and one or more trained model feature vectors generated for facial features of a particular user of the device are retrieved. Based at least in part on the particular feature vector and the one or more trained model feature vectors, a distance metric is determined and compared with a threshold. If the distance metric exceeds the threshold, then a first message indicating that the plurality of image pairs fails to depict the particular user is generated. It is also determined whether at least one characteristic, of one or more characteristics determined for NIR images, changes from image to image by at least a second threshold. If so, then a second message is generated to indicate that the plurality of image pairs depicts the particular user of the mobile device. The second message may also indicate that authentication of the owner of the mobile device was successful. Otherwise, a third message is generated to indicate that a presentation attack on the mobile device is in progress.
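The two-stage decision flow above (a distance test against the enrolled model, then an image-to-image NIR variation test) can be summarized in a short sketch. The Euclidean distance metric, function names, and threshold values are illustrative assumptions, not the disclosure's actual metrics.

```python
import numpy as np

def liveness_decision(feature_vec, model_vecs, dist_threshold,
                      nir_deltas, nir_threshold):
    """Stage 1: does the feature vector match any enrolled model vector?
    Stage 2: does the NIR characteristic vary enough between images?"""
    dist = min(np.linalg.norm(feature_vec - m) for m in model_vecs)
    if dist > dist_threshold:
        return "not the enrolled user"          # first message
    if all(d >= nir_threshold for d in nir_deltas):
        return "live user authenticated"        # second message
    return "presentation attack suspected"      # third message

vec = np.array([0.1, 0.2])                      # eye-iris feature vector
models = [np.array([0.12, 0.18])]               # enrolled model vectors
print(liveness_decision(vec, models, 0.1, [0.3, 0.4], 0.25))
```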
Systems and methods for calibrating an array camera are disclosed. Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the capturing of an image of a test pattern with the array camera such that each imaging component in the array camera captures an image of the test pattern. The image of the test pattern captured by a reference imaging component is then used to derive calibration information for the reference component. A corrected image of the test pattern for the reference component is then generated from the calibration information and the image of the test pattern captured by the reference imaging component. The corrected image is then used, together with the images captured by each of the imaging components associated with the reference component, to generate calibration information for those associate imaging components.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
H04N 17/02 - Diagnosis, testing or measuring of television systems, or details thereof, for colour television signals
A method and system for detecting facial expressions in digital images, and applications therefor, are disclosed. Analysis of a digital image determines whether a smile and/or blink is present on a person's face. Face recognition, and/or a pose or illumination condition determination, permits application of a specific, relatively small classifier cascade.
H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
H04N 23/61 - Control of cameras or camera modules based on recognised objects
40.
Digital image capture device having a panorama mode
A hand-held digital camera has a touch-sensitive display screen (“touch screen”) for image preview and user control of the camera, and a user-selectable panorama mode. Upon entering panorama mode the camera superimposes upon the touch screen a horizontal rectangular bar whose width and/or height are user-adjustable by interaction with the touch screen to select a desired horizontal sweep angle. After the sweep angle is set the camera automatically captures successive horizontally overlapping images during a sweep of the device through the selected sweep angle. Subsequently the camera synthesizes a panoramic image from the successively captured images, the panoramic image having a width corresponding to the selected sweep angle.
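The relationship between the selected sweep angle, the camera's horizontal field of view, and the number of overlapping captures can be sketched as follows. The 30% overlap fraction and the counting rule are illustrative assumptions, not values from the disclosure.

```python
import math

def capture_count(sweep_deg, fov_deg, overlap_frac=0.3):
    """Number of stills needed to cover the selected sweep angle when each
    frame overlaps its neighbour by overlap_frac of the field of view."""
    if sweep_deg <= fov_deg:
        return 1                               # one frame already covers the sweep
    step = fov_deg * (1.0 - overlap_frac)      # new angle covered per extra frame
    return 1 + math.ceil((sweep_deg - fov_deg) / step)

# A 180-degree sweep with a 60-degree lens and 30% overlap:
print(capture_count(180, 60, 0.3))  # 4
```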
Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images, based upon a set of generative model parameters, using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the generated target image using the processing system configured by the image processing application.
Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select, from amongst a number of different sets of geometric calibration data, the set that best fits the current geometry of the camera array.
A method operable within an image capture device for stabilizing a sequence of images captured by the image capture device is disclosed. The method comprises, using lens-based sensors indicating image capture device movement during image acquisition, performing optical image stabilization (OIS) during acquisition of each image of the sequence of images to provide a sequence of OIS-corrected images. Movement of the device for each frame during which each OIS-corrected image is captured is determined using inertial measurement sensors. At least an estimate of the OIS control performed during acquisition of an image is obtained. The estimate is removed from the movement determined for the frame during which the OIS-corrected image was captured to provide a residual measurement of movement for the frame. Electronic image stabilization (EIS) of each OIS-corrected image based on the residual measurement is performed to provide a stabilized sequence of images.
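The residual-measurement step can be sketched as a per-frame subtraction: the estimate of the OIS correction already applied optically is removed from the gyro-measured frame motion, and EIS compensates only the remainder. The axis, units, and values below are illustrative assumptions.

```python
def residual_motion(frame_motion_deg, ois_correction_deg):
    """Subtract the OIS correction estimate (already applied optically)
    from the inertially measured per-frame motion; EIS then compensates
    only this residual."""
    return [m - o for m, o in zip(frame_motion_deg, ois_correction_deg)]

gyro = [1.25, -0.75, 0.5]    # per-frame device motion about one axis (degrees)
ois = [1.0, -0.5, 0.25]      # estimated OIS lens-shift correction per frame
print(residual_motion(gyro, ois))  # [0.25, -0.25, 0.25]
```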
Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
H04N 13/128 - Adjusting depth or disparity
H04N 13/232 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
H04N 23/16 - Optical arrangements associated with the sensors, e.g. for beam-splitting or colour correction
An approach for iris liveness detection is provided. A plurality of image pairs is acquired using one or more image sensors of a mobile device. A particular image pair is selected from the plurality of image pairs, and a hyperspectral image is generated for the particular image pair. Based, at least in part, on the hyperspectral image, a particular feature vector for the eye-iris region depicted in the particular image pair is generated, and one or more trained model feature vectors generated for facial features of a particular user of the device are retrieved. Based, at least in part, on the particular feature vector and the one or more trained model feature vectors, a distance metric is determined and compared with a threshold. If the distance metric exceeds the threshold, then a first message is generated indicating that the plurality of image pairs fails to depict the particular user. It is also determined whether at least one characteristic, of one or more characteristics determined for NIR images, changes from image to image by at least a second threshold. If so, then a second message is generated to indicate that the plurality of image pairs depicts the particular user of the mobile device. The second message may also indicate that an authentication of the owner to the mobile device was successful. Otherwise, a third message is generated to indicate that a presentation attack on the mobile device is in progress.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
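The distance-metric comparison in the entry above might be sketched as follows, assuming a Euclidean metric taken against the best-matching enrolled vector. The metric choice, the threshold value, and all names are hypothetical; the abstract does not specify them.

```python
import numpy as np

def liveness_decision(feature_vec, model_vecs, dist_threshold):
    """Compare a probe feature vector against enrolled model feature
    vectors; if even the best match is farther than the threshold,
    the images fail to depict the enrolled user."""
    dists = [np.linalg.norm(np.asarray(feature_vec) - np.asarray(m))
             for m in model_vecs]
    metric = min(dists)  # distance to the closest enrolled vector
    if metric > dist_threshold:
        return "fails to depict the particular user"
    return "depicts the particular user"

enrolled = [[0.1, 0.9, 0.2], [0.8, 0.1, 0.4]]   # hypothetical vectors
msg = liveness_decision([0.12, 0.88, 0.21], enrolled, dist_threshold=0.5)
```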
A method of image processing within an image acquisition device. In one embodiment an image including one or more face regions is acquired and one or more iris regions are identified within the one or more face regions. The one or more iris regions are analyzed to identify any iris region containing an iris pattern that poses a risk of biometrically identifying a subject within the image. Responsive to identifying any such iris region, a respective substitute iris region, containing an iris pattern distinct from the identified iris pattern to avoid identifying the subject within the image, is determined and the identified iris region is replaced with the substitute iris region in the original image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A method is disclosed for processing at least a portion of an input digital image comprising rows of pixels extending in two mutually perpendicular directions over a 2D field. The method comprises defining a kernel for processing an image, the kernel comprising at least one row of contiguous elements of the same non-zero value (such rows being referred to herein as equal-valued kernel regions), the equal-valued kernel regions, if more than one, extending parallel to one another. For each pixel in at least selected parallel rows of pixels within the image portion, the cumulative sum of the pixel is calculated by adding the value of the pixel to the sum of all preceding pixel values in the same row of the image portion. The kernel is convolved with the image portion at successive kernel positions relative to the image portion such that each pixel in each selected row is a target pixel for a respective kernel position. For each kernel position, the convolving is performed, for each equal-valued kernel region, by calculating the difference between the cumulative sum of the pixel corresponding to the last element in the equal-valued kernel region and the cumulative sum of the pixel corresponding to the element immediately preceding the first element in the region, and summing the differences for all equal-valued kernel regions. The sum of the differences is scaled to provide a processed target pixel value.
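The cumulative-sum technique above replaces per-element multiplications with two table lookups per equal-valued region. A minimal single-region sketch, assuming a hypothetical 1x2 averaging kernel (two contiguous elements of value 0.5); function names are illustrative:

```python
import numpy as np

def row_cumsums(img):
    """Cumulative sum along each row: each entry holds the sum of
    all pixels up to and including it in the same row."""
    return np.cumsum(img.astype(float), axis=1)

def convolve_equal_valued_row_kernel(img, width, value):
    """Convolve a kernel of one equal-valued region (`width` contiguous
    elements, all equal to `value`) using cumulative-sum differences."""
    csum = row_cumsums(img)
    h, w = img.shape
    out = np.zeros((h, w - width + 1))
    for x in range(w - width + 1):
        last = csum[:, x + width - 1]               # last element of region
        before = csum[:, x - 1] if x > 0 else 0.0   # element before region
        out[:, x] = (last - before) * value         # scale by kernel value
    return out

img = np.arange(12.0).reshape(3, 4)
avg = convolve_equal_valued_row_kernel(img, width=2, value=0.5)
```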
An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources. The processor is further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of the component images as a pixel value for the combined image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
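The per-pixel median combination performed by the processor can be sketched as follows; the array values are hypothetical. The median rejects pixel values that are outliers in only some component images, such as a specular reflection produced by one IR source combination:

```python
import numpy as np

def combine_median(component_images):
    """Combine component images by taking the per-pixel median
    across the succession of differently illuminated captures."""
    stack = np.stack(component_images, axis=0)
    return np.median(stack, axis=0)

imgs = [np.array([[10, 200], [30, 40]]),   # 200: glare in this capture
        np.array([[12,  50], [31, 41]]),
        np.array([[11,  55], [90, 42]])]   # 90: glare in this capture
combined = combine_median(imgs)
```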
A hand-held or otherwise portable or spatial or temporal performance-based image capture device includes one or more lenses, an aperture and a main sensor for capturing an original main image. A secondary sensor and optical system are provided for capturing a reference image that has temporal and spatial overlap with the original image. The device performs an image processing method including capturing the main image with the main sensor and the reference image with the secondary sensor, and utilizing information from the reference image to enhance the main image. The main and secondary sensors are contained together within a housing.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centering of the image pick-up or image-field
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/235 - Circuitry for compensating for variations in the brightness of the object
H04N 5/345 - Extraction of pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by partially reading an SSIS sensor array
H04N 5/347 - Extraction of pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels in the SSIS
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus, between a recording apparatus and a television camera
H04N 9/804 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
An approach for iris liveness detection is provided. A plurality of image pairs is acquired using one or more image sensors of a mobile device. A particular image pair is selected from the plurality of image pairs, and a hyperspectral image is generated for the particular image pair. Based, at least in part, on the hyperspectral image, a particular feature vector for the eye-iris region depicted in the particular image pair is generated, and one or more trained model feature vectors generated for facial features of a particular user of the device are retrieved. Based, at least in part, on the particular feature vector and the one or more trained model feature vectors, a distance metric is determined and compared with a threshold. If the distance metric exceeds the threshold, then a first message is generated indicating that the plurality of image pairs fails to depict the particular user. It is also determined whether at least one characteristic, of one or more characteristics determined for NIR images, changes from image to image by at least a second threshold. If so, then a second message is generated to indicate that the plurality of image pairs depicts the particular user of the mobile device. The second message may also indicate that an authentication of the owner to the mobile device was successful. Otherwise, a third message is generated to indicate that a presentation attack on the mobile device is in progress.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A hand-held digital camera has a touch-sensitive display screen (“touch screen”) for image preview and user control of the camera, and a user-selectable panorama mode. Upon entering panorama mode the camera superimposes upon the touch screen a horizontal rectangular bar whose width and/or height are user-adjustable by interaction with the touch screen to select a desired horizontal sweep angle. After the sweep angle is set the camera automatically captures successive horizontally overlapping images during a sweep of the device through the selected sweep angle. Subsequently the camera synthesises a panoramic image from the successively captured images, the panoramic image having a width corresponding to the selected sweep angle.
Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
G06T 7/593 - Depth or shape recovery from multiple images, from stereo images
G06T 7/44 - Texture analysis based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
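The confidence-gated selection step that produces the composite depth map might look like the following sketch. The threshold of 0.8 and all array values are illustrative assumptions; the disclosure only states that estimates are selected from the raw and secondary maps based on the confidence map.

```python
import numpy as np

def composite_depth(raw_depth, secondary_depth, confidence, threshold=0.8):
    """Keep the raw depth estimate where its confidence is high;
    fall back to the secondary (regularized) estimate elsewhere."""
    return np.where(confidence >= threshold, raw_depth, secondary_depth)

raw = np.array([[1.0, 9.0], [2.0, 3.0]])    # 9.0 is an unreliable estimate
sec = np.array([[1.1, 2.0], [2.1, 3.1]])
conf = np.array([[0.9, 0.2], [0.95, 0.5]])  # low confidence where raw fails
depth = composite_depth(raw, sec, conf)
```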
A method for acquiring an image comprises acquiring a first image frame including a region containing a subject at a first focus position; determining a first sharpness of the subject within the first image frame; identifying an imaged subject size within the first image frame; determining a second focus position based on the imaged subject size; acquiring a second image frame at the second focus position; and determining a second sharpness of the subject within the second image frame. A sharpness threshold is determined as a function of image acquisition parameters for the first and/or second image frame. Responsive to the second sharpness not exceeding the first sharpness and the sharpness threshold, camera motion parameters and/or subject motion parameters for the second image frame are determined before performing a focus sweep to determine an optimal focus position for the subject.
G03B 13/00 - Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G02B 7/28 - Systems for automatic generation of focusing signals
G03B 13/36 - Autofocus systems
G02B 7/36 - Systems for automatic generation of focusing signals using image sharpness techniques
H04N 5/14 - Picture signal circuitry for video frequency region
54.
Digital image capture device having a panorama mode
A hand-held digital camera has a touch-sensitive display screen (“touch screen”) for image preview and user control of the camera, and a user-selectable panorama mode. Upon entering panorama mode the camera superimposes upon the touch screen a horizontal rectangular bar whose width and/or height are user-adjustable by interaction with the touch screen to select a desired horizontal sweep angle. After the sweep angle is set the camera automatically captures successive horizontally overlapping images during a sweep of the device through the selected sweep angle. Subsequently the camera synthesizes a panoramic image from the successively captured images, the panoramic image having a width corresponding to the selected sweep angle.
A method for acquiring an image comprises acquiring a first image frame including a region containing a subject at a first focus position; determining a first sharpness of the subject within the first image frame; identifying an imaged subject size within the first image frame; determining a second focus position based on the imaged subject size; acquiring a second image frame at the second focus position; and determining a second sharpness of the subject within the second image frame. A sharpness threshold is determined as a function of image acquisition parameters for the first and/or second image frame. Responsive to the second sharpness not exceeding the first sharpness and the sharpness threshold, camera motion parameters and/or subject motion parameters for the second image frame are determined before performing a focus sweep to determine an optimal focus position for the subject.
G03B 13/00 - Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
G02B 7/36 - Systems for automatic generation of focusing signals using image sharpness techniques
56.
Image capture device with contemporaneous image correction mechanism
A hand-held or otherwise portable or spatial or temporal performance-based image capture device includes one or more lenses, an aperture and a main sensor for capturing an original main image. A secondary sensor and optical system are provided for capturing a reference image that has temporal and spatial overlap with the original image. The device performs an image processing method including capturing the main image with the main sensor and the reference image with the secondary sensor, and utilizing information from the reference image to enhance the main image. The main and secondary sensors are contained together within a housing.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centering of the image pick-up or image-field
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/235 - Circuitry for compensating for variations in the brightness of the object
H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
H04N 5/345 - Extraction of pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by partially reading an SSIS sensor array
H04N 5/347 - Extraction of pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels in the SSIS
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus, between a recording apparatus and a television camera
H04N 9/804 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/48 - Extraction of features or characteristics of the image by coding the contour of the pattern
G06T 3/40 - Scaling of whole images or parts thereof, e.g. enlarging or reducing
In one embodiment, a gimbal adjustment system and an associated method for adjusting the position of an object are provided. The system comprises a base, a plate and a shaft including a pivot attached to the plate. The pivot has a point of contact with the plate in a joint about which the plate is rotatable. Magnetic elements are positioned on the base and the plate to stabilize or rotate the plate. The object may be an optical unit attached to the plate. A combination comprising the plate, optical unit and magnetic elements may form a gimbaled assembly having a center of mass in the joint.
G02B 27/64 - Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
H02K 33/12 - Motors with reciprocating, oscillating or vibrating magnet, armature or coil system with armatures moving in alternate directions by alternate energisation of two coil systems
G03B 5/04 - Vertical adjustment of the lens; Rising fronts
A method of image processing within an image acquisition device. In one embodiment an image including one or more face regions is acquired and one or more iris regions are identified within the one or more face regions. The one or more iris regions are analyzed to identify any iris region containing an iris pattern that poses a risk of biometrically identifying a subject within the image. Responsive to identifying any such iris region, a respective substitute iris region, containing an iris pattern distinct from the identified iris pattern to avoid identifying the subject within the image, is determined and the identified iris region is replaced with the substitute iris region in the original image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
An image acquisition system for acquiring iris images for use in biometric recognition of a subject includes an optical system comprising a cluster of at least 2 lenses arranged in front of a common image sensor with each lens optical axis in parallel spaced apart relationship. Each lens has a fixed focus and a different aperture to provide a respective angular field of view. The lens with the closest focus has the smallest aperture and the lens with the farthest focus has the largest aperture so that iris images can be acquired across a focal range of at least from 200 mm to 300 mm.
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A method for calibrating an image capture device comprises mounting at least one sample device from a batch for movement through a plurality of orientations relative to a horizontal plane. For a given orientation, the sample device is focused at a sequence of positions, each position being at a respective focus distance from the device. A lens actuator setting is recorded for the sample device at each position. This is repeated at a plurality of distinct orientations of the sample device. Respective relationships are determined between lens actuator settings at any given position for distinct orientations from the plurality of distinct orientations and actuator settings at a selected orientation of the plurality of distinct orientations. Lens actuator settings for the image capture device to be calibrated are recorded at least at two points of interest (POI), each a specified focus distance from the device with the image capture device positioned at the selected orientation. The image capture device is calibrated for the plurality of distinct orientations based on the determined relationships and the recorded lens actuator settings.
H04N 17/02 - Diagnosis, testing or measuring of television systems or their details for colour television signals
H04N 17/00 - Diagnosis, testing or measuring of television systems or their details
G02B 7/28 - Systems for automatic generation of focusing signals
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G02B 7/10 - Mountings, adjusting means or light-tight connections for optical elements, for lenses, with mechanism for focusing or varying magnification by relative axial movement of several lenses, e.g. variable focal length objectives
An approach for iris liveness detection is provided. A plurality of image pairs is acquired using one or more image sensors of a mobile device. A particular image pair is selected from the plurality of image pairs, and a hyperspectral image is generated for the particular image pair. Based, at least in part, on the hyperspectral image, a particular feature vector for the eye-iris region depicted in the particular image pair is generated, and one or more trained model feature vectors generated for facial features of a particular user of the device are retrieved. Based, at least in part, on the particular feature vector and the one or more trained model feature vectors, a distance metric is determined and compared with a threshold. If the distance metric exceeds the threshold, then a first message is generated indicating that the plurality of image pairs fails to depict the particular user. It is also determined whether at least one characteristic, of one or more characteristics determined for NIR images, changes from image to image by at least a second threshold. If so, then a second message is generated to indicate that the plurality of image pairs depicts the particular user of the mobile device. The second message may also indicate that an authentication of the owner to the mobile device was successful. Otherwise, a third message is generated to indicate that a presentation attack on the mobile device is in progress.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
An image processing apparatus comprises a set of infra-red (IR) sources surrounding an image capture sensor and a processor operatively coupled to said IR sources and said image capture sensor. The processor is arranged to acquire from the sensor a succession of images, each illuminated with a different combination of the IR sources. The processor is further arranged to combine component images corresponding to the succession of images by selecting a median value for corresponding pixel locations of the component images as a pixel value for the combined image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A method is disclosed for processing at least a portion of an input digital image comprising rows of pixels extending in two mutually perpendicular directions over a 2D field. The method comprises defining a kernel for processing an image, the kernel comprising at least one row of contiguous elements of the same non-zero value (such rows being referred to herein as equal-valued kernel regions), the equal-valued kernel regions, if more than one, extending parallel to one another. For each pixel in at least selected parallel rows of pixels within the image portion, the cumulative sum of the pixel is calculated by adding the value of the pixel to the sum of all preceding pixel values in the same row of the image portion. The kernel is convolved with the image portion at successive kernel positions relative to the image portion such that each pixel in each selected row is a target pixel for a respective kernel position. For each kernel position, the convolving is performed, for each equal-valued kernel region, by calculating the difference between the cumulative sum of the pixel corresponding to the last element in the equal-valued kernel region and the cumulative sum of the pixel corresponding to the element immediately preceding the first element in the region, and summing the differences for all equal-valued kernel regions. The sum of the differences is scaled to provide a processed target pixel value.
An optical system for an image acquisition device comprises an image sensor comprising an array of pixels including pixels sensitive to IR wavelengths for acquiring an image. A lens assembly includes a collecting lens surface with an optical axis, the lens assembly being arranged to focus IR light received from a given object distance on the sensor surface. The lens assembly includes at least a first reflective surface for reflecting collected light along an axis transverse to the optical axis so that a length of the optical system along the optical axis is reduced by comparison to a focal length of the lens assembly.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G02B 7/04 - Mountings, adjusting means or light-tight connections for optical elements, for lenses, with mechanism for focusing or varying magnification
A method of processing an image comprises: acquiring an image of a scene including an object having a recognizable feature. A lens actuator setting providing a maximum sharpness for a region of the image including the object and a lens displacement corresponding to the lens actuator setting are determined. A distance to the object based on the lens displacement is calculated. A dimension of the feature as a function of the distance to the object, the imaged object size and a focal length of a lens assembly with which the image was acquired, is determined. The determined dimension of the feature is employed instead of an assumed dimension of the feature for subsequent processing of images of the scene including the object.
G02B 7/09 - Mountings, adjusting means or light-tight connections for optical elements, for lenses, with mechanism for focusing or varying magnification adapted for automatic focusing or varying magnification
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
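The abstract above states that the feature dimension is determined as a function of the object distance, the imaged object size and the focal length, but does not give the function. One standard assumption is the pinhole-camera relation, sketched here with hypothetical values (all lengths in millimetres):

```python
def real_dimension(imaged_size_mm, distance_mm, focal_length_mm):
    """Pinhole-camera relation (an assumption, not from the abstract):
    real size = imaged size * object distance / focal length."""
    return imaged_size_mm * distance_mm / focal_length_mm

# e.g. a feature imaged at 0.7 mm on the sensor, 500 mm away, 25 mm lens
dim = real_dimension(0.7, 500.0, 25.0)
```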
66.
Foreground / background separation in digital images
A method for providing improved foreground/background separation in a digital image of a scene is disclosed. The method comprises providing a first map comprising one or more regions provisionally defined as one of foreground or background within the digital image; and providing a subject profile corresponding to a region of interest of the digital image. The provisionally defined regions are compared with the subject profile to determine if any of the regions intersect with the profile region. The definition of one or more of the regions in the map is changed based on the comparison.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A hand-held digital camera has a touch-sensitive display screen (“touch screen”) for image preview and user control of the camera, and a user-selectable panorama mode. Upon entering panorama mode the camera superimposes upon the touch screen a horizontal rectangular bar whose width and/or height are user-adjustable by interaction with the touch screen to select a desired horizontal sweep angle. After the sweep angle is set the camera automatically captures successive horizontally overlapping images during a sweep of the device through the selected sweep angle. Subsequently the camera synthesizes a panoramic image from the successively captured images, the panoramic image having a width corresponding to the selected sweep angle.
A method of image processing within an image acquisition device comprises: acquiring an image including one or more face regions and identifying one or more eye-iris regions within the one or more face regions. The one or more eye-iris regions are analyzed to identify any eye-iris region comprising an eye-iris pattern of sufficient quality to pose a risk of biometrically identifying a person within the image. Responsive to identifying any such eye-iris region, a respective substitute eye-iris region comprising an eye-iris pattern sufficiently distinct from the identified eye-iris pattern to avoid identifying the person within the image is determined, and the identified eye-iris region is replaced with the substitute eye-iris region in the original image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 17/30 - Information retrieval; Database structures therefor
A template matching module is configured to program a processor to apply multiple differently-tuned object detection classifier sets in parallel to a digital image to determine one or more of an object type, configuration, orientation, pose or illumination condition, and to dynamically switch between object detection templates to match a determined object type, configuration, orientation, pose, blur, exposure and/or directional illumination condition.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/68 - Methods or arrangements for recognition using electronic means using successive comparisons of the image signals with a plurality of references, e.g. addressable memory
A method of tracking faces in an image stream with a digital image acquisition device includes receiving images from an image stream including faces, calculating corresponding integral images, and applying different subsets of face detection rectangles to the integral images to provide sets of candidate regions. The different subsets include candidate face regions of different sizes and/or locations within the images. The different candidate face regions from different images of the image stream are each tracked.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
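The integral (summed-area) images underpinning the candidate-region search above are a standard construct; as a generic illustration (not the patented tracking method), an integral image lets the sum over any rectangle be read in constant time:

```python
def integral_image(img):
    """Compute the summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0, y0)-(x1, y1),
    computed from four table lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

Because every rectangle costs the same four lookups, classifier rectangles of different sizes and locations can be evaluated cheaply over the same table.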
A hand-held or otherwise portable or spatial or temporal performance-based image capture device includes one or more lenses, an aperture and a main sensor for capturing an original main image. A secondary sensor and optical system are for capturing a reference image that has temporal and spatial overlap with the original image. The device performs an image processing method including capturing the main image with the main sensor and the reference image with the secondary sensor, and utilizing information from the reference image to enhance the main image. The main and secondary sensors are contained together within a housing.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centring of the image pick-up or the image field
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by partially reading an SSIS sensor array
H04N 5/347 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels in the SSIS sensor
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus, between a recording apparatus and a television camera
H04N 9/804 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/48 - Extraction of features or characteristics of the image by coding the contour of the pattern
G06T 3/40 - Scaling of whole images or parts of image, e.g. expanding or contracting
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 5/907 - Recording the television signal using memories, e.g. storage tubes or semiconductor memories
An image processing technique includes acquiring a main image of a scene and determining one or more facial regions in the main image. The facial regions are analyzed to determine if any of the facial regions includes a defect. A sequence of relatively low resolution images nominally of the same scene is also acquired. One or more sets of low resolution facial regions in the sequence of low resolution images are determined and analyzed for defects. Defect free facial regions of a set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of the main image are corrected with image information from a corresponding high quality defect free facial region.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
A forward interpolation approach is disclosed for enabling a second version of an image to be constructed from a first version of the image. According to one implementation, an input pixel from the first version of the image is forward mapped to the second version of the image to determine a set of candidate pixels that may be affected by the input pixel. Each candidate pixel is then backward mapped to the first version of the image to determine whether they are actually affected by the input pixel. For each candidate pixel that is actually affected by the input pixel, a pixel value is determined for that candidate pixel based at least in part upon the pixel value of the input pixel. By using this forward and backward mapping technique, forward interpolation can be implemented quickly and efficiently.
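The forward-then-backward mapping idea can be illustrated generically (this sketch uses a simple nearest-neighbour uniform scaling as a stand-in transform; the disclosed technique is not limited to it):

```python
import math

def candidates_for_input(ix, iy, scale):
    """Forward map: output pixels whose sampling footprint the input
    pixel (ix, iy) may influence under uniform scaling by `scale`."""
    x0, x1 = math.floor(ix * scale), math.ceil((ix + 1) * scale) - 1
    y0, y1 = math.floor(iy * scale), math.ceil((iy + 1) * scale) - 1
    return [(ox, oy) for oy in range(y0, y1 + 1) for ox in range(x0, x1 + 1)]

def affected(ox, oy, ix, iy, scale):
    """Backward map: confirm whether output pixel (ox, oy) actually
    samples the input pixel (ix, iy) under nearest-neighbour scaling."""
    return int(ox / scale) == ix and int(oy / scale) == iy
```

Forward mapping cheaply over-approximates the affected set; the backward check then discards false candidates, so each output pixel value is derived only from inputs that truly contribute to it.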
A 9-pixel-by-9-pixel working window slides over an input Bayer image. For each such window, a demosaicing operation is performed. For each such window, corrective processing is performed relative to that window to produce relative differences for that window. For each such window for which relative differences have been produced, those relative differences are regulated. For each window, a maximum is found for that window's regulated relative differences; in one embodiment of the invention, this maximum is used to select which channel is sharp. For each window, the colors in that window are corrected based on the relative-difference-based maximum found for that window. For each window, edge oversharpening is softened in order to avoid artifacts in the output image. The result is an output image in which axial chromatic aberrations have been corrected.
H04N 5/228 - Television cameras - Circuit details for pick-up tubes
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/217 - Circuits for suppressing or minimising disturbance, e.g. moiré or halo, in picture signal generation
H04N 9/64 - Circuits for processing colour signals
H04N 3/14 - Details of scanning devices of television systems; Combination thereof with generation of supply voltages by means not exclusively optical-mechanical, by means of electronically scanned solid-state devices
Techniques for detecting and addressing image flicker are disclosed. An imaging device that senses a distorted image and subsequently removes the distortion during processing can utilize an analysis module that obtains statistics indicative of image flicker prior to removing the distortion. An imaging device that features a diode for illuminating a field of view can utilize the diode as a photosensor to determine one or more flicker statistics to determine whether ambient lighting conditions are of the type that cause image flicker.
A measure of frame-to-frame rotation is determined. A global XY alignment of a pair of image frames is performed. At least one section of each of the X and Y integral projection vectors is determined, where aligned global vectors demonstrate a significant localized difference. Based on X and Y locations of the at least one section of the X and Y integral projection vectors, location, relative velocity and/or approximate area of at least one moving object within the sequence of image frames is/are determined.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix with references adjustable by an adaptive method, e.g. learning
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/36 - Image preprocessing, i.e. processing the image information without deciding about the identity of the image
77.
Fast rotation estimation of objects in sequences of acquired digital images
A measure of frame-to-frame rotation is determined. A global XY alignment of a pair of frames is performed. Local XY alignments in at least two matching corner regions of the pair of images are determined after the global XY alignment. Based on differences between the local XY alignments, a global rotation is determined between the pair of frames.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix with references adjustable by an adaptive method, e.g. learning
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/36 - Image preprocessing, i.e. processing the image information without deciding about the identity of the image
A measure of frame-to-frame rotation is determined. Integral projection vector gradients are determined and normalized for a pair of images. Locations of primary maximum and minimum peaks of the integral projection vector gradients are determined. Based on normalized distances between the primary maximum and minimum peaks, a global image rotation is determined.
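The integral projection vectors used throughout this family of alignment and rotation-estimation abstracts are simply column and row sums of the image. As a minimal illustration (with a brute-force 1-D alignment in place of the patented gradient-peak analysis), projections reduce 2-D translation estimation to two cheap 1-D searches:

```python
def projection_vectors(img):
    """Integral projection vectors: column sums (X) and row sums (Y)."""
    xs = [sum(img[y][x] for y in range(len(img))) for x in range(len(img[0]))]
    ys = [sum(row) for row in img]
    return xs, ys

def best_shift(v1, v2, max_shift=3):
    """1-D alignment: the shift of v2 that minimises the mean absolute
    difference against v1 over the overlapping region."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(v1[i], v2[i - s]) for i in range(len(v1))
                 if 0 <= i - s < len(v2)]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best
```

A global XY alignment is then just `best_shift` applied once to the X vectors and once to the Y vectors of the two frames.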
A template matching module is configured to program a processor to apply multiple differently-tuned object detection classifier sets in parallel to a digital image to determine one or more of an object type, configuration, orientation, pose or illumination condition, and to dynamically switch between object detection templates to match a determined object type, configuration, orientation, pose, blur, exposure and/or directional illumination condition.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/68 - Methods or arrangements for recognition using electronic means using sequential comparisons of the image signals with a plurality of references, e.g. addressable memory
80.
Image sharpening via gradient environment detection
In an embodiment, a device comprises a plurality of elements, including logical elements, wherein the elements are configured to perform the operations of: in a neighborhood of pixels surrounding and including a particular pixel, applying a filter to multiple groups of pixels in the neighborhood to generate a set of filtered values; generating, based at least in part upon the set of filtered values, one or more sets of gradient values; based at least in part upon the one or more sets of gradient values, computing a first metric for an image environment in which the particular pixel is situated; determining a second metric for the image environment in which the particular pixel is situated, wherein the second metric distinguishes between a detail environment; and based at least in part upon the first metric and the second metric, computing a gradient improvement (GI) metric for the particular pixel.
G06K 9/48 - Extraction of features or characteristics of the image by coding the contour of the pattern
G06K 9/56 - Combinations of preprocessing functions using a local operator, i.e. means operating on an elementary image point in relation to the elements situated in its immediate neighbourhood
A method for detecting a redeye defect in a digital image containing an eye comprises converting the digital image into an intensity image, and segmenting the intensity image into segments each having a local intensity maximum. Separately, the original digital image is thresholded to identify regions of relatively high intensity and a size falling within a predetermined range. Of these, a region is selected having substantially the highest average intensity, and those segments from the segmentation of the intensity image whose maxima are located in the selected region are identified.
A technique is disclosed for calculating a value for a second color for a particular pixel. The technique selects a first set of neighboring pixels situated on a first side of the particular pixel, and a second set of neighboring pixels situated on an opposite side of the particular pixel. Based upon color values from the first set of neighboring pixels, the technique determines a first representative relationship, and based upon color values from the second set of neighboring pixels, the technique determines a second representative relationship. Based upon these representative relationships, the technique determines a target relationship between the value for the second color for the particular pixel and a value for a first color for the particular pixel. Based upon the target relationship and the value for the first color for the particular pixel, the technique calculates the value for the second color for the particular pixel.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/34 - Segmentation of touching or overlapping patterns in the image field
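The two-sided relationship scheme for computing a missing colour value can be sketched generically (illustrative only; the pairing of colour samples and the simple averaging rule here are assumptions, not the claimed method):

```python
def interpolate_second_color(first_center, left_pairs, right_pairs):
    """Estimate the missing second-colour value at a pixel from the
    (second - first) colour relationship observed in its neighbours.

    first_center: first-colour value at the pixel of interest.
    left_pairs / right_pairs: (first, second) colour samples from
    neighbouring pixels on each side of the pixel of interest.
    """
    # Representative relationship on each side: mean colour difference.
    left_rel = sum(s - f for f, s in left_pairs) / len(left_pairs)
    right_rel = sum(s - f for f, s in right_pairs) / len(right_pairs)
    # Target relationship at the centre: combine the two sides.
    target_rel = (left_rel + right_rel) / 2
    return first_center + target_rel
```

Working with inter-channel differences rather than raw values exploits the local correlation between colour channels, which is why relationship-based interpolation tends to preserve edges better than interpolating each channel independently.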
An image processing technique includes acquiring a main image of a scene and determining one or more facial regions in the main image. The facial regions are analysed to determine if any of the facial regions includes a defect. A sequence of relatively low resolution images nominally of the same scene is also acquired. One or more sets of low resolution facial regions in the sequence of low resolution images are determined and analysed for defects. Defect free facial regions of a set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of the main image are corrected with image information from a corresponding high quality defect free facial region.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
84.
Object detection and rendering for wide field of view (WFOV) image acquisition systems
An image acquisition device having a wide field of view includes a lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The device has an object detection engine that includes one or more cascades of object classifiers, e.g., face classifiers. A WFoV correction engine may apply rectilinear and/or cylindrical projections to pixels of the WFoV image, and/or non-linear, rectilinear and/or cylindrical lens elements or lens portions serve to prevent and/or correct distortion within the original WFoV image. One or more objects located within the original and/or distortion-corrected WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of object classifiers.
An image acquisition sensor of a digital image acquisition apparatus is coupled to imaging optics for acquiring a sequence of images. Images acquired by the sensor are stored. A motion detector causes the sensor to cease capture of an image when the degree of movement in acquiring the image exceeds a threshold. A controller selectively transfers acquired images for storage. A motion extractor determines motion parameters of a selected, stored image. An image re-constructor corrects the selected image with associated motion parameters. A selected plurality of images nominally of the same scene are merged and corrected by the image re-constructor to produce a high quality image of the scene.
A method for detecting a redeye defect in a digital image containing an eye comprises converting the digital image into an intensity image, and segmenting the intensity image into segments each having a local intensity maximum. Separately, the original digital image is thresholded to identify regions of relatively high intensity and a size falling within a predetermined range. Of these, a region is selected having substantially the highest average intensity, and those segments from the segmentation of the intensity image whose maxima are located in the selected region are identified.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
87.
Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
A digital image acquisition system includes a portable apparatus for capturing digital images and a digital processing component for detecting, analyzing, invoking subsequent image captures and informing the photographer regarding motion blur, and for reducing camera motion blur in an image captured by the apparatus. The digital processing component operates by comparing the image with at least one other image, for example a preview image, of nominally the same scene taken outside the exposure period of the main image. In one embodiment the digital processing component identifies at least one feature in a single preview image which is relatively less blurred than the corresponding feature in the main image, calculates a point spread function (PSF) in respect of such feature, and initiates a subsequent capture if determined that the motion blur exceeds a certain threshold. In another embodiment the digital processing determines the degree of blur by analyzing the motion blur in the captured image itself, and initiates a subsequent capture if determined that the motion blur exceeds a certain threshold. Such real time analysis may use the auto focusing mechanism to qualitatively determine the PSF.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A digital image acquisition system having no photographic film comprises an apparatus for capturing digital images and a flash unit for providing illumination during image capture. The system has a portrait mode for generating an image of a foreground object against a blurred background, the portrait mode being operable to capture first, second and third images (A, B and C) of nominally the same scene. One of the first and second images (A, B) is taken with flash and the other is taken without flash, and the third image (C) is blurred compared to the first and second images. The portrait mode is further operable to determine foreground and background regions of the scene using the first and second images (A, B), and to substitute the blurred background of the third image (C) for the background of an in-focus image of the scene. In one embodiment the in-focus image is one of the first and second images. In another embodiment the in-focus image is a fourth image.
An image capturing device (1) is disclosed comprising an electronic image detector (17) having a detecting surface (15), an optical projection system (5) for projecting an object within a field of view onto the detecting surface (15), and, optionally, a computing unit (19) for manipulating electronic information obtained from the image detector (17), wherein the projection system (5) is adapted to project the object in a distorted way such that, when compared with a standard lens system, the projected image is expanded in a center region of the field of view and is compressed in a border region of the field of view. Preferably, the projection system (5) is adapted such that its point spread function in the border region of the field of view has a full width at half maximum corresponding essentially to the size of corresponding pixels of the image detector (17).
A method of processing an image includes traversing pixels of an image in a single pass over the image. An inverting function is applied to the pixels. A recursive filter is applied to the inverted pixel values. The filter has parameters which are derived from previously traversed pixel values of the image. A pixel value is combined with a filter parameter for the pixel to provide a processed pixel value for a processed image.
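A minimal single-pass sketch of this structure (the particular inverting function, filter order, and blending rule here are assumptions for illustration, not the claimed design): each pixel is inverted, a first-order recursive filter carries state forward from previously traversed pixels, and the pixel is combined with the filter output.

```python
def single_pass_process(pixels, invert=lambda p: 255 - p, alpha=0.5):
    """One pass over a pixel sequence: invert each value, run a
    first-order recursive (IIR) filter over the inverted values, then
    combine each inverted pixel with the filter output."""
    out, state = [], None
    for p in pixels:
        v = invert(p)
        # Recursive filter: parameters derive from previously traversed pixels.
        state = v if state is None else alpha * v + (1 - alpha) * state
        out.append((v + state) / 2)  # combine pixel value with filter state
    return out
```

Because the filter state summarises everything already traversed, the whole image is processed in a single pass with O(1) memory beyond the output.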
An image capturing device may include a detector including a plurality of sensing pixels, and an optical system adapted to project a distorted image of an object within a field of view onto the sensing pixels, wherein the optical system expands the image in a center of the field of view and compresses the image in a periphery of the field of view, wherein a first number of sensing pixels required to realize a maximal zoom magnification Ẑ at a minimum resolution of the image capturing device is less than a square of the maximal zoom magnification times a second number of sensing pixels required for the minimum resolution.
G02B 13/16 - Optical objectives specially designed for the purposes specified below for use in conjunction with image converters or intensifiers
A method for detecting a redeye defect in a digital image containing an eye comprises converting the digital image into an intensity image, and segmenting the intensity image into segments each having a local intensity maximum. Separately, the original digital image is thresholded to identify regions of relatively high intensity and a size falling within a predetermined range. Of these, a region is selected having substantially the highest average intensity, and those segments from the segmentation of the intensity image whose maxima are located in the selected region are identified.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
93.
Imaging system with relaxed assembly tolerances and associated methods
An imaging system includes an extended depth of field (EDOF) optical system, a sensor on a sensor substrate, and a securing mechanism adapted to secure the EDOF optical system directly to the sensor substrate.
A method of blurring an image includes acquiring two images of nominally a same scene taken at a different light exposure levels. At least one region of one of the images includes pixels having saturated intensity values. For at least one of the saturated pixels, values are extrapolated from the other image. At least a portion of a third image is blurred and re-scaled including pixels having the extrapolated values.
A digital image processing technique gathers visual meta data using a reference image. A main image and one or more reference images are captured on a hand-held or otherwise portable or spatial or temporal performance-based image capture device. The reference images are analyzed based on predefined criteria in comparison to the main image. Based on said analyzing, supplemental meta data are created and added to the main image at a digital data storage location.
A hand-held or otherwise portable or spatial or temporal performance-based image capture device includes one or more lenses, an aperture and a main sensor for capturing an original main image. A secondary sensor and optical system are for capturing a reference image that has temporal and spatial overlap with the original image. The device performs an image processing method including capturing the main image with the main sensor and the reference image with the secondary sensor, and utilizing information from the reference image to enhance the main image. The main and secondary sensors are contained together within a housing.
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus, between a recording apparatus and a television camera
H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by partially reading an SSIS sensor array
G06K 9/32 - Aligning or centring of the image pick-up or the image field
H04N 9/804 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
H04N 5/347 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels in the SSIS sensor
H04N 5/228 - Television cameras - Circuit details for pick-up tubes
H04N 5/907 - Recording the television signal using memories, e.g. storage tubes or semiconductor memories
97.
Foreground/background separation using reference images
A technique involves distinguishing between foreground and background regions of a digital image of a scene. First and second images are captured of nominally a same scene. The first image is a relatively high resolution image taken with the foreground more in focus than the background, while the second image is a relatively low resolution reference image taken with the background more in focus than the foreground. Regions of the captured images are assigned as foreground or background. In accordance with the assigning, one or more processed images are rendered based on the first image or the second image, or both.
An estimated total camera motion between temporally proximate image frames is computed. A desired component of the estimated total camera motion is determined including distinguishing an undesired component of the estimated total camera motion, and including characterizing vector values of motion between the image frames. A counter is incremented for each pixel group having a summed luminance that is greater than a threshold. A counter may be decremented for pixels that are under a second threshold, or a zero bit may be applied to pixels below a single threshold. The threshold or thresholds is/are determined based on a dynamic luminance range of the sequence. The desired camera motion is computed including representing the vector values based on final values of counts for the image frames. A corrected image sequence is generated including the desired component of the estimated total camera motion, and excluding the undesired component.
An image processing technique includes acquiring a main image of a scene and determining one or more facial regions in the main image. The facial regions are analysed to determine if any of the facial regions includes a defect. A sequence of relatively low resolution images nominally of the same scene is also acquired. One or more sets of low resolution facial regions in the sequence of low resolution images are determined and analysed for defects. Defect free facial regions of a set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of the main image are corrected with image information from a corresponding high quality defect free facial region.
A method and apparatus for efficiently performing digital signal processing is provided. In one embodiment, kernel matrix computations are simplified by grouping similar kernel coefficients together. Each coefficient group contains only coefficients having the same value, and at least one of the coefficient groups has at least two coefficients. Techniques are disclosed herein to efficiently apply successive first-order difference operations to a data signal. The techniques allow for a low gate count; in particular, they allow a reduction in the number of multipliers without increasing clock frequency, in an embodiment. The techniques update pixels of a data signal at a rate of two clock cycles per pixel, in an embodiment, and allow hardware that is used to process a first pixel to be re-used to start the processing of a second pixel while the first pixel is still being processed.
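The coefficient-grouping idea can be sketched in software terms (illustrative only; the patent targets hardware multiplier reduction, not Python): inputs that share a kernel coefficient are summed first, so each distinct coefficient value costs a single multiplication.

```python
from collections import defaultdict

def grouped_convolution_at(img, y, x, kernel):
    """Apply a square kernel centred at (y, x), grouping equal
    coefficients: inputs sharing a coefficient are summed first, then
    multiplied once, so the multiply count equals the number of
    distinct coefficient values rather than the kernel size."""
    groups = defaultdict(float)  # coefficient value -> sum of inputs
    k = len(kernel) // 2
    for dy, row in enumerate(kernel):
        for dx, c in enumerate(row):
            groups[c] += img[y + dy - k][x + dx - k]
    return sum(c * s for c, s in groups.items())  # one multiply per group
```

For a symmetric 3x3 kernel such as `[[1, 2, 1], [2, 4, 2], [1, 2, 1]]`, this reduces nine multiplications to three, which is the software analogue of sharing hardware multipliers across coefficient groups.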