METHOD AND SYSTEM FOR COUPLING A FIRST DATA SEQUENCE AND A SECOND DATA SEQUENCE TO EACH OTHER, AND METHOD AND DEVICE FOR VALIDATING THE FIRST AND SECOND DATA SEQUENCES AS BEING COUPLED
A system and method for coupling temporally related data sequences to each other to enable validation of them as being coupled. A first device, processing a first sequence captured during a first time, generates a first digital signature based on first data of the first sequence and on a first secret number; incorporates the first secret number and the first digital signature in the first sequence; and transmits the first sequence. A second device, processing a second sequence captured during a second time partly overlapping with the first time, generates a second digital signature based on second data of the second sequence and on a first digest generated on the first secret number; incorporates the generated second digital signature and the first digest in the second sequence, whereby the first and second sequences are coupled by the first secret number and the first digest; and transmits the second sequence.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
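The coupling scheme can be sketched as follows. This is a minimal illustration, not the claimed implementation: HMAC-SHA256 stands in for the digital signature, SHA-256 for the digest algorithm, and all names are invented.

```python
import hashlib
import hmac
import os

DEVICE_KEY = b"shared-signing-key"  # illustrative; real devices hold their own keys

def sign(data: bytes, extra: bytes) -> bytes:
    # Digital signature over sequence data plus an extra value (secret or digest).
    return hmac.new(DEVICE_KEY, data + extra, hashlib.sha256).digest()

def first_sequence(first_data: bytes) -> dict:
    secret = os.urandom(32)  # first secret number
    return {"data": first_data, "secret": secret,
            "sig": sign(first_data, secret)}

def second_sequence(second_data: bytes, secret: bytes) -> dict:
    digest = hashlib.sha256(secret).digest()  # first digest of the secret
    return {"data": second_data, "digest": digest,
            "sig": sign(second_data, digest)}

def coupled(seq1: dict, seq2: dict) -> bool:
    # Valid coupling: the digest carried in the second sequence matches the
    # secret carried in the first, and both signatures verify.
    return (hashlib.sha256(seq1["secret"]).digest() == seq2["digest"]
            and hmac.compare_digest(seq1["sig"], sign(seq1["data"], seq1["secret"]))
            and hmac.compare_digest(seq2["sig"], sign(seq2["data"], seq2["digest"])))
```

Because the second sequence carries only the digest, the first secret number never leaves the first sequence, yet a validator holding both sequences can confirm the coupling.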
A power supply system with built-in fault detection includes a transformer, wherein the primary side and the secondary side are galvanically isolated and wherein an input voltage is supplied to the primary side of the transformer; one or more switching elements connected to the primary side of the transformer; a controller configured to control switching of the one or more switching elements to control an output voltage on the secondary side; an isolation capacitor connected between the secondary side and the primary side of the transformer; and fault circuitry connected to the isolation capacitor and configured to detect a switching pattern propagated through a short circuit between the primary side and the secondary side of the transformer and further propagated through the isolation capacitor. An electrical apparatus including the power supply system, and a method of handling faults in a non-earthed electrical apparatus employing the power supply system, are also provided.
H02M 3/335 - Conversion of DC power input into DC power output with intermediate conversion into AC by static converters using discharge tubes with control electrode or semiconductor devices with control electrode to produce the intermediate AC, using devices of a triode or transistor type requiring continuous application of a control signal, using semiconductor devices only
G01R 31/52 - Testing for the presence of short-circuits, leakage current or ground faults
H02M 1/32 - Means for protecting converters other than by automatic disconnection
H05K 7/14 - Mounting supporting structure in casing or on frame or rack
In some implementations, a client device may detect user input indicating a requested time along a timeline of a video, the video being stored at a server device. The client device may check if a cached image having a timestamp within a precision margin of the requested time is stored in a memory of the client device. Under a condition that the cached image is present in the memory, the client device may retrieve the cached image from the memory of the client device. Under a condition that the cached image is not present in the memory, the client device may retrieve a corresponding image from the server device. The client device may adjust a size of the precision margin to be proportional to a length of the timeline such that the size of the precision margin proportionally increases and decreases with the length of the timeline.
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
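The cache lookup with a timeline-proportional precision margin can be sketched as follows; the proportionality constant and the cache layout are assumptions.

```python
MARGIN_RATIO = 0.01  # illustrative: margin grows and shrinks with timeline length

def precision_margin(timeline_length: float) -> float:
    """Precision margin proportional to the length of the timeline."""
    return MARGIN_RATIO * timeline_length

def lookup(cache: dict, requested_time: float, timeline_length: float):
    """Return the cached image whose timestamp is closest to the requested
    time and within the precision margin; None means fetch from the server."""
    margin = precision_margin(timeline_length)
    best = None
    for timestamp, image in cache.items():
        distance = abs(timestamp - requested_time)
        if distance <= margin and (best is None or distance < best[0]):
            best = (distance, image)
    return best[1] if best else None
```

With a long timeline a nearby cached frame is accepted (the scrubbing precision the user can achieve is coarse anyway); with a short timeline the margin shrinks and the client falls back to the server more often.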
An AC-DC power converter includes: a rectification circuit configured to convert an input AC voltage to a rectified AC voltage; a power factor correction boost converter circuit configured to convert the rectified AC voltage to a first boosted DC voltage, wherein the first boosted DC voltage has AC ripple; a switched mode power supply output stage having a transformer; a buck and/or boost converter having two input voltage terminals, wherein a first voltage input terminal is connected to the first boosted DC voltage or to a secondary side of the transformer, and wherein a second input voltage terminal is connected to a second DC voltage, wherein the second DC voltage is higher than the voltage on the first voltage input terminal within a predefined voltage range, and wherein the buck and/or boost converter uses pulse width modulation to generate an output DC voltage with reduced AC ripple.
H02M 7/217 - Transformation d'une puissance d'entrée en courant alternatif en une puissance de sortie en courant continu sans possibilité de réversibilité par convertisseurs statiques utilisant des tubes à décharge avec électrode de commande ou des dispositifs à semi-conducteurs avec électrode de commande utilisant des dispositifs du type triode ou transistor exigeant l'application continue d'un signal de commande utilisant uniquement des dispositifs à semi-conducteurs
H02M 1/14 - Dispositions de réduction des ondulations d'une entrée ou d'une sortie en courant continu
H02M 1/42 - Circuits ou dispositions pour corriger ou ajuster le facteur de puissance dans les convertisseurs ou les onduleurs
H02M 3/158 - Transformation d'une puissance d'entrée en courant continu en une puissance de sortie en courant continu sans transformation intermédiaire en courant alternatif par convertisseurs statiques utilisant des tubes à décharge avec électrode de commande ou des dispositifs à semi-conducteurs avec électrode de commande utilisant des dispositifs du type triode ou transistor exigeant l'application continue d'un signal de commande utilisant uniquement des dispositifs à semi-conducteurs avec commande automatique de la tension ou du courant de sortie, p. ex. régulateurs à commutation comprenant plusieurs dispositifs à semi-conducteurs comme dispositifs de commande finale pour une charge unique
5.
METHOD AND VIDEO PROCESSING SYSTEM FOR UPDATING A BUFFER
An image processing system stores a set of pixel values for a set of pixels relating to a sequence of video frames obtained from a buffer. The pixel values have been filtered using a filtering algorithm to reduce temporal noise. A measure of noise related to the pixel values is obtained and a current set of pixel values is obtained from a sensor. A new set of pixel values is then determined based on the stored pixel values and the current pixel values using the filtering algorithm to reduce temporal noise. A measure of noise in the new set of pixel values is then determined and the new set of pixel values is quantized. The higher the amount of noise in the new set of pixel values, the stronger the quantization that is performed. The quantized set of pixel values is compressed and the buffer is updated with the compressed quantized pixel values.
H04N 19/147 - Data rate or code amount at the encoder output according to rate-distortion criteria
H04N 19/126 - Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic part of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a picture, frame or field
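The noise-adaptive filtering and quantization can be sketched as follows; the recursive filter, the noise measure, and the mapping from noise to quantization step are all illustrative choices.

```python
def temporal_filter(stored, current, alpha=0.8):
    """Simple recursive filter: blend buffered and newly sensed pixel values."""
    return [alpha * s + (1 - alpha) * c for s, c in zip(stored, current)]

def noise_measure(filtered, current):
    """Estimate noise as the mean absolute frame-to-frame difference."""
    return sum(abs(f - c) for f, c in zip(filtered, current)) / len(filtered)

def quantize(values, noise, base_step=1.0):
    """Coarser quantization for noisier data: the step grows with noise,
    so noisy values compress better at the cost of precision."""
    step = base_step * (1.0 + noise)
    return [round(v / step) * step for v in values]
```

The point of the scheme is that heavily filtered, low-noise values keep fine quantization, while noisy values, whose low-order bits carry no information, are quantized coarsely before compression.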
6.
LENS HOLDER, A SENSOR UNIT, AND AN IMAGE CAPTURING DEVICE
A lens holder for an optics unit of an image capturing device, the lens holder having a body extending in a lens holder plane and comprising: a lens mount arranged on the body, configured to support a lens of the optics unit and extending along a longitudinal axis perpendicular to the lens holder plane; and support posts extending from the body of the lens holder along the longitudinal axis and supporting a sensor unit aligned with the optics unit, wherein each support post has a first end adjoining the body of the lens holder, and a free second end, and wherein the first end of one or more support posts adjoins a respective displacement portion of the body of the lens holder, which displacement portion is configured to be displaced out of the lens holder plane in response to a thermally induced force acting on the support post.
A method for tracking objects in a scene comprises detecting object candidates in an image frame, and calculating, for each object candidate, an association measure which is indicative of a likelihood that the object candidate is associated with a current object track. Additionally, a view of the overview camera is correlated with a heatmap of the scene, the heatmap providing data indicative of areas in the scene having an elevated degree of occurrence of historical verified object tracks, and the association measure or an association threshold is adjusted for object candidates which, according to the heatmap, are located in such areas, so as to increase their probability of being associated with a current object track. Each object candidate is then associated with a current object track if the association measure is above the association threshold.
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
H04N 23/61 - Control of cameras or camera modules based on recognised objects
H04N 23/69 - Control of means for changing the angle of the field of view, e.g. optical zoom objectives or electronic zooming
H04N 23/695 - Control of camera direction for changing the field of view, e.g. pan, tilt or based on tracking of objects
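The heatmap-based adjustment of the association measure can be sketched as follows, assuming a fixed additive boost and a heatmap represented as a set of hot positions; both are illustrative.

```python
def associate(candidates, heatmap, base_threshold=0.5, boost=0.2):
    """Associate object candidates with a current track when their (possibly
    boosted) association measure clears the threshold.

    candidates: list of (position, association_measure) pairs;
    heatmap: set of positions with an elevated occurrence of historical
    verified object tracks.
    """
    associated = []
    for position, measure in candidates:
        if position in heatmap:
            # Candidates in historically busy areas get a raised measure,
            # increasing their probability of being associated with a track.
            measure += boost
        if measure > base_threshold:
            associated.append(position)
    return associated
```

Equivalently, the threshold rather than the measure could be lowered inside hot areas; the abstract allows either.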
8.
METHOD, APPARATUS, AND SYSTEM FOR ESTIMATING A CORRECTED DIRECTIONAL ANGLE MEASURED BY A RADAR BY USING INPUT FROM A CAMERA
A method for estimating a corrected directional angle measured by a radar by using input from a camera comprises receiving radar detections of first objects in a scene and camera detections, which are simultaneous with the radar detections, of second objects in the scene. Each radar detection is indicative of a first directional angle and a distance of a respective first object in relation to the radar, and each camera detection is indicative of a direction of a respective second object in relation to the camera. Radar and camera detections which are detections of a same object in the scene are identified by comparing the received radar detections to the received camera detections, and a corrected first directional angle for the identified radar detection is estimated by using the direction of the identified camera detection and the known positions and orientations of the radar and the camera.
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/42 - Simultaneous measurement of distance and other co-ordinates
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
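The matching and correction steps can be sketched as follows, under the simplifying assumption that the radar and the camera are co-located so their angles are directly comparable; a real system would instead use the full known positions and orientations of both sensors.

```python
def match_detections(radar, camera, max_diff_deg=5.0):
    """Pair radar detections (angle, distance) with camera detections (angle)
    of the same object by angular proximity."""
    pairs = []
    for r_angle, r_dist in radar:
        closest = min(camera, key=lambda c_angle: abs(c_angle - r_angle))
        if abs(closest - r_angle) <= max_diff_deg:
            pairs.append(((r_angle, r_dist), closest))
    return pairs

def corrected_angles(radar, camera):
    """Replace each matched radar angle with the camera-derived direction,
    keeping the radar's distance measurement."""
    return [(c_angle, r_dist) for (r_angle, r_dist), c_angle in
            match_detections(radar, camera)]
```

The intuition is that the camera measures direction more precisely than the radar, while the radar contributes the distance, so the fused detection keeps the best of each sensor.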
9.
RESOLVING A MULTIPATH AMBIGUITY IN A TDM-MIMO RADAR
A TDM MIMO FMCW radar comprises at least one row of physical receivers with a first spacing (dr) in a first direction, and further comprises a plurality of physical transmitters arranged with a second spacing (dt) in said first direction. To determine whether a peak in an angle spectrum corresponds to a direct reflection or a first-order multipath artefact, an inverse phase-shift vector corresponding to a phase (ϕ̂1) of the peak is applied and a constant signal with the amplitude of the peak is subtracted. To the thus obtained intermediate signal (v), a further inverse phase-shift vector, now corresponding to an offset phase (Δϕ̂) of the two leading peaks of the angle spectrum, is applied, after which a constant signal is subtracted. It is then detected whether the thus obtained test signal (w) has any non-noise content. If yes, the peak corresponds to a multipath artefact, and otherwise to a direct reflection.
A method for improving image quality of images captured by a camera system having an image sensor and implementing optical image stabilisation (OIS) comprises receiving image data representing an image captured by the camera system, receiving stabilisation position data from an OIS device in the camera system, the stabilisation position data indicating a position where an optical axis of the optical path intersects with the image sensor, and includes the step of applying a lens correction function to the received image data, wherein the application of the lens correction function is adjusted based on the received stabilisation position data, and outputting the corrected image data.
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for vibrations of the camera body
G03B 5/00 - Adjustment of optical system relative to image or object surface other than for focusing, of general interest for cameras, projectors or printers
A method for object attribute classification in an image is provided, and includes obtaining a plurality of object proposals from an artificial neural network entity trained to localize and classify objects using a plurality of feature map layers associated with different spatial resolutions; identifying a main object proposal and one or more other proposals; ranking the feature map layers from a least significant to a most significant feature map layer, and determining an attribute class for a first attribute based on attribute class confidence scores of the main and other proposals, including taking the ranking of the feature map layers as well as object location overlaps into account for the determining. A corresponding device, computer program and computer program product are also provided.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
A method for determining similarity in appearance of objects in image frames of a video sequence comprises: processing first raw image data applying first image processing settings; detecting a first object; extracting first feature vectors; processing second raw image data applying second image processing settings; detecting a second object; extracting second feature vectors; determining the similarity of the first and second objects by comparing the first feature vectors to the second feature vectors; and, if the first feature vectors and the second feature vectors differ by more than a first threshold, and the first image processing settings differ from the second image processing settings by more than a second threshold: re-processing the first raw image data applying the second image processing settings; extracting, from the first image area, updated first feature vectors; and comparing the updated first feature vectors to the one or more second feature vectors.
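The conditional re-processing can be sketched as follows; the distance measures, both thresholds, and the re-processing stub are illustrative.

```python
def vector_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def settings_distance(s1, s2):
    """Crude distance between two image processing settings dictionaries."""
    return sum(abs(s1[k] - s2[k]) for k in s1)

def similar(fv1, fv2, settings1, settings2, reprocess,
            feature_threshold=1.0, settings_threshold=0.5):
    """Compare feature vectors; if both the vectors and the processing
    settings differ too much, re-extract the first vector from raw data
    re-processed with the second settings before deciding."""
    if (vector_distance(fv1, fv2) > feature_threshold
            and settings_distance(settings1, settings2) > settings_threshold):
        fv1 = reprocess(settings2)  # updated first feature vector
    return vector_distance(fv1, fv2) <= feature_threshold
```

This captures the core idea: an apparent appearance difference may be an artefact of different processing settings, so the comparison is repeated under matched settings before concluding the objects differ.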
A method of providing anonymized data for facilitating reidentification in a visual tracking system, the method comprising: detecting, in images obtained from a plurality of image sources, subareas which each contain a tracking target; computing, for each subarea, a feature vector which represents a visual appearance of the tracking target therein; for a first/second subgroup of the image sources, providing first/second reidentification data items by anonymizing each feature vector using a predefined one-way function modified by a first/second tracking-rights token, and disclosing the first/second reidentification data items annotated with locations of the respective subareas to a first/second tracking client, wherein the second tracking-rights token is distinct from the first tracking-rights token; and preventing access to the feature vectors.
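The anonymization step can be sketched as follows, using HMAC-SHA256 as the predefined one-way function modified by a tracking-rights token; the data layout is an assumption.

```python
import hashlib
import hmac

def anonymize(feature_vector: bytes, token: bytes) -> bytes:
    """One-way, token-modified transform of a feature vector: the raw
    vector cannot be recovered, but equal vectors map to equal outputs
    under the same token."""
    return hmac.new(token, feature_vector, hashlib.sha256).digest()

def reid_items(subareas, token):
    """Anonymized reidentification data items annotated with the locations
    of the subareas; subareas is a list of (feature_vector, location)."""
    return [{"id": anonymize(fv, token), "location": loc}
            for fv, loc in subareas]
```

A client holding items produced under one token can still re-identify the same target across image sources (equal anonymized ids), but items produced under distinct tokens cannot be cross-linked, and the raw feature vectors are never disclosed.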
Using a sequence of images depicting a traffic situation involving a plurality of moving vehicles recorded by a camera, a set of image event data are determined, where each image event data indicates a respective number of events related to the moving vehicles occurring during a respective imaging time interval. A set of incident event data are obtained from a database, where each incident event indicates a respective number of events occurring during a respective incident time interval, detected by a traffic event detector located at a known detector geographical position. It is determined, based on a matching procedure between the set of image event data and the set of incident event data, that the events associated with the set of image event data are the events associated with the set of incident event data, and thus the camera geographical position is associated with the detector geographical position.
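The matching procedure can be sketched as a search for the best-aligned offset between the two per-interval event-count sequences; the cost function (sum of absolute differences) is an illustrative choice.

```python
def best_alignment(image_counts, incident_counts):
    """Slide the image-event counts over the incident-event counts and
    return (offset, cost) for the offset with the smallest sum of
    absolute differences; cost 0 means the counts match exactly."""
    best_offset, best_cost = None, None
    for offset in range(len(incident_counts) - len(image_counts) + 1):
        cost = sum(abs(a - b) for a, b in
                   zip(image_counts, incident_counts[offset:]))
        if best_cost is None or cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset, best_cost
```

When the best cost is low enough, the camera's events are deemed to be the detector's events, so the camera can be assigned the detector's known geographical position.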
Video tracks include positions and a classification relative to respective objects detected in a sequence of video image frames captured of a scene during a time period, and the radar track includes positions and a classification in relation to an object detected in radar data captured for at least a part of the scene during the time period. An indication is obtained for each video track of the video tracks whether the radar track and the video track correspond based on a similarity according to a similarity measure. A second video track is ignored or deleted on condition that the radar track and a first video track correspond, the classifications associated with the radar track and the first video track correspond, the radar track and the second video track correspond, and the classifications associated with the radar track and the second video track do not correspond.
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/72 - Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
G06V 20/40 - Scenes; Scene-specific elements in video content
A method estimates a radiance of an object and comprises: obtaining a thermal image of a scene comprising an apparent object region depicting the object; obtaining object data indicative of a location and an extension of an actual object region; determining a representative background radiance; obtaining a blur parameter indicative of a blur radius of a blur spot; determining a pixel value of a sample pixel of the apparent object region; determining for the sample pixel: an object radiance contribution factor based on a number of actual object pixels located within the blur range from the sample pixel, and a background radiance contribution factor based on a number of actual background pixels located within the blur range from the sample pixel; and estimating a diffraction-compensated radiance of the object based on the pixel value of the sample pixel, the representative background radiance, and the object and background radiance contribution factors.
A method for thermal image processing comprises obtaining a thermal image depicting a scene comprising objects; identifying a set of apparent object regions, wherein each apparent object region includes a depiction of a respective object which is blurred due to diffraction, each apparent object region being identified as a contiguous region of pixels having intensities differing from a representative background intensity by more than a threshold intensity, and exceeding a threshold size such that the apparent object region includes an actual object region and a blurred edge region; and applying to each object region a contrast enhancement comprising: partitioning the blurred edge region into a background edge region and an object edge region intermediate the actual object region and the background edge region; and setting pixels of the object edge region to a representative object intensity determined from actual object pixels, and pixels of the background edge region to the representative background intensity.
G06T 5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
Encoding and decoding of lidar data frames is presented, in particular entropy coding of lidar data wherein the context models used depend on the order in which the lidar sensor receives the lidar return signals. For example, an indication of whether a lidar return signal with a particular index having a value i, 1≤i≤Y, corresponding to a sequential order based on the time of arrival of lidar return signals of the emitted ray, may be encoded as an entropy-coded bit using a distinct context model for each possible value i of the index.
Encoding and decoding methods are employed for toggleable overlays in a video. An encoding method comprises receiving a plurality of image frames; determining one or more overlay image frames each comprising a plurality of toggleable overlays, each toggleable overlay being associated with an identifier. An image frame is associated with an overlay image frame of the one or more overlay image frames. Metadata is added to a header of the image frame, wherein the metadata comprises, for each toggleable overlay of the plurality of toggleable overlays: position data identifying a position of the toggleable overlay in the overlay image frame, size data identifying a size of the toggleable overlay in the overlay image frame, and identification data corresponding to the identifier of the toggleable overlay.
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic part of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a picture, frame or field
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic part of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]
A computer-implemented method of tracking objects in a video sequence of a scene comprises determining a location of a sink in the scene where objects exit the scene and a location of a source where objects enter the scene; tracking a first object moving in the scene using a re-identification algorithm, wherein the first object is associated with a re-identification threshold of the re-identification algorithm; detecting that the first object has exited the scene at the sink; and responsive to detecting that the first object has exited the scene at the sink, adjusting the re-identification threshold associated with the first object such that a probability that the re-identification algorithm re-identifies a second object, entering the scene at the source after the first object has exited the scene at the sink, as the first object is reduced.
G06V 20/40 - Scenes; Scene-specific elements in video content
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
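The threshold adjustment can be sketched as follows; the exact-position sink test and the fixed penalty are illustrative stand-ins for a real implementation.

```python
def adjust_threshold(track, sink, base_threshold=0.7, penalty=0.2):
    """Raise the re-identification threshold for a track that exited the
    scene at the sink, so that a new object entering at the source is
    less likely to be re-identified as the exited object."""
    if track["last_position"] == sink:
        return base_threshold + penalty
    return base_threshold

def reidentify(similarity, threshold):
    """Re-identify only if the appearance similarity clears the threshold."""
    return similarity >= threshold
```

The effect is asymmetric by design: objects that plausibly left the scene for good (via the sink) need stronger appearance evidence before the tracker will resurrect their identity for a newcomer.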
24.
METHOD AND SYSTEM FOR DIGITALLY SIGNING AUDIO AND VIDEO DATA
A method of digitally signing a video sequence and an audio sequence generates a first video digest by applying a digest algorithm to a first video portion of the video sequence, and a first audio digest by applying a digest algorithm to a first audio portion of the audio sequence. A first video signature is generated by digitally signing the first video digest, and a first audio signature is generated by digitally signing the first audio digest. The first video and audio signatures are inserted in first target audio and video portions, respectively. A second video digest is generated from the first target video portion and the first audio signature, and a second audio digest is generated from the first target audio portion and the first video signature. Second video and audio signatures are generated by digitally signing the second video and audio digests, respectively.
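The interlocking of the two signature chains can be sketched as follows; SHA-256-based stand-ins replace the real digest and signature algorithms, and all names are invented.

```python
import hashlib

def digest(data: bytes) -> bytes:
    """Stand-in for the digest algorithm."""
    return hashlib.sha256(data).digest()

def sign(d: bytes) -> bytes:
    """Stand-in for a real digital signature (e.g. an asymmetric scheme)."""
    return hashlib.sha256(b"signer-key" + d).digest()

def cross_sign(video_portion, audio_portion, target_video, target_audio):
    video_sig = sign(digest(video_portion))  # first video signature
    audio_sig = sign(digest(audio_portion))  # first audio signature
    # Each second digest binds one stream's target portion to the *other*
    # stream's first signature, interlocking audio and video so that
    # neither stream can be swapped out without breaking a signature.
    second_video_sig = sign(digest(target_video + audio_sig))
    second_audio_sig = sign(digest(target_audio + video_sig))
    return video_sig, audio_sig, second_video_sig, second_audio_sig
```

Because the second video signature covers the first audio signature (and vice versa), tampering with one stream invalidates signatures in the other, which is the point of the cross-insertion.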
A method and a video capturing system for controlling at least one infrared (IR) illumination source, illuminating an area captured by a camera, the method comprising modulating emission from the IR illumination source between first and second emission intensities at a first frequency during a first predetermined time period. Upon capturing images using an image sensor of the camera during the first predetermined time period, determining a radiation level indicator for each of a plurality of images captured by the image sensor. Further, evaluating a sequence of determined radiation level indicators to determine whether a frequency resulting from the modulated emission at the first frequency is detected in the sequence of radiation level indicators. If the frequency resulting from the modulated emission at the first frequency is determined as not detected in the sequence of radiation level indicators, the IR illumination source is stopped.
H04N 23/20 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
H04N 23/71 - Circuitry for evaluating brightness variation
H04N 23/74 - Circuitry compensating brightness variation in the scene by influencing the scene brightness using illuminating means
26.
A METHOD FOR CONTROLLING A CAMERA MONITORING A SCENE
Controlling a camera includes determining a first camera setting for monitoring a scene under a first lighting condition, and a second camera setting for monitoring the scene under a second lighting condition. The method further includes obtaining a reference image that represents the scene under the second lighting condition, as captured with the first camera setting. While monitoring the scene with the first camera setting, a change in the scene from the first to the second lighting condition is detected. Detecting the change includes performing a comparison between first image feature data, derived from a first image of the scene captured with the camera set to the first camera setting after the change from the first to the second lighting condition, and reference image feature data derived from the reference image, to determine that the first image feature data matches the reference image feature data.
There is provided a method and an encoder for inter-encoding an image frame in a sequence of image frames. The method comprises obtaining a compression level for each pixel block of the image frame, inter-encoding the image frame in a first encoding pass using the obtained compression level and identifying pixel blocks in the image frame that were intra-coded in the first encoding pass and for which the obtained compression level exceeds a compression level threshold. The method further comprises lowering the compression level for the identified pixel blocks, and inter-encoding the image frame in a second encoding pass using the lowered compression level for the identified pixel blocks and the obtained compression level for each remaining pixel block.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic part of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
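The two-pass logic can be sketched as follows, with the encoder stubbed out as a callable that reports, per pixel block, whether it was intra-coded; lowering the compression level by one step is an illustrative choice.

```python
def two_pass_encode(levels, encode, level_threshold):
    """Inter-encode in two passes: blocks that came out intra-coded in the
    first pass at a compression level above the threshold are given a
    lowered level before the second pass; all other blocks keep theirs."""
    intra_flags = encode(levels)  # first pass: True where a block was intra-coded
    adjusted = [level - 1 if intra and level > level_threshold else level
                for intra, level in zip(intra_flags, levels)]
    return encode(adjusted), adjusted  # second pass with the adjusted levels
```

The rationale: intra-coded blocks at high compression tend to produce visible artefacts in an otherwise inter-coded frame, so only those blocks pay for extra quality in the second pass.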
A method of stitching image data from one or more image sensors arranged to acquire image data depicting at least partly overlapping views of a scene comprises obtaining sets of image data representing a blending region; dividing each set of image data into portions of different interest levels; for each portion of image data, determining one or more image frequency bands based on the interest level of the portion, and obtaining image data of the determined one or more image frequency bands from the portion of image data; and blending the first set of image data and the second set of image data by multi-band blending, wherein only the obtained image data of the determined one or more image frequency bands are blended for each portion of image data.
G06T 7/90 - Determination of colour characteristics
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
A method of encoding images in a video comprises: acquiring an original image from an image sensor of a video camera; encoding the original image using a generative image model, thereby obtaining a first encoded image; decoding the first encoded image to obtain a first decoded image; identifying one or more regions of interest (ROIs) of the original image; for each ROI, performing an encoding quality check by comparing several reference points in the ROI of the original image against corresponding reference points in the ROI of the first decoded image, thereby obtaining a difference, and, if the difference is greater than a threshold, encoding the ROI using a non-generative image model, thereby obtaining a non-generative encoded image area; and providing final encoded image data comprising a) the non-generative encoded image areas for each ROI having a difference greater than the threshold and b) the first encoded image for a remaining part of the original image.
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/20 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
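A minimal sketch of the per-ROI quality check might look as follows, assuming grayscale arrays, (y, x, h, w) ROI boxes, and a shared set of in-ROI reference-point offsets. All names and the mean-absolute-difference criterion are illustrative assumptions; the claim only requires comparing reference points and testing the resulting difference against a threshold.

```python
import numpy as np

def rois_needing_fallback(original, decoded, rois, ref_points, threshold):
    """Return the ROIs whose generatively decoded content deviates too much.

    original, decoded: 2-D grayscale arrays of equal shape.
    rois: list of (y, x, h, w) boxes.
    ref_points: list of (dy, dx) offsets sampled inside each ROI.
    A ROI fails the quality check when the mean absolute difference over
    its reference points exceeds `threshold`; per the method, such a ROI
    is then re-encoded with a non-generative model.
    """
    fallback = []
    for (y, x, h, w) in rois:
        diffs = []
        for (dy, dx) in ref_points:
            py, px = y + (dy % h), x + (dx % w)  # keep the point inside the ROI
            diffs.append(abs(float(original[py, px]) - float(decoded[py, px])))
        if np.mean(diffs) > threshold:
            fallback.append((y, x, h, w))
    return fallback
```

The returned boxes correspond to the image areas that would be replaced by non-generative encoded data in the final encoded image.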
32.
VIDEO STREAM ENCODING BOTH OVERVIEW AND REGION-OF-INTEREST(S) OF A SCENE
A method for encoding a video stream includes obtaining images of a scene captured by a camera at a first resolution; identifying regions of interest (ROIs) in an image; adding, as part of an encoded video stream, a first video frame encoding at least part of the image at a second resolution lower than the first resolution; adding a second video frame marked as a no-display frame, and being an inter-frame referencing the first video frame with motion vectors for upscaling of the ROIs; adding a third video frame encoding the ROIs at a third resolution higher than the second resolution, and being an inter-frame referencing the second video frame.
H04N 19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being an image, frame or field
An image processing device generates an output image in which objects of certain classes are masked. An input image is downscaled to an object detection image having a lower resolution than the input image and lower than the output image. The object detection image is input to an object detection module and confidence scores for pixel areas are received from the object detection module. Each confidence score indicates a probability that the pixel area relates to an object to be masked. Based on the input image, an intermediate image is generated having a higher resolution than the object detection image resolution, and an adaptive masking threshold is set such that the greater the ratio between the output image resolution and the object detection image resolution, the lower the masking threshold. The output image is then generated by masking pixel areas of the intermediate image whose confidence scores exceed the adaptive masking threshold.
G06T 3/40 - Scaling of whole images or parts of images, e.g. expanding or contracting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
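The adaptive threshold rule can be sketched in a few lines. The inverse-ratio form and the default constants below are assumptions chosen for illustration; the description only requires that the threshold decrease as the output-to-detection resolution ratio grows.

```python
def adaptive_masking_threshold(output_res, detection_res,
                               base_threshold=0.5, min_threshold=0.1):
    """Set the masking confidence threshold inversely to the resolution ratio.

    The object detector runs on a downscaled image, so the coarser the
    detection image relative to the output (the larger the ratio), the
    lower the confidence required before a pixel area is masked, erring
    on the side of privacy when detections are less reliable.
    """
    ratio = output_res / detection_res  # >= 1 when the detection image is smaller
    return max(min_threshold, base_threshold / ratio)
```

Clamping at `min_threshold` keeps the device from masking on arbitrarily weak detections at extreme downscaling ratios.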
A method for processing of data by applying a first algorithm in a system comprising a server and a client device comprises, at the client device: continuously receiving data; dividing the received data into M>1 subsets of data, each subset of data comprising, or being derived from, sensor data captured by a sensor during a capture time interval at a plurality of points in time; determining for each subset whether the processing of the subset should be performed by the client device or by the server, and upon determining that the processing should be performed by the client device, processing the subset into processed data by applying the first algorithm to the subset, and transmitting the processed data to the server, and upon determining that the processing should be performed by the server, transmitting the subset of data to the server.
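The per-subset routing decision above can be sketched as follows. The cost/budget criterion, the cost function, and the stand-in "first algorithm" are all assumptions for illustration; the claim leaves the decision criterion and the algorithm open.

```python
def route_subsets(subsets, client_cost, client_budget):
    """Decide per subset whether the client processes it or defers to the server.

    subsets: list of data subsets (here, lists of samples).
    client_cost: function estimating the client-side cost of one subset.
    client_budget: capacity available on the client per subset.
    Returns ("client", processed) or ("server", raw) per subset, mirroring
    the two transmission branches of the method.
    """
    routed = []
    for subset in subsets:
        if client_cost(subset) <= client_budget:
            processed = [x * 2 for x in subset]   # stand-in for the first algorithm
            routed.append(("client", processed))  # transmit processed data
        else:
            routed.append(("server", subset))     # transmit the raw subset
    return routed
```

A real deployment would substitute an actual workload estimate (e.g. expected inference time) for `client_cost` and the real first algorithm for the doubling stand-in.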
This disclosure relates to methods, systems and non-transitory computer-readable storage media for distributing load in a multi-chip image processing unit for processing image data into processed image data. An example method comprises receiving first image data; analysing the first image data using a first algorithm, the first algorithm performing a set number of operations for a given size of image data input to it and outputting at least one characteristic of the first image data; using the at least one characteristic to estimate use of memory bandwidth in the first and second chips when processing the first image data into processed image data; and distributing processing of the first image data between the first and the second chip such that the estimated use of memory bandwidth is distributed evenly.
H04N 19/127 - Prioritisation of hardware or computational resources
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
H04N 23/80 - Camera processing pipelines; Components thereof
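One simple way to distribute the estimated bandwidth "evenly" is a greedy longest-first partition of image tiles over the two chips. The tile granularity and the greedy heuristic are assumptions; the disclosure does not mandate a particular partitioning algorithm.

```python
def split_by_bandwidth(tile_bandwidths):
    """Partition image tiles across two chips to balance memory bandwidth.

    tile_bandwidths: estimated memory-bandwidth use per tile, derived from
    the characteristics output by the first algorithm. Tiles are assigned
    largest-first to whichever chip currently carries the lighter load.
    Returns (chip0_tiles, chip1_tiles) as lists of tile indices.
    """
    order = sorted(range(len(tile_bandwidths)),
                   key=lambda i: tile_bandwidths[i], reverse=True)
    chips, loads = ([], []), [0.0, 0.0]
    for i in order:
        target = 0 if loads[0] <= loads[1] else 1
        chips[target].append(i)
        loads[target] += tile_bandwidths[i]
    return chips
```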
37.
METHOD AND DEVICE FOR HANDLING BANDWIDTH SHORTAGE IN RELATION TO TRANSMISSION OF VIDEO FRAMES
A system for handling bandwidth shortage for transmission of encoded video frames. In a sending device, a long term reference frame which indicates bandwidth shortage is created. During a first time period, the reference frame is sent to the receiving device for storage. During a subsequent second time period, bandwidth shortage is determined in the sending device, wherein the bandwidth during the second time period is insufficient if the encoded video frames are encoded according to an encoding principle used at times without bandwidth shortage. During the second time period, inter encoded frames referencing the long term reference frame are sent, wherein each frame includes encoded blocks, and at least a subset of the blocks of the inter encoded frames are empty blocks such that the bit rate of the inter encoded frames is lower than or equal to the bandwidth available during the second time period.
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
A camera system may include a utility box having a wall plate, a camera mounted in the utility box, magnets, wherein the magnets generate an attractive force to mount the camera to the wall plate of the utility box, a guide pin protruding from the camera or the wall plate of the utility box, and a recess to receive the guide pin, wherein the recess is located in the camera or in the wall plate of the utility box, wherein the guide pin limits motion of the camera when the camera is mounted on the wall plate of the utility box.
F16M 13/02 - Other supports for positioning apparatus or articles; Means for steadying hand-held apparatus or articles for supporting on, or attaching to, another object, e.g. a tree, gate, window frame or bicycle
45.
SYSTEMS, METHODS, AND NON-TRANSITORY COMPUTER-READABLE MEDIA FOR TRANSFORMING RAW IMAGE DATA INTO A VIDEO STREAM COMPRISING A PLURALITY OF ENCODED IMAGE FRAMES
Systems, methods and non-transitory computer-readable media transform raw image data into a video stream comprising a plurality of encoded image frames. To reduce memory bandwidth and power consumption, compression rates of sets of imaging data temporarily stored in memory during the transformation of raw image data into the video stream are adapted based on encoding configurations, on a pixel block level.
H04N 19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation, characterised by memory arrangements
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being an image, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a pixel
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
H04N 19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
H04N 23/80 - Camera processing pipelines; Components thereof
A thermometric camera for determining a temperature of an object in motion comprises a bolometer sensor and circuitry. The bolometer sensor captures a sequence of image frames of the object while the object is moving. The circuitry executes: an object identifying function configured to identify an area corresponding to the object in each image frame of a series of image frames among the sequence of image frames; a combining function configured to combine the identified areas from each image frame in the series of image frames into a stacked image of the object, wherein pixel values in the stacked image of the object are estimated as a sum of pixel values of the corresponding pixels in the image frames in the series of image frames; and a temperature determining function configured to determine the temperature of the object in motion from pixel values in the stacked image of the object.
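The combining function described above reduces to summing aligned object crops. The following sketch assumes 2-D bolometer frames, per-frame (y, x, h, w) boxes from the object identifying function, and a common crop size; all names are illustrative.

```python
import numpy as np

def stack_object_areas(frames, boxes):
    """Sum aligned object crops across frames into a stacked image.

    frames: list of 2-D arrays (bolometer image frames).
    boxes: per-frame (y, x, h, w) of the identified object area, all of a
    common size. Each pixel of the stacked image is the sum of the
    corresponding pixels across the series, which raises the object
    signal relative to sensor noise before the temperature is read out.
    """
    h, w = boxes[0][2], boxes[0][3]
    stacked = np.zeros((h, w), dtype=float)
    for frame, (y, x, bh, bw) in zip(frames, boxes):
        assert (bh, bw) == (h, w), "object areas must share a size"
        stacked += frame[y:y + h, x:x + w]
    return stacked
```

The temperature determining function would then map the (summed) pixel values of the stacked image to a temperature, e.g. after dividing by the number of frames.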
A method detects one or more occluded areas of a scene analysed by an object tracking system. The method includes building a map of one or more occluded areas in the scene. Building the map comprises running a re-identification algorithm on a video sequence to try to resume a lost first object track. If the first object track is successfully resumed, the method includes determining a first area of the scene where the first object track was lost and a second area of the scene where the first object track was resumed. A connection between the first and the second area of the scene is added to the map, such that the map identifies that an object track lost in the first area of the scene has been resumed in the second area of the scene.
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
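The map-building step amounts to accumulating lost-to-resumed links. A minimal sketch, assuming the scene is partitioned into labelled areas (grid cells, zones, or similar) and that each successful re-identification yields a (lost_area, resumed_area) pair:

```python
def build_occlusion_map(track_events):
    """Accumulate (lost area -> resumed areas) links for a scene.

    track_events: iterable of (lost_area, resumed_area) pairs, one per
    track that the re-identification algorithm successfully resumed.
    The returned map records, for each area where tracks get lost, the
    areas where they have reappeared, i.e. likely occluded regions and
    their exits.
    """
    occlusion_map = {}
    for lost, resumed in track_events:
        occlusion_map.setdefault(lost, set()).add(resumed)
    return occlusion_map
```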
A method for thermal image processing, comprises: acquiring a thermal image depicting a scene; obtaining a frequency distribution based on pixel intensities of the thermal image; processing the frequency distribution to determine whether the frequency distribution comprises a peak, caused by a hot object in the scene, which is separated from a thermal background of the scene by more than an intensity threshold; and in response to determining that the frequency distribution comprises the peak, processing the thermal image to suppress a ghost image of the hot object in the thermal image, wherein the ghost image is caused by internal reflections of radiation from the hot object in the thermal camera, and wherein the processing of the thermal image comprises: estimating a location of ghost image pixels forming the ghost image; and suppressing the ghost image in the thermal image by adjusting intensities of the ghost image pixels.
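The peak-separation test can be sketched on a histogram of pixel intensities. Here, separation is judged by a gap of empty histogram bins between the background mass and the hot cluster; that bin-gap criterion, the bin count, and all names are assumptions standing in for the intensity-threshold test of the method.

```python
import numpy as np

def hot_object_peak(thermal, bins=64, gap_threshold=8):
    """Detect a histogram peak separated from the thermal background.

    Returns a representative intensity of an isolated hot peak, or None
    when no peak is sufficiently separated. Only when a peak is found
    would the method go on to locate and suppress the ghost image.
    """
    counts, edges = np.histogram(thermal, bins=bins)
    occupied = np.nonzero(counts)[0]
    if len(occupied) < 2:
        return None
    gaps = np.diff(occupied)
    widest = int(np.argmax(gaps))
    if gaps[widest] <= gap_threshold:
        return None  # no clearly separated hot cluster
    hot_bins = occupied[widest + 1:]
    # Representative intensity of the hot cluster: its strongest bin centre.
    best = hot_bins[int(np.argmax(counts[hot_bins]))]
    return float((edges[best] + edges[best + 1]) / 2)
```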
A method of processing digital video data comprises continuously capturing digital video data representing image frames. While the digital video data is being captured, it is encoded into a sequence of encoded image frames, the sequence comprising key frames and delta frames, and the sequence of encoded image frames is stored. It is then determined that the stored sequence of encoded image frames is to be entropy coded and, as a consequence, the sequence of encoded image frames is entropy coded into an entropy coded sequence of image frames, which is stored.
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
H04N 19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
58.
METHOD FOR DETERMINING A COLOR OF A TRACKED OBJECT
A method, system and software for determining a color of a tracked object. Using a first video sequence and foreground objects detected therein, a color rendering metric may be determined for each area of a plurality of areas in the scene. Such color rendering metrics may then be used in case a tracked object is determined to have different colors in different images of a second video sequence, such that a color detected in an area associated with a higher color rendering metric is selected over one detected in an area associated with a lower color rendering metric.
The present system and method generally relate to the field of camera surveillance, and in particular to object re-identification in video streams captured by a camera.
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/776 - Validation; Performance evaluation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
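The metric-based selection reduces to picking the observation from the most color-reliable area. A minimal sketch, with all names and the (area, color) observation format assumed for illustration:

```python
def pick_object_color(observations, rendering_metric):
    """Resolve conflicting colors of a tracked object across a sequence.

    observations: list of (area, color) pairs, one per image in which the
    tracked object was detected with a color.
    rendering_metric: dict mapping scene area -> color rendering metric,
    learned from the first video sequence. The color observed in the
    area that renders color most reliably is selected.
    """
    best_area, best_color = max(
        observations, key=lambda obs: rendering_metric[obs[0]])
    return best_color
```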
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Security surveillance apparatus; antennas; central
processing units; electric door and elevator bells; electric
loss, signal and LED indicators; cables, electric; magnetic
encoders; electric couplings; remote control apparatus for
opening and closing doors; electric and electronic door
locks; voltage surge protectors; electric alarm bells; push
buttons for bells; transmitting sets (telecommunication);
transmitters of electronic signals; data processing
apparatus; answering units; electric monitoring apparatus;
apparatus and instruments for measuring, signalling and
checking (supervision); apparatus for the recording,
transmission or reproduction of sound or images;
telecommunication switchboards (exchanges), instruments and
their parts and fittings (included in this class); telephone
apparatus; intercommunication apparatus for doors and
elevators; software; global system for mobile
telecommunications (GSM) ports; GSM gateways; access control
apparatus; access control installations; intercom apparatus;
intercom installations; video intercom systems; audio
intercom systems; software for access control apparatus and
intercom apparatus; software for the management of
telecommunication systems and switchboards; fingerprint
readers; alarm monitoring system; door access card readers;
touch pads and touch screens for access control apparatus
and intercom apparatus; control panels; accessories for
access control apparatus and intercom apparatus; computer
application software for mobile devices and handheld
personal computers; cloud servers; cloud computing software;
microphones; speakers; horns (speakers); audio converters;
audio devices; software for the management of audio, audio
recordings, speakers and microphones; emergency lift
communication devices; software for emergency lift
communication devices; induction loop for lifts.

Computer consulting services regarding program design for
microprocessors, updating of computer programs for text,
video, audio, images and data processing, consultation for
product development; consulting activities in the form of
testing and consultation for new products and development of
new products; technical consultation in the field of
computer infrastructure, computer hardware, video
techniques, camera techniques, image and audio processing,
computer software, system integration and recorded computer
programs, access control and intercom; designing computer
systems, design and development of products (hardware and
software) for others in the field of computers, video
techniques, camera techniques, image processing, audio
processing, access control and intercom; designing,
development, consultation and research in the field of
computer application software of mobile devices and handheld
personal computers; technical consultation and research in
the fields of computers, software and electronic data,
system integration, computer processing and video
techniques, camera techniques and image processing, access
control apparatus, intercom apparatus, security surveillance
and video surveillance; engineering; consultancy services
relating to technical research; industrial designing,
computer programming, computer system analysis; software
design; maintenance and support of software, updating
software; cloud computing services; rental and leasing of
computer processing apparatus and computers; hosting
computer sites (web sites); development of electronic
surveillance apparatus; designing, development, technical
consultation and research concerning electronic
surveillance, burglar alarms, security, security alarms,
security systems and access control systems for security
purposes, intercom; supply of technical know-how; software
as a service.
Techniques for Long Term Reference (LTR) frame updating in a video encoding process are performed by an image processing device as part of the video encoding process. The method comprises encoding a first LTR frame. The method comprises encoding a plurality of frames referencing, directly or indirectly, the first LTR frame. The method comprises sequentially updating the first LTR frame by evaluating a cost for encoding a block of image data in one of the plurality of frames and by updating an image area in the first LTR frame when the cost fulfils a cost criterion. The image area is updated based on the block of image data in at least one of the plurality of frames. The method comprises encoding the sequentially updated first LTR frame as a second LTR frame.
H04N 19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/127 - Prioritisation of hardware or computational resources
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being an image, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]
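The sequential block-wise refresh can be sketched with blocks as dictionary entries. The cost criterion used here (cost above a limit means the LTR area is stale and should be refreshed from the current frame) is an assumed example; the method leaves the criterion open.

```python
def update_ltr(ltr, frames, block_costs, cost_limit):
    """Sequentially refresh a long-term reference (LTR) frame block-wise.

    ltr: dict block_id -> pixel data of the current (first) LTR frame.
    frames: list of dicts block_id -> pixel data for frames encoded
    against the LTR. block_costs: parallel list of dicts block_id -> cost
    of encoding that block against the LTR. Whenever a block's cost
    fulfils the criterion, the corresponding LTR image area is replaced
    from that frame. The accumulated result is what would be encoded as
    the second LTR frame.
    """
    updated = dict(ltr)
    for frame, costs in zip(frames, block_costs):
        for block_id, cost in costs.items():
            if cost > cost_limit:          # expensive to predict => stale area
                updated[block_id] = frame[block_id]
    return updated
```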
62.
METHOD AND SYSTEM FOR DETECTING A CHANGE OF RATIO OF OCCLUSION OF A TRACKED OBJECT
A method of detecting a change of ratio of occlusion of a tracked object in a video sequence. For each of a plurality of image frames, a bounding box of the tracked object is determined and for each pair of successive image frames, an intersection over union (IoU) of a first bounding box in a first image frame and a second bounding box in a second image frame is calculated. Similarly, for a further pair of successive image frames, a further IoU of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames is calculated. If the further IoU differs from the previously calculated IoUs by more than a threshold amount, it is determined that the ratio of occlusion of the tracked object has changed.
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
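The successive-pair IoU test can be sketched directly. Comparing the latest IoU against the mean of the earlier ones is an assumed interpretation of "differs from the calculated IoUs"; the box format (x, y, w, h) and the default threshold are likewise illustrative.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def occlusion_ratio_changed(boxes, threshold=0.2):
    """Flag a change in occlusion ratio from successive-frame IoUs.

    boxes: per-frame bounding boxes of the tracked object. The IoU of the
    latest successive pair is compared with the mean IoU of the earlier
    pairs; a deviation beyond `threshold` signals that the visible extent
    of the object, and hence its ratio of occlusion, has changed.
    """
    ious = [iou(a, b) for a, b in zip(boxes, boxes[1:])]
    if len(ious) < 2:
        return False
    baseline = sum(ious[:-1]) / (len(ious) - 1)
    return abs(ious[-1] - baseline) > threshold
```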
63.
CAMERA ARRANGEMENT COMPRISING A HOLDER CONFIGURATION FOR A CAMERA HEAD AND THE HOLDER CONFIGURATION
A camera arrangement comprises a camera head, a holder ring and a holder base. The camera head has an imaging unit and an interaction portion with constant cross section. The camera holder ring has an inner surface configured to fit onto the interaction portion of the camera head, and the camera holder base has a receptacle orifice dimensioned to receive the camera holder ring fitted onto the interaction portion of the camera head and grip an outer surface of the holder ring. The receptacle orifice has reduced dimensions due to a tightening function to fixate the camera holder ring and the camera head in relation to the camera holder base, and thereby to fixate an orientation of the camera head in relation to the camera holder base. The camera holder ring is movable along a length of the interaction portion until the tightening function of the receptacle orifice has been actuated.
A method for encoding lidar data in which successive frames of lidar data to be encoded are received. Each frame of lidar data comprises a number of lidar return signals for each of a plurality of rays emitted at a respective elevation and azimuth angle by a lidar, and each lidar return signal includes lidar measurement values. Each frame of lidar data is then represented as an image frame of a video sequence, wherein, for each ray of the plurality of rays of the frame of lidar data, lidar measurement values of different lidar return signals are represented in different image portions of the image frame. The different image portions are stacked after each other in a row direction or a column direction of the image frame. The video sequence is then encoded using video encoding.
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being an image, frame or field
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a slice, e.g. a line of blocks or a group of blocks
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being bits, e.g. of the compressed video stream
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
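The row-direction stacking of return signals can be sketched as an array rearrangement. A single measurement value per return (e.g. range) and a dense (elevation, azimuth, return) array are assumed here for simplicity.

```python
import numpy as np

def lidar_frame_to_image(frame):
    """Arrange a lidar frame as an image with return signals stacked row-wise.

    frame: array of shape (n_elevations, n_azimuths, n_returns) holding one
    measurement value per return signal of each ray. The image places the
    full picture of return 0 on top, return 1 below it, and so on, so the
    different image portions follow each other in the row direction and an
    ordinary video encoder can compress them.
    """
    n_elev, n_azim, n_returns = frame.shape
    # Move the return axis first, then flatten it into the row direction.
    return frame.transpose(2, 0, 1).reshape(n_returns * n_elev, n_azim)
```

Stacking in the column direction would be the symmetric rearrangement along the other axis.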
65.
SYSTEM AND METHOD FOR BACKGROUND MODELLING FOR A VIDEO STREAM
The present disclosure relates to a method of background modelling for a video stream acquired by a camera having movement capabilities. The method comprises: acquiring a video stream comprising a sequence of image frames; repeatedly updating a background model for the video stream by analyzing changes in the sequence of image frames and categorizing image areas in the image frames which do not change over time as background; detecting camera movement; performing a reset of the background model by applying an image segmentation and/or object detection algorithm to identify at least foreground objects; and, after having performed the reset of the background model, returning to repeatedly updating the background model for the video stream by analyzing changes in the sequence of image frames and categorizing image areas in the image frames which do not change over time as background. The disclosure further relates to an image processing system.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
A system includes a mounting bracket configured for attachment to a surface and a plastic clamp having a base part and a connection part. The mounting bracket is provided with an opening into which the base part of the plastic clamp is insertable with a snap fit, and the connection part of the plastic clamp is configured for snap fit engagement with an object attachable to the mounting bracket. The base part comprises two legs, each comprising a guide surface and an L-shaped corner section, the guide surface being configured to engage the mounting bracket in response to insertion of the base part into the opening and the L-shaped corner section being configured to face an edge portion of the opening when the base part has been received by the opening. The legs deflect in response to the guide surfaces engaging the mounting bracket so that the base part is insertable into the opening.
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
67.
ENHANCEMENT VIDEO CODING FOR VIDEO MONITORING APPLICATIONS
A method of encoding an input video including a sequence of video frames as a hybrid video stream comprises downsampling the input video from an original spatial resolution to a reduced spatial resolution and an intermediate spatial resolution; providing the input video at the reduced spatial resolution to a base encoder to obtain a base encoded stream; providing a first enhancement stream based on first residuals at the intermediate spatial resolution; and providing a second enhancement stream based on second residuals at the original spatial resolution, which is at least partially encoded using temporal prediction. The method further comprises detecting at least one non-motion region in a video frame, and causing the set of first residuals but not the set of second residuals to vanish throughout the non-motion region.
H04N 19/30 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant des techniques hiérarchiques, p. ex. l'échelonnage
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ
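The first-residual computation with the non-motion zeroing can be sketched as follows. This is an illustrative sketch only: the factor-2 nearest-neighbour scaling between resolutions and the boolean mask representation are assumptions, not details taken from the abstract.

```python
import numpy as np

def downsample(img, factor=2):
    """Nearest-neighbour downsampling by an integer factor."""
    return img[::factor, ::factor]

def upsample(img, factor=2):
    """Nearest-neighbour upsampling by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def first_residuals(frame, base_reconstruction, non_motion_mask):
    """First residuals at the intermediate resolution: downsampled input
    minus the upsampled base reconstruction, zeroed (made to vanish)
    wherever the non-motion mask is set."""
    intermediate = downsample(frame)           # original -> intermediate
    predicted = upsample(base_reconstruction)  # reduced -> intermediate
    residuals = intermediate - predicted
    residuals[non_motion_mask] = 0
    return residuals
```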
68.
Method, system and non-transitory computer-readable media for prioritizing objects for feature extraction
A method for prioritizing feature extraction for object re-identification in an object tracking application. Regions of interest (ROIs) for object feature extraction are determined based on motion areas in the image frame. Each object detected in an image frame that is at least partly overlapping with an ROI is associated with that ROI. A list of candidate objects for feature extraction is determined by, for each ROI associated with two or more objects: adding each object of the two or more objects that does not overlap any of the other objects among the two or more objects by more than a threshold amount. From the list of candidate objects, at least one object is selected, and image data of the image frame depicting the selected object is used for determining a feature vector for the selected object.
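The candidate-selection step described above can be sketched as follows. This is an illustrative sketch only: the box format, the use of intersection-over-union as the overlap measure, and the threshold value are assumptions, not details taken from the abstract.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def candidates_for_roi(objects, threshold=0.1):
    """Objects in one ROI that overlap no other object by more than threshold."""
    return [obj for i, obj in enumerate(objects)
            if all(iou(obj, other) <= threshold
                   for j, other in enumerate(objects) if j != i)]

boxes = [(0, 0, 10, 10), (100, 100, 120, 120), (5, 5, 15, 15)]
print(candidates_for_roi(boxes))  # → [(100, 100, 120, 120)]
```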
A method, system and software for searching for an object in a forensic search application comprises: determining a plurality of static areas in the scene; obtaining a first image depicting the scene comprising an object; determining a plurality of candidate color transforms; determining a plurality of candidate color values of the object by applying each of the plurality of candidate color transforms to third pixel data from the first image, said third pixel data depicting the object in the first image; searching for an object in a forensic search application using a search request comprising a first color value; determining that the first color value matches one or more candidate color values; and returning a search response based at least in part on the object.
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 10/60 - Extraction de caractéristiques d’images ou de vidéos relative aux propriétés luminescentes, p. ex. utilisant un modèle de réflectance ou d’éclairage
G06V 10/74 - Appariement de motifs d’image ou de vidéoMesures de proximité dans les espaces de caractéristiques
G06V 10/762 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant le regroupement, p. ex. de visages similaires sur les réseaux sociaux
70.
SYSTEM AND METHOD FOR STABILIZING BOUNDING BOXES FOR OBJECTS IN A VIDEO STREAM
A method of stabilizing bounding boxes for objects in a video stream comprises: receiving a video stream comprising a sequence of image frames; detecting an object in the image frames and generating a bounding box surrounding the object; measuring a noise level for the video stream; and temporally filtering the bounding box over a plurality of image frames based on the measured noise level, thereby stabilizing the bounding box in the video stream. The disclosure further relates to an image processing system.
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersectionsAnalyse de connectivité, p. ex. de composantes connectées
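A minimal sketch of the noise-adaptive temporal filtering described above, assuming an exponential moving average whose smoothing strength grows with the measured noise level; the noise-to-alpha mapping and the box format are illustrative assumptions.

```python
def smooth_boxes(boxes, noise_level, max_alpha=0.9):
    """Temporally filter a sequence of (x, y, w, h) bounding boxes.

    noise_level is assumed to lie in [0, 1]; a higher level gives a
    smaller blending factor, so the filtered box relies more on its
    history and less on the latest (noisy) detection.
    """
    alpha = max(1.0 - noise_level, 1.0 - max_alpha)
    filtered = [boxes[0]]
    for box in boxes[1:]:
        prev = filtered[-1]
        filtered.append(tuple(alpha * b + (1.0 - alpha) * p
                              for b, p in zip(box, prev)))
    return filtered
```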
71.
ARRANGEMENT FOR ATTACHMENT OF DEVICES TO A SURFACE AND METHOD FOR ATTACHING AN ARRANGEMENT TO A SURFACE
An arrangement for attachment of devices to a surface, such as a wall, comprises at least one device, and for each device, at least two mounting brackets. Each device has a rear side provided with a recess for each mounting bracket, wherein each mounting bracket has a male connector end and a female connector end, the male connector end being configured to be received by the female connector end of an adjoining mounting bracket associated with a neighbouring device. Each male connector end is provided with a through hole for receiving an attachment means, such as a screw. Each mounting bracket is insertable into its associated recess with an orientation in which the male connector end extends beyond a periphery of the device, and each mounting bracket is insertable into its associated recess with an orientation in which the female connector end is accessible from the periphery of the device.
E04B 2/72 - Murs faits d'éléments de forme relativement mince
F16M 13/02 - Autres supports ou appuis pour positionner les appareils ou les objetsMoyens pour maintenir en position les appareils ou objets tenus à la main pour être portés par un autre objet ou lui être fixé, p. ex. à un arbre, une grille, un châssis de fenêtre, une bicyclette
72.
METHODS, SYSTEMS AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUMS FOR DETECTING AN OBJECT OF A FIRST OBJECT TYPE IN A VIDEO SEQUENCE
The present disclosure relates to methods, systems and non-transitory computer-readable storage mediums for detecting an object of a first object type in a video sequence. A first algorithm is used to detect areas or objects in the scene as captured in the video stream that have an uncertain object type status. A second algorithm is used to provide a background model of the video sequence. For areas or objects having the uncertain object type status, the background model is used to check if the area or object is considered to be part of the background or the foreground in the video sequence. If the area or object is determined to belong to the foreground, the area or object is classified as the first object type. If the area or object is determined to not belong to the foreground, the area or object is not classified as the first object type.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p. ex. des objets vidéo
G06T 7/194 - DécoupageDétection de bords impliquant une segmentation premier plan-arrière-plan
Methods and apparatus, including computer program products, implementing and using techniques for controlling the display of an overlay in a video. An overlay to be displayed in a video is defined. An overlay area is defined, which includes the overlay. At least one foreground object in the video is defined. A spatial overlap between the foreground object and the overlay area is determined. In response to determining that a size of the spatial overlap exceeds a first threshold, the entire overlay is stopped from being displayed within the overlay area.
A device and a method for buffering a graphical overlay to be applied to an image is disclosed. A graphical overlay description specifying content, size, and position in the image of a graphical element of the graphical overlay is obtained, and the graphical overlay is divided into a plurality of sequential line fragments. For each line fragment it is determined, using the graphical overlay description, whether the line fragment overlaps a part of the graphical element. On condition that the line fragment overlaps a part of the graphical element, information representing the part of the graphical element is buffered in a buffer memory for the line fragment. On condition that the line fragment does not overlap any part of the graphical element, a run-length coding representing identical pixels is buffered in the buffer memory for the line fragment.
A method for classifying a detected object is disclosed. First and second object detectors detect first and second objects in first and second image frames, respectively, of a video sequence, and first and second probability scores respectively are calculated indicating a probability that the detected object belongs to a specific class. The second image frame is subsequent to the first image frame. The first object detector has a higher object detection precision and a longer processing time than the second object detector. The first and second object detections are performed in parallel. If the first probability score is below a first classification threshold and the second probability score is above a second classification threshold, the first classification threshold is reduced or the first probability score is increased. The first object is determined to belong to the specific class based on the probability scores and the classification thresholds.
G06V 20/40 - ScènesÉléments spécifiques à la scène dans le contenu vidéo
G06T 7/223 - Analyse du mouvement utilisant la correspondance de blocs
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p. ex. véhicules ou piétonsReconnaissance des objets de la circulation, p. ex. signalisation routière, feux de signalisation ou routes
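The threshold-adjustment logic described above can be sketched as below; the reduction amount and the choice to lower the threshold (rather than boost the score) are illustrative assumptions.

```python
def classify(p1, p2, t1, t2, t1_reduction=0.1):
    """Decide whether the detected object belongs to the specific class.

    p1, p2: probability scores from the precise (slower) and the fast
    detector; t1, t2: their classification thresholds. When the precise
    detector falls just short of its threshold while the fast detector
    is confident, the first threshold is reduced. Returns the decision
    and the (possibly adjusted) first threshold.
    """
    if p1 < t1 and p2 > t2:
        t1 -= t1_reduction
    return p1 >= t1, t1
```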
A method for controlling the output power of power sourcing equipment powering a device through an ethernet cable, comprising: performing a power negotiation between the power sourcing equipment and the powered device, the negotiation comprising establishing a relation between an output voltage of the power sourcing equipment and a loop current from the powered device to the power sourcing equipment by measuring the loop current in a current-sensing device; obtaining indications of whether the powered device needs increased power and whether the power sourcing equipment is able to deliver increased power; if the powered device needs increased power and the power sourcing equipment is of a power sourcing equipment type able to deliver increased power, physically manipulating the current-sensing device such that it senses a manipulated lower loop current; and providing a manipulated higher output power from the power sourcing equipment to the powered device.
A method of encoding a video stream is provided, including obtaining a first image with a first FOV; encoding the first image as part of a first encoded video frame; obtaining a second image with a second FOV different from the first FOV; generating a first additional video frame referencing the first video frame, including motion vectors transforming image content of the first image to a FOV closer to the second FOV than the first FOV, wherein the motion vectors are formed based on a difference between the first and second FOVs; inserting the first additional video frame into the encoded video stream as a no-display frame, and encoding the second image as part of a second video frame of the encoded video stream referencing the first additional video frame. A corresponding device, computer program and computer program product are also provided.
H04N 19/30 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant des techniques hiérarchiques, p. ex. l'échelonnage
H04N 19/139 - Analyse des vecteurs de mouvement, p. ex. leur amplitude, leur direction, leur variance ou leur précision
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ
H04N 23/69 - Commande de moyens permettant de modifier l'angle du champ de vision, p. ex. des objectifs de zoom optique ou un zoom électronique
84.
METHOD AND IMAGE-PROCESSING DEVICE FOR DETECTING A REFLECTION OF AN IDENTIFIED OBJECT IN AN IMAGE FRAME
An image-processing device generates a three-dimensional model of a background scene of the image frame based on three-dimensional information about the background scene. The image-processing device defines a three-dimensional bounding box of the object in the three-dimensional model. The image-processing device defines a centre coordinate in the three-dimensional model and a colour value of surface elements of the three-dimensional bounding box. The image-processing device determines a three-dimensional coordinate of a surface in the three-dimensional model which reflects light from a surface element into the camera, by tracing rays from the centre coordinate and based on a normal of the surface. The image-processing device further identifies a first pixel in the image frame corresponding to the three-dimensional coordinate and detects the reflection of the object.
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
G06T 7/90 - Détermination de caractéristiques de couleur
G06V 10/25 - Détermination d’une région d’intérêt [ROI] ou d’un volume d’intérêt [VOI]
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 10/60 - Extraction de caractéristiques d’images ou de vidéos relative aux propriétés luminescentes, p. ex. utilisant un modèle de réflectance ou d’éclairage
G06V 20/52 - Activités de surveillance ou de suivi, p. ex. pour la reconnaissance d’objets suspects
A power sourcing equipment (PSE) device for controlling delivery of electric power to a powered device (PD) connected to the PSE device comprises a first input port for reception of input data and electric power from a power over Ethernet (PoE) device such as a PoE switch, a second input port for reception of electric power from a power source other than the PoE device and an output port for provision of the input data received at the first input port and the electric power received at the second input port to the PD. The PSE device further comprises control circuitry that is configured to determine a discontinuation of reception of electric power from the PoE device and configured to, in response to a determination of a discontinuation of reception of electric power from the PoE device, control discontinuation of provision of the electric power to the PD.
A device and a method mask an object in a video stream captured by a camera. The camera is arranged in a system including the camera and another device. The locations and fields of view of the device and the camera are known. Furthermore, the fields of view of the device and the camera are non-overlapping. Information indicating that an object is approaching the field of view of the camera is obtained. The obtained information is determined from the device indicating a location and a direction of movement of the object and from the known locations and fields of view of the camera and the device. In response to the information, a threshold for detecting objects to be masked in the video stream captured by the camera is reduced. An object to be masked in the video stream is detected using the reduced threshold, and masking of the object is applied in the video stream.
A method of providing a signed bitstream, performed in association with a process of capturing an audio signal and encoding it as a bitstream, which includes a sequence of data units representing time segments of the audio signal. The method comprises: assigning a score to each data unit; monitoring an accumulated score of data units back to a reference point; when the accumulated score reaches a threshold, inserting into the bitstream a signature unit including a digital signature of fingerprints of a subsequence of the data units back to the reference point; and resetting the reference point. The score is based on a) a detected content of the time segment of the audio signal corresponding to the data unit, b) contextual information relating the time segment to a history of the audio signal, and/or c) information relating to the conditions of capturing the time segment.
H04L 9/32 - Dispositions pour les communications secrètes ou protégéesProtocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
G10L 19/018 - Mise en place d’un filigrane audio, c.-à-d. insertion de données inaudibles dans le signal audio
G10L 19/02 - Techniques d'analyse ou de synthèse de la parole ou des signaux audio pour la réduction de la redondance, p. ex. dans les vocodeursCodage ou décodage de la parole ou des signaux audio utilisant les modèles source-filtre ou l’analyse psychoacoustique utilisant l'analyse spectrale, p. ex. vocodeurs à transformée ou vocodeurs à sous-bandes
G10L 25/51 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes spécialement adaptées pour un usage particulier pour comparaison ou différentiation
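The score-and-sign loop described above can be sketched as follows. The scoring function, the SHA-256 fingerprints, and the stand-in "signature" (a digest of the concatenated fingerprints instead of a real private-key signature) are all illustrative assumptions.

```python
import hashlib

def sign_stream(data_units, score_fn, threshold):
    """Yield data units, inserting a signature unit whenever the score
    accumulated since the last reference point reaches the threshold."""
    acc, fingerprints = 0, []
    for unit in data_units:
        fingerprints.append(hashlib.sha256(unit).digest())
        yield ("data", unit)
        acc += score_fn(unit)
        if acc >= threshold:
            sig = hashlib.sha256(b"".join(fingerprints)).hexdigest()
            yield ("signature", sig)
            acc, fingerprints = 0, []  # reset the reference point
```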
89.
Device and method for enhancing tracking of objects in a scene captured in a video sequence
A method for selecting a crop score threshold for enhancing tracking of objects in a scene captured in a video sequence is disclosed. A respective track is obtained for two different objects, each track comprising crops of object instances of the objects in a video sequence, each crop having a crop score and a feature vector. Each track is split into two or more respective tracklets, thereby forming four or more tracklets. For each candidate crop score threshold, a respective difference between each tracklet and each other tracklet is determined based on differences between feature vectors of crops of each tracklet and each other tracklet having a crop score above the candidate crop score threshold. A crop score threshold is selected from the set of candidate crop score thresholds as the one resulting in a maximum difference between the differences between tracklets of different tracks and the differences between tracklets of the same track.
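The selection criterion can be sketched as follows; using the mean Euclidean distance between feature vectors, and ranking thresholds by the inter-track-minus-intra-track gap, are illustrative assumptions.

```python
import numpy as np

def tracklet_diff(t1, t2, thr):
    """Mean distance between above-threshold crop feature vectors, or
    None if either tracklet has no crop above the threshold. Tracklets
    are lists of (crop_score, feature_vector) pairs."""
    f1 = [f for s, f in t1 if s > thr]
    f2 = [f for s, f in t2 if s > thr]
    if not f1 or not f2:
        return None
    return float(np.mean([np.linalg.norm(np.subtract(a, b))
                          for a in f1 for b in f2]))

def select_threshold(tracklets, track_ids, candidates):
    """Pick the candidate threshold maximizing the gap between
    inter-track and intra-track tracklet differences."""
    best_thr, best_gap = None, -np.inf
    for thr in candidates:
        inter, intra = [], []
        for i in range(len(tracklets)):
            for j in range(i + 1, len(tracklets)):
                d = tracklet_diff(tracklets[i], tracklets[j], thr)
                if d is None:
                    continue
                (intra if track_ids[i] == track_ids[j] else inter).append(d)
        if inter and intra and np.mean(inter) - np.mean(intra) > best_gap:
            best_thr, best_gap = thr, np.mean(inter) - np.mean(intra)
    return best_thr
```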
A computer-implemented method in a processor device of a camera, the method comprising: acquiring image frames comprising video data, communicating with a receiver device for continuously transmitting a video stream to the receiver device over a communication network; detecting at least one object in the image frames of the video data, the detected at least one object belonging to at least one predetermined object class selected as a surveillance target; cropping sub-areas in the image frames of the video data, the sub-areas including the at least one detected object, and adding the cropped sub-areas to image frames of the video stream being continuously transmitted as a single video stream to the receiver device.
G06V 20/52 - Activités de surveillance ou de suivi, p. ex. pour la reconnaissance d’objets suspects
G06T 3/40 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement
G06T 5/92 - Modification de la plage dynamique d'images ou de parties d'images basée sur les propriétés globales des images
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p. ex. des objets vidéo
91.
METHOD AND DEVICE FOR DETERMINING A MICROPHONE STATUS
A method for determining a status of a microphone comprises inducing a piezoelectric component, located in a vicinity of and in mechanical connection with the microphone, to emit a predetermined impulse waveform; determining whether a response signal waveform from the microphone corresponds to the predetermined impulse waveform; and upon the response signal waveform from the microphone corresponding to the predetermined impulse waveform, determining that the status of the microphone is operational.
A method for fusing a primary image (P) and a secondary image (S) into a target image comprises determining a local noise level of a region of the primary image; deriving a low-frequency, LF, component (PLF) from said region of the primary image and a high-frequency, HF, component (SHF) from a corresponding region of the secondary image, wherein the LF and HF components refer to a common cut-off frequency; combining the LF component and the HF component into a target image region, wherein the HF component's relative contribution to the target image region increases gradually with the local noise level; and repeating the preceding operations and merging all output target image regions thus obtained into the target image. A system implementing said method is also provided.
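A rough sketch of the region-wise fusion described above, using a box blur as the low-pass filter; the actual filter, cut-off frequency and noise-to-weight mapping of the method are not given in the abstract, so those choices here are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass filter: k x k box blur with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_region(primary, secondary, noise_level):
    """LF component from the primary region plus HF component from the
    secondary region, the HF weight growing with the local noise level."""
    p_lf = box_blur(primary)
    s_hf = secondary - box_blur(secondary)
    weight = min(max(noise_level, 0.0), 1.0)  # clamp to [0, 1]
    return p_lf + weight * s_hf
```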
A method for encoding an image frame, performed by an image processing device, comprising obtaining image data, and identifying an image area in an image frame based on the image area fulfilling an identification criterion. The method further comprises determining a bit depth reduction factor for the identified image area by analyzing the image data in the identified image area, and replacing some of the bit values of the pixel values in the identified image area with dummy values. How many of the bit values are replaced with dummy values is defined by the bit depth reduction factor. The method comprises encoding the image frame upon said some of the bit values having been replaced in the identified image area.
H04N 19/85 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo
H04N 19/136 - Caractéristiques ou propriétés du signal vidéo entrant
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/182 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant un pixel
H04N 19/184 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant des bits, p. ex. de flux vidéo compressé
H04N 19/186 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couleur ou une composante de chrominance
H04N 19/20 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage d'objets vidéo
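The LSB-replacement step can be sketched as below for 8-bit pixels; using zeros as the dummy values is an assumption (any fixed pattern would serve the same purpose of making the area cheaper to encode).

```python
def reduce_bit_depth(pixels, reduction_factor):
    """Replace the reduction_factor least significant bits of each
    8-bit pixel value with zeros (the dummy value assumed here)."""
    mask = ~((1 << reduction_factor) - 1) & 0xFF
    return [p & mask for p in pixels]

print(reduce_bit_depth([0b10110111, 0b01101101], 3))  # → [176, 104]
```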
94.
SYSTEM AND METHOD FOR VISUALIZING MOVEMENT OF A DETECTED OBJECT IN A SCENE
A system and a method for visualizing movement of a detected object in a scene are disclosed. For each image of a sequence of images captured by a camera and depicting the scene: information is obtained indicating a location in horizontal direction in the image of the detected object, and a graphical shape is displayed on a display in a 2-dimensional graphical representation of a field of view of the camera in a horizontal plane, wherein the graphical shape has a location in the 2-dimensional graphical representation representing the location in horizontal direction in the image of the detected object.
A method for identifying that a first FMCW radar unit is subject to parallel incoherent interference from a second FMCW radar unit is disclosed. The method comprises: acquiring a range-Doppler map, calculating a range-resolved signal for a negative half (corresponding to negative Doppler shifts) and a positive half (corresponding to positive Doppler shifts) of the range-Doppler map, and calculating a range-dependent noise profile for the range-Doppler map as, for each range interval, the smaller of the range-resolved signal for the negative half and the positive half. The method further comprises: identifying parallel incoherent interference in the range-Doppler map if a measure of a deviation between the range-resolved signal for the negative half and the positive half is smaller than a predetermined deviation threshold, and a measure of a difference between the range-dependent noise profile and a global noise floor of the range-Doppler map exceeds a predetermined noise threshold.
G01S 7/41 - Détails des systèmes correspondant aux groupes , , de systèmes selon le groupe utilisant l'analyse du signal d'écho pour la caractérisation de la cibleSignature de cibleSurface équivalente de cible
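The interference test described above can be sketched on a magnitude range-Doppler map. The layout (Doppler along axis 1), the reduction of each half to a range-resolved signal by summation, and the threshold values are illustrative assumptions.

```python
import numpy as np

def detect_parallel_interference(rd_map, dev_thresh, noise_thresh):
    """Return True if the map shows parallel incoherent interference."""
    half = rd_map.shape[1] // 2
    neg = rd_map[:, :half].sum(axis=1)    # negative-Doppler half, per range
    pos = rd_map[:, half:].sum(axis=1)    # positive-Doppler half, per range
    noise_profile = np.minimum(neg, pos)  # range-dependent noise profile
    deviation = np.abs(neg - pos).mean()
    excess = (noise_profile - rd_map.min()).mean()
    return bool(deviation < dev_thresh and excess > noise_thresh)
```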
A method of performing background light subtraction in an infra-red (IR) illuminated image depicting a scene comprises: providing a rolling shutter image sensor; providing an IR light source configured to be turned on and off; changing an on-off status of the IR light source a plurality of times while capturing an image; capturing two or more image frames, each image frame comprising: a first set of lines of pixels comprising image data captured with the IR light turned on; and a second set of lines of pixels comprising image data captured with the IR light turned off; creating an IR-illuminated image; creating a non-IR-illuminated image; and subtracting background light from the IR-illuminated image using pixel values in the non-IR-illuminated image.
H04N 25/531 - Commande du temps d'intégration en commandant des obturateurs déroulants dans un capteur SSIS CMOS
G06V 10/143 - Détection ou éclairage à des longueurs d’onde différentes
G06V 20/54 - Trafic, p. ex. de voitures sur la route, de trains ou de bateaux
G06V 20/62 - Texte, p. ex. plaques d’immatriculation, textes superposés ou légendes des images de télévision
H04N 23/11 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande pour générer des signaux d'image à partir de différentes longueurs d'onde pour générer des signaux d'image à partir de longueurs d'onde de lumière visible et infrarouge
H04N 23/20 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande pour générer des signaux d'image uniquement à partir d'un rayonnement infrarouge
H04N 23/21 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande pour générer des signaux d'image uniquement à partir d'un rayonnement infrarouge à partir du rayonnement infrarouge proche [NIR]
H04N 23/23 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande pour générer des signaux d'image uniquement à partir d'un rayonnement infrarouge à partir du rayonnement infrarouge thermique
H04N 23/56 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande munis de moyens d'éclairage
H04N 23/71 - Circuits d'évaluation de la variation de luminosité
H04N 23/72 - Combinaison de plusieurs commandes de compensation
H04N 23/73 - Circuits de compensation de la variation de luminosité dans la scène en influençant le temps d'exposition
H04N 23/74 - Circuits de compensation de la variation de luminosité dans la scène en influençant la luminosité de la scène à l'aide de moyens d'éclairage
H04N 23/76 - Circuits de compensation de la variation de luminosité dans la scène en agissant sur le signal d'image
H04N 25/131 - Agencement de matrices de filtres colorés [CFA]Mosaïques de filtres caractérisées par les caractéristiques spectrales des éléments filtrants comprenant des éléments laissant passer les longueurs d'onde infrarouges
H04N 25/587 - Commande de la gamme dynamique impliquant plusieurs expositions acquises de manière séquentielle, p. ex. en utilisant la combinaison de champs d'image pairs et impairs
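A simplified sketch of the recombination and subtraction described above, assuming the IR on/off pattern alternates between even and odd sensor lines and flips between the two captured frames (the abstract does not fix this pattern).

```python
import numpy as np

def subtract_background(frame_a, frame_b):
    """frame_a: IR on for even lines, off for odd lines; frame_b: the
    opposite. Recombine into an IR-illuminated and a non-IR-illuminated
    image, then subtract the background light."""
    ir_img = np.empty_like(frame_a)
    no_ir = np.empty_like(frame_a)
    ir_img[0::2], ir_img[1::2] = frame_a[0::2], frame_b[1::2]
    no_ir[0::2], no_ir[1::2] = frame_b[0::2], frame_a[1::2]
    return np.clip(ir_img - no_ir, 0.0, None)
```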
98.
TRANSMITTER, A RECEIVER AND METHODS THEREIN FOR VALIDATION OF A VIDEO SEQUENCE
A transmitter and a method therein for enabling validation of a video sequence comprising encoded image frames by providing the sequence with a data structure and a digital signature. The method comprises: performing lossless compression of each encoded image frame to obtain a respective losslessly compressed (LC) encoded image frame; identifying, as small LC encoded image frames, those LC encoded image frames each having a data size smaller than a predefined number of bytes; generating a data structure comprising: the small LC encoded image frames; and individual hashes of either all encoded image frames lacking a respective small frame or all other obtained LC encoded image frames being different from the small frames; generating a digital signature; and providing the data structure and the digital signature to the video sequence.
H04N 19/46 - Inclusion d’information supplémentaire dans le signal vidéo pendant le processus de compression
H04N 19/103 - Sélection du mode de codage ou du mode de prédiction
H04N 19/136 - Caractéristiques ou propriétés du signal vidéo entrant
H04N 19/167 - Position dans une image vidéo, p. ex. région d'intérêt [ROI]
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ