There is provided a computer implemented method of detecting data drift for changing a machine learning (ML) model, comprising: monitoring a target input dataset for being fed into the ML model, extracting a plurality of target candidate features from the target input dataset, applying a feature selection process for selecting a target subset of the plurality of target candidate features, accessing a historical input dataset previously determined as being suitable for being fed into the ML model, extracting a plurality of historical candidate features from the historical input dataset, applying the feature selection process for selecting a historical subset of the plurality of historical candidate features, computing a comparison between the target subset and the historical subset, and in response to the comparison meeting a requirement indicating a significant difference between the target subset and the historical subset, generating an indication for changing the ML model.
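The drift-detection flow above can be sketched as a short Python reduction. This is an illustrative sketch only, not the claimed method: variance ranking stands in for the unspecified feature selection process, and a Jaccard-overlap threshold stands in for the significance requirement; the names `top_k_by_variance`, `drift_detected`, `k` and `threshold` are all hypothetical.

```python
def top_k_by_variance(rows, k):
    """Rank candidate features by sample variance and keep the top k.

    Stands in for the feature selection process; any deterministic
    selector returning a feature subset would fit the same scheme.
    """
    n = len(rows)
    num_features = len(rows[0])
    scored = []
    for j in range(num_features):
        col = [r[j] for r in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        scored.append((var, j))
    scored.sort(reverse=True)
    return {j for _, j in scored[:k]}

def drift_detected(target_rows, historical_rows, k=2, threshold=0.5):
    """Flag drift when the target and historical subsets overlap too little."""
    target_subset = top_k_by_variance(target_rows, k)
    historical_subset = top_k_by_variance(historical_rows, k)
    jaccard = len(target_subset & historical_subset) / len(target_subset | historical_subset)
    return jaccard < threshold
```

When drift is flagged, the indication for changing (e.g., retraining) the ML model would be generated downstream.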
There is provided a method of generating a reduced ML model, comprising: obtaining a sample dataset, extracting global features from the sample dataset, applying a feature selection process for selecting a first subset of the global features, analyzing a classification performance of the ML model fed the first subset, to identify an error in classification by the ML model, identifying a subset of the sample dataset related to the error, extracting second features from the subset of the sample dataset, applying the feature selection process for selecting a second subset of the second features, and creating a reduced version of the ML model, comprising an ensemble of: a first ML model component trained by applying the first subset of the global features to the sample dataset, and a second ML model component trained by applying the second subset of the second features to the subset of the sample dataset related to the error.
A system and a method for augmenting a dataset comprising textual content, using instructions that cause a conversational language model to create variations of text items by changing text properties such as length, style, terminology, dialect, rhyming and the like. The method may also be used with combined prompts and iteratively.
A system includes a display and a camera module configured to capture forward real video and to output the forward real video on the display. A light source may be configured to generate light, and a navigation module may receive instructions regarding locating a target destination and may cause the light to be focused into a beam directed at the target destination as a user views both the forward real video and the beam on the display. When the user reaches the desired location or orientation, the target destination is visibly illuminated by the beam of light.
Simultaneous location and mapping with improved accuracy by applying segmentation on images and selecting patches having characteristics which are likely to enable reliable matching is disclosed. Geometric matching may be applied for generating sets of matching patches from different images. The camera distances and angulation may be calculated between estimated locations of patches in two or three dimensions. The location and mapping may be used for indoor and outdoor navigation, mapping, robotics, drones, localization, aerial image matching, panorama stitching and the like.
There is provided a system for monitoring temperature changes associated with risk of a medical condition of a subject, comprising: at least one processor in communication with a temperature sensor configured for sensing an ambient temperature, the at least one processor executing a code for: computing a trend of a plurality of temperature measurements of the ambient temperature obtained over a time interval by the temperature sensor, wherein the trend includes differentials between successive temperature measurements and preceding temperature measurements, and analyzing the trend including the differentials to determine risk of the medical condition of the subject.
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics; for computer-aided diagnosis, e.g. based on medical expert systems
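The trend computation described above reduces to differencing successive readings. The sketch below is a minimal illustration, assuming a hypothetical risk rule (a sustained rise over a sliding window); the names `temperature_trend`, `at_risk`, `rise_threshold` and `window` are not from the source.

```python
def temperature_trend(measurements):
    """Differentials between each reading and its predecessor."""
    return [b - a for a, b in zip(measurements, measurements[1:])]

def at_risk(measurements, rise_threshold=0.5, window=3):
    """Flag risk when the last `window` differentials are all rises whose
    sum exceeds `rise_threshold` (an illustrative rule, not the patent's)."""
    d = temperature_trend(measurements)[-window:]
    return len(d) == window and all(x > 0 for x in d) and sum(d) > rise_threshold
```

A real system would also filter sensor noise before differencing; the differentials here are taken directly from the raw readings.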
7.
DETECTION OF AN ARTIFICIAL IRIS FOR SPOOFING AN IRIS RECOGNITION SYSTEM
There is provided a computer implemented method of detecting an attempt to breach security of an iris recognition system by an artificial iris, comprising: analyzing at least a portion of a limbal ring depicted in an image of an iris of an individual captured by an imaging sensor at a wavelength range within at least one of near infrared (NIR) and short wave infrared (SWIR), and detecting likelihood of an artificial iris worn by the individual according to the analysis of the at least the portion of the limbal ring.
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 40/18 - Eye characteristics, e.g. of the iris
8.
FILTERING A STEERABLE LASER BEAM DURING REAL-TIME OBJECT DETECTION AND TARGETING IN PHYSICAL SPACE
There is provided a system for guiding a steerable laser to a target object, comprising: at least one processor executing a code for: for each image of a plurality of images captured by an image sensor and obtained in a plurality of iterations: filtering an illumination of a steerable laser overlapping a target object or in near proximity to the target object from the image, to create a filtered image, detecting the target object on the filtered image by a detector model, and generating instructions for at least one of: further directing the steerable laser for illumination of the target object, and maintaining the illumination of the steerable laser on the target object.
G06T 5/20 - Image enhancement or restoration using local operators
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof; for generating image signals from different wavelengths; for generating image signals from visible and infrared light wavelengths
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof; provided with illuminating means
H04N 23/61 - Control of cameras or camera modules based on recognised objects
H04N 23/81 - Camera processing pipelines; Components thereof; for suppressing or minimising disturbance in the image signal generation
9.
COMPUTER RESOURCE MONITORING BY PROCESSING IMAGES USING A MODEL COMPRISING LARGE LANGUAGE MODELS
A system and a method for classifying images pertaining to computer resource usage of a user, using an ensemble of close-ended questions and a neural network based language model. The method may be used as an aid to monitor and/or enforce computer usage policies in industrial organizations, government services, academia, education, and/or the like.
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
10.
AUTOMATIC TEXTUAL DOCUMENT EVALUATION USING LARGE LANGUAGE MODELS
A system and a method for analyzing conversation text using a flow of query prompts, an ensemble of close-ended questions and a neural network based language model. The method may be used as a tutor or examination bot, for mental coherency screening, data mining, clustering groups of trainees or customers according to training needs or interests, and the like.
There is provided a system for analyzing images for facial expression recognition, comprising: at least one short wave infrared (SWIR) illumination element that generates SWIR illumination for illumination of a face of a person, at least one SWIR sensor that captures at least one SWIR image of the face under the SWIR illumination, and a non-transitory medium storing program instructions, which, when executed by a processor, cause the processor to analyze the at least one SWIR image for recognizing a facial expression depicted by the face.
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof; for generating image signals from different wavelengths; for generating image signals from visible and infrared light wavelengths
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof; provided with illuminating means
H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
12.
DOCUMENT CLASSIFICATION USING LARGE LANGUAGE MODELS
A system and a method for classifying text from a document using an ensemble of close-ended questions and a neural network based large language model, which might have been trained for different purposes. The method comprises feeding the language model with queries based on the text and the ensemble, and post-processing output of the language model using a knowledge representation rule based model or an additional machine learning model. The method may be used as a spam or phishing filter, to detect sensitive content according to various criteria, and/or the like.
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Disclosed herein is a method of improving computation of a 3 dimensional (3D) model of an object, comprising adjusting one or more illumination parameters of one or more light sources illuminating a target object having one or more high reflection surfaces, and operating one or more image sensors to capture a plurality of images depicting the target object from a plurality of different viewpoints while the object is illuminated by the one or more light sources. The plurality of images is used by one or more processors to compute a 3D model of the target object based on a plurality of features extracted from the plurality of images.
A method of authenticating video, comprising using a computing device of a receiving party, for: receiving a video showing a light pattern being projected onto an area captured in the video, the video being associated with a time point, extracting the light pattern from the received video, and verifying authenticity of the received video based on the extracted light pattern, on a reference light pattern identifier, and on a time difference between a time of receipt of the video by the receiving party and the time point.
A method of authenticating video, the method comprising steps performed by a server computer, the steps comprising: receiving a first request for an identifier from a computing device of a first party, selecting the identifier based on an identity of the first party and on a time point, the selected identifier identifying a light pattern, providing the computing device of the first party with the identifier, for the first party to use for generating and projecting the light pattern onto an area while being captured in a video, receiving a second request for the identifier from a computing device of a second party, and communicating the identifier to the computing device of the second party, the second party being in receipt of the video, for the second party to use for verifying authenticity of the video.
G06V 10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces; using context analysis; Selection of dictionaries
G06V 20/40 - Scenes; Scene-specific elements; in video content
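The two-sided protocol above (server issues a time-windowed, identity-bound pattern identifier; the receiving party verifies the extracted pattern and the receipt delay) can be sketched as follows. This is an assumed reduction, not the patented scheme: the HMAC derivation, the `SERVER_KEY`, window and delay constants, and the function names are all hypothetical stand-ins for how the server might select an identifier from an identity and a time point.

```python
import hashlib
import hmac

SERVER_KEY = b"server-secret"   # hypothetical secret held by the server
WINDOW_SECONDS = 60             # identifier validity window (assumed)
MAX_DELAY_SECONDS = 120         # max allowed gap between time point and receipt

def pattern_identifier(party_id: str, time_point: int) -> str:
    """Select the light-pattern identifier from identity and time window."""
    window = time_point // WINDOW_SECONDS
    msg = f"{party_id}:{window}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()[:16]

def verify(extracted_id: str, party_id: str, time_point: int, receipt_time: int) -> bool:
    """Verify the pattern extracted from the video and bound the delay."""
    if abs(receipt_time - time_point) > MAX_DELAY_SECONDS:
        return False
    return hmac.compare_digest(extracted_id, pattern_identifier(party_id, time_point))
```

In this sketch both parties query the same derivation; in the patented flow the server performs it and communicates only the identifier to each party.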
A system and associated method include a housing that includes a sensor, a transmitter, and a power source. A solar panel is configured to receive light energy from an artificial light source to generate an electrical charge. An electrical connection may convey the electrical charge to the power source to energize the sensor, and a fastener fixates the solar panel within 10 cm of the artificial light source. The solar panel may be further positioned with respect to the housing and the artificial light source. For instance, the solar panel may be positioned so as to minimally obstruct visible light from the artificial light source within an area to be illuminated. The housing may comprise part of an Internet of Things (IoT) device.
A method of self-localizing a vehicle with respect to surrounding objects, comprising obtaining an approximated geolocation of the vehicle, retrieving mapping data comprising a geolocation of one or more stationary objects located in an area surrounding the approximated geolocation, receiving imagery data of a surrounding environment of the vehicle captured by a plurality of distinct imaging sensors deployed in the vehicle, applying one or more trained machine learning models to identify one or more of the stationary objects in the imagery data, computing a relative positioning of the vehicle with respect to one or more of the stationary objects based on an orientation of each of the plurality of imaging sensors with respect to the stationary object(s), computing an absolute positioning of the vehicle based on the relative positioning and the geolocation of the stationary object(s), and outputting the vehicle's absolute positioning.
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
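The final step (absolute positioning from relative positioning plus landmark geolocation) can be sketched with simple plane geometry. This assumes a local flat-earth coordinate frame in metres and hypothetical function names; the patented method's actual coordinate handling is not specified in the abstract.

```python
import math

def absolute_position(landmark_geo, bearing_deg, distance_m):
    """Vehicle position from one stationary object's geolocation and the
    bearing/range to it estimated from the imaging sensors.

    The landmark is seen at (bearing, distance) FROM the vehicle, so the
    vehicle sits at the landmark minus that offset.
    """
    dx = distance_m * math.sin(math.radians(bearing_deg))
    dy = distance_m * math.cos(math.radians(bearing_deg))
    return (landmark_geo[0] - dx, landmark_geo[1] - dy)

def fuse(estimates):
    """Average the position estimates obtained from multiple landmarks."""
    xs, ys = zip(*estimates)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

With several identified stationary objects, each yields an estimate and the fused average reduces per-landmark error.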
There is provided a system of cryptography for securing data on a blockchain, comprising: at least one hardware processor executing a code for: obtaining at least one encrypted data item, encrypted with a public key compliant with a homomorphic encryption mechanism, feeding the at least one encrypted data item into a computational process that computationally processes the at least one encrypted data item with computations compliant with the homomorphic encryption mechanism, and providing at least one encrypted outcome of the computational process to a smart contract for posting on a blockchain, wherein the at least one encrypted outcome is compliant with the homomorphic encryption mechanism and decryptable with a private key corresponding to the public key.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
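The homomorphic computation step can be illustrated with textbook Paillier encryption, where multiplying ciphertexts adds the underlying plaintexts. This is a generic example of the mechanism, not the system's actual scheme, and it uses tiny toy primes that are utterly insecure; all names and parameters are illustrative.

```python
import math

def paillier_keys(p=1789, q=1861):
    """Tiny textbook Paillier keypair (toy primes; NOT secure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m, r=17):
    """Encrypt m < n with randomizer r coprime to n."""
    n, g = pub
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n
```

The product of two ciphertexts decrypts to the sum of the plaintexts, so a processor can compute on encrypted items and hand the still-encrypted outcome to the smart contract.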
There is provided a retroreflector device, comprising: an incident object made of a material transparent to electromagnetic radiation, the incident object designed to refract an incident ray hitting an incident surface to generate a refracted ray that hits a back surface, and a retroreflective surface positioned in proximity to the back surface of the incident object, wherein the retroreflective surface and the incident object are configured to refract the incident ray to generate the refracted ray for hitting the retroreflective surface at an angle of incidence below a threshold.
There is provided a computer implemented method of computing a location of an object, comprising: accessing a wide field of view (wFOV) image captured by a wFOV image sensor located relative to an object, analyzing the wFOV image to identify a predefined feature, wherein the predefined feature indicates a low accuracy location of the object, capturing a high resolution image by a high resolution image sensor located relative to the object, the high resolution image depicting the predefined feature, and computing a high accuracy location of the object according to an analysis of the predefined feature and according to a correlation between a location and orientation of the wFOV image sensor and the high resolution image sensor.
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
H04N 23/61 - Control of cameras or camera modules based on recognised objects
H04N 23/74 - Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
G08G 9/00 - Traffic control systems for craft where the kind of craft is irrelevant or unspecified
There is provided a system for imaging of a scene, comprising: at least one short wave infrared (SWIR) illumination element that generates SWIR illumination at a SWIR wavelength range, at least one filter that filters out electromagnetic radiation at wavelengths which are mostly non-absorbed by water vapor in air depicted in the scene, and at least one SWIR sensor that captures the SWIR illumination of the SWIR wavelength range which is passed by the at least one filter and generates at least one SWIR image of the scene.
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 20/20 - Scenes; Scene-specific elements; in augmented reality scenes
There is provided a system for analyzing images for facial expression recognition, comprising: at least one short wave infrared (SWIR) illumination element that generates SWIR illumination for illumination of a face of a person, at least one SWIR sensor that captures at least one SWIR image of the face under the SWIR illumination, and a non-transitory medium storing program instructions, which, when executed by a processor, cause the processor to analyze the at least one SWIR image for recognizing a facial expression depicted by the face.
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof; for generating image signals from different wavelengths; for generating image signals from visible and infrared light wavelengths
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof; provided with illuminating means
H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
23.
Image analysis for controlling movement of an object
There is provided a computer implemented method of controlling movement of an object, comprising: accessing a current image of a surface relative to an object at a current location, wherein an imaging sensor is set to capture the current image depicting an overlap with a previously captured image of the surface when the object was at a previous location, registering the current image to the overlap of the previously captured image, computing the current location of the object relative to a reference location according to an analysis of the registration, and feeding the current location into a controller for controlling movement of the object.
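The registration step above can be illustrated in one dimension: estimate the shift that best aligns two overlapping intensity profiles, then advance the location estimate by the registered displacement. This is a deliberately simplified stand-in (1-D, integer shifts, sum-of-squares matching) for the patented image registration; `register_shift`, `update_location` and `metres_per_pixel` are hypothetical names.

```python
def register_shift(prev_row, curr_row, max_shift=3):
    """Estimate the integer shift s minimising squared error between
    prev_row[i] and curr_row[i - s]; a 1-D stand-in for registering the
    current image to the overlap of the previously captured image."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(prev_row[i], curr_row[i - s])
                 for i in range(len(prev_row)) if 0 <= i - s < len(curr_row)]
        if not pairs:
            continue
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

def update_location(location, prev_row, curr_row, metres_per_pixel=0.01):
    """Advance the estimated location by the registered displacement."""
    return location + register_shift(prev_row, curr_row) * metres_per_pixel
```

Chaining such updates from a reference location yields the current location fed to the movement controller; errors accumulate, which is why the method anchors everything to a reference.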
There is provided a vehicle sensor system, comprising: a plurality of sensors with mostly overlapping fields of view that simultaneously acquire temporary images, a processing circuitry that: analyzes the temporary images to identify at least one blocked image area in at least one temporary image of the plurality of temporary images which is less or not blocked in at least one spatially corresponding image area of at least one other temporary image of the plurality of temporary images, selects visual data from the at least one spatially corresponding image area over corresponding visual data from the at least one blocked image area, merges the plurality of temporary images into a final image using the selected visual data and excludes the at least one blocked image area, and an output interface that forwards the final image to a vehicle controller.
G06V 10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis
G06T 7/70 - Determining position or orientation of objects or cameras
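The merge step can be sketched as a per-pixel selection across co-registered images: take the first co-located pixel whose mask says it is not blocked. This assumes the images are already registered and each has a boolean blockage mask; the function and its fallback policy are illustrative, not the patented circuitry.

```python
def merge_images(images, blocked_masks):
    """Per-pixel merge of co-registered images: pick the first pixel not
    flagged as blocked; fall back to the first image if all are blocked."""
    h, w = len(images[0]), len(images[0][0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for img, mask in zip(images, blocked_masks):
                if not mask[y][x]:
                    out[y][x] = img[y][x]
                    break
            else:  # every view blocked at this pixel
                out[y][x] = images[0][y][x]
    return out
```

A real implementation would also blend seams and handle parallax between sensors rather than substituting pixels directly.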
There is provided a method of re-classifying a clinically significant feature of a medical image as an artifact, comprising: feeding a target medical image captured by a specific medical imaging sensor at a specific setup into a machine learning model, obtaining a target feature map as an outcome of the machine learning model, wherein the target feature map includes target features classified as clinically significant, analyzing the target feature map with respect to sample feature map(s) obtained as an outcome of the machine learning model fed a sample medical image captured by at least one of: the same specific medical imaging sensor and the same specific setup, wherein the sample feature map(s) includes sample features classified as clinically significant, identifying target feature(s) depicted in the target feature map having attributes matching sample feature(s) depicted in the sample feature map(s), and re-classifying the identified target feature(s) as an artifact.
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces; using context analysis; Selection of dictionaries
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics; for computer-aided diagnosis, e.g. based on medical expert systems
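The matching step above can be sketched by comparing feature attributes across the target and sample feature maps: a target feature that recurs (within tolerance) in images from the same sensor/setup is re-labelled as an artifact. The attribute tuple `(x, y, size)`, the tolerance, and the function name are assumptions for illustration.

```python
def reclassify_artifacts(target_feats, sample_feats, tol=2.0):
    """Re-label target features whose (x, y, size) attributes match a
    sample-map feature within `tol`: recurring features across images
    from the same sensor/setup are treated as artifacts, not pathology."""
    def close(a, b):
        return all(abs(u - v) <= tol for u, v in zip(a, b))
    out = []
    for f in target_feats:
        label = ("artifact" if any(close(f, s) for s in sample_feats)
                 else "clinically_significant")
        out.append((f, label))
    return out
```

The intuition: a lesion should not appear at the same place and scale in an unrelated patient's image from the same scanner; a sensor defect will.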
Disclosed herein are methods and systems for determining liveness of a user, comprising analyzing visual content of a screen of a client device used by a user to access an online service, adjusting one or more visual objects displayed on the screen of the client device according to dynamically changing patterns, capturing a sequence of images depicting one or more reflecting surfaces associated with the user viewing the screen while the visual objects are displayed, analyzing the images to identify a reflection of the displayed visual objects in the reflecting surfaces and verifying liveness of the user based on one or more of a plurality of reflection attributes of the identified reflection.
Disclosed herein are methods and systems for encoding data in composite patterns such that the encoded data is perceptible in one or more infrared spectral ranges while significantly imperceptible in the visible light spectral range, by encoding the data in one or more first partial patterns and/or in one or more second partial patterns of the composite pattern, where the first partial pattern(s) is painted using a first print material and the second partial pattern(s) is painted using a second print material. The first and second print materials are characterized by reflecting substantially similar light in the visible light spectral range and significantly different light in the infrared spectral range(s), such that the first and second patterns are indistinguishable in the visible light spectral range while highly distinguishable in the infrared spectral range(s). Further disclosed are methods and systems for decoding the composite patterns to decode and extract the encoded data.
G06K 7/12 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation; using a selected wavelength, e.g. to sense red marks and ignore blue marks
G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation; using light without selection of wavelength, e.g. sensing reflected white light
A method of autonomous vehicle control, comprising: receiving an image of a lenticular human-imperceptible marker embedded in an element of an environment that an autonomous vehicle is moving in, the marker having a pattern usable for determining positional data of the moving vehicle, the image captured using human-invisible light, analyzing the received image of the human-imperceptible marker, and controlling the autonomous vehicle based on the analyzed image of the human-imperceptible marker.
Disclosed herein are methods and systems for detecting road marking expressed using alternating infrared reflective tiles comprising high infrared reflective tiles and low infrared reflective tiles painted on a road surface using paint material(s) characterized by: (1) reflecting light in the visible light spectral range deviating less than a first value from the light reflected by the road surface and (2) reflecting light in an infrared spectral range deviating more than a second value from the light reflected by the road surface. Infrared image(s) and visible light image(s) of the road surface which are registered to each other may be analyzed to compute an infrared reflective value and a luminance value for each pixel respectively. A ratio may be computed between the infrared reflective value and the luminance value of corresponding pixels to identify high and low infrared reflective tiles in pixels having a ratio exceeding a third value.
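The per-pixel ratio test can be sketched directly. This simplifies the patented analysis to a single threshold separating high-IR pixels; the threshold value and the function name are illustrative assumptions, and the two images are assumed already registered.

```python
def classify_tiles(ir_img, lum_img, high_ratio=1.5):
    """For co-registered IR and luminance images, compute the per-pixel
    ratio of IR reflectance to luminance and mark pixels whose ratio
    exceeds the threshold as belonging to high-IR-reflective tiles."""
    tiles = []
    for ir_row, lum_row in zip(ir_img, lum_img):
        tiles.append([(ir / max(lum, 1e-9)) > high_ratio
                      for ir, lum in zip(ir_row, lum_row)])
    return tiles
```

Dividing by luminance normalizes out ambient lighting, so the same tile pattern is recovered day or night.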
Disclosed herein are methods and systems for enhancing road markings using Infrared (IR) retroreflective spherical elements, comprising immersing a plurality of IR retroreflective spherical elements in one or more paint materials to produce a composition applied to paint road markings on one or more surfaces of one or more road segments. Each of the plurality of IR retroreflective spherical elements is at least partially transparent in the visible light spectral range and in one or more infrared spectral ranges and is at least partially coated with one or more IR reflective materials characterized by (1) reflecting more than a first value of light in the one or more infrared spectral ranges, and (2) transferring more than a second value of light in the visible light spectral range. The painted road markings express driving information relating to the one or more road segments.
E01F 9/518 - Road surface markings; Kerbs or road edgings, specially adapted for alerting road users; characterised by the road surface marking material, e.g. comprising additives for improving friction or reflectivity; Methods of forming, installing or applying markings in, on or to road surfaces; formed in situ, e.g. by painting, by casting into the road surface or by deforming the road surface
31.
Imperceptible road markings to support automated vehicular systems
Disclosed herein are methods and systems for painting driving markings invisible in the visible light spectrum, comprising generating driving assistance markings expressing driving information relating to one or more road segments, computing instructions for painting the driving assistance markings on one or more elements of the road segment(s) using one or more paint materials characterized by: (1) reflecting light in a visible light spectral range deviating less than a first value from the visible light spectral range reflected by a surface of the element(s) and (2) reflecting light in an infrared spectral range deviating more than a second value from the infrared spectral range reflected by the surface of the element(s), and outputting the painting instructions for applying the one or more paint materials on the element(s) according to the instructions such that the driving assistance markings are visible in the infrared spectrum and significantly invisible in the visible spectrum.
Disclosed herein are methods and systems for painting driving assistance markings using one or more paint materials which are visible in a plurality of light spectral ranges, in particular, visible light and one or more infrared light spectral ranges. Further disclosed are methods and systems for analyzing images captured in multiple spectral ranges to identify the driving assistance markings and/or part thereof in a plurality of different spectral ranges and identify aggregated driving assistance markings by aggregating the driving assistance markings identified in the plurality of different spectral ranges. Also disclosed herein are methods and systems for presenting and detecting enhanced driving assistance markings on one or more elements under one or more paint materials which are highly transparent in one or more infrared spectral ranges while reflecting visible light conforming to a color of the element(s) surface thus not affecting appearance of the element(s) in the visible light spectrum.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
E01F 9/506 - Road surface markings; Kerbs or road edgings, specially adapted for alerting road users; characterised by the road surface marking material, e.g. comprising additives for improving friction or reflectivity; Methods of forming, installing or applying markings in, on or to road surfaces
E01C 23/20 - Devices for marking-out, applying or forming traffic or like markings on finished paving; Protecting fresh markings; for forming markings in situ
G05D 1/02 - Control of position or course in two dimensions
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/143 - Sensing or illuminating at different wavelengths
Disclosed herein are methods and systems for detecting dynamic objects using road painted patterns perceptible in the infrared spectral range, comprising receiving images captured in one or more infrared spectral ranges depicting a road segment painted with background patterns which are highly imperceptible in the visible light spectrum while highly visible in one or more infrared spectral ranges, analyzing the images to detect one or more dynamic objects located in front of the background patterns, the light reflected by the one or more dynamic objects in the one or more infrared spectral ranges deviating from the light reflected by the one or more background patterns, and computing a location of the one or more detected objects. Further disclosed are methods and systems for calibration of systems and/or sensors based on reference markings which are highly imperceptible in the visible light spectrum while highly visible in the infrared spectral range(s).
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
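The occlusion-based detection idea above can be sketched briefly: pixels whose infrared reflectance deviates from the known painted background pattern are flagged, and the occluding object is located at their centroid. All names, the grid format, and the deviation threshold are illustrative assumptions:

```python
# Hypothetical sketch: flag pixels whose observed IR reflectance deviates
# from the expected painted background pattern, then report the occluding
# object's location as the centroid of the deviating region.

def detect_object(observed, expected, threshold=0.2):
    deviating = [
        (r, c)
        for r, row in enumerate(observed)
        for c, val in enumerate(row)
        if abs(val - expected[r][c]) > threshold
    ]
    if not deviating:
        return None                      # background pattern fully visible
    cy = sum(r for r, _ in deviating) / len(deviating)
    cx = sum(c for _, c in deviating) / len(deviating)
    return (cy, cx)

expected = [[0.9, 0.1], [0.9, 0.1]]        # painted IR background pattern
observed = [[0.9, 0.1], [0.4, 0.4]]        # pedestrian occludes bottom row
print(detect_object(observed, expected))   # centroid of deviating pixels
```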
An article having an invisible infrared pattern is disclosed. The article includes at least one infrared pattern printed onto a surface. The infrared pattern includes regions of high absorption and high reflection for a plurality of wavelengths of infrared radiation ranging between 700 and 2000 nm. A coating is overlaid over the infrared pattern. The coating is made of a material and has a thickness that is penetrable by infrared radiation and that has an average opacity of at least 20 for light in the visible range.
G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G08C 23/04 - Non-electric signal transmission systems, e.g. optical systems using light waves, e.g. infrared
35.
Liveness detection in an interactive video session
Disclosed herein are methods and systems for determining whether a user engaged in an interactive video session is a genuine user or a potential impersonator by analyzing a plurality of consecutive images depicting the user while engaged in the video session to identify one or more dynamic facial patterns in the face of the user while the user's lips are moving. Each such dynamic facial pattern may express a movement of one or more of a plurality of wrinkles and/or other dynamic facial features (e.g., nostrils, distance between nostrils, ear, skin portion, muscle, etc.) in the face of the user. The user may then be determined to be genuine or not based on a comparison between the identified dynamic facial pattern(s) and one or more reference dynamic facial patterns.
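One way to realize the comparison step is to summarise a dynamic facial pattern as the displacements of tracked landmarks between consecutive frames and score it against an enrolled reference with cosine similarity. This is a minimal sketch; the signature format, landmark choice, and threshold are illustrative assumptions:

```python
# Hypothetical sketch: encode a dynamic facial pattern as per-frame landmark
# displacements while the lips move, then compare against a stored reference
# pattern using cosine similarity.

import math

def motion_signature(frames):
    """Flatten landmark displacements between consecutive frames."""
    sig = []
    for prev, cur in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, cur):
            sig.extend([x1 - x0, y1 - y0])
    return sig

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_genuine(frames, reference_sig, threshold=0.8):
    return cosine(motion_signature(frames), reference_sig) >= threshold

# Two frames, two landmarks (e.g. a wrinkle endpoint and a nostril corner).
frames = [[(0.0, 0.0), (1.0, 1.0)], [(0.1, 0.0), (1.0, 1.2)]]
reference = [0.1, 0.0, 0.0, 0.2]           # enrolled dynamic pattern
print(is_genuine(frames, reference))       # prints True
```

A replayed photo or mask tends to produce little or no correlated wrinkle motion, so its signature scores poorly against the enrolled pattern.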
There is provided a system for computing a secure statistical classifier, comprising: at least one hardware processor executing a code for: accessing code instructions of an untrained statistical classifier, accessing a training dataset, accessing a plurality of cryptographic keys, creating a plurality of instances of the untrained statistical classifier, creating a plurality of trained sub-classifiers by training each of the plurality of instances of the untrained statistical classifier by iteratively adjusting adjustable classification parameters of the respective instance of the untrained statistical classifier according to a portion of the training data serving as input and a corresponding ground truth label, and at least one unique cryptographic key of the plurality of cryptographic keys, wherein the adjustable classification parameters of each trained sub-classifier have unique values computed according to corresponding at least one unique cryptographic key, and providing the statistical classifier, wherein the statistical classifier includes the plurality of trained sub-classifiers.
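The key-dependent training described above can be sketched with a toy ensemble: each sub-classifier derives a deterministic seed from its own cryptographic key, so the trained instances end up with unique parameter values. The perceptron model, hash-based seeding, and learning rate are illustrative assumptions, not the claimed construction:

```python
# Hypothetical sketch: each sub-classifier starts from an initial state
# derived deterministically from its own cryptographic key, so every
# trained instance has unique parameter values.

import hashlib
import random

def train_sub_classifier(key: bytes, data):
    """Train a toy one-feature perceptron whose init is seeded by `key`."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)   # key-unique init
    for _ in range(200):                             # perceptron updates
        for x, label in data:
            pred = 1 if w * x + b > 0 else 0
            w += 0.1 * (label - pred) * x
            b += 0.1 * (label - pred)
    return w, b

data = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]
subs = [train_sub_classifier(k, data) for k in (b"key-A", b"key-B", b"key-C")]

def classify(x):
    """Majority vote over the key-seeded sub-classifiers."""
    votes = sum(1 if w * x + b > 0 else 0 for w, b in subs)
    return 1 if votes > len(subs) / 2 else 0

print(classify(0.8), classify(-0.8))   # expected: 1 0
```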
There is provided a system for measuring a physiological parameter of a person indicative of physiological pathology, comprising: a plurality of remote non-contact sensors, each of a different type of sensing modality, at least one hardware processor executing a code for: simultaneously receiving over a time interval, from each of the plurality of remote non-contact sensors monitoring a person, a respective dataset, extracting, from each respective dataset, a respective sub-physiological parameter of a plurality of sub-physiological parameters, analyzing a combination of the plurality of sub-physiological parameters, and computing a physiological parameter indicative of physiological pathology according to the analysis, wherein an accuracy of the physiological parameter computed from the combination is higher than an accuracy of the physiological parameter independently computed using any one of the plurality of sub-physiological parameters.
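The claim that the combined estimate is more accurate than any single modality has a standard statistical counterpart: inverse-variance weighting of independent estimates always yields a lower variance than the best individual sensor. A minimal sketch, with illustrative heart-rate numbers:

```python
# Hypothetical sketch: fuse estimates of one physiological parameter from
# several non-contact modalities (e.g. radar, camera rPPG, thermal) by
# inverse-variance weighting; the fused variance is below every input's.

def fuse(estimates):
    """estimates: list of (value, variance) from independent sensors."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    variance = 1.0 / total        # always <= the smallest input variance
    return value, variance

readings = [(72.0, 4.0), (75.0, 9.0), (70.0, 16.0)]   # bpm, variance
value, variance = fuse(readings)
print(round(value, 2), round(variance, 2))   # → 72.49 2.36
```

Here the fused variance (about 2.36) is below the best single sensor's (4.0), matching the abstract's accuracy claim for independent, unbiased estimates.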
There is provided a computer implemented method of measuring a temperature of a subject, comprising: receiving a sequence of a plurality of thermal images of a subject captured by a thermal sensor, analyzing the sequence of the plurality of thermal images to identify at least one target thermal image depicting an upper region of a tongue of the subject, analyzing the at least one target thermal image to identify an estimated temperature of the upper region of the tongue, and providing the estimated temperature of the upper region of the tongue.
G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
G01J 5/00 - Radiation pyrometry, e.g. infrared or optical thermometry
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G01J 5/48 - Thermography; Techniques using wholly visual means
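The frame-selection-then-estimation flow above can be sketched as follows: scan the thermal sequence, keep the frame whose mouth region is warmest (mouth open, upper tongue exposed), and report a robust statistic over its hottest pixels. The ROI format and the median-of-hottest estimator are illustrative assumptions:

```python
# Hypothetical sketch: pick the thermal frame where the mouth region is
# warmest (tongue exposed) and estimate temperature as the median of the
# hottest pixels in that region.

def estimate_tongue_temp(frames, roi):
    """frames: list of 2-D temperature grids; roi: (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    best = None
    for frame in frames:
        pixels = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        pixels.sort()
        hottest = pixels[-3:]                 # top pixels in the ROI
        est = hottest[len(hottest) // 2]      # median resists outliers
        if best is None or est > best:
            best = est
    return best

mouth_closed = [[33.0, 33.5], [33.2, 33.1]]
mouth_open   = [[36.8, 36.9], [36.7, 33.0]]   # upper tongue exposed
print(estimate_tongue_temp([mouth_closed, mouth_open], (0, 2, 0, 2)))  # → 36.8
```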
A system for processing digital images comprising: at least one remote hardware processor; and at least one device, comprising at least one processing circuitry configured for: receiving from at least one image sensor, electrically coupled to the processing circuitry, at least one digital image captured by the at least one image sensor; partitioning at least one object, identified in the at least one digital image, into a plurality of object segments; replacing in the at least one digital image each of the plurality of object segments with a schematic segment illustrating the respective object segment, to produce at least one schematic image; and sending the at least one schematic image to the remote hardware processor; wherein the remote hardware processor is adapted to: receiving the at least one schematic image from the at least one device; analyzing the at least one schematic image to identify at least one behavioral pattern.
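The segment-replacement step can be sketched minimally: each object segment is replaced by a schematic placeholder (here, its bounding box filled with a segment label), so the image sent off-device preserves pose and shape while dropping identifying pixel detail. The data formats and label scheme are illustrative assumptions:

```python
# Hypothetical sketch: replace each detected object segment with a schematic
# placeholder (its bounding box filled with a segment label) before sending
# the image to a remote processor for behavioral analysis.

def schematize(image, segments):
    """segments: {label: [(r, c), ...]} pixel lists per object part."""
    schematic = [[0] * len(row) for row in image]   # drop raw pixel values
    for label, pixels in segments.items():
        rs = [r for r, _ in pixels]
        cs = [c for _, c in pixels]
        for r in range(min(rs), max(rs) + 1):        # fill bounding box
            for c in range(min(cs), max(cs) + 1):
                schematic[r][c] = label
    return schematic

image = [[7, 7, 0], [7, 7, 0], [0, 0, 0]]          # raw captured pixels
segments = {1: [(0, 0), (1, 1)]}                    # e.g. a "torso" segment
print(schematize(image, segments))
```

Only segment labels and extents leave the device; the remote processor can still track how the labeled shapes move over time.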
Provided herein are methods and systems for verifying a path of a mobile wireless device in a monitored space, comprising transmitting a device identification (ID) of the mobile wireless device while the mobile wireless device moves through the monitored space, receiving one or more location certificates transmitted, in response to reception of the device ID, by one or more wireless transceivers each deployed at a predefined location in the monitored space and having a limited transmission range, each location certificate comprising at least the device ID and a transceiver ID of the respective wireless transceiver, storing the one or more location certificates, and transmitting the one or more location certificates to one or more verification units configured to verify a path of the mobile wireless device in the monitored space estimated according to the predefined location of the one or more wireless transceivers identified by the transceiver ID extracted from the one or more location certificates.
H04W 12/00 - Security arrangements; Authentication; Protecting privacy or anonymity
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
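The verification step can be sketched as follows: the verifier maps each certificate's transceiver ID to its known position and checks both certificate integrity and the reconstructed sequence against an expected route. Certificate contents are simplified (no signatures) and all names are illustrative assumptions:

```python
# Hypothetical sketch: reconstruct a device's path from location
# certificates and verify it against known transceiver positions and
# an expected route.

def verify_path(certificates, transceiver_pos, expected_route, device_id):
    """certificates: list of (device_id, transceiver_id) in receipt order."""
    path = []
    for dev, trx in certificates:
        if dev != device_id or trx not in transceiver_pos:
            return False, []               # forged or unknown certificate
        path.append(transceiver_pos[trx])
    ok = path == [transceiver_pos[t] for t in expected_route]
    return ok, path

positions = {"TRX-1": (0, 0), "TRX-2": (0, 5), "TRX-3": (5, 5)}
certs = [("dev-42", "TRX-1"), ("dev-42", "TRX-2"), ("dev-42", "TRX-3")]
ok, path = verify_path(certs, positions, ["TRX-1", "TRX-2", "TRX-3"], "dev-42")
print(ok, path)
```

In the claimed scheme the certificates would also carry signatures from the transceivers, so the checks here stand in for cryptographic verification.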
There is provided a computer implemented method of automatically generating an adapted presentation of at least one candidate anomalous object detected from anatomical imaging data of a target individual, comprising: providing anatomical imaging data of the target individual acquired by an anatomical imaging device, analyzing the anatomical imaging data by a detection classifier for detecting at least one candidate anomalous object of the anatomical imaging data and a computed associated location thereof, computing, by a presentation parameter classifier, at least one presentation parameter for adapting a presentation of a sub-set of the anatomical imaging data including the at least one candidate anomalous object according to at least the location of the candidate anomalous object, and generating, according to the at least one presentation parameter, an adapted presentation of the sub-set of the anatomical imaging data including the at least one candidate anomalous object.
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
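One simple form a location-dependent presentation parameter could take is a crop window and zoom factor centred on the candidate, clamped to the image bounds. This toy sketch stands in for the claimed presentation parameter classifier; all names and the window size are illustrative assumptions:

```python
# Hypothetical sketch: derive presentation parameters (a crop window and a
# zoom factor) centred on the detected candidate's location, clamped so the
# window stays inside the image.

def presentation_params(location, image_size, window=64):
    """location: (row, col) of the candidate; image_size: (rows, cols)."""
    r, c = location
    rows, cols = image_size
    half = window // 2
    r0 = min(max(r - half, 0), rows - window)   # clamp to image bounds
    c0 = min(max(c - half, 0), cols - window)
    zoom = min(rows, cols) / window
    return {"crop": (r0, c0, r0 + window, c0 + window), "zoom": zoom}

params = presentation_params((30, 500), (512, 512), window=64)
print(params)
```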
42.
Tampering detection based on non-reproducible marks in a tampering evident element
Provided herein are methods, systems and computer program products for detecting tampering, comprising a sealing process and a seal verification process. The sealing process comprising analyzing a seal applied to seal an object as a tamper evident element, recording one or more manufacturing defects of the seal identified based on the analysis, each of the one or more manufacturing defects comprising one or more non-reproducible deviations from seal generation instructions used to produce the seal, and generating a signature comprising the one or more manufacturing defects. The seal verification process comprising obtaining the signature, analyzing the seal sealing the object, and determining whether the object has been tampered with based on a comparison between the analyzed seal and the signature.
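The two processes can be sketched minimally: sealing records the measured defects as a signature, and verification re-measures the seal and requires each recorded defect to reappear within a tolerance. Representing defects as (x, y, size) tuples and the matching tolerance are illustrative assumptions:

```python
# Hypothetical sketch: the sealing step records non-reproducible print
# defects as a signature; verification re-measures the seal and requires
# every recorded defect to reappear within a tolerance.

def make_signature(defects):
    """defects: list of (x, y, size) deviations measured on the fresh seal."""
    return sorted(defects)

def verify_seal(signature, observed, tol=0.5):
    if len(observed) != len(signature):
        return False                       # defects added or missing
    for ref, obs in zip(sorted(signature), sorted(observed)):
        if any(abs(a - b) > tol for a, b in zip(ref, obs)):
            return False                   # a recorded defect moved too far
    return True

sig = make_signature([(1.2, 3.4, 0.2), (7.8, 0.5, 0.1)])
print(verify_seal(sig, [(1.3, 3.4, 0.2), (7.8, 0.6, 0.1)]))   # prints True
print(verify_seal(sig, [(1.3, 3.4, 0.2)]))                    # prints False
```

Because the defects deviate from the seal generation instructions themselves, a counterfeiter reproducing the seal from those instructions cannot reproduce the signature.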
A system for detecting malicious software, comprising at least one hardware processor adapted to: execute a tested software object in a plurality of computing environments each configured according to a different hardware and software configuration; monitor a plurality of computer actions performed in each of the plurality of computing environments when executing the tested software object; identify at least one difference between the plurality of computer actions performed in a first of the plurality of computing environments and the plurality of computer actions performed in a second of the plurality of computing environments; and instruct a presentation of an indication of the identified at least one difference on a hardware presentation unit.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
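The core comparison above reduces to diffing the action logs recorded in differently configured environments; malware that fingerprints its environment (e.g. sandbox evasion) behaves differently across configurations, and the difference set exposes that. A minimal sketch with illustrative action strings:

```python
# Hypothetical sketch: run the tested object in two differently configured
# environments, record the actions observed in each, and report the
# symmetric difference - environment-dependent behaviour is suspicious.

def suspicious_differences(actions_env_a, actions_env_b):
    a, b = set(actions_env_a), set(actions_env_b)
    return {"only_in_a": sorted(a - b), "only_in_b": sorted(b - a)}

env_a = ["open config.sys", "read registry", "write temp.dat"]
env_b = ["open config.sys", "read registry", "connect 203.0.113.7:443"]
print(suspicious_differences(env_a, env_b))
```

An empty difference set suggests configuration-independent behaviour, while actions appearing in only one environment are surfaced for presentation to an analyst.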
44.
SYSTEM AND METHOD FOR DETECTING SUSPICIOUS ACTIONS OF A SOFTWARE OBJECT
A system for detecting malicious software, comprising at least one hardware processor adapted to: execute a tested software object in a plurality of computing environments each configured according to a different hardware and software configuration; monitor a plurality of computer actions performed in each of the plurality of computing environments when executing the tested software object; identify at least one difference between the plurality of computer actions performed in a first of the plurality of computing environments and the plurality of computer actions performed in a second of the plurality of computing environments; and instruct a presentation of an indication of the identified at least one difference on a hardware presentation unit.