Avigilon Patent Holding 1 Corporation

Canada

1-91 of 91 results for Avigilon Patent Holding 1 Corporation
IPC Class aggregations (top 5 shown)
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints (65)
  • G06K 9/62 - Methods or arrangements for recognition using electronic means (24)
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning (13)
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast (13)
  • G06F 17/30 - Information retrieval; Database structures therefor (9)

1.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 16545571
Grant Number 10706284
Status In Force
Filing Date 2019-08-20
First Publication Date 2019-12-12
Grant Date 2020-07-07
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
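
For readers unfamiliar with this patent family, the sketch below illustrates one way a vector representation could be assembled from a primitive event symbol stream and a phase space symbol stream, as the abstract describes. It is not taken from the patent: the symbol alphabets, the histogram encoding and the function names are assumptions made for illustration.

```python
# Hypothetical sketch: build a fixed-length vector from two symbol streams
# (primitive events and phase-space symbols) for one tracked object.
# The alphabets and the histogram encoding are illustrative assumptions.
from collections import Counter

PRIMITIVE_EVENTS = ["appear", "move", "stop", "turn", "disappear"]
PHASE_SPACE_SYMBOLS = ["slow", "fast", "accelerating", "decelerating"]

def histogram(stream, alphabet):
    """Normalized count of each symbol of the alphabet seen in the stream."""
    counts = Counter(s for s in stream if s in alphabet)
    total = sum(counts.values()) or 1
    return [counts[s] / total for s in alphabet]

def vector_representation(primitive_stream, phase_stream):
    """Concatenate the two normalized histograms into one feature vector."""
    return histogram(primitive_stream, PRIMITIVE_EVENTS) + \
           histogram(phase_stream, PHASE_SPACE_SYMBOLS)

if __name__ == "__main__":
    vec = vector_representation(
        ["appear", "move", "move", "stop"],
        ["slow", "accelerating", "fast", "fast"],
    )
    print(vec)  # 9-dimensional vector: 5 primitive-event bins + 4 phase-space bins
```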

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06N 20/00 - Machine learning
  • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06N 3/00 - Computing arrangements based on biological models
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

2.

Method and system for configurable security and surveillance systems

      
Application Number 16387499
Grant Number 10854068
Status In Force
Filing Date 2019-04-17
First Publication Date 2019-08-15
Grant Date 2020-12-01
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Hammadou, Tarik

Abstract

A method and system for a configurable security and surveillance system are provided. A configurable security and surveillance system may comprise at least one programmable sensor agent and/or at least one programmable content analysis agent. A plurality of processing features may be offered by the configurable security and surveillance system by programming configurable hardware devices in the programmable sensor agents and/or the programmable content analysis agents via a system manager. Device programming files may be utilized to program the configurable hardware devices. The device programming files may be encrypted and decryption keys may be requested to enable the programming of different processing features into the programmable sensor agents and/or the programmable content analysis agents. The device programming files and/or the decryption keys may be received via a network transfer and/or via a machine-readable media from an e-commerce vendor.
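
As a rough illustration of the decrypt-then-program step mentioned in the abstract, the snippet below uses the Fernet API from the Python cryptography package. The file contents, the key-request routine and the program_device hook are hypothetical placeholders, not the interface described in the patent.

```python
# Hypothetical sketch: decrypt an encrypted device programming file with a
# vendor-supplied key, then hand the plaintext bitstream to a (stubbed)
# device-programming routine. Names and flow are assumptions for illustration.
from cryptography.fernet import Fernet

def request_decryption_key() -> bytes:
    # Placeholder: in the described system the key would come from an
    # e-commerce vendor over the network or on machine-readable media.
    return Fernet.generate_key()

def program_device(bitstream: bytes) -> None:
    # Placeholder for loading the programming file into configurable hardware.
    print(f"programming device with {len(bitstream)} bytes")

def install_feature(encrypted_file: bytes, key: bytes) -> None:
    plaintext = Fernet(key).decrypt(encrypted_file)
    program_device(plaintext)

if __name__ == "__main__":
    key = request_decryption_key()
    encrypted = Fernet(key).encrypt(b"example FPGA bitstream")
    install_feature(encrypted, key)
```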

IPC Classes

  • G08B 29/18 - Prevention or correction of operating errors
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G08B 29/00 - Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
  • H04N 7/167 - Systems rendering the television signal unintelligible and subsequently intelligible

3.

Method and system for identifying an individual in a digital image using location meta-tags

      
Application Number 16271328
Grant Number 10776611
Status In Force
Filing Date 2019-02-08
First Publication Date 2019-08-08
Grant Date 2020-09-15
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.
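
A minimal sketch of the tagging flow summarized above, assuming face feature vectors are already available as NumPy arrays and that a simple cosine-similarity threshold decides whether a photo matches; the URLs and the 0.8 threshold are illustrative assumptions.

```python
# Hypothetical sketch: compare a known individual's feature vector against
# feature vectors extracted from a set of photos, and "tag" matches by
# recording each matching photo's URL. Threshold and data are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_photos(individual_vec, photos, threshold=0.8):
    """photos: list of (url, feature_vector). Returns URLs tagged as matches."""
    return [url for url, vec in photos
            if cosine_similarity(individual_vec, vec) >= threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    person = rng.normal(size=128)
    photos = [("https://example.com/photo1.jpg", person + 0.05 * rng.normal(size=128)),
              ("https://example.com/photo2.jpg", rng.normal(size=128))]
    print(tag_photos(person, photos))  # likely tags photo1 only
```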

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

4.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 16226496
Grant Number 10423835
Status In Force
Filing Date 2018-12-19
First Publication Date 2019-04-25
Grant Date 2019-09-24
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06N 20/00 - Machine learning
  • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

5.

Inter-trajectory anomaly detection using adaptive voting experts in a video surveillance system

      
Application Number 16147238
Grant Number 11386666
Status In Force
Filing Date 2018-09-28
First Publication Date 2019-01-31
Grant Date 2022-07-12
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David Samuel
  • Saitwal, Kishor Adinath

Abstract

A sequence layer in a machine-learning engine is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.
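
As an illustration of one component named in the abstract, the sketch below implements a small n-gram trie over label sequences with per-node entropy of the next-label distribution, which is the quantity a voting-experts style segmenter uses to place boundaries. It is a simplified stand-in, not the patent's sequence layer; the depth and example sequence are assumptions.

```python
# Hypothetical sketch of one piece of the described sequence layer: an n-gram
# trie over label sequences with per-node child-distribution entropy. The
# voting-experts segmentation itself is not reproduced here.
import math
from collections import defaultdict

class NGramTrie:
    def __init__(self, n=3):
        self.n = n
        self.counts = defaultdict(int)      # prefix tuple -> occurrence count
        self.children = defaultdict(set)    # prefix tuple -> next labels seen

    def add_sequence(self, labels):
        for i in range(len(labels)):
            for j in range(i + 1, min(i + self.n, len(labels)) + 1):
                prefix = tuple(labels[i:j])
                self.counts[prefix] += 1
                if j < len(labels):
                    self.children[prefix].add(labels[j])

    def entropy(self, prefix):
        """Shannon entropy of the next-label distribution after `prefix`."""
        total = sum(self.counts[prefix + (c,)] for c in self.children[prefix])
        if total == 0:
            return 0.0
        probs = [self.counts[prefix + (c,)] / total for c in self.children[prefix]]
        return -sum(p * math.log2(p) for p in probs if p > 0)

if __name__ == "__main__":
    trie = NGramTrie(n=3)
    trie.add_sequence(["A", "B", "C", "A", "B", "D"])
    print(trie.entropy(("A", "B")))  # high entropy marks a likely segment boundary
```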

IPC Classes

  • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

6.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 15921595
Grant Number 10198636
Status In Force
Filing Date 2018-03-14
First Publication Date 2018-07-19
Grant Date 2019-02-05
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06F 17/30 - Information retrieval; Database structures therefor
  • G06N 99/00 - Subject matter not provided for in other groups of this subclass
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

7.

Method and system for tagging an individual in a digital image

      
Application Number 15867023
Grant Number 10216980
Status In Force
Filing Date 2018-01-10
First Publication Date 2018-06-28
Grant Date 2019-02-26
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

8.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 15710370
Grant Number 10990811
Status In Force
Filing Date 2017-09-20
First Publication Date 2018-03-22
Grant Date 2021-04-27
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.
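
The abstract's mention of perception-weighted feature vectors can be pictured with the sketch below, which ranks candidate images by a weighted Euclidean distance. The weights, vector size and celebrity names are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: rank celebrity feature vectors by a weighted Euclidean
# distance, where the per-dimension weights stand in for the "human perception"
# weighting the abstract mentions. Weights and data are illustrative.
import numpy as np

def weighted_distance(a, b, weights):
    return float(np.sqrt(np.sum(weights * (a - b) ** 2)))

def closest_celebrity(query_vec, celebrities, weights):
    """celebrities: dict name -> feature vector. Returns (name, distance)."""
    return min(((name, weighted_distance(query_vec, vec, weights))
                for name, vec in celebrities.items()),
               key=lambda item: item[1])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = rng.uniform(0.5, 2.0, size=64)   # perception-derived weights (assumed)
    celebs = {"Celebrity A": rng.normal(size=64), "Celebrity B": rng.normal(size=64)}
    query = celebs["Celebrity A"] + 0.1 * rng.normal(size=64)
    print(closest_celebrity(query, celebs, weights))
```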

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually; using metadata automatically derived from the content

9.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 15494010
Grant Number 09946934
Status In Force
Filing Date 2017-04-21
First Publication Date 2017-08-10
Grant Date 2018-04-17
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning

10.

Method and system for configurable security and surveillance systems

      
Application Number 15442490
Grant Number 10311711
Status In Force
Filing Date 2017-02-24
First Publication Date 2017-06-15
Grant Date 2019-06-04
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Hammadou, Tarik

Abstract

A method and system for a configurable security and surveillance system are provided. A configurable security and surveillance system may comprise at least one programmable sensor agent and/or at least one programmable content analysis agent. A plurality of processing features may be offered by the configurable security and surveillance system by programming configurable hardware devices in the programmable sensor agents and/or the programmable content analysis agents via a system manager. Device programming files may be utilized to program the configurable hardware devices. The device programming files may be encrypted and decryption keys may be requested to enable the programming of different processing features into the programmable sensor agents and/or the programmable content analysis agents. The device programming files and/or the decryption keys may be received via a network transfer and/or via a machine-readable media from an e-commerce vendor.

IPC Classes

  • G08B 29/18 - Prevention or correction of operating errors
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G08B 29/00 - Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
  • H04N 7/167 - Systems rendering the television signal unintelligible and subsequently intelligible

11.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 15338072
Grant Number 09665774
Status In Force
Filing Date 2016-10-28
First Publication Date 2017-02-16
Grant Date 2017-05-30
Owner Avigilon Patent Holding 1 Corporation (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning

12.

Method and system for attaching a metatag to a digital image

      
Application Number 15262995
Grant Number 10853690
Status In Force
Filing Date 2016-09-12
First Publication Date 2016-12-29
Grant Date 2020-12-01
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as Facebook® to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06F 16/56 - Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
  • G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually; using metadata automatically derived from the content
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/46 - Extraction of features or characteristics of the image

13.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 15203749
Grant Number 09798922
Status In Force
Filing Date 2016-07-06
First Publication Date 2016-10-27
Grant Date 2017-10-24
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 17/30 - Information retrieval; Database structures therefor

14.

Method and system for configurable security and surveillance systems

      
Application Number 15130729
Grant Number 09595182
Status In Force
Filing Date 2016-04-15
First Publication Date 2016-08-25
Grant Date 2017-03-14
Owner AVIGILON PATENT HOLDING 1 CORPORATION (USA)
Inventor Hammadou, Tarik

Abstract

A method and system for a configurable security and surveillance system are provided. A configurable security and surveillance system may comprise at least one programmable sensor agent and/or at least one programmable content analysis agent. A plurality of processing features may be offered by the configurable security and surveillance system by programming configurable hardware devices in the programmable sensor agents and/or the programmable content analysis agents via a system manager. Device programming files may be utilized to program the configurable hardware devices. The device programming files may be encrypted and decryption keys may be requested to enable the programming of different processing features into the programmable sensor agents and/or the programmable content analysis agents. The device programming files and/or the decryption keys may be received via a network transfer and/or via a machine-readable media from an e-commerce vendor.

IPC Classes

  • G08B 29/00 - Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
  • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 7/167 - Systems rendering the television signal unintelligible and subsequently intelligible

15.

Method and system for tagging an image of an individual in a plurality of photos

      
Application Number 15048951
Grant Number 09569659
Status In Force
Filing Date 2016-02-19
First Publication Date 2016-06-16
Grant Date 2017-02-14
Owner Avigilon Patent Holding 1 Corporation (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

16.

Background model for complex and dynamic scenes

      
Application Number 15019759
Grant Number 09959630
Status In Force
Filing Date 2016-02-09
First Publication Date 2016-06-09
Grant Date 2018-05-01
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Yang, Tao

Abstract

Systems and methods for viewing a scene depicted in a sequence of video frames and identifying and tracking objects between separate frames of the sequence. Each tracked object is classified based on known categories and a stream of context events associated with the object is generated. A sequence of primitive events based on the stream of context events is generated and stored together, along with detailed data and generalized data related to an event. All of the data is then evaluated to learn patterns of behavior that occur within the scene.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/20 - Analysis of motion
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06T 7/254 - Analysis of motion involving subtraction of images

17.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 14992973
Grant Number 09489569
Status In Force
Filing Date 2016-01-11
First Publication Date 2016-05-05
Grant Date 2016-11-08
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06F 17/30 - Information retrieval; Database structures therefor
  • G06N 99/00 - Subject matter not provided for in other groups of this subclass

18.

Method and system for configurable security and surveillance systems

      
Application Number 14594867
Grant Number 09342978
Status In Force
Filing Date 2015-01-12
First Publication Date 2015-07-16
Grant Date 2016-05-17
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Hammadou, Tarik

Abstract

A method and system for a configurable security and surveillance system are provided. A configurable security and surveillance system may comprise at least one programmable sensor agent and/or at least one programmable content analysis agent. A plurality of processing features may be offered by the configurable security and surveillance system by programming configurable hardware devices in the programmable sensor agents and/or the programmable content analysis agents via a system manager. Device programming files may be utilized to program the configurable hardware devices. The device programming files may be encrypted and decryption keys may be requested to enable the programming of different processing features into the programmable sensor agents and/or the programmable content analysis agents. The device programming files and/or the decryption keys may be received via a network transfer and/or via a machine-readable media from an e-commerce vendor.

IPC Classes

  • G08B 29/00 - Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • H04N 7/167 - Systems rendering the television signal unintelligible and subsequently intelligible

19.

Semantic representation module of a machine-learning engine in a video analysis system

      
Application Number 14584967
Grant Number 09235752
Status In Force
Filing Date 2014-12-29
First Publication Date 2015-04-23
Grant Date 2016-01-12
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06F 17/30 - Information retrieval; Database structures therefor
  • G06N 99/00 - Subject matter not provided for in other groups of this subclass

20.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 14550206
Grant Number 09412009
Status In Force
Filing Date 2014-11-21
First Publication Date 2015-03-26
Grant Date 2016-08-09
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 17/30 - Information retrieval; Database structures therefor

21.

Visualizing and updating long-term memory percepts in a video surveillance system

      
Application Number 14337703
Grant Number 10489679
Status In Force
Filing Date 2014-07-22
First Publication Date 2015-03-19
Grant Date 2019-11-26
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Gottumukkal, Rajkiran Kumar
  • Seow, Ming-Jung

Abstract

Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.

IPC Classes

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

22.

Method and system for attaching a metatag to a digital image

      
Application Number 14551035
Grant Number 09465817
Status In Force
Filing Date 2014-11-23
First Publication Date 2015-03-19
Grant Date 2016-10-11
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as Facebook® to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06F 17/30 - Information retrieval; Database structures therefor
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

23.

Method and system for automatically measuring and forecasting the demographic characterization of customers to help customize programming contents in a media network

      
Application Number 11805321
Grant Number 08706544
Status In Force
Filing Date 2007-05-23
First Publication Date 2014-04-22
Grant Date 2014-04-22
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Sharma, Rajeev
  • Mummareddy, Satish
  • Hershey, Jeff
  • Moon, Hankyu

Abstract

The present invention is a method and system for forecasting the demographic characterization of customers to help customize programming contents on each means for playing output of each site of a plurality of sites in a media network through automatically measuring, characterizing, and estimating the demographic information of customers that appear in the vicinity of each means for playing output. The analysis of demographic information of customers is performed automatically based on the visual information of the customers, using a plurality of means for capturing images and a plurality of computer vision technologies on the visual information. The measurement of the demographic information is performed in each measured node, where the node is defined as means for playing output. Extrapolation of the measurement characterizes the demographic information per each node of a plurality of nodes in a site of a plurality of sites of a media network. The forecasting and customization of the programming contents is based on the characterization of the demographic information.

IPC Classes

24.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 14094739
Grant Number 09224035
Status In Force
Filing Date 2013-12-02
First Publication Date 2014-04-17
Grant Date 2015-12-29
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

25.

Method and system for tagging an individual in a digital image

      
Application Number 14094752
Grant Number 09875395
Status In Force
Filing Date 2013-12-02
First Publication Date 2014-04-03
Grant Date 2018-01-23
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

26.

Method and system for attaching a metatag to a digital image

      
Application Number 14093576
Grant Number 08908933
Status In Force
Filing Date 2013-12-02
First Publication Date 2014-03-27
Grant Date 2014-12-09
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

27.

System and method for utilizing facial recognition technology for identifying an unknown individual from a digital image

      
Application Number 14020809
Grant Number 10223578
Status In Force
Filing Date 2013-09-07
First Publication Date 2014-03-20
Grant Date 2019-03-05
Owner AVIGILON PATENT HOLDING CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A method and system for identifying an unknown individual from a digital image is disclosed herein. In one embodiment, the present invention allows an individual to photograph a facial image of an unknown individual, transfer that facial image to a server for processing into a feature vector, and then search social networking Web sites to obtain information on the unknown individual. The Web sites comprise myspace.com, facebook.com, linkedin.com, www.hi5.com, www.bebo.com, www.friendster.com, www.igoogle.com, netlog.com, and orkut.com. A method of networking is also disclosed. A method for determining unwanted individuals on a social networking website is also disclosed.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

28.

Semantic representation module of a machine learning engine in a video analysis system

      
Application Number 13855332
Grant Number 08923609
Status In Force
Filing Date 2013-04-02
First Publication Date 2014-03-13
Grant Date 2014-12-30
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/48 - Extraction of features or characteristics of the image by coding the contour of the pattern
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning

29.

Method and system for optimizing the observation and annotation of complex human behavior from video sources

      
Application Number 12011385
Grant Number 08665333
Status In Force
Filing Date 2008-01-25
First Publication Date 2014-03-04
Grant Date 2014-03-04
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Sharma, Rajeev
  • Mummareddy, Satish
  • Schapira, Emilio
  • Jung, Namsoon

Abstract

The present invention is a method and system for optimizing the observation and annotation of complex human behavior from video sources by automatically detecting predefined events based on the behavior of people in a first video stream from a first means for capturing images in a physical space, accessing a synchronized second video stream from a second means for capturing images that are positioned to observe the people more closely using the timestamps associated with the detected events from the first video stream, and enabling an annotator to annotate each of the events with more labels using a tool. The present invention captures a plurality of input images of the persons by a plurality of means for capturing images and processes the plurality of input images in order to detect the predefined events based on the behavior in an exemplary embodiment. The processes are based on a novel usage of a plurality of computer vision technologies to analyze the human behavior from the plurality of input images. The physical space may be a retail space, and the people may be customers in the retail space.

IPC Classes

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

30.

Method and system for rating of out-of-home digital media network based on automatic measurement

      
Application Number 11818485
Grant Number 08660895
Status In Force
Filing Date 2007-06-14
First Publication Date 2014-02-25
Grant Date 2014-02-25
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Saurabh, Varij
  • Hershey, Jeff
  • Mummareddy, Satish
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is a method and system for producing a set of ratings for out-of-home media based on the measurement of behavior patterns and demographics of the people in a digital media network. The present invention captures a plurality of input images of the people in the vicinity of sampled out-of-home media in a digital media network by a plurality of means for capturing images, and tracks each person. The present invention processes the plurality of input images in order to analyze the behavior and demographics of the people. The present invention aggregates the measurements for the behavior patterns and demographics of the people, analyzes the data, and extracts characteristic information based on the estimated parameters from the aggregated measurements. Finally, the present invention calculates a set of ratings based on the characteristic information. The plurality of computer vision technologies can comprise face detection, person tracking, body parts detection, and demographic classification of the people, on the captured visual information of the people in the vicinity of the out-of-home media.

IPC Classes

  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

31.

Method and system for efficient sampling of videos using spatiotemporal constraints for statistical behavior analysis

      
Application Number 12313359
Grant Number 08570376
Status In Force
Filing Date 2008-11-19
First Publication Date 2013-10-29
Grant Date 2013-10-29
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is a method and system for selecting and storing videos by applying semantically-meaningful selection criteria to the track sequences of the trips made by people in an area covered by overlapping multiple cameras. The present invention captures video streams of the people in the area by multiple cameras and tracks the people in each of the video streams, producing track sequences in each video stream. The present invention determines a first set of video segments that contains the trip information of the people, and compacts each of the video streams by removing a second set of video segments that do not contain the trip information of the people from each of the video streams. The present invention selects video segments from the first set of video segments based on predefined selection criteria for the statistical behavior analysis. The stored video data is an efficient compact format of video segments that contain the track sequences of the people and are selected according to semantically-meaningful and domain-specific selection criteria. The final storage format of the videos is a trip-centered format, which sequences videos from across multiple cameras, and it can be used to facilitate multiple applications dealing with behavior analysis in a specific domain.
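
The segment-selection step described above can be pictured with the sketch below, which keeps only the video segments overlapping a person's track intervals and orders them by time across cameras to approximate a trip-centered layout. The interval format and example values are assumptions for illustration.

```python
# Hypothetical sketch: given per-camera track intervals for one person's trip,
# keep only the video segments that overlap those intervals and order them by
# start time across cameras ("trip-centered" storage). Interval format is assumed.
def select_trip_segments(segments, track_intervals):
    """segments: list of (camera_id, start, end) video segments.
    track_intervals: list of (camera_id, start, end) where the person was tracked.
    Returns the overlapping segments sorted by start time."""
    def overlaps(seg, track):
        return seg[0] == track[0] and seg[1] < track[2] and track[1] < seg[2]

    kept = [seg for seg in segments
            if any(overlaps(seg, track) for track in track_intervals)]
    return sorted(kept, key=lambda seg: seg[1])

if __name__ == "__main__":
    segments = [("cam1", 0, 10), ("cam1", 10, 20), ("cam2", 0, 10), ("cam2", 10, 20)]
    trip = [("cam1", 2, 8), ("cam2", 12, 18)]
    print(select_trip_segments(segments, trip))  # [('cam1', 0, 10), ('cam2', 10, 20)]
```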

IPC Classes

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

32.

Method and system for tagging an image of an individual in plurality of photos

      
Application Number 13753543
Grant Number 08798321
Status In Force
Filing Date 2013-01-30
First Publication Date 2013-06-06
Grant Date 2014-08-05
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A
  • Shah, Alex

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

33.

Videore: method and system for storing videos from multiple cameras for behavior re-mining

      
Application Number 12286138
Grant Number 08457466
Status In Force
Filing Date 2008-09-29
First Publication Date 2013-06-04
Grant Date 2013-06-04
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is a method and system for storing videos by track sequences and selection of video segments in a manner to support “re-mining” by indexing and playback of individual visitors' entire trip to an area covered by overlapping cameras, allowing analysis and recognition of detailed behavior. The present invention captures video streams of the people in the area by multiple cameras and tracks the people in each of the video streams, producing track sequences in each video stream. Using the track sequences, the present invention finds trip information of the people. The present invention determines a first set of video segments that contain the trip information of the people, and compacts each of the video streams by removing a second set of video segments that do not contain the trip information of the people from each of the video streams. The video segments in the first set of video segments are associated with the people by indexing the video segments per person based on the trip information. The final storage format of the videos is a trip-centered format which sequences videos from across multiple cameras in a manner to facilitate multiple applications dealing with behavior analysis, and it is an efficient compact format without losing any video segments that contain the track sequences of the people.

IPC Classes

34.

Background model for complex and dynamic scenes

      
Application Number 13746760
Grant Number 10032282
Status In Force
Filing Date 2013-01-22
First Publication Date 2013-05-30
Grant Date 2018-07-24
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Yang, Tao

Abstract

Techniques are disclosed for learning and modeling a background for a complex and/or dynamic scene over a period of observations without supervision. A background/foreground component of a computer vision engine may be configured to model a scene using an array of ART networks. The ART networks learn the regularity and periodicity of the scene by observing the scene over a period of time. Thus, the ART networks allow the computer vision engine to model complex and dynamic scene backgrounds in video.
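
To make the ART-network idea concrete, the sketch below shows a single ART-style unit such as might sit at one cell of the array the abstract describes: observations that resonate with a stored prototype are treated as background, while novel patterns spawn new prototypes. The vigilance value, learning rate and similarity measure are illustrative assumptions, not the patent's formulation.

```python
# Hypothetical sketch of an ART-style unit for one cell of a background model:
# new observations either reinforce an existing prototype (background) or
# start a new one (foreground candidate). Parameters are illustrative.
import numpy as np

class ARTUnit:
    def __init__(self, vigilance=0.9, learning_rate=0.1):
        self.vigilance = vigilance
        self.learning_rate = learning_rate
        self.prototypes = []            # list of np.ndarray feature prototypes

    def observe(self, feature: np.ndarray) -> bool:
        """Returns True if the observation matches a learned (background) prototype."""
        for i, proto in enumerate(self.prototypes):
            similarity = 1.0 - np.linalg.norm(feature - proto) / (np.linalg.norm(proto) + 1e-8)
            if similarity >= self.vigilance:
                # Resonance: move the prototype toward the new observation.
                self.prototypes[i] = proto + self.learning_rate * (feature - proto)
                return True
        self.prototypes.append(feature.astype(float).copy())  # novel pattern
        return False

if __name__ == "__main__":
    unit = ARTUnit()
    background = np.array([0.5, 0.5, 0.5])
    print(unit.observe(background))                 # False: first observation creates a prototype
    print(unit.observe(background + 0.01))          # True: matches the learned background
    print(unit.observe(np.array([0.9, 0.1, 0.2])))  # False: foreground candidate
```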

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/20 - Analysis of motion
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06T 7/254 - Analysis of motion involving subtraction of images

35.

Inter-trajectory anomaly detection using adaptive voting experts in a video surveillance system

      
Application Number 13722812
Grant Number 10121077
Status In Force
Filing Date 2012-12-20
First Publication Date 2013-05-16
Grant Date 2018-11-06
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David Samuel
  • Saitwal, Kishor Adinath

Abstract

A sequence layer in a machine-learning engine is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

36.

Method of processing a transaction for a parking session

      
Application Number 13679854
Grant Number 09734462
Status In Force
Filing Date 2012-11-16
First Publication Date 2013-05-09
Grant Date 2017-08-15
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Ioli, Edward D.

Abstract

A method, using a mobile device, of initiating the processing of a transaction for a parking session between a parking system and a payment provider on behalf of a user, the method comprising storing a user identifier in memory on the mobile device, receiving a parking identifier and transmitting the user identifier and the parking identifier to a network server.
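
The three device-side steps listed in the abstract can be sketched as follows; the server URL, endpoint and JSON payload fields are hypothetical, since they are not specified here.

```python
# Hypothetical sketch of the device-side flow the abstract lists: store a user
# identifier, receive a parking identifier (e.g. scanned from a sign), and
# transmit both to a network server. URL and payload fields are assumptions.
import json
import urllib.request

class ParkingApp:
    def __init__(self, user_id: str, server_url: str):
        self.user_id = user_id            # stored in device memory
        self.server_url = server_url

    def start_session(self, parking_id: str) -> None:
        payload = json.dumps({"user_id": self.user_id,
                              "parking_id": parking_id}).encode("utf-8")
        request = urllib.request.Request(self.server_url, data=payload,
                                         headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            print("server replied:", response.status)

# Usage (requires a real server at the given URL):
# app = ParkingApp(user_id="user-123", server_url="https://parking.example.com/sessions")
# app.start_session(parking_id="zone-42")
```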

IPC Classes

  • G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices using wireless devices
  • G06Q 10/00 - Administration; Management
  • G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentialsReview and approval of payers, e.g. check of credit lines or negative lists
  • G07B 15/00 - Arrangements or apparatus for collecting fares, tolls or entrance fees at one or more control points
  • G08G 1/00 - Traffic control systems for road vehicles
  • G06Q 50/30 - Transportation; Communications

37.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 13674019
Grant Number 08897506
Status In Force
Filing Date 2012-11-10
First Publication Date 2013-03-21
Grant Date 2014-11-25
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 17/30 - Information retrieval; Database structures therefor

38.

Method and system for measuring emotional and attentional response to dynamic digital media content

      
Application Number 12317917
Grant Number 08401248
Status In Force
Filing Date 2008-12-30
First Publication Date 2013-03-19
Grant Date 2013-03-19
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Moon, Hankyu
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is a method and system to provide an automatic measurement of people's responses to dynamic digital media, based on changes in their facial expressions and attention to specific content. First, the method detects and tracks faces from the audience. It then localizes each of the faces and facial features to extract emotion-sensitive features of the face by applying emotion-sensitive feature filters, to determine the facial muscle actions of the face based on the extracted emotion-sensitive features. The changes in facial muscle actions are then converted to the changes in affective state, called an emotion trajectory. On the other hand, the method also estimates eye gaze based on extracted eye images and three-dimensional facial pose of the face based on localized facial images. The gaze direction of the person is estimated based on the estimated eye gaze and the three-dimensional facial pose of the person. The gaze target on the media display is then estimated based on the estimated gaze direction and the position of the person. Finally, the response of the person to the dynamic digital media content is determined by analyzing the emotion trajectory in relation to the time and screen positions of the specific digital media sub-content that the person is watching.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

39.

Identifying anomalous object types during classification

      
Application Number 13622281
Grant Number 08548198
Status In Force
Filing Date 2012-09-18
First Publication Date 2013-01-24
Grant Date 2013-10-01
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David
  • Gottumukkal, Rajkiran Kumar
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects and identifying anomaly object types.
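
As a simplified sketch of the SOM-ART idea described above: a small self-organizing map quantizes micro-feature vectors, and an ART-style vigilance test groups the SOM nodes into clusters, opening a new cluster (a candidate anomalous type) whenever no existing cluster is similar enough. The grid size, learning rate, vigilance, and synthetic features are illustrative assumptions, not the patented configuration.

```python
# SOM quantization followed by ART-like vigilance clustering (sketch).
import numpy as np

def train_som(data, n_nodes=16, epochs=20, lr=0.5, seed=0):
    """Competitive-learning SOM (no neighborhood) seeded from the data."""
    rng = np.random.default_rng(seed)
    nodes = data[rng.choice(len(data), size=n_nodes, replace=False)].copy()
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
            nodes[winner] += rate * (x - nodes[winner])
    return nodes

def art_cluster(vectors, vigilance=0.9):
    """Greedy ART-like clustering: join the nearest prototype if it passes the
    vigilance test, otherwise start a new cluster."""
    prototypes, assignments = [], []
    for v in vectors:
        v = v / (np.linalg.norm(v) + 1e-12)
        if prototypes:
            sims = [float(v @ p) for p in prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= vigilance:
                prototypes[best] = prototypes[best] + v
                prototypes[best] /= np.linalg.norm(prototypes[best])
                assignments.append(best)
                continue
        prototypes.append(v)
        assignments.append(len(prototypes) - 1)
    return prototypes, assignments

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cars = rng.normal(loc=1.0, scale=0.1, size=(200, 8))      # unlabeled micro-features
    people = rng.normal(loc=-1.0, scale=0.1, size=(200, 8))
    features = np.vstack([cars, people])
    som_nodes = train_som(features)
    protos, labels = art_cluster(som_nodes)
    print(f"discovered {len(protos)} object-type clusters")
```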

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G01V 3/00 - Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination or deviation

40.

Foreground object tracking

      
Application Number 13545950
Grant Number 08374393
Status In Force
Filing Date 2012-07-10
First Publication Date 2012-11-01
Grant Date 2013-02-12
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Yang, Tao

Abstract

Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications.
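
The validation step can be pictured with the small sketch below, which keeps only foreground blobs whose average flow magnitude is non-negligible and discards the rest as spurious. It assumes SciPy for connected-component labelling, and the flow field, mask, and threshold are synthetic placeholders rather than outputs of the patented pipeline.

```python
# Motion-flow validation of candidate foreground blobs (sketch).
import numpy as np
from scipy import ndimage

def validate_foreground(mask, flow, min_mean_motion=0.5):
    """mask: HxW bool foreground mask; flow: HxWx2 per-pixel motion vectors.
    Returns a mask containing only blobs supported by the flow field."""
    labels, n_blobs = ndimage.label(mask)
    magnitude = np.linalg.norm(flow, axis=2)
    validated = np.zeros_like(mask)
    for blob_id in range(1, n_blobs + 1):
        blob = labels == blob_id
        if magnitude[blob].mean() >= min_mean_motion:
            validated |= blob
    return validated

if __name__ == "__main__":
    mask = np.zeros((20, 20), dtype=bool)
    mask[2:6, 2:6] = True          # blob backed by motion
    mask[12:15, 12:15] = True      # spurious blob (no motion underneath)
    flow = np.zeros((20, 20, 2))
    flow[2:6, 2:6] = [1.5, 0.0]
    kept = validate_foreground(mask, flow)
    print("blobs kept:", ndimage.label(kept)[1])
```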

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

41.

Context processor for video analysis system

      
Application Number 13494605
Grant Number 08705861
Status In Force
Filing Date 2012-06-12
First Publication Date 2012-10-11
Grant Date 2014-04-22
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Gottumukkal, Rajkiran Kumar
  • Saitwal, Kishor Adinath

Abstract

Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/34 - Segmentation of touching or overlapping patterns in the image field

42.

Classifier anomalies for observed behaviors in a video surveillance system

      
Application Number 13472214
Grant Number 08494222
Status In Force
Filing Date 2012-05-15
First Publication Date 2012-09-06
Grant Date 2013-07-23
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G08G 5/00 - Traffic control systems for aircraft

43.

Behavioral recognition system

      
Application Number 13413549
Grant Number 08620028
Status In Force
Filing Date 2012-03-06
First Publication Date 2012-06-28
Grant Date 2013-12-31
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis Gene
  • Blythe, Bobby Ernest
  • Friedlander, David Samuel
  • Gottumukkal, Rajkiran Kumar
  • Risinger, Lon William
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Solum, David Marvin
  • Xu, Gang
  • Yang, Tao

Abstract

Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track the object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, normal and abnormal behaviors for any environment by analyzing movements or activities (or the absence of such) in the environment, and identifies and predicts abnormal and suspicious behavior based on what has been learned.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

44.

Apparatus and method for measuring audience data from image stream using dynamically-configurable hardware architecture

      
Application Number 12583323
Grant Number 08165386
Status In Force
Filing Date 2009-08-18
First Publication Date 2012-04-24
Grant Date 2012-04-24
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Moon, Hankyu
  • Irick, Kevin Maurice
  • Narayanan, Vijaykrishnan
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is an embedded audience measurement platform, which is called HAM. The HAM includes hardware, apparatus, and method for measuring audience data from an image stream using a dynamically-configurable hardware architecture. The HAM provides an end-to-end solution for audience measurement, wherein reconfigurable computational modules are used as engines per node to power the complete solution implemented in a flexible hardware architecture. The HAM is also a complete system for broad audience measurement, which has various components built into the system. Examples of the components comprise demographics classification, gaze estimation, emotion recognition, behavior analysis, and impression measurement.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

45.

Apparatus and method for hardware implementation of object recognition from an image stream using artificial neural network

      
Application Number 12157087
Grant Number 08081816
Status In Force
Filing Date 2008-06-06
First Publication Date 2011-12-20
Grant Date 2011-12-20
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Irick, Kevin Maurice
  • Narayanan, Vijaykrishnan
  • Moon, Hankyu
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is an apparatus and method for object recognition from at least an image frame of at least an image stream, utilizing at least an artificial neural network. The present invention further comprises means for generating multiple components of an image pyramid simultaneously from a single image stream, means for providing the active pixel and interlayer neuron data to at least a subwindow processor, means for multiplying and accumulating the product of a pixel data or interlayer data and a synapse weight, and means for performing the activation of an accumulation. The present invention allows the artificial neural networks to be reconfigurable, thus embracing a broad range of object recognition applications in a flexible way. The subwindow processor in the present invention also further comprises means for performing neuron computations for at least a neuron. An exemplary embodiment of the present invention is used for object recognition, including face detection and gender recognition, in hardware. The apparatus comprises a digital circuitry system or IC that embodies the components of the present invention.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

46.

Method and system for attaching a metatag to a digital image

      
Application Number 12948709
Grant Number 08600174
Status In Force
Filing Date 2010-11-17
First Publication Date 2011-05-26
Grant Date 2013-12-03
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

47.

Classifier anomalies for observed behaviors in a video surveillance system

      
Application Number 12561956
Grant Number 08180105
Status In Force
Filing Date 2009-09-17
First Publication Date 2011-03-17
Grant Date 2012-05-15
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G08G 5/00 - Traffic control systems for aircraft

48.

Video surveillance system configured to analyze complex behaviors using alternating layers of clustering and sequencing

      
Application Number 12561977
Grant Number 08170283
Status In Force
Filing Date 2009-09-17
First Publication Date 2011-03-17
Grant Date 2012-05-01
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A video surveillance system may be configured to observe a scene (as depicted in a sequence of video frames) and, over time, develop hierarchies of concepts including classes of objects, actions and behaviors. That is, the video surveillance system may develop models at progressively more complex levels of abstraction used to identify what events and behaviors are common and which are unusual. When the models have matured, the video surveillance system issues alerts on unusual events.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G08G 5/00 - Traffic control systems for aircraft

49.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 12555789
Grant Number 08311294
Status In Force
Filing Date 2009-09-08
First Publication Date 2011-03-10
Grant Date 2012-11-13
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

50.

Method and system for event detection by multi-scale image invariant analysis

      
Application Number 11353264
Grant Number 07903141
Status In Force
Filing Date 2006-02-14
First Publication Date 2011-03-08
Grant Date 2011-03-08
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Mariano, Vladimir
  • Sharma, Rajeev

Abstract

The present invention is a method and system for detecting scene events in an image sequence by analysis of occlusion of user-defined regions of interest in the image. The present invention is based on multi-scale groups of nearby pixel locations employing contrast functions, a feature that is invariant to changing illumination conditions. The feature allows the classification of each pixel location in the region of interest as occluded or not. Scene events, based on the occlusion of the regions of interest, are defined and subsequently detected in an image sequence. Example applications of this invention are automated surveillance of persons for security, and automated person counting, tracking and aisle-touch detection for market research.
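
The core idea, that a contrast-style feature is insensitive to illumination changes but disturbed by occlusion, can be illustrated with the rough sketch below: the ratio of each pixel to its local neighborhood mean is unchanged by a global brightness scaling, so a large deviation from the reference ratio inside the region of interest suggests occlusion. Window size, deviation threshold, and the synthetic scene are assumptions for illustration only.

```python
# Illumination-invariant contrast check for ROI occlusion (sketch).
import numpy as np

def local_contrast(image, win=5):
    """Ratio of each pixel to its win x win neighborhood mean."""
    pad = win // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    means = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            means[i, j] = padded[i:i + win, j:j + win].mean()
    return image / (means + 1e-6)

def roi_occluded(reference, current, roi, deviation=0.3, min_fraction=0.5):
    """roi: boolean mask of the user-defined region of interest."""
    diff = np.abs(local_contrast(current) - local_contrast(reference))
    occluded_fraction = (diff[roi] > deviation).mean()
    return occluded_fraction >= min_fraction

if __name__ == "__main__":
    yy, xx = np.mgrid[0:32, 0:32]
    reference = np.where((yy + xx) % 2 == 0, 180.0, 20.0)   # textured background
    roi = np.zeros((32, 32), dtype=bool)
    roi[8:16, 8:16] = True
    brighter = reference * 1.6                  # pure illumination change
    occluded = reference.copy()
    occluded[8:16, 8:16] = 100.0                # object covering the ROI
    print("illumination change flagged:", roi_occluded(reference, brighter, roi))
    print("occlusion flagged:          ", roi_occluded(reference, occluded, roi))
```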

IPC Classes  ?

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

51.

Identifying anomalous object types during classification

      
Application Number 12551276
Grant Number 08270733
Status In Force
Filing Date 2009-08-31
First Publication Date 2011-03-03
Grant Date 2012-09-18
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David
  • Gottumukkal, Rajkiran Kumar
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects and identifying anomaly object types.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G01V 3/00 - Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination or deviation

52.

Visualizing and updating long-term memory percepts in a video surveillance system

      
Application Number 12551303
Grant Number 08786702
Status In Force
Filing Date 2009-08-31
First Publication Date 2011-03-03
Grant Date 2014-07-22
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Gottumukkal, Rajkiran Kumar
  • Seow, Ming-Jung

Abstract

Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.

IPC Classes  ?

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

53.

Digital image search system and method

      
Application Number 12941103
Grant Number 08199980
Status In Force
Filing Date 2010-11-08
First Publication Date 2011-03-03
Grant Date 2012-06-12
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A method and system for matching an unknown facial image of an individual with an image of an unknown twin using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. The method and system use human perception techniques to weight the feature vectors.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

54.

Clustering nodes in a self-organizing map using an adaptive resonance theory network

      
Application Number 12551154
Grant Number 08270732
Status In Force
Filing Date 2009-08-31
First Publication Date 2011-03-03
Grant Date 2012-09-18
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for discovering object type clusters using pixel-level micro-features extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to classify objects depicted in the image data based on the pixel-level micro-features. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G01V 3/00 - Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination or deviation

55.

Visualizing and updating classifications in a video surveillance system

      
Application Number 12551332
Grant Number 08797405
Status In Force
Filing Date 2009-08-31
First Publication Date 2011-03-03
Grant Date 2014-08-05
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Friedlander, David Samuel
  • Gottumukkal, Rajkiran Kumar
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.

IPC Classes  ?

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

56.

Unsupervised learning of temporal anomalies for a video surveillance system

      
Application Number 12551364
Grant Number 08167430
Status In Force
Filing Date 2009-08-31
First Publication Date 2011-03-03
Grant Date 2012-05-01
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung

Abstract

Techniques are described for analyzing a stream of video frames to identify temporal anomalies. A video surveillance system is configured to identify when agents depicted in the video stream engage in anomalous behavior, relative to the time-of-day (TOD) or day-of-week (DOW) at which the behavior occurs. A machine-learning engine may establish the normalcy of a scene by observing the scene over a specified period of time. Once the observations of the scene have matured, the actions of agents in the scene may be evaluated and classified as normal or abnormal temporal behavior, relative to the past observations.
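
A very small sketch of time-of-day normalcy learning follows: event occurrences are tallied per hour during an observation period, and once enough observations have accumulated, an event in a rarely seen hour is flagged as a temporal anomaly. The maturity count and probability threshold are invented for illustration and are not the patented criteria.

```python
# Time-of-day normalcy model with a simple maturity gate (sketch).
from collections import Counter

class TimeOfDayModel:
    def __init__(self, maturity=100, rare_probability=0.02):
        self.counts = Counter()
        self.total = 0
        self.maturity = maturity
        self.rare_probability = rare_probability

    def observe(self, hour: int):
        self.counts[hour] += 1
        self.total += 1

    def is_anomalous(self, hour: int) -> bool:
        if self.total < self.maturity:
            return False  # model not mature yet; keep observing
        return self.counts[hour] / self.total < self.rare_probability

if __name__ == "__main__":
    model = TimeOfDayModel()
    for _ in range(40):                 # weeks of daytime activity
        for hour in (8, 9, 12, 17, 18):
            model.observe(hour)
    print(model.is_anomalous(12))       # False: commonly observed hour
    print(model.is_anomalous(3))        # True: never observed at 3 AM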

IPC Classes  ?

  • G03B 1/48 - Gates or pressure devices, e.g. plate
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

57.

Detecting anomalous trajectories in a video surveillance system

      
Application Number 12551395
Grant Number 08285060
Status In Force
Filing Date 2009-08-31
First Publication Date 2011-03-03
Grant Date 2012-10-09
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for determining anomalous trajectories of objects tracked over a sequence of video frames. In one embodiment, a symbol trajectory may be derived from observing an object moving through a scene. The symbol trajectory represents semantic concepts extracted from the trajectory of the object. Whether the symbol trajectory is anomalous may be determined, based on previously observed symbol trajectories. A user may be alerted upon determining that the symbol trajectory is anomalous.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G03B 19/18 - Motion-picture cameras

58.

Foreground object tracking

      
Application Number 12552197
Grant Number 08218818
Status In Force
Filing Date 2009-09-01
First Publication Date 2011-03-03
Grant Date 2012-07-10
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Yang, Tao

Abstract

Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

59.

Foreground object detection in a video surveillance system

      
Application Number 12552210
Grant Number 08218819
Status In Force
Filing Date 2009-09-01
First Publication Date 2011-03-03
Grant Date 2012-07-10
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Yang, Tao

Abstract

Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

60.

Visualizing and updating learned event maps in surveillance systems

      
Application Number 12543204
Grant Number 08625884
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2014-01-07
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Gottumukkal, Rajkiran Kumar
  • Seow, Ming-Jung

Abstract

Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.

IPC Classes  ?

  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06T 7/00 - Image analysis
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

61.

Inter-trajectory anomaly detection using adaptive voting experts in a video surveillance system

      
Application Number 12543318
Grant Number 08340352
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2012-12-25
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David Samuel
  • Saitwal, Kishor Adinath

Abstract

A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine is disclosed. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once these are determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

62.

Visualizing and updating learned trajectories in video surveillance systems

      
Application Number 12543242
Grant Number 08280153
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2012-10-02
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Friedlander, David Samuel
  • Gottumukkal, Rajkiran Kumar
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

63.

Intra-trajectory anomaly detection using adaptive voting experts in a video surveillance system

      
Application Number 12543307
Grant Number 08379085
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2013-02-19
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Friedlander, David Samuel
  • Saitwal, Kishor Adinath

Abstract

A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine is disclosed. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once these are determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies.

IPC Classes  ?

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • H04N 5/225 - Television cameras
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

64.

Background model for complex and dynamic scenes

      
Application Number 12543336
Grant Number 08358834
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2013-01-22
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Yang, Tao

Abstract

Techniques are disclosed for learning and modeling a background for a complex and/or dynamic scene over a period of observations without supervision. A background/foreground component of a computer vision engine may be configured to model a scene using an array of ART networks. The ART networks learn the regularity and periodicity of the scene by observing the scene over a period of time. Thus, the ART networks allow the computer vision engine to model complex and dynamic scene backgrounds in video.
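
A condensed sketch of the per-block idea follows: each pixel block keeps its own small set of ART-like prototypes, so a block that alternates between several appearances (for example, swaying foliage) is still learned as background, while a genuinely novel appearance is reported as foreground until it matures. The vigilance, learning rate, maturity count, and feature values are illustrative assumptions.

```python
# Per-block ART-like background model for dynamic scenes (sketch).
import numpy as np

class BlockART:
    def __init__(self, vigilance=0.85, lr=0.2, mature_after=5):
        self.prototypes, self.hits = [], []
        self.vigilance, self.lr, self.mature_after = vigilance, lr, mature_after

    def observe(self, feature):
        """Returns True if the block currently looks like learned background."""
        feature = np.asarray(feature, dtype=float)
        feature = feature / (np.linalg.norm(feature) + 1e-12)
        for i, proto in enumerate(self.prototypes):
            if float(feature @ proto) >= self.vigilance:
                self.prototypes[i] = (1 - self.lr) * proto + self.lr * feature
                self.prototypes[i] /= np.linalg.norm(self.prototypes[i])
                self.hits[i] += 1
                return self.hits[i] >= self.mature_after
        self.prototypes.append(feature)   # unfamiliar appearance: new prototype
        self.hits.append(1)
        return False                      # treated as foreground until mature

if __name__ == "__main__":
    block = BlockART()
    leaves_a, leaves_b = [0.9, 0.3, 0.1], [0.1, 0.3, 0.9]    # two periodic looks
    for _ in range(10):
        block.observe(leaves_a)
        block.observe(leaves_b)
    print(block.observe(leaves_a))           # True: learned dynamic background
    print(block.observe([0.3, 0.95, 0.3]))   # False: novel appearance (foreground)
```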

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

65.

Visualizing and updating sequences and segments in a video surveillance system

      
Application Number 12543351
Grant Number 08493409
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2013-07-23
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Friedlander, David Samuel
  • Gottumukkal, Rajkiran Kumar
  • Saitwal, Kishor Adinath

Abstract

Techniques are disclosed for visually conveying a sequence storing an ordered string of symbols generated from kinematic data derived from analyzing an input stream of video frames depicting one or more foreground objects. The sequence may represent information learned by a video surveillance system. A request may be received to view the sequence or a segment partitioned form the sequence. A visual representation of the segment may be generated and superimposed over a background image associated with the scene. A user interface may be configured to display the visual representation of the sequence or segment and to allow a user to view and/or modify properties associated with the sequence or segment.

IPC Classes  ?

66.

Adaptive voting experts for incremental segmentation of sequences with prediction in a video surveillance system

      
Application Number 12543379
Grant Number 08295591
Status In Force
Filing Date 2009-08-18
First Publication Date 2011-02-24
Grant Date 2012-10-23
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Friedlander, David Samuel
  • Saitwal, Kishor Adinath
  • Xu, Gang

Abstract

A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses the voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, and trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

67.

Method and system for tagging an image of an individual in a plurality of photos

      
Application Number 12341318
Grant Number 08369570
Status In Force
Filing Date 2008-12-22
First Publication Date 2010-09-16
Grant Date 2013-02-05
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A.
  • Shah, Alex

Abstract

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

68.

Adaptive update of background pixel thresholds using sudden illumination change detection

      
Application Number 12388409
Grant Number 08285046
Status In Force
Filing Date 2009-02-18
First Publication Date 2010-08-19
Grant Date 2012-10-09
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Saitwal, Kishor Adinath
  • Blythe, Bobby Ernest
  • Yang, Tao

Abstract

Techniques are disclosed for a computer vision engine to update both a background model and the thresholds used to classify pixels as depicting scene foreground or background, in response to detecting that a sudden illumination change has occurred in a sequence of video frames. The threshold values may be used to specify how much a given pixel may differ from corresponding values in the background model before being classified as depicting foreground. When a sudden illumination change is detected, the values for pixels affected by the sudden illumination change may be used to update the value in the background image to reflect the value for that pixel following the sudden illumination change, as well as to update the threshold for classifying that pixel as depicting foreground/background in subsequent frames of video.
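
The adaptation step can be pictured with the bare-bones sketch below: when most of the frame suddenly exceeds its thresholds (a likely global illumination change rather than real foreground), the affected background values and per-pixel thresholds are re-seeded from the new frame. The fractions and margins are illustrative, not values from the patent.

```python
# Background/threshold re-seeding after a sudden illumination change (sketch).
import numpy as np

def classify_and_adapt(frame, background, thresholds,
                       sudden_fraction=0.7, new_margin=12.0):
    diff = np.abs(frame.astype(float) - background)
    foreground = diff > thresholds
    if foreground.mean() > sudden_fraction:           # sudden illumination change
        affected = foreground
        background[affected] = frame[affected]         # re-seed background values
        thresholds[affected] = new_margin              # and their per-pixel thresholds
        foreground = np.abs(frame - background) > thresholds
    return foreground, background, thresholds

if __name__ == "__main__":
    background = np.full((10, 10), 100.0)
    thresholds = np.full((10, 10), 10.0)
    lit_frame = background + 60.0                      # lights switched on
    fg, background, thresholds = classify_and_adapt(lit_frame, background, thresholds)
    print("foreground pixels after adaptation:", int(fg.sum()))   # 0
```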

IPC Classes  ?

  • G06K 9/34 - Segmentation of touching or overlapping patterns in the image field

69.

Method and system for estimating gaze target, gaze sequence, and gaze map from video

      
Application Number 12221552
Grant Number 07742623
Status In Force
Filing Date 2008-08-04
First Publication Date 2010-06-22
Grant Date 2010-06-22
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Moon, Hankyu
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is a method and system to estimate the visual target that people are looking at, based on automatic image measurements. The system utilizes image measurements from both face-view cameras and top-down view cameras. The cameras are calibrated with respect to the site and the visual target, so that the gaze target is determined from the estimated position and gaze direction of a person. Face detection and two-dimensional pose estimation locate and normalize the face of the person so that the eyes can be accurately localized and the three-dimensional facial pose can be estimated. The eye gaze is estimated based on either the positions of localized eyes and irises or on the eye image itself, depending on the quality of the image. The gaze direction is estimated from the eye gaze measurement in the context of the three-dimensional facial pose. From the top-down view, the body of the person is detected and tracked, so that the position of the head is estimated using a body blob model that depends on the body position in the view. The gaze target is determined based on the estimated gaze direction, estimated head pose, and the camera calibration. The gaze target estimation can provide a gaze trajectory of the person or a collective gaze map from many instances of gaze.
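
The final geometric step, mapping an estimated head position and gaze direction to a point on the target, reduces to a ray-plane intersection, sketched below. The plane calibration, head position, and gaze vector are illustrative placeholders, not calibration data from the patent.

```python
# Gaze target as the intersection of the gaze ray with a planar display (sketch).
import numpy as np

def gaze_target_on_plane(head_pos, gaze_dir, plane_point, plane_normal):
    """Intersect the ray head_pos + t * gaze_dir (t > 0) with a plane."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = float(gaze_dir @ plane_normal)
    if abs(denom) < 1e-9:
        return None                     # gaze parallel to the display plane
    t = float((plane_point - head_pos) @ plane_normal) / denom
    if t <= 0:
        return None                     # person is looking away from the display
    return head_pos + t * gaze_dir

if __name__ == "__main__":
    display_origin = np.array([0.0, 0.0, 0.0])   # display lies in the z = 0 plane
    display_normal = np.array([0.0, 0.0, 1.0])
    head = np.array([0.3, 1.6, 2.0])             # metres, from the top-down view
    gaze = np.array([-0.1, -0.2, -1.0])          # from facial pose plus eye gaze
    print(gaze_target_on_plane(head, gaze, display_origin, display_normal))
```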

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

70.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 12707669
Grant Number 07885435
Status In Force
Filing Date 2010-02-17
First Publication Date 2010-06-17
Grant Date 2011-02-08
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A

Abstract

The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals or objects using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar objects or faces to the user. The system features classification of images from a variety of Internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, or the matching object, image and associated meta-data are sent back to the user. The image may be manipulated to emphasize similar characteristics between the received facial image and the matching facial image. The meta-data sent down with the image may include sponsored links and advertisements.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 17/30 - Information retrieval; Database structures therefor

71.

Long-term memory in a video analysis system

      
Application Number 12208551
Grant Number 08121968
Status In Force
Filing Date 2008-09-11
First Publication Date 2010-03-11
Grant Date 2012-02-21
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Friedlander, David Samuel
  • Xu, Gang

Abstract

A long-term memory used to store and retrieve information learned while a video analysis system observes a stream of video frames is disclosed. The long-term memory provides a memory with a capacity that grows in size gracefully, as events are observed over time. Additionally, the long-term memory may encode events, represented by sub-graphs of a neural network. Further, rather than predefining a number of patterns recognized and manipulated by the long-term memory, embodiments of the invention provide a long-term memory where the size of a feature dimension (used to determine the similarity between different observed events) may grow dynamically as necessary, depending on the actual events observed in a sequence of video frames.

IPC Classes  ?

  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
  • G06F 15/18 - in which a program is changed according to experience gained by the computer itself during a complete run; Learning machines (adaptive control systems G05B 13/00; artificial intelligence G06N)
  • G06N 5/02 - Knowledge representation; Symbolic representation
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning

72.

Detecting anomalous events using a long-term memory in a video analysis system

      
Application Number 12336354
Grant Number 08126833
Status In Force
Filing Date 2008-12-16
First Publication Date 2010-03-11
Grant Date 2012-02-28
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Cobb, Wesley Kenneth
  • Seow, Ming-Jung
  • Xu, Gang

Abstract

Techniques are described for detecting anomalous events using a long-term memory in a video analysis system. The long-term memory may be used to store and retrieve information learned while a video analysis system observes a stream of video frames depicting a given scene. Further, the long-term memory may be configured to detect the occurrence of anomalous events, relative to observations of other events that have occurred in the scene over time. A distance measure may be used to determine a distance between an active percept (encoding an observed event depicted in the stream of video frames) and a retrieved percept (encoding a memory of previously observed events in the long-term memory). If the distance exceeds a specified threshold, the long-term memory may publish the occurrence of an anomalous event for review by users of the system.
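
The anomaly test itself can be pictured with the compact sketch below: an active percept is compared against stored percepts, and an anomaly is reported when even the nearest memory is farther than a threshold. The vector encoding, the Euclidean distance, and the threshold are assumptions made for illustration; the abstract does not fix the distance measure.

```python
# Long-term-memory distance test for anomalous events (sketch).
import numpy as np

class LongTermMemory:
    def __init__(self, anomaly_distance=1.0):
        self.percepts = []
        self.anomaly_distance = anomaly_distance

    def store(self, percept):
        self.percepts.append(np.asarray(percept, dtype=float))

    def check(self, active_percept):
        """Returns (is_anomalous, distance_to_nearest_memory)."""
        active = np.asarray(active_percept, dtype=float)
        if not self.percepts:
            return True, float("inf")
        distances = [np.linalg.norm(active - p) for p in self.percepts]
        nearest = min(distances)
        return nearest > self.anomaly_distance, nearest

if __name__ == "__main__":
    memory = LongTermMemory()
    memory.store([1.0, 0.0, 2.0])   # e.g. a routinely observed event
    memory.store([1.1, 0.1, 2.1])
    print(memory.check([1.05, 0.0, 2.0]))   # familiar event
    print(memory.check([9.0, 4.0, 0.0]))    # anomalous event -> publish alert
```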

IPC Classes  ?

  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
  • G06F 15/18 - in which a program is changed according to experience gained by the computer itself during a complete run; Learning machines (adaptive control systems G05B 13/00; artificial intelligence G06N)
  • G06N 5/02 - Knowledge representation; Symbolic representation
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning

73.

Digital image search system and method

      
Application Number 12573129
Grant Number 07831069
Status In Force
Filing Date 2009-10-04
First Publication Date 2010-01-28
Grant Date 2010-11-09
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A method and system for matching an unknown facial image of an individual with an image of an unknown twin using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. The method and system use human perception techniques to weight the feature vectors.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

74.

Method and system for measuring human response to visual stimulus based on changes in facial expression

      
Application Number 12154002
Grant Number 08462996
Status In Force
Filing Date 2008-05-19
First Publication Date 2009-11-19
Grant Date 2013-06-11
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Moon, Hankyu
  • Sharma, Rajeev
  • Jung, Namsoon

Abstract

The present invention is a method and system for measuring human emotional response to visual stimulus, based on the person's facial expressions. Given a detected and tracked human face, it is accurately localized so that the facial features are correctly identified and localized. Face and facial features are localized using the geometrically specialized learning machines. Then the emotion-sensitive features, such as the shapes of the facial features or facial wrinkles, are extracted. The facial muscle actions are estimated using a learning machine trained on the emotion-sensitive features. The instantaneous facial muscle actions are projected to a point in affect space, using the relation between the facial muscle actions and the affective state (arousal, valence, and stance). The series of estimated emotional changes renders a trajectory in affect space, which is further analyzed in relation to the temporal changes in visual stimulus, to determine the response.
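
The projection from facial muscle actions to affect space can be pictured with the toy sketch below, which maps per-frame action intensities to (arousal, valence, stance) points and stacks them into an emotion trajectory. The projection matrix and the action names are invented for illustration; the patent learns the relation between muscle actions and affective state rather than fixing it.

```python
# Toy projection of facial muscle actions into affect space (sketch).
import numpy as np

# rows: arousal, valence, stance; columns: hypothetical muscle-action intensities
# (e.g. brow raise, brow lower, lip-corner pull, lip press)
PROJECTION = np.array([
    [0.6,  0.5,  0.3,  0.2],
    [0.1, -0.7,  0.8, -0.4],
    [0.2, -0.3,  0.4, -0.6],
])

def to_affect(muscle_actions):
    return PROJECTION @ np.asarray(muscle_actions, dtype=float)

def emotion_trajectory(frames_of_actions):
    """One affect-space point per frame; the trajectory is then analyzed against
    the timing of the displayed stimulus."""
    return np.stack([to_affect(a) for a in frames_of_actions])

if __name__ == "__main__":
    frames = [
        [0.1, 0.1, 0.0, 0.1],   # neutral
        [0.2, 0.0, 0.6, 0.0],   # smile onset
        [0.4, 0.0, 0.9, 0.0],   # broad smile
    ]
    print(emotion_trajectory(frames))
```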

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

75.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 12267554
Grant Number 07668348
Status In Force
Filing Date 2008-11-07
First Publication Date 2009-05-07
Grant Date 2010-02-23
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A

Abstract

The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals or objects using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar objects or faces to the user. The system features classification of images from a variety of Internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, or the matching object, image and associated meta-data are sent back to the user. The image may be manipulated to emphasize similar characteristics between the received facial image and the matching facial image. The meta-data sent down with the image may include sponsored links and advertisements.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

76.

Context processor for video analysis system

      
Application Number 12112864
Grant Number 08200011
Status In Force
Filing Date 2008-04-30
First Publication Date 2009-04-02
Grant Date 2012-06-12
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Gottumukkal, Rajkiran Kumar
  • Saitwal, Kishor Adinath

Abstract

Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/34 - Segmentation of touching or overlapping patterns in the image field

77.

Dark scene compensation in a background-foreground module of a video analysis system

      
Application Number 12129539
Grant Number 08064695
Status In Force
Filing Date 2008-05-29
First Publication Date 2009-04-02
Grant Date 2011-11-22
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Saitwal, Kishor Adinath
  • Blythe, Bobby Ernest

Abstract

Embodiments of the present invention provide a method and a module for identifying a background of a scene depicted in an acquired stream of video frames that may be used by a video-analysis system. For each pixel or block of pixels in an acquired video frame, a comparison measure is determined. The comparison measure depends on the difference between the color values exhibited by the pixel or block of pixels in the acquired video frame and by the corresponding pixel or block of pixels in a background image. To determine the comparison measure, the resulting difference is considered in relation to a range of possible color values. If the comparison measure is above a dynamically adjusted threshold, the pixel or the block of pixels is classified as a part of the background of the scene.
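
A minimal sketch of such a comparison measure follows: the colour difference between a frame block and the background image is normalised by the range of possible colour values to give a similarity score, and blocks whose score exceeds a (here crudely adapted) per-block threshold are kept as background. The block size, threshold adaptation rule, and constants are illustrative assumptions, not the patented ones.

```python
# Range-normalised comparison measure with a dynamically adjusted threshold (sketch).
import numpy as np

def background_similarity(frame_block, background_block, value_range=255.0):
    """1.0 = identical colours, 0.0 = maximally different."""
    diff = np.abs(frame_block.astype(float) - background_block.astype(float))
    return 1.0 - diff.mean() / value_range

def classify_blocks(frame, background, threshold, block=8, adapt_rate=0.05):
    h, w = frame.shape[:2]
    labels = np.zeros((h // block, w // block), dtype=bool)   # True = background
    for by in range(h // block):
        for bx in range(w // block):
            ys = slice(by * block, (by + 1) * block)
            xs = slice(bx * block, (bx + 1) * block)
            score = background_similarity(frame[ys, xs], background[ys, xs])
            labels[by, bx] = score > threshold[by, bx]
            if labels[by, bx]:
                # crude per-block threshold adaptation toward the observed score
                threshold[by, bx] = ((1 - adapt_rate) * threshold[by, bx]
                                     + adapt_rate * (score - 0.02))
    return labels, threshold

if __name__ == "__main__":
    background = np.full((32, 32), 120.0)
    frame = background.copy()
    frame[0:8, 0:8] = 240.0                 # an object covering one block
    threshold = np.full((4, 4), 0.9)
    labels, threshold = classify_blocks(frame, background, threshold)
    print(labels.astype(int))               # 0 marks the foreground block
```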

IPC Classes  ?

  • G06K 9/34 - Segmentation of touching or overlapping patterns in the image field
  • H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

78.

Background-foreground module for video analysis system

      
Application Number 12129521
Grant Number 08094943
Status In Force
Filing Date 2008-05-29
First Publication Date 2009-04-02
Grant Date 2012-01-10
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Blythe, Bobby Ernest
  • Saitwal, Kishor Adinath
  • Yang, Tao
  • Seow, Ming-Jung

Abstract

Embodiments of the present invention provide a method and a module for identifying a background of a scene depicted in an acquired stream of video frames that may be used by a video-analysis system. For each pixel or block of pixels in an acquired video frame, a comparison measure is determined. The comparison measure depends on the difference between the color values exhibited by the pixel or block of pixels in the acquired video frame and by the corresponding pixel or block of pixels in a background image. To determine the comparison measure, the resulting difference is considered in relation to the range of possible color values. If the comparison measure is above a dynamically adjusted threshold, the pixel or block of pixels is classified as part of the background of the scene.

IPC Classes  ?

  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

79.

Identifying stale background pixels in a video analysis system

      
Application Number 12129551
Grant Number 08041116
Status In Force
Filing Date 2008-05-29
First Publication Date 2009-04-02
Grant Date 2011-10-18
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Saitwal, Kishor Adinath
  • Blythe, Bobby Ernest

Abstract

Embodiments of the present invention provide a method and a module for identifying a background of a scene depicted in an acquired stream of video frames that may be used by a video-analysis system. For each pixel or block of pixels in an acquired video frame, a comparison measure is determined. The comparison measure depends on the difference between the color values exhibited by the pixel or block of pixels in the acquired video frame and by the corresponding pixel or block of pixels in a background image. To determine the comparison measure, the resulting difference is considered in relation to the range of possible color values. If the comparison measure is above a dynamically adjusted threshold, the pixel or block of pixels is classified as part of the background of the scene.

IPC Classes  ?

  • G06K 9/34 - Segmentation of touching or overlapping patterns in the image field
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

80.

Estimator identifier component for behavioral recognition system

      
Application Number 12208526
Grant Number 08175333
Status In Force
Filing Date 2008-09-11
First Publication Date 2009-04-02
Grant Date 2012-05-08
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Gottumukkal, Rajkiran K.
  • Seow, Ming-Jung
  • Yang, Tao
  • Saitwal, Kishor Adinath

Abstract

An estimator/identifier component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The estimator/identifier component may be configured to classify an object as being one of two or more classification types, e.g., as a vehicle or a person. Once classified, the estimator/identifier may evaluate the object to determine a set of kinematic data, static data, and a current pose of the object. The output of the estimator/identifier component may include the classifications assigned to a tracked object, as well as the derived information and object attributes.
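
As a small illustration of the kind of per-object output such an estimator/identifier might emit (a class label plus kinematic data, static data, and a pose estimate), here is a hypothetical record; every field name below is an assumption, not taken from the patent.

    # Hypothetical output record for one tracked object; field names are illustrative only.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class ObjectEstimate:
        classification: str               # e.g. "vehicle" or "person"
        confidence: float                 # confidence in the assigned class
        velocity: Tuple[float, float]     # kinematic data: pixels per frame in x and y
        size: Tuple[int, int]             # static data: bounding-box width and height
        pose: str                         # current pose, e.g. "upright" or "crouching"

    estimate = ObjectEstimate("person", 0.91, velocity=(1.5, -0.2), size=(40, 110), pose="upright")
    print(estimate.classification, estimate.pose)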

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

81.

Tracker component for behavioral recognition system

      
Application Number 12208538
Grant Number 08300924
Status In Force
Filing Date 2008-09-11
First Publication Date 2009-04-02
Grant Date 2012-10-30
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Yang, Tao
  • Blythe, Bobby Ernest

Abstract

A tracker component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The behavior-recognition system may be configured to learn, identify, and recognize patterns of behavior by observing a video stream (i.e., a sequence of individual video frames). The tracker component may be configured to track objects depicted in the sequence of video frames and to generate, search, match, and update computational models of such objects.
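
A skeletal sketch of the generate/search/match/update cycle such a tracker might follow. The greedy nearest-centroid matching below is an assumption chosen for brevity, not the computational models the patent actually uses.

    # Illustrative tracking loop: match detections in each frame to existing object
    # models by nearest centroid, update matched models, and create models for the rest.
    import math

    def track(models, detections, max_distance=50.0):
        """models: list of dicts with 'id' and 'centroid'; detections: list of (x, y) points."""
        next_id = max((m["id"] for m in models), default=0) + 1
        for det in detections:
            best, best_dist = None, max_distance
            for m in models:                           # search the existing models
                d = math.dist(m["centroid"], det)
                if d < best_dist:
                    best, best_dist = m, d
            if best is not None:                       # match found: update that model
                best["centroid"] = det
            else:                                      # no match: generate a new model
                models.append({"id": next_id, "centroid": det})
                next_id += 1
        return models

    models = track([], [(10, 10), (200, 50)])          # frame 1: two new objects
    models = track(models, [(12, 11), (205, 52)])      # frame 2: both matched and updated
    print([m["id"] for m in models])                   # [1, 2]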

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

82.

Smart network camera system-on-a-chip

      
Application Number 12209736
Grant Number 08576281
Status In Force
Filing Date 2008-09-12
First Publication Date 2009-03-12
Grant Date 2013-11-05
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Hammadou, Tarik

Abstract

Aspects of a method and system for processing video data are disclosed and may include detecting, within a single chip in a programmable surveillance video camera, one or more moving objects in a raw video signal generated by the programmable surveillance video camera. One or more characteristics of the detected one or more objects may be extracted within the single chip in the programmable surveillance video camera. The extraction may be based on the raw video signal and may be performed prior to compression of the raw video data. The characteristics of the detected one or more objects may include shape, texture, color, motion presence, motion direction, sequence name, location, links, and/or alarm type. One or more textual representations of at least one of the characteristics of the detected one or more objects may be generated within the single chip in the programmable surveillance video camera.
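
The abstract's idea of emitting a textual representation of each detected object's characteristics (shape, color, motion, location, and so on) could look roughly like the following; the tag and field names are assumptions for illustration, not a description language defined by the patent.

    # Illustrative sketch: turn extracted object characteristics into a compact
    # textual description that can be sent alongside (or instead of) compressed video.
    def describe_object(obj):
        return (
            "<object sequence='{seq}' alarm='{alarm}'>"
            "<shape>{shape}</shape><color>{color}</color>"
            "<motion present='{moving}' direction='{direction}'/>"
            "<location x='{x}' y='{y}'/>"
            "</object>"
        ).format(**obj)

    print(describe_object({
        "seq": "cam01-000123", "alarm": "none",
        "shape": "rectangular", "color": "red",
        "moving": "true", "direction": "east",
        "x": 412, "y": 88,
    }))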

IPC Classes  ?

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

83.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 12138559
Grant Number 07587070
Status In Force
Filing Date 2008-06-13
First Publication Date 2009-03-05
Grant Date 2009-09-08
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Myers, Charles A
  • Shah, Alex

Abstract

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast, and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image, and associated meta-data are sent back to the user. The method and system use human perception techniques to weight the feature vectors.
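
One plausible reading of "human perception techniques to weight the feature vectors" is a weighted comparison in which perceptually salient features count for more. The weights, feature names, and distance formula below are purely illustrative assumptions, not the patent's method.

    # Illustrative weighted comparison of two facial feature vectors, where the weights
    # stand in for how strongly human observers attend to each feature. Values are made up.
    def weighted_distance(features_a, features_b, weights):
        return sum(w * abs(a - b) for a, b, w in zip(features_a, features_b, weights))

    # Hypothetical features: [eye spacing, nose width, jaw width, brow height]
    unknown   = [0.42, 0.31, 0.55, 0.18]
    candidate = [0.44, 0.29, 0.61, 0.20]
    perceptual_weights = [2.0, 1.0, 1.5, 0.5]     # eyes weighted most heavily

    print(round(weighted_distance(unknown, candidate, perceptual_weights), 3))   # 0.16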

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

84.

Cognitive model for a machine-learning engine in a video analysis system

      
Application Number 12170283
Grant Number 08189905
Status In Force
Filing Date 2008-07-09
First Publication Date 2009-01-15
Grant Date 2012-05-29
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis G.
  • Friedlander, David S.
  • Xu, Gang
  • Seow, Ming-Jung
  • Risinger, Lon W.
  • Solum, David M.
  • Yang, Tao
  • Gottumukkal, Rajkiran K.
  • Saitwal, Kishor Adinath

Abstract

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
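
To make the combination of a primitive event symbol stream and a phase space symbol stream into a single vector representation concrete, here is a toy bag-of-symbols encoding; the symbol vocabularies and the concatenation scheme are assumptions, not the patent's actual encoding.

    # Toy encoding: count symbol occurrences in each stream and concatenate the counts
    # into one fixed-length vector describing an object's observed behavior.
    from collections import Counter

    EVENT_SYMBOLS = ["appear", "move", "stop", "disappear"]    # assumed vocabulary
    PHASE_SYMBOLS = ["slow", "fast", "turning"]                # assumed vocabulary

    def to_vector(event_stream, phase_stream):
        ev, ph = Counter(event_stream), Counter(phase_stream)
        return [ev[s] for s in EVENT_SYMBOLS] + [ph[s] for s in PHASE_SYMBOLS]

    print(to_vector(["appear", "move", "move", "stop"], ["slow", "fast", "fast"]))
    # -> [1, 2, 1, 0, 1, 2, 0]   (event counts followed by phase-space counts)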

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

85.

Digital image search system and method

      
Application Number 12198887
Grant Number 07599527
Status In Force
Filing Date 2008-08-27
First Publication Date 2008-12-25
Grant Date 2009-10-06
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

A method and system for matching an unknown facial image of an individual with an image of an unknown twin using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast, and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. The method and system use human perception techniques to weight the feature vectors.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

86.

Method and system for background estimation in localization and tracking of objects in a smart video camera

      
Application Number 11748775
Grant Number 07961946
Status In Force
Filing Date 2007-05-15
First Publication Date 2008-11-20
Grant Date 2011-06-14
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Hammadou, Tarik

Abstract

Aspects of a method and system for change detection in localization and tracking of objects in a smart video camera are provided. A programmable surveillance video camera comprises processors for detecting objects in a video signal based on an object mask. The processors may generate a textual representation of the video signal by utilizing a description language to indicate characteristics of the detected objects, such as shape, texture, color, and/or motion, for example. The object mask may be based on a detection field value generated for each pixel in the video signal by comparing a first observation field and a second observation field associated with each of the pixels. The first observation field may be based on a difference between an input video signal value and an estimated background value while the second observation field may be based on a temporal difference between first observation fields.
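
A minimal per-pixel sketch of the two observation fields described above: the first field compares the input against the background estimate, and the second field is the temporal difference between successive first fields. The thresholds and the way the two fields are combined into a detection value are assumptions chosen to keep the example short.

    # Illustrative per-pixel change detection built from the two observation fields.
    def detection_field(pixel_value, background_estimate, previous_first_field,
                        spatial_threshold=15, temporal_threshold=5):
        # First observation field: difference between the input and the estimated background.
        first_field = abs(pixel_value - background_estimate)
        # Second observation field: temporal difference between successive first fields.
        second_field = abs(first_field - previous_first_field)
        # Detection value feeding the object mask: the pixel differs from the background,
        # and that difference is itself changing over time.
        detected = first_field > spatial_threshold and second_field > temporal_threshold
        return detected, first_field

    detected, first_field = detection_field(pixel_value=180, background_estimate=120,
                                            previous_first_field=10)
    print(detected)    # True: far from the background estimate, and the gap just grew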

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

87.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 12034664
Grant Number 07428321
Status In Force
Filing Date 2008-02-21
First Publication Date 2008-09-23
Grant Date 2008-09-23
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A

Abstract

The invention provides an internet-hosted system to find, compare, contrast, and identify similar characteristics among two or more individuals or objects using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar objects or faces to the user. The system features classification of images from a variety of Internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications, and databases. Once classified, the matching person's name, or the matching object, image, and associated meta-data are sent back to the user. The image may be manipulated to emphasize similar characteristics between the received facial image and the matching facial image. The meta-data sent down with the image may include sponsored links and advertisements.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

88.

Behavioral recognition system

      
Application Number 12028484
Grant Number 08131012
Status In Force
Filing Date 2008-02-08
First Publication Date 2008-08-14
Grant Date 2012-03-06
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Eaton, John Eric
  • Cobb, Wesley Kenneth
  • Urech, Dennis Gene
  • Blythe, Bobby Ernest
  • Friedlander, David Samuel
  • Gottumukkal, Rajkiran Kumar
  • Risinger, Lon William
  • Saitwal, Kishor Adinath
  • Seow, Ming-Jung
  • Solum, David Marvin
  • Xu, Gang
  • Yang, Tao

Abstract

Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track the object's motion frame-to-frame. Classes of the objects are determined, and semantic representations of the objects are generated. The semantic representations are used to determine the objects' behaviors and to learn about behaviors occurring in the environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, the normal and abnormal behaviors for any environment by analyzing movements or activities (or the absence of such) in the environment, and it identifies and predicts abnormal and suspicious behavior based on what has been learned.
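
A very small sketch of the "learn what is normal, flag what is not" idea at the end of the abstract, using a simple frequency model over semantic behavior labels. The labels and the rarity threshold are assumptions for illustration, not the learning machinery the patent describes.

    # Toy model of learned behavior: count how often each semantic label is observed
    # (e.g. "person-walking", "car-parking") and flag rarely seen labels as abnormal.
    from collections import Counter

    class BehaviorModel:
        def __init__(self, abnormal_fraction=0.05):
            self.counts = Counter()
            self.abnormal_fraction = abnormal_fraction

        def observe(self, label):
            self.counts[label] += 1

        def is_abnormal(self, label):
            total = sum(self.counts.values())
            if total == 0:
                return True                    # nothing learned yet, so everything is novel
            return self.counts[label] / total < self.abnormal_fraction

    model = BehaviorModel()
    for _ in range(100):
        model.observe("person-walking")
    model.observe("person-climbing-fence")
    print(model.is_abnormal("person-walking"))            # False
    print(model.is_abnormal("person-climbing-fence"))     # True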

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

89.

Object verification enabled network (OVEN)

      
Application Number 11999649
Grant Number 07904477
Status In Force
Filing Date 2007-12-06
First Publication Date 2008-06-19
Grant Date 2011-03-08
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Jung, Namsoon
  • Sharma, Rajeev

Abstract

The present invention is a method and system for handling a plurality of information units in an information processing system, such as a multimodal human computer interaction (HCI) system, through a verification process for the plurality of information units. The present invention converts each information unit in the plurality of information units into a verified object by augmenting the first meaning in the information unit with a second meaning, and expresses the verified objects by an object representation for each verified object. The present invention utilizes a processing structure, called a polymorphic operator, which is capable of applying a plurality of relationships among the verified objects based on a set of predefined rules in a particular application domain for governing the operation among the verified objects. The present invention is named Object Verification Enabled Network (OVEN). The OVEN provides a computational framework for an information processing system that needs to handle complex data and events, such as handling a huge amount of data in a database, correlating information pieces from multiple sources, applying contextual information to the recognition of inputs in a specific domain, processing the fusion of multiple inputs from different modalities, handling unforeseen challenges in deploying a commercially working information processing system in a real-world environment, and handling collaboration among multiple users.
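
To give a flavor of the verified-object idea (an information unit whose first meaning is augmented with a second, verified meaning) and of an operator that applies predefined rules across such objects, here is a small hypothetical sketch; the class names, rule format, and example domain are assumptions, not the patent's design.

    # Hypothetical sketch of verified objects and a rule-driven operator over them.
    from dataclasses import dataclass

    @dataclass
    class VerifiedObject:
        raw_value: str       # first meaning: what the modality reported (e.g. speech text)
        verified_as: str     # second meaning: what verification resolved it to

    def apply_rules(objects, rules):
        """rules: mapping from a pair of verified meanings to a resulting relationship."""
        results = []
        for i, a in enumerate(objects):
            for b in objects[i + 1:]:
                relation = rules.get((a.verified_as, b.verified_as))
                if relation:
                    results.append((a.raw_value, relation, b.raw_value))
        return results

    objects = [VerifiedObject("that red one", "product:cola"),
               VerifiedObject("pointing gesture", "action:select")]
    rules = {("product:cola", "action:select"): "user-selected"}
    print(apply_rules(objects, rules))    # [('that red one', 'user-selected', 'pointing gesture')]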

IPC Classes  ?

  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
  • G06F 17/30 - Information retrieval; Database structures therefor

90.

Image classification and information retrieval over wireless digital networks and the internet

      
Application Number 11534667
Grant Number 07450740
Status In Force
Filing Date 2006-09-24
First Publication Date 2007-03-29
Grant Date 2008-11-11
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor
  • Shah, Alex
  • Myers, Charles A.

Abstract

The invention provides an internet-hosted system to find, compare, contrast, and identify similar characteristics among two or more individuals or objects using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar objects or faces to the user. The system features classification of images from a variety of Internet-accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications, and databases. Once classified, the matching person's name, or the matching object, image, and associated meta-data are sent back to the user. The image may be manipulated to emphasize similar characteristics between the received facial image and the matching facial image. The meta-data sent down with the image may include sponsored links and advertisements.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04M 1/66 - Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

91.

Method and system for a programmable camera for configurable security and surveillance systems

      
Application Number 11219951
Grant Number 08508607
Status In Force
Filing Date 2005-09-06
First Publication Date 2007-03-08
Grant Date 2013-08-13
Owner AVIGILON PATENT HOLDING 1 CORPORATION (Canada)
Inventor Hammadou, Tarik

Abstract

A method and system for a programmable camera for a configurable security and surveillance system are provided. A programmable sensor agent for video surveillance may comprise a network interface, a processor, an image processor, and an image sensor. The image processor may comprise at least one configurable device. A device programming file may be received by the network interface from a system manager and may be programmed into at least one configurable device in the image processor via a JTAG interface in the processor. The processor may also verify that the programming has been completed successfully. The programmable sensor agent may also comprise a display interface. The device programming file may be selected via the system manager and/or via the display interface in the programmable sensor agent. The programmable sensor agent may also comprise a battery for backup power, a wireless processor, and/or a global positioning system (GPS) processor.
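
The provisioning flow the abstract describes (receive a device programming file, program the configurable device, then verify that programming completed successfully) might look roughly like the sketch below; the function names, the in-memory fake device, and the checksum-based verification are assumptions for illustration, not the agent's actual JTAG procedure.

    # Illustrative provisioning flow for a programmable sensor agent. The transport,
    # the programming step, and the verification scheme are stand-ins only.
    import hashlib

    def program_configurable_device(programming_file: bytes, write_to_device, read_back):
        """write_to_device / read_back are callables abstracting the device interface."""
        expected = hashlib.sha256(programming_file).hexdigest()
        write_to_device(programming_file)            # push the programming file to the device
        programmed = read_back()                     # read the programmed image back
        if hashlib.sha256(programmed).hexdigest() != expected:
            raise RuntimeError("programming verification failed")
        return True

    # Example with an in-memory fake device standing in for real hardware access.
    device_memory = {}
    ok = program_configurable_device(
        b"example bitstream",
        write_to_device=lambda data: device_memory.update(image=data),
        read_back=lambda: device_memory["image"],
    )
    print(ok)    # True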

IPC Classes  ?

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control