A terminal device includes a communication unit that communicates with a terminal device of a communication counterpart, a communication detection unit that detects a communication state of a server device, a communication determination unit that determines whether or not it is necessary to switch a connection target from a first server device to a second server device different from the first server device, a switching request unit that, in a case where it is determined that it is necessary to switch the connection target to the second server device, transmits a switching request signal for requesting the terminal device of the communication counterpart to switch a connection target to the second server device, and a communication switching unit that, in a case where a switching preparation completion signal indicating that preparation for switching a connection target to the second server device is completed is received from the terminal device of the communication counterpart, stops transmission of audio information to the first server device and switches the connection target from the first server device to the second server device.
An information processing device includes a location information obtaining unit that obtains, from each of a plurality of imaging devices each used in a mobile object to search for an object, location information indicating the current location of that imaging device; and a deciding unit that, based on the location information of each of the plurality of imaging devices, decides on the imaging device to be used in searching for the object.
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
3.
Rechargeable battery for portable wireless communication device
A control device includes a video-data acquiring unit that controls a camera used in a vehicle and acquires imaging data captured by the camera; a biometric-information detecting unit that detects biometric information of an occupant of the vehicle and a fluctuation in the biometric information; an imaging-data processing unit that generates, from the imaging data, an imaging file with which a thumbnail image is associated; a thumbnail-image generating unit that generates, when the biometric-information detecting unit detects a fluctuation in the biometric information, a thumbnail image from video of a predetermined range including before and after the point of time of detection; and a recording control unit that records the imaging file generated by the imaging-data processing unit and the thumbnail image generated by the thumbnail-image generating unit in a recording unit in association with each other.
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus, between a recording apparatus and a television camera
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/024 - Measuring pulse rate or heart rate
G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
A transmission system includes: terminal devices each configured to transmit primary information including an image and/or audio of a user in a real space and secondary information including an image and/or audio of the user in a virtual space in association with a time; and a server device configured to: acquire the primary and the secondary information from the terminal devices; set avatar information regarding the image and the audio of avatars of the users in the virtual space based on the secondary information and transmit the avatar information to the terminal devices; determine whether the avatars are in an intercommunication state based on an arrangement state of the avatars in the virtual space; and switch the avatar information of the avatars in the intercommunication state to avatar information based on the primary information and transmit the avatar information to the terminal devices for the avatars in the intercommunication state.
A virtual space control system 1 causes an information terminal device 50 to display a virtual space and to display an avatar corresponding to a user in the displayed virtual space. An information processing device 50 included in the virtual space control system 1 performs, on a display screen of a first information terminal device with which a first user has logged in to the virtual space, display for showing a second avatar corresponding to a second user at a position that is exposed without being hidden by information displayed on the display screen, when it is detected that, in the virtual space, the second avatar is in an information sharing range of a first avatar corresponding to the first user or is in an approaching state in which the probability of entering that range is high.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 19/00 - Manipulating 3D models or images for computer graphics
Smooth display of a moving image is enabled with adequate gradation maintained. A processing unit that controls a display element on the basis of image data, and a control unit that controls the processing unit are included, and the control unit obtains an amount of movement from image data in a first frame and image data in a second frame after the first frame, sets, on the basis of the amount of movement, the number of subframes for the second frame, the subframes being for display of some of the pixels included in the image data of the second frame, and causes the processing unit to control the display element such that the subframes set for the second frame are displayed in the time period for display of the second frame.
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
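A minimal sketch of the subframe-count decision described in the abstract above, assuming a simple mean-absolute-difference motion measure and a proportional mapping to the number of subframes; the helper names, the scale of 16.0, and the cap of 4 subframes are illustrative assumptions, not values from the source:

```python
import numpy as np

def motion_amount(frame1: np.ndarray, frame2: np.ndarray) -> float:
    """Mean absolute pixel difference between two frames, used as a simple motion measure."""
    return float(np.mean(np.abs(frame2.astype(np.int32) - frame1.astype(np.int32))))

def subframe_count(amount: float, max_subframes: int = 4, scale: float = 16.0) -> int:
    """Map the motion amount to a number of subframes (larger motion -> more subframes)."""
    return max(1, min(max_subframes, int(amount / scale) + 1))

# The second frame is then split into `n` subframes, each displaying a subset of the
# pixels of that frame within the frame's display period.
f1 = np.zeros((4, 4), dtype=np.uint8)
f2 = np.full((4, 4), 40, dtype=np.uint8)
n = subframe_count(motion_amount(f1, f2))
print(n)  # 3 with the assumed scale of 16.0
```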
A three-dimensional video imaging device includes a light source unit, a three-chip imaging element unit, and a processing unit. The three-chip imaging element unit includes a first prism, a reflection dichroic film, a first imaging element, a second imaging element that is a short-range time-of-flight (TOF) sensor, a third imaging element, a second prism, a half mirror, and a third prism. The processing unit includes an emission control unit, a second imaging element control unit, a third imaging element control unit, a first distance data calculation unit, a second distance data calculation unit, a measurement range determination unit, and a distance data output switching unit.
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
H04N 25/50 - Control of the SSIS exposure
11.
MACHINE LEARNING APPARATUS, MACHINE LEARNING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING MACHINE LEARNING PROGRAM FOR CONTINUALLY LEARNING CLASSIFICATION TASK THAT USES DATA OF NOVEL CLASS WITH SMALLER NUMBER OF SAMPLES THAN DATA OF BASE CLASS
In a machine learning apparatus of the present invention, a neural network outputs a base class classification and a novel class classification. A loss calculation part calculates losses in the base class and novel class classification. An updating part updates a weight based on the losses in the base class and novel class classification. The updating part updates the weight by providing the weight with a regularization term and a sum of the losses.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
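A minimal sketch of the loss combination described in the abstract above, assuming PyTorch cross-entropy losses and an L2 penalty as the regularization term; the abstract does not specify the form of the regularization, so the L2 choice and the `reg_lambda` value are assumptions:

```python
import torch
import torch.nn.functional as F

def total_loss(base_logits, base_labels, novel_logits, novel_labels, weights, reg_lambda=1e-4):
    """Sum of the base-class and novel-class classification losses plus a regularization
    term on the classification weights (an L2 penalty is assumed here)."""
    loss_base = F.cross_entropy(base_logits, base_labels)
    loss_novel = F.cross_entropy(novel_logits, novel_labels)
    reg = reg_lambda * sum(w.pow(2).sum() for w in weights)
    return loss_base + loss_novel + reg
```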
12.
IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM
A merge candidate list is generated, a merge candidate is selected from the merge candidate list, a bitstream is decoded to derive a motion vector difference, and a corrected merge candidate is derived by adding the motion vector difference to a motion vector of the selected merge candidate for a first prediction without scaling and by subtracting the motion vector difference from a motion vector of the selected merge candidate for a second prediction without scaling.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
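A minimal sketch of the merge-candidate correction described in the abstract above: the decoded motion vector difference is added to the first-prediction motion vector and subtracted from the second-prediction motion vector, with no scaling in either direction. The tuple representation and function name are illustrative.

```python
from typing import Tuple

MV = Tuple[int, int]  # motion vector as (horizontal, vertical) components

def corrected_merge_candidate(mv_l0: MV, mv_l1: MV, mvd: MV) -> Tuple[MV, MV]:
    """Add the decoded motion vector difference to the first-prediction motion vector and
    subtract it from the second-prediction motion vector, both without scaling."""
    mv_l0_corrected = (mv_l0[0] + mvd[0], mv_l0[1] + mvd[1])
    mv_l1_corrected = (mv_l1[0] - mvd[0], mv_l1[1] - mvd[1])
    return mv_l0_corrected, mv_l1_corrected

print(corrected_merge_candidate((4, -2), (-6, 8), (1, 1)))  # ((5, -1), (-7, 7))
```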
Provided are a control system and a control method which are capable of appropriately managing an event file necessary for a user. A control system (10) according to the present disclosure comprises: a calculation unit (11) that, on the basis of an image including an immovable impact object that is present on a road surface and that will cause an impact to a vehicle, calculates an impact object passage time indicating the time from the current time until the vehicle passes over the impact object; and a control unit (12) that, on the basis of the impact object passage time, either controls a recording device after the passage over the impact object so that an event record corresponding to the impact object passage time is deleted, or controls the recording device before the passage over the impact object so that no event recording process is performed.
A recording control apparatus according to an embodiment includes: a video image data acquisition unit configured to acquire video image data obtained by capturing an image of an area including a periphery of a mobile body which flies and travels on land; an event detection unit configured to detect an event that has occurred and record the video image data at a time of the detection of the event as event video image data in a recording unit in accordance with a predetermined condition; and a condition change unit configured to change the predetermined condition depending on whether the mobile body is traveling or flying.
A three-dimensional video imaging device includes an infrared light source unit, an imaging element unit, and a processing unit. The imaging element unit includes a short-range TOF sensor configured to receive a part of the infrared light, namely infrared light transmitted at a low transmittance or reflected at a low reflectance by a prism, a half mirror, another prism, and another half mirror, and a long-range TOF sensor configured to receive the remainder of the infrared light, namely infrared light transmitted at a high transmittance or reflected at a high reflectance by the half mirror. The processing unit calculates first distance data based on an electric charge accumulated in the short-range TOF sensor, and calculates second distance data based on an electric charge accumulated in the long-range TOF sensor.
H04N 13/25 - Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; Control of one sensor's characteristics by the image signals of another sensor
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
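A minimal sketch of how distance data could be derived from accumulated charge and how the output could be switched between the short-range and long-range sensors; the two-window indirect-TOF formula and the 5 m switching threshold are conventional assumptions, not values from the source:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(q1: float, q2: float, pulse_width_s: float) -> float:
    """Conventional two-window indirect-TOF estimate: the share of charge accumulated in
    the delayed window (q2) gives the fraction of the pulse width spent in flight; half
    the round-trip distance is the range."""
    return 0.5 * C * pulse_width_s * q2 / (q1 + q2)

def select_distance(d_short: float, d_long: float, short_max_m: float = 5.0) -> float:
    """Assumed switching rule: report the short-range value while it lies inside the
    short-range sensor's measurement range, otherwise the long-range value."""
    return d_short if d_short < short_max_m else d_long

print(select_distance(tof_distance(0.7, 0.3, 30e-9), tof_distance(0.4, 0.6, 200e-9)))
```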
A first acquisition unit (52) acquires a two-frame first training image of a subject captured by a first image sensor. A second acquisition unit (54) acquires a three-dimensional training motion vector derived on the basis of a two-frame second training image of a subject captured, at a timing equivalent to that of the two-frame first training image, by a second image sensor of a type different from that of the first image sensor. A learning unit (58) receives the acquired two-frame first training image as an input and performs machine learning of a model (56) using the acquired training motion vector as a correct vector.
The present invention makes it possible to discriminate the author of content. This data system (1) is provided with: an acquisition unit (131) that acquires target content for synthesizing a plurality of pieces of content (SD), which are music data, into a composite content (SE) synthesized in a hierarchical manner; a data creation unit (133) that creates unique data (20) which is voice data for identifying the author of the target content; and a data synthesis unit (135) that superimposes the unique data (20) on the target content so as to correspond to a layer (L) of the target content when the target content is synthesized into the composite content (SE).
The present invention makes it possible to, when one electronic device is selected and a program thereof is updated, automatically update programs of other electronic devices. This electronic device is provided with: a determination unit that determines whether or not a program of the electronic device has been updated; an update unit that updates the program with an update program on the basis of the result of the determination by the determination unit; and a communication unit that communicates with other electronic devices belonging to the same group as this electronic device, and with an external device. When the determination unit determines that the program has not been updated with the update program, the update unit updates the program with an update program obtained from the external device by the communication unit or received by the communication unit. When the update unit has updated the program, the communication unit transmits the update program to other electronic devices belonging to the group.
A terminal device includes a communication unit that communicates with a terminal device of a communication partner, a communication quality information generation unit that measures communication quality with a plurality of server devices and generates communication quality information indicating the communication quality, a connection destination specification unit that specifies a second server device different from a first server device that is a current connection target, a connection management unit that controls the communication unit to establish connection to the first server device, a request information transmission unit that controls, in a case of performing communication from a local terminal device to another terminal device, the communication unit to transmit, to the other terminal device, multi-session request information including connection information to the second server device, and a response information reception unit that controls the communication unit to acquire multi-session response information including connection information between the other terminal device and a third server device from the other terminal device. The connection management unit controls the communication unit to connect the local terminal device with the second server device and the third server device in a case where the multi-session response information is acquired.
A demodulation unit receives a transmission signal in one of a plurality of time slots and demodulates an audio signal from the transmission signal. An audio output unit outputs audio of the demodulated audio signal. A signal detection unit sequentially switches the time slot in which a transmission signal is received when a transmission switch for switching from a reception state to a transmission state of a transmission signal is operated, and detects, for each time slot, an empty time slot in which no transmission signal exists. A switching control unit suspends switching from the reception state to the transmission state when no empty time slot exists among the time slots. The switching control unit returns the time slot in which a transmission signal is received to the time slot used before the operation of the transmission switch.
H04W 72/0446 - Resources in time domain, e.g. slots or frames
H04W 72/54 - Allocation or scheduling criteria for wireless resources based on quality criteria
H04W 76/45 - Connection management for selective distribution or broadcast for Push-to-Talk [PTT] or Push-to-Talk over cellular [PoC] services
In the present invention, an RGB image acquisition unit (30) acquires an RGB image of an object. A ranging image acquisition unit (32) acquires, on the same optical axis, a ranging image of the object. An RGB-image movement detection unit (40) detects, with respect to each pixel, movement in the RGB image by acquiring the difference between the current frame of the RGB image and the previous frame thereof. A ranging-image movement detection unit (42) detects, with respect to each pixel, movement in the ranging image by acquiring the difference between the current frame of the ranging image and the previous frame thereof. An interference determination unit (50) determines, per pixel, the presence or absence of infrared interference at the time the ranging image was acquired, on the basis of the results regarding the presence or absence of movement in the RGB image and the presence or absence of movement in the ranging image.
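A minimal sketch of the per-pixel decision described in the abstract above, assuming single-channel frames and a simple rule that flags infrared interference where the ranging image changes although the RGB image does not; the thresholds and the rule itself are illustrative assumptions:

```python
import numpy as np

def movement_mask(prev: np.ndarray, curr: np.ndarray, threshold: float) -> np.ndarray:
    """Per-pixel movement: True where the absolute frame difference exceeds the threshold."""
    return np.abs(curr.astype(np.float64) - prev.astype(np.float64)) > threshold

def interference_mask(rgb_prev, rgb_curr, rng_prev, rng_curr,
                      rgb_threshold: float = 10.0, rng_threshold: float = 50.0) -> np.ndarray:
    """Assumed rule: flag infrared interference where the ranging image moves
    although the RGB image does not."""
    moved_rgb = movement_mask(rgb_prev, rgb_curr, rgb_threshold)
    moved_rng = movement_mask(rng_prev, rng_curr, rng_threshold)
    return moved_rng & ~moved_rgb
```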
Provided is a teacher dataset generation system comprising: a shape extraction device that acquires a distance measurement image to be learned and extracts a specific shape on the basis of the distance measurement image; and a teacher dataset generation device that associates an RGB image to be learned with the specific shape, thereby generating a teacher dataset.
A terminal apparatus includes a communication interface, a controller, and a storage. The communication interface executes time-division wireless communication with other terminal apparatuses. The storage stores a communication signal transmitted from the communication interface or a communication signal received by the communication interface. The controller controls transmission in the communication interface. The controller calculates a transmission time based on information included in the communication signal. The communication interface transmits the communication signal stored in the storage at the transmission time calculated by the controller.
H04W 72/0446 - Resources in time domain, e.g. slots or frames
H04W 64/00 - Locating users or terminals for network management purposes, e.g. mobility management
H04W 92/18 - Interfaces between hierarchically similar devices between terminal devices
24.
MACHINE LEARNING APPARATUS, MACHINE LEARNING METHOD, AND MACHINE LEARNING PROGRAM FOR LEARNING DATA OF A NOVEL CLASS WITH A SMALLER NUMBER OF SAMPLES THAN DATA OF A BASE CLASS BY CONTINUAL LEARNING
In a machine learning apparatus that learns data of a novel class with a smaller number of samples than data of a base class by continual learning, a feature extraction unit is pre-trained using the data of the base class. The feature extraction unit receives an input of the data of the novel class to output a feature vector of the data of the novel class. A weight calculation unit calculates a classification weight of the novel class based on the feature vector. A graph model receives an input of the classification weight of the novel class and classification weights of all classes previously learned to output reconstructed classification weights. The graph model is trained by pseudo continual learning using alternative data of the base class to learn a dependency between the base class and the novel class by meta learning.
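A minimal sketch of the weight calculation step described in the abstract above, assuming the common prototype approach in which the classification weight of a novel class is the normalised mean of its few support feature vectors; the source does not state this exact formula, so it is an illustrative assumption:

```python
import torch

def novel_class_weight(support_features: torch.Tensor) -> torch.Tensor:
    """Classification weight of a novel class as the L2-normalised mean (prototype)
    of its few support feature vectors."""
    prototype = support_features.mean(dim=0)
    return prototype / prototype.norm().clamp_min(1e-12)

# Five support samples with 64-dimensional features yield one 64-dimensional weight.
print(novel_class_weight(torch.randn(5, 64)).shape)  # torch.Size([64])
```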
Included are a stimulation unit that applies a plurality of stimulations having a first relationship, a biological information acquisition unit that acquires a brain wave level for each of the plurality of stimulations applied by the stimulation unit, and a personal information setting unit that sets, as a personal information determination value, first personal information associating the plurality of stimulations having the first relationship applied by the stimulation unit with a difference value of the plurality of brain wave levels acquired by the biological information acquisition unit at that time.
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
26.
MACHINE LEARNING APPARATUS, MACHINE LEARNING METHOD, AND MACHINE LEARNING PROGRAM FOR LEARNING DATA OF A NOVEL CLASS WITH A SMALLER NUMBER OF SAMPLES THAN DATA OF A BASE CLASS BY CONTINUAL LEARNING
In a machine learning apparatus that learns data of a novel class with a smaller number of samples than data of a base class by continual learning, a feature extraction unit is pre-trained using first data and second data of the base class. The feature extraction unit receives an input of the data of the novel class to output a feature vector of the data of the novel class. A weight calculation unit calculates a classification weight of the novel class based on the feature vector. A graph model receives an input of the classification weight calculated and classification weights of all classes previously learned and outputs reconstructed classification weights. The graph model is trained by pseudo continual learning using third data of the base class. The first data, the second data, and the third data are different data.
A communication terminal apparatus includes a first communication unit that performs communication by a first communication system, a second communication unit that performs communication by a second communication system, a storage unit that stores therein pieces of channel information on channels, a comparison unit that compares the pieces of channel information, and an operation unit that is able to select a piece of channel information displayed on a display unit. The second communication unit receives first channel information on a first channel that is used by a different communication terminal apparatus by the first communication system. The storage unit stores therein the first channel information that is received by the second communication unit. The comparison unit determines whether or not the stored first channel information and second channel information on a channel that is used by the subject communication terminal apparatus via the first communication unit coincide with each other. The display unit displays the first channel information when the comparison result obtained by the comparison unit does not indicate coincidence. The first communication unit changes to the first channel and performs communication when the first channel that is displayed on the display unit is selected.
H04W 4/90 - Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems
H04W 4/029 - Location-based management or tracking services
H04W 4/33 - Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
A fixing structure according to the present disclosure includes a main body including a cylindrical columnar part, and a fixing part including a tubular recess into which the columnar part is fit. Fitting grooves are formed on an inner peripheral surface of the recess at predetermined angular intervals. Positioning protrusions for positioning, which can be fit into the fitting grooves, and a plurality of first regulating protrusions whose height is lower than that of the positioning protrusions are formed on an outer peripheral surface of the columnar part. The first regulating protrusions are configured to regulate a position of the columnar part relative to the recess in such a way that the positioning protrusions are fit into the predetermined fitting groove when the columnar part is fit into the recess.
Each of a plurality of cameras is arranged to face the inside of a shooting area. An image storage unit stores image data generated by each camera. A composite image generator generates composite image data corresponding to image data obtained by shooting a subject from a position at a predetermined angle in any direction from a center point, based on image data generated by at least two cameras adjacent in a circumferential direction, with a predetermined direction from the center point taken as an angle of 0 degrees. A sales controller sells image data stored in the image storage unit or composite image data generated by the composite image generator as content when a purchaser of content selects an angle in any direction from the center point to purchase the content.
A display device includes: a target object extraction unit configured to extract a target object that is included in a main image captured by an image capturing unit; a first object generation unit configured to generate a first object that is an image obtained by compensating for the target object based on the target object; a superimposed position setting unit configured to set a superimposed position that is a position at which the first object is displayed in the main image; a display object generation unit configured to set a display mode of the first object based on a position of another object that is superimposed onto the target object included in the main image and based on the superimposed position, and to generate a display object; and a display controller configured to cause the display object to be displayed at the superimposed position.
A distance image acquisition unit 22 acquires distance images from a distance measurement sensor unit 12 at a first frame rate. A luminance change image acquisition unit 23 acquires luminance change images from an EVS unit 13 at a second frame rate higher than the first frame rate. An interpolated image generation unit 25 generates an interpolated distance image between two temporally adjacent distance images on the basis of motion information obtained from a luminance change image in the same time slot.
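A minimal sketch of the interpolation step described in the abstract above, assuming a single global motion estimate obtained from the event (luminance-change) stream and a simple shift-and-blend scheme; the real system presumably uses denser motion information, so the function below is only illustrative:

```python
import numpy as np

def interpolate_distance(d0: np.ndarray, d1: np.ndarray, t: float,
                         motion_px: tuple) -> np.ndarray:
    """Shift the earlier distance image by the fraction `t` of the global motion
    estimated from the event stream, then blend with the later image."""
    dy = int(round(motion_px[0] * t))
    dx = int(round(motion_px[1] * t))
    shifted = np.roll(d0, shift=(dy, dx), axis=(0, 1))
    return (1.0 - t) * shifted + t * d1
```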
There is provided a technique that includes a triangle merging candidate list constructor structured to construct a triangle merging candidate list including spatial merging candidates, a first triangle merging candidate selector structured to select, from the triangle merging candidate list, a first triangle merging candidate that is uni-prediction, and a second triangle merging candidate selector structured to select, from the triangle merging candidate list, a second triangle merging candidate that is uni-prediction, in which in a region where motion compensation by weighted averaging by the first triangle merging candidate and the second triangle merging candidate is performed, uni-prediction motion information of one of the first triangle merging candidate or the second triangle merging candidate is saved.
The present invention prevents a wiper rubber from sticking to a window, while utilizing a wiper arm drive motor. This wiper device is provided with: a wiper arm (11A) that can be displaced so as to cause a wiper rubber (11C) to contact or separate from a window; a link mechanism (12) that oscillates the wiper arm; a drive motor (13) that applies rotational force to the link mechanism; a rotating body (14) that is provided so as to be rotatable by the drive motor, and that has two fulcrum parts (14A, 14B) having different distances from the center of the rotation (rotating shaft (13A)) and a communication part (14C) providing communication between the fulcrum parts; a moving member (15) that is connected to the link mechanism and is provided so as to be movable along the communication part, the moving member (15) being disposed at one fulcrum part according to forward rotation of the rotating body and at the other fulcrum part according to reverse rotation of the rotating body; and a variable part that, as the moving member moves from one fulcrum part to the other fulcrum part, displaces the wiper arm, which oscillates via the link mechanism, to separate the wiper rubber from the window.
A feature extraction unit extracts a feature of input data and generates a feature map. A prototype generation unit receives the feature map and outputs a prototype of a feature of a class. A base class classification unit receives the feature map of the input data and classifies the input data into the base class based on a weight of base class classification. A novel class classification unit receives the feature map of the input data and classifies the input data into the novel class based on a weight of novel class classification. A federated classification unit receives the prototype and the feature map of the input data and classifies the input data into classes based on a weight of federated classification derived by federating the weight of base class classification, adjusted based on a metamodel, and the weight of novel class classification.
A notification system includes a terminal information acquisition unit configured to acquire position information of a parent terminal (see FIG. 2) and position information of a child terminal (see FIG. 3) that performs communication with the parent terminal, and to acquire a positional relationship between the parent terminal and the child terminal, a detection unit configured to detect that the positional relationship between the parent terminal and the child terminal satisfies a condition that is set in advance, a sound acquisition unit configured to acquire a sound of surroundings of the child terminal, in a case where the detection unit detects that the positional relationship between the parent terminal and the child terminal satisfies the condition that is set in advance, and a sound information output unit configured to output, from the parent terminal, information that is based on the sound acquired by the sound acquisition unit.
There is provided a technique that includes a triangle merging candidate list constructor structured to construct a triangle merging candidate list including spatial merging candidates, a first triangle merging candidate selector structured to select, from the triangle merging candidate list, a first triangle merging candidate that is uni-prediction, and a second triangle merging candidate selector structured to select, from the triangle merging candidate list, a second triangle merging candidate that is uni-prediction, in which in a region where motion compensation by weighted averaging by the first triangle merging candidate and the second triangle merging candidate is performed, uni-prediction motion information of one of the first triangle merging candidate or the second triangle merging candidate is saved.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or precision
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/51 - Motion estimation or motion compensation
A person search apparatus acquires person identification information adapted to identify each person detected in an image captured by an imaging apparatus, executes a predetermined image process on the captured image so that the person shown in the captured image is not identified, and records the image subjected to the image process and the person identification information corresponding to the image. The person search apparatus acquires searched person information that is information related to a person to be searched and searches for a person indicated by the searched person information from persons indicated by the person identification information recorded in advance. The person search apparatus presents information indicating that the person indicated by the searched person information is shown when the person indicated by the searched person information is extracted from the persons indicated by the person identification information.
A portable power supply includes a secondary battery and a housing. The housing includes a lower housing that accommodates and supports the secondary battery and has a box shape with an open upper surface, and a lid housing that is arranged to cover the lower housing from above. The housing includes handle parts on opposing edge parts of an upper edge part thereof so as to be portable by hand. Each of the handle parts includes a base part made from a metal, as a core material. The lid housing, the lower housing, and the base part of each of the handle parts are held through screwing of a male screw bolt and a female screw bolt which are inserted in the housing in a vertical direction of the housing.
A rear imaging unit captures an image of a scene behind a vehicle. An in-vehicle imaging unit captures an image of a rear seat in the vehicle. An electronic mirror display unit displays a rear image of the vehicle captured by the rear imaging unit. An image recognition unit recognizes a facial expression of a person or an animal in the rear seat in the image captured by the in-vehicle imaging unit. A display control unit superimposes an image representing the facial expression on the rear image and displays the resultant image on the electronic mirror display unit.
B60R 1/29 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area inside the vehicle, e.g. for viewing passengers or cargo
B60R 1/26 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view to the rear of the vehicle
G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
A control device serving as a display control device includes a positional information acquisition unit; a determination unit that, based on positional information and speed information, determines whether a time required to travel from the current position of a vehicle to a guide point where the next leading guide is made on a route to a destination that the vehicle travels is equal to or more than a threshold time, or whether the current position of the vehicle is within an area of a given distance from the guide point where the next leading guide is made; a display controller that changes a display mode of a display screen based on a result of the determination; and a line-of-sight operation receiver that receives an operation performed on the display screen using a line of sight of a person on board. The display mode includes a first mode in which music information on music that is being reproduced in the vehicle is displayed and a second mode that is different from the first mode and in which information other than the music information is displayed. The line-of-sight operation receiver receives, in the first mode, a saving operation of saving information on the music that is being reproduced, the operation being performed using a line of sight, and causes a storage unit to store the information on the music.
B60K 35/28 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the purpose of the output information, e.g. for attracting the attention of the driver
B60K 35/29 - Instruments characterised by the way in which information is handled, e.g. showing information on plural displays or prioritising information according to driving conditions
There is provided a technique that includes a triangle merging candidate list constructor structured to construct a triangle merging candidate list including spatial merging candidates, a first triangle merging candidate selector structured to select, from the triangle merging candidate list, a first triangle merging candidate that is uni-prediction, and a second triangle merging candidate selector structured to select, from the triangle merging candidate list, a second triangle merging candidate that is uni-prediction, in which in a region where motion compensation by weighted averaging by the first triangle merging candidate and the second triangle merging candidate is performed, uni-prediction motion information of one of the first triangle merging candidate or the second triangle merging candidate is saved.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or precision
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/51 - Motion estimation or motion compensation
There is provided a technique that includes a merging candidate list constructor that constructs a merging candidate list including spatial merging candidates, and a triangle merging candidate selector that selects, from the merging candidate list, a first triangle merging candidate that is uni-prediction and a second triangle merging candidate that is uni-prediction, in which the triangle merging candidate selector derives a uni-prediction motion information candidate having a same priority in the first triangle merging candidate and the second triangle merging candidate.
H04N 19/109 - Selection of the coding mode or of the prediction mode among a plurality of temporal predictive coding modes
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
An electronic mechanism includes: an electronic component part including a microphone and a substrate; a casing accommodating the electronic component part therein; a through hole formed in the casing and placed opposite the microphone; a first groove part connected to the through hole; a second groove part of which one end part is connected to the first groove part; and a third groove part that is formed in a wall part positioned adjacent to a wall part of the casing and is connected to the first groove part. The first groove part has a sloped face formed such that its cross-sectional area increases as the distance from the through hole increases.
A voice command acceptance apparatus includes a voice command acceptance unit that accepts a voice command, a detection unit that acquires information on a language that is used by a person who speaks a voice command, and an execution control unit that, when the voice command acceptance unit accepts a voice command, executes a function with respect to the accepted voice command. When it is determined that the language that is used by the person is a language that is usable as the voice command, the voice command acceptance unit accepts a voice command if a recognition rate of the voice command that is acquired by the voice command acceptance unit is equal to or larger than a first threshold, and when it is determined that the language that is used by the person is not the language that is usable as the voice command, the voice command acceptance unit accepts a voice command if the recognition rate of the voice command that is acquired by the voice command acceptance unit is equal to or larger than a second threshold that is smaller than the first threshold.
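A minimal sketch of the acceptance rule described above; the threshold values of 0.8 and 0.6 are placeholders, and only their ordering (the second threshold being smaller than the first) comes from the abstract:

```python
def accept_voice_command(recognition_rate: float, language_usable: bool,
                         first_threshold: float = 0.8, second_threshold: float = 0.6) -> bool:
    """Accept against the higher first threshold when the speaker's language is usable
    as a voice command, otherwise against the lower second threshold."""
    threshold = first_threshold if language_usable else second_threshold
    return recognition_rate >= threshold

print(accept_voice_command(0.7, language_usable=True))   # False
print(accept_voice_command(0.7, language_usable=False))  # True
```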
There is provided a technique that includes a merging candidate list constructor that constructs a merging candidate list including spatial merging candidates, and a triangle merging candidate selector that selects, from the merging candidate list, a first triangle merging candidate that is uni-prediction and a second triangle merging candidate that is uni-prediction, in which the triangle merging candidate selector derives a uni-prediction motion information candidate having a same priority in the first triangle merging candidate and the second triangle merging candidate.
H04N 19/109 - Selection of the coding mode or of the prediction mode among a plurality of temporal predictive coding modes
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
There is provided a technique that includes a merging candidate list constructor that constructs a merging candidate list including spatial merging candidates, and a triangle merging candidate selector that selects, from the merging candidate list, a first triangle merging candidate that is uni-prediction and a second triangle merging candidate that is uni-prediction, in which the triangle merging candidate selector derives a uni-prediction motion information candidate having a same priority in the first triangle merging candidate and the second triangle merging candidate.
H04N 19/109 - Selection of the coding mode or of the prediction mode among a plurality of temporal predictive coding modes
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the unit of coding, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
A hearing apparatus can adjust the volume level of an output sound by using a plurality of volume steps. The hearing apparatus includes an acquisition unit configured to acquire a hearing level of a user, a setting unit configured to set a volume curve representing a relationship between the plurality of volume steps and the volume level according to the hearing level, and a volume adjusting unit configured to adjust the volume level based on the volume curve. The volume curve has a change point at a predetermined position and is defined in such a manner that inclinations of the volume curve before and after the change point are different from each other.
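A minimal sketch of a volume curve with a change point at which the slope changes, as described above; all step counts and level values are illustrative placeholders:

```python
def volume_level(step: int, max_step: int = 20, change_step: int = 10,
                 change_level: float = 0.3, max_level: float = 1.0) -> float:
    """Piecewise-linear volume curve: a gentle slope up to the change point and a
    steeper slope after it, so the inclinations before and after the point differ."""
    if step <= change_step:
        return change_level * step / change_step
    return change_level + (max_level - change_level) * (step - change_step) / (max_step - change_step)

print(volume_level(5), volume_level(10), volume_level(15))  # 0.15 0.3 0.65
```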
An object is to achieve stable waterproof capability and assembly guarantee for sealing. An assembly structure of a waterproof case is a structure in which bosses, each provided on a different one of separate first and second cases, are fixed by a screw while an annular sealing body is sandwiched at a part where the bosses abut against each other. The assembly structure includes: an annular groove which is provided so as to surround the screw on an abutting face of the boss of the first case and in which the sealing body is arranged; a plurality of branch grooves being contiguous with the annular groove and extending outside the annular shape thereof; tongue pieces provided so as to extend outside an annular shape of the sealing body and inserted into the branch grooves; and a pressing member attached, as a separate element, to the first case and pressing the tongue pieces inside the branch grooves.
In the present invention, an image feature amount output unit (10) is pre-trained from sentences and images, receives an image as an input, and outputs an image feature amount. An image prototype generation unit (20) calculates the image feature amount for each class, and outputs an image prototype of each class. A sentence feature amount output unit (50) is pre-trained from sentences and images, receives, as an input, a sentence that describes the class, and outputs a sentence feature amount. A similarity degree calculation unit (30) holds the image prototype of a basic class as the weight of the basic class, holds the sentence feature amount of an additional class as the weight of the additional class, receives, as an input, the image feature amount outputted from the image feature amount output unit (10), and calculates the similarity degree. A classification unit (40) receives the similarity degree as an input, and determines the classification of the image.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
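A minimal sketch of the similarity-based classification described in the abstract above, assuming cosine similarity between the image feature and the per-class weights (image prototypes for basic classes, sentence features for additional classes); the dimensions and class counts are placeholders:

```python
import torch
import torch.nn.functional as F

def classify(image_feature: torch.Tensor, class_weights: torch.Tensor) -> int:
    """Cosine similarity between one image feature and every class weight; the index
    of the most similar class is returned as the classification result."""
    similarities = F.cosine_similarity(image_feature.unsqueeze(0), class_weights, dim=1)
    return int(similarities.argmax())

# e.g. 4 basic-class image prototypes stacked with 2 additional-class sentence features
class_weights = torch.randn(6, 512)
print(classify(torch.randn(512), class_weights))
```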
A providing apparatus includes a real space information acquisition unit that acquires position information on a real object present in a real space in which a user is present; a virtual space information acquisition unit that acquires position information on a virtual object present in a virtual space; a user behavior determination unit that determines whether the user has performed a predetermined behavior for the real object; and a virtual space providing unit that makes the position information on the real space and the position information on the virtual space consistent, to generate a transition video indicating a state between the real space and the virtual space when the user behavior determination unit determines that the user has performed the predetermined behavior for the real object, and to provide a transition space including the transition video before providing the virtual space to the user.
A sensory transmission system includes a sending device that encrypts brain activity information, which is based on the brain activity of a test subject, using key information, and sends the brain activity information in the encrypted form; a receiving device that receives the brain activity information sent from the sending device, and decrypts the received brain activity information using the key information; and a delivery device that delivers the key information to the sending device and the receiving device using quantum entanglement.
A space sharing system 1 includes a first imaging device 11A and a first display device 17A, and a second imaging device 11B and a second display device 17B, arranged in different spaces. The space sharing system 1 comprises: an information acquisition unit 21 for acquiring video and voice from the first imaging device 11A and the second imaging device 11B; an output control unit 27 for outputting the video and voice acquired from the first imaging device 11A to the second display device 17B, and outputting the video and voice acquired from the second imaging device 11B to the first display device 17A; an operation reception unit 22 for receiving an operation on the display screen of the second display device 17B; and an identification unit 23 for identifying an information terminal device corresponding to the operation position on the display screen. The information acquisition unit 21 acquires the video and voice from a third imaging device 31 included in the identified information terminal device, and the output control unit 27 outputs the video and voice acquired from the third imaging device 31 to the second display device 17B.
A voice command acceptance apparatus includes a voice command acceptance unit that accepts a voice command, a detection unit that detects biological information on a person who speaks the voice command, and an execution control unit that, when the voice command acceptance unit accepts a voice command, executes a function with respect to the accepted voice command. When the detection unit determines that the biological information on the person indicates a calm state, the voice command acceptance unit accepts a voice command if a recognition rate of the voice command that is acquired by the voice command acceptance unit is equal to or larger than a first threshold, and when the detection unit determines that the biological information on the person indicates other than the calm state, the voice command acceptance unit accepts a voice command if the recognition rate of the voice command that is acquired by the voice command acceptance unit is equal to or larger than a second threshold that is smaller than the first threshold.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
B60K 35/10 - Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
B60W 40/08 - Estimation or calculation of operating parameters for road vehicle drive control systems not related to the control of a particular sub-unit, related to drivers or passengers
55.
IMAGE GENERATION CONTROL DEVICE AND OPTICAL SHAPING DEVICE
Provided are an image generation control device and an optical shaping device capable of accurately forming a shaped object while enhancing adhesion between cured layers. An image generation control device includes: a data holding unit that holds data on a cross-sectional shape corresponding to each cured layer of a shaped object; an analysis unit that analyzes the number of layers including a photocurable resin layer irradiated with light and cured layers consecutively laminated in a thickness direction of the photocurable resin layer for each region obtained by dividing a cross-sectional shape of the photocurable resin layer; and an image signal generation unit that generates an image signal of light having luminance different for each region according to the analyzed number of layers.
B29C 64/386 - Data acquisition or data processing for additive manufacturing
B29C 64/129 - Additive manufacturing processes using only liquids or viscous materials, e.g. depositing a continuous bead of viscous material using layers of liquid which are selectively solidified characterised by the energy source therefor, e.g. by global irradiation combined with a mask
B29C 64/255 - Enclosures for the building material, e.g. powder containers
Provided are a wireless communication system, a wireless communication method, and a program capable of allowing a wireless terminal to make outgoing and incoming calls even if congestion occurs. The wireless communication system according to the present disclosure is a wireless communication system that includes a base station, a management device, and a wireless terminal. The base station includes a determination unit that determines that the base station is in a congestion state when the number of wireless terminals each having sent a location registration request that is received by the base station within a predetermined period of time exceeds a predetermined number, and a notification unit that transmits, when the determination unit determines that the base station is in the congestion state, a congestion occurrence notification to the wireless terminal and the management device. The wireless terminal includes a stopping unit that stops the location registration request when the wireless terminal receives the congestion occurrence notification, and that sets the wireless terminal to be within a service range in which outgoing and incoming calls are available.
H04W 28/02 - Gestion du trafic, p. ex. régulation de flux ou d'encombrement
H04W 60/04 - Rattachement à un réseau, p. ex. enregistrementSuppression du rattachement à un réseau, p. ex. annulation de l'enregistrement utilisant des événements déclenchés
57.
INTRACEREBRAL INFORMATION RECOGNITION DEVICE, INTRACEREBRAL INFORMATION RECOGNITION METHOD, AND STORAGE MEDIUM
An intracerebral information recognition device includes: a detection unit that detects intracerebral information on a subject; and a control unit that controls a stimulation provision unit such that, in a case where a result of detection of the intracerebral information includes unconscious thought information generated prior to the point in time of a judgment that the subject makes consciously, a stimulation is provided at a point in time prior to the point in time of the judgment.
This teacher data set generating device comprises: an image acquiring unit that acquires a non-polarized image of an object to be learned; a flaw position information acquiring unit that acquires position information of a flaw in the object to be learned; and a teacher data set generating unit that generates a teacher data set by associating the non-polarized image with the flaw position information.
G06V 10/774 - Génération d'ensembles de motifs de formationTraitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiquesDispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]Séparation aveugle de source méthodes de Bootstrap, p. ex. "bagging” ou “boosting”
59.
CORRECTION SYSTEM, TERMINAL APPARATUS, PROGRAM TO CORRECT OUTPUT OF THREE-AXIS ANGULAR SPEED SENSOR AND THREE-AXIS ACCELERATION SENSOR
A server searches for a first initial value, a second initial value, a third initial value, and a fourth initial value. A terminal apparatus searches for an offset of a three-axis angular speed sensor based on the first initial value, searches for a sensitivity coefficient of the three-axis angular speed sensor based on the second initial value, searches for an offset of a three-axis acceleration sensor based on the third initial value, and searches for the sensitivity coefficient of the three-axis acceleration sensor based on the fourth initial value. The terminal apparatus derives an angular speed based on the offset of the three-axis angular speed sensor, the sensitivity coefficient of the three-axis angular speed sensor, the offset of the three-axis acceleration sensor, and the sensitivity coefficient of the three-axis acceleration sensor.
G01S 19/45 - Détermination de position en combinant les mesures des signaux provenant du système de positionnement satellitaire à radiophares avec une mesure supplémentaire
G01S 19/40 - Correction de position, de vitesse ou d'attitude
60.
NOTIFICATION CONTROL APPARATUS AND NOTIFICATION CONTROL METHOD FOR VEHICLES
A notification control apparatus detects, based on an image capturing a scene outside a vehicle, a person in the vicinity of the vehicle. The notification control apparatus detects a behavior of the detected person. When it is determined, based on the detected behavior, that it is necessary to notify the person from the vehicle, the notification control apparatus outputs a notification to the person.
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p. ex. véhicules ou piétonsReconnaissance des objets de la circulation, p. ex. signalisation routière, feux de signalisation ou routes
61.
IMAGING APPARATUS, IMAGING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM HAVING PROGRAM
A pan-tilt drive unit executes at least one of a pan operation or a tilt operation of the imaging unit. A drive control unit controls the pan-tilt drive unit. A detection unit detects an animal in a video shot by the imaging unit. A determination unit determines whether the animal detected by the detection unit is alert to the imaging apparatus. When the determination unit determines that the animal is alert to the imaging apparatus while the pan-tilt drive unit is executing a pan operation or a tilt operation in accordance with a movement of the animal, the drive control unit suspends the pan operation or the tilt operation executed by the pan-tilt drive unit.
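The drive control described above reduces to a suspend/resume decision around the tracking loop. A minimal sketch follows, assuming the detection and alertness estimation already exist as callables; only the decision from the abstract is shown.

```python
# Illustrative sketch: suspend pan/tilt tracking while the animal is alert.
from typing import Callable

def update_pan_tilt(animal_detected: bool, animal_alert: bool,
                    follow_animal: Callable[[], None],
                    suspend_drive: Callable[[], None]) -> None:
    if not animal_detected:
        return                  # nothing to track
    if animal_alert:
        suspend_drive()         # stop panning/tilting so as not to alarm the animal
    else:
        follow_animal()         # continue the pan/tilt operation tracking the animal
```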
H04N 23/61 - Commande des caméras ou des modules de caméras en fonction des objets reconnus
H04N 23/69 - Commande de moyens permettant de modifier l'angle du champ de vision, p. ex. des objectifs de zoom optique ou un zoom électronique
H04N 23/695 - Commande de la direction de la caméra pour modifier le champ de vision, p. ex. par un panoramique, une inclinaison ou en fonction du suivi des objets
62.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
An information processing apparatus is provided that includes: an acquisition unit configured to acquire an image including a specific object as a subject; and a generation unit configured to generate a three-dimensional model of the specific object, based on the image acquired by the acquisition unit, information indicating a shape according to the specific object, and an angle at which a surface of each portion of the specific object is captured.
An avatar control method includes generating each avatar by drawing an avatar corresponding to each user. The avatar control method further includes setting such that, when the number of generated avatars exceeds a predetermined number, the motions of some users, selected based on the number of other avatars looking at the avatar in question, are not reflected on their avatars.
A map generation device includes: an information acquisition unit that acquires positional information on a specific vehicle and positional information on a surrounding vehicle positioned around the specific vehicle from on-board devices arranged on the specific vehicle and the surrounding vehicle, respectively; and a map integration unit that integrates information on a surrounding object acquired from an on-board device determined on the basis of degrees of agreement between the positional information on the specific vehicle detected by the on-board device mounted in the specific vehicle and the positional information on the specific vehicle detected by the on-board devices mounted in the surrounding vehicles.
A block partitioner includes a quad splitter structured to partition a target block obtained by recursive partitioning in half in both a horizontal direction and a vertical direction to generate four blocks, and a binary/ternary splitter structured to partition the target block obtained by recursive partitioning into two or three in the horizontal direction or the vertical direction to generate two or three blocks, and the binary/ternary splitter disallows partitioning of the target block in the horizontal direction when partitioning of the target block in the horizontal direction causes the target block obtained by partitioning to be located beyond a right side of a picture boundary, and disallows partitioning of the target block in the vertical direction when partitioning of the target block in the vertical direction causes the target block obtained by partitioning to be located beyond a lower side of the picture boundary.
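The picture-boundary rule above can be expressed as two predicates on the target block's position. The sketch below is illustrative only; it assumes that a "horizontal direction" split keeps the full block width (so a block crossing the right boundary cannot usefully be split that way), and the block representation and names are hypothetical.

```python
# Illustrative sketch: disallow splits that leave every resulting sub-block
# still extending beyond the picture boundary.
from dataclasses import dataclass

@dataclass
class Block:
    x: int        # left position in the picture
    y: int        # top position in the picture
    width: int
    height: int

def horizontal_split_allowed(block: Block, picture_width: int) -> bool:
    """Disallowed when the block extends beyond the right picture boundary."""
    return block.x + block.width <= picture_width

def vertical_split_allowed(block: Block, picture_height: int) -> bool:
    """Disallowed when the block extends beyond the lower picture boundary."""
    return block.y + block.height <= picture_height

picture_w, picture_h = 1920, 1080
b = Block(x=1792, y=0, width=256, height=256)   # extends 128 px past the right edge
print(horizontal_split_allowed(b, picture_w))    # False
print(vertical_split_allowed(b, picture_h))      # True
```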
H04N 19/119 - Aspects de subdivision adaptative, p. ex. subdivision d’une image en blocs de codage rectangulaires ou non
H04N 19/107 - Sélection du mode de codage ou du mode de prédiction entre codage prédictif spatial et temporel, p. ex. rafraîchissement d’image
H04N 19/167 - Position dans une image vidéo, p. ex. région d'intérêt [ROI]
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
66.
PICTURE DECODING DEVICE, PICTURE DECODING METHOD, AND PICTURE DECODING PROGRAM
A block partitioner includes a quad splitter structured to partition a target block obtained by recursive partitioning in half in both a horizontal direction and a vertical direction to generate four blocks, and a binary/ternary splitter structured to partition the target block obtained by recursive partitioning into two or three in the horizontal direction or the vertical direction to generate two or three blocks, and the binary/ternary splitter disallows partitioning of the target block in the horizontal direction when partitioning of the target block in the horizontal direction causes the target block obtained by partitioning to be located beyond a right side of a picture boundary, and disallows partitioning of the target block in the vertical direction when partitioning of the target block in the vertical direction causes the target block obtained by partitioning to be located beyond a lower side of the picture boundary.
H04N 19/119 - Aspects de subdivision adaptative, p. ex. subdivision d’une image en blocs de codage rectangulaires ou non
H04N 19/107 - Sélection du mode de codage ou du mode de prédiction entre codage prédictif spatial et temporel, p. ex. rafraîchissement d’image
H04N 19/167 - Position dans une image vidéo, p. ex. région d'intérêt [ROI]
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
67.
PICTURE DECODING DEVICE, PICTURE DECODING METHOD, AND PICTURE DECODING PROGRAM
A block partitioner includes a quad splitter structured to partition a target block obtained by recursive partitioning in half in both a horizontal direction and a vertical direction to generate four blocks, and a binary/ternary splitter structured to partition the target block obtained by recursive partitioning into two or three in the horizontal direction or the vertical direction to generate two or three blocks, and the binary/ternary splitter disallows partitioning of the target block in the horizontal direction when partitioning of the target block in the horizontal direction causes the target block obtained by partitioning to be located beyond a right side of a picture boundary, and disallows partitioning of the target block in the vertical direction when partitioning of the target block in the vertical direction causes the target block obtained by partitioning to be located beyond a lower side of the picture boundary.
H04N 19/119 - Aspects de subdivision adaptative, p. ex. subdivision d’une image en blocs de codage rectangulaires ou non
H04N 19/107 - Sélection du mode de codage ou du mode de prédiction entre codage prédictif spatial et temporel, p. ex. rafraîchissement d’image
H04N 19/167 - Position dans une image vidéo, p. ex. région d'intérêt [ROI]
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
An avatar control device for controlling an avatar to be displayed within a virtual space acquires a face image of a user captured by a camera, acquires initial setting information indicating a reference posture of the face of the user, acquires posture information indicating a posture of the face of the user with respect to the camera on the basis of the face image, controls forward or backward movement of the avatar within the virtual space on the basis of the distance between the camera and the face of the user indicated by the initial setting information and the posture information, and controls a movement direction of the avatar within the virtual space on the basis of an orientation of the face of the user with respect to the camera.
A suction attachment device (1) according to the present disclosure comprises: a fixing unit (12) for use in fixing the suction attachment device (1) to a mobile device (2); a suction attachment unit (13) for use in attaching an object to the suction attachment device (1) through suction; a theft determination unit (17) for determining, on the basis of the suction attachment state of the suction attachment unit (13), whether or not the mobile device (2) has been stolen; and a notification control unit (18) for issuing a notification when it is determined that the mobile device (2) has been stolen.
A speaker includes a cabinet that includes a cover portion in which a sound hole is arranged, a speaker unit that is housed in the cabinet, a sealing portion that is arranged so as to surround the sound hole and seals a gap between the cover portion and the speaker unit, and a slit portion that is arranged from an inner peripheral portion to an outer peripheral portion of the sealing portion such that an inner side surrounded by the sealing portion and an outer side communicate with each other.
A map generation device includes an information acquisition unit that acquires positional information on a specific vehicle and positional information on a surrounding vehicle positioned around the specific vehicle from on-board devices arranged on the specific vehicle and the surrounding vehicle, respectively; a distance calculator that calculates a distance between the specific vehicle and the surrounding vehicle based on the positional information on the specific vehicle and the positional information on the surrounding vehicle; an integration area setting unit that sets an integration area of integration into a map based on a result of calculating the distance and an area of detection by the specific vehicle; and a map integration unit that integrates, into the map, positional information on a surrounding object acquired from the on-board device in the integration area.
G01C 21/00 - NavigationInstruments de navigation non prévus dans les groupes
G01B 21/16 - Dispositions pour la mesure ou leurs détails, où la technique de mesure n'est pas couverte par les autres groupes de la présente sous-classe, est non spécifiée ou est non significative pour mesurer la distance ou le jeu entre des objets espacés
74.
MEASURING APPARATUS, MEASURING METHOD, AND PROGRAM
A measuring apparatus according to the present disclosure measures the hearing ability of a user by performing a binary tree search based on responses of the user to measurement sounds. The measuring apparatus includes a first search processing unit configured to select, as a selected group, one of a plurality of groups obtained by dividing a measurement range of the hearing ability into the plurality of groups; and a second search processing unit configured to repeat the binary tree search in the selected group until the binary tree search converges, and thereby determine the hearing ability.
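The two-stage search above (coarse group selection followed by a binary search within the selected group) can be sketched as follows. This is illustrative only: the grouping of the measurement range, the convergence step, and the "heard" callback are assumptions, not the claimed procedure.

```python
# Illustrative sketch: group selection followed by a binary search over
# sound levels inside the selected group until the search converges.
from typing import Callable, List, Tuple

def measure_threshold(groups: List[Tuple[int, int]],
                      heard: Callable[[int], bool],
                      step_db: int = 1) -> int:
    """groups: candidate (low_db, high_db) hearing-level ranges.
    heard(level): plays a measurement sound at `level` dB and returns the
    user's response. Returns the estimated hearing threshold in dB."""
    # First search: pick the first group whose upper bound is audible.
    selected = next((g for g in groups if heard(g[1])), groups[-1])
    lo, hi = selected
    # Second search: binary search within the selected group until convergence.
    while hi - lo > step_db:
        mid = (lo + hi) // 2
        if heard(mid):
            hi = mid      # audible: threshold is at or below mid
        else:
            lo = mid      # inaudible: threshold is above mid
    return hi

# Simulated user whose true threshold is 37 dB; the search returns 37.
print(measure_threshold([(0, 20), (20, 40), (40, 60), (60, 80)],
                        heard=lambda level: level >= 37))
```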
An imaging disturbance detection device (100) detects disturbance of imaging in a camera unit (20) and comprises a detection unit (314). The detection unit (314) outputs a first detection signal when the sharpness of the captured image that has been captured by the camera unit (20) changes by a prescribed amount or more from a predetermined reference sharpness at preset positions (72-74). The first detection signal indicates detection of a decrease in the sharpness of the captured image from the camera unit (20) at the preset positions (72-74).
H04N 23/60 - Commande des caméras ou des modules de caméras
G02B 7/28 - Systèmes pour la génération automatique de signaux de mise au point
G02B 7/36 - Systèmes pour la génération automatique de signaux de mise au point utilisant des techniques liées à la netteté de l'image
G03B 13/36 - Systèmes de mise au point automatique
H04N 23/67 - Commande de la mise au point basée sur les signaux électroniques du capteur d'image
H04N 23/69 - Commande de moyens permettant de modifier l'angle du champ de vision, p. ex. des objectifs de zoom optique ou un zoom électronique
H04N 23/695 - Commande de la direction de la caméra pour modifier le champ de vision, p. ex. par un panoramique, une inclinaison ou en fonction du suivi des objets
76.
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
A controller according to the present invention: determines, for a virtual viewpoint image viewed from a prescribed virtual viewpoint of a virtual space, the virtual viewpoint image being generated on the basis of first objects (321, 322) arranged in the virtual space and a second object (310) arranged in the virtual space excluding the first objects, whether one first object (321) among the first objects is arranged at a position where the field of view from the prescribed virtual viewpoint is obstructed; determines the relevance of the other first object (322) among the first objects to the one first object; groups the one first object and the other first object determined to have relevance to the one first object as an object group (380); and moves the first objects in the object group to positions in the virtual viewpoint image at which the field of view from the prescribed virtual viewpoint is not obstructed.
A state management unit in a base station apparatus transmits an information request signal requesting terminal information to a plurality of terminal apparatuses from a communication unit and receives, using the communication unit, the terminal information from each terminal apparatus in response to the information request signal, the terminal information including information on a transmission power and position information on the terminal apparatus. A role determination unit determines, based on the received terminal information, that a terminal apparatus having a transmission power larger than a preset threshold value and located in a predetermined area within a preset communication area of a host base station apparatus has a forwarding role for forwarding a signal. The role determination unit transmits, from the communication unit, a role change signal notifying the terminal apparatus whose role is determined to be the forwarding role that the forwarding role is assigned to it.
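The role-determination step is a filter over the received terminal information. The sketch below is illustrative only; the data structure, the area test, and the threshold value are assumptions.

```python
# Illustrative sketch: select terminals for the forwarding role based on
# transmission power and position inside a predetermined area.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TerminalInfo:
    terminal_id: str
    tx_power_dbm: float
    x: float
    y: float

def select_forwarding_terminals(terminals: List[TerminalInfo],
                                power_threshold_dbm: float,
                                area_contains: Callable[[float, float], bool]) -> List[str]:
    """Return ids of terminals assigned the forwarding role: transmission
    power above the threshold AND located inside the predetermined area of
    the host base station apparatus."""
    return [t.terminal_id for t in terminals
            if t.tx_power_dbm > power_threshold_dbm and area_contains(t.x, t.y)]
```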
H04W 40/08 - Sélection d'itinéraire ou de voie de communication, p. ex. routage basé sur l'énergie disponible ou le chemin le plus court sur la base des ressources nodales sans fil sur la base de la puissance d'émission
H04W 64/00 - Localisation d'utilisateurs ou de terminaux pour la gestion du réseau, p. ex. gestion de la mobilité
H04W 88/04 - Dispositifs terminaux adapté à la retransmission à destination ou en provenance d'un autre terminal ou utilisateur
78.
MACHINE LEARNING APPARATUS, MACHINE LEARNING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM HAVING MACHINE LEARNING PROGRAM
A machine learning apparatus that continually learns a novel class with fewer samples than a base class is provided. A feature extraction unit extracts a feature of input data by using a weight trained based on a divided feature of the input data. A base class classification unit classifies the input data into a base class based on the feature of the input data. A novel class classification unit classifies the input data into a novel class based on the feature of the input data. An attention attractor unit regularizes a weight of the base class and a weight of the novel class.
G06F 18/2132 - Extraction de caractéristiques, p. ex. en transformant l'espace des caractéristiquesSynthétisationsMappages, p. ex. procédés de sous-espace basée sur des critères de discrimination, p. ex. l'analyse discriminante
A monitoring unit 250 infers an anomaly in a wireless mobile station on the basis of the history of location information which has been received by a communication unit 210 from the wireless mobile station. When the monitoring unit 250 has inferred the anomaly in the wireless mobile station, a first instruction unit 252 causes the communication unit 210 to transmit a first instruction signal that instructs the wireless mobile station to transmit an audio signal. An analysis unit 254 analyzes a response signal with respect to the first instruction signal. When the analysis unit 254 has determined by the analysis that a prescribed condition is met, a second instruction unit 256 causes the communication unit 210 to transmit a second instruction signal for instructing a change in the operation of the wireless mobile station.
H04M 11/00 - Systèmes de communication téléphonique spécialement adaptés pour être combinés avec d'autres systèmes électriques
G08B 25/04 - Systèmes d'alarme dans lesquels l'emplacement du lieu où existe la condition déclenchant l'alarme est signalé à une station centrale, p. ex. systèmes télégraphiques d'incendie ou de police caractérisés par le moyen de transmission utilisant une ligne de signalisation unique, p. ex. en boucle fermée
80.
MANAGEMENT DEVICE AND COMMUNICATION CONTROL METHOD
This management device responds when a new connection request is received in a control slot in a state in which all call slots are in use for calls. The management device comprises: a slot division determination unit that, when no call slots are available and a terminal currently using a slot to be queried among the call slots in use is capable of changing its bit rate, determines the slot to be queried to be a dividable slot; and a slot allocation unit that allocates, to the divided slots, the currently using terminal and a requesting terminal that has newly requested a connection, respectively. When the slot to be queried is being used in a group call, the slot division determination unit determines the slot to be queried to be a dividable slot if the number of currently using terminals that have responded that the bit rate can be changed meets a prescribed condition.
09 - Appareils et instruments scientifiques et électriques
Produits et services
(1) Audio players for automobiles; compact disc players for automobiles; audio disc players for automobiles; digital audio players for automobiles; radio receivers for automobiles; multimedia players for automobiles; media players for automobiles; amplifiers for automobiles; digital audio players for automobiles; loudspeakers for automobiles; subwoofers for automobiles; stereo adapters for automobiles; tuners for automobiles; equalizers for automobiles; display monitors for automobiles; TV sets for automobiles; cameras for automobiles; video cameras for automobiles; event recorders; dashboard cameras; video disc players for automobiles; DVD-players for automobiles; car radios; navigation systems for automobiles; satellite navigational apparatus for automobiles; CD/DVD writers; video disc recorders; CCTV (closed circuit television) systems; car driving recorders; security cameras; car sensors; laser radars for automobiles; millimeter-wave radars for automobiles; radar apparatus for automobiles; electronic control devices for automobiles; motion sensors for automobiles; apparatus for detecting the human body, shape of objects and positions for automobiles; speed sensors for automobiles; computer application software downloadable, through computer terminals and mobile phones, featuring general information on automobiles; computer application software for medical purpose; application software for audio equipment; application software for video equipment; application software for automobiles; application software for medical equipment; application software for optical devices; application software for measuring equipment; application software for smartphones and mobile telephones; telecommunication machines and devices for automobiles; telecommunication machines and devices for use in assisting safety of car driving; electronic machines and devices for use in assisting safety of car driving; measuring apparatus and equipment for use in assisting safety of car driving; photographic machines and devices for use in assisting safety of car driving; computer software for audio equipment; computer software for video equipment; computer software for automobiles; computer software for medical equipment; computer software for optical devices; computer software for use with satellite and GPS navigation systems for navigation, route guidance and electronic mapping; computer software to control and improve sound quality for audio equipment; computer software for image processing; computer software and application software to control and improve audio equipment sound quality; medical software for diagnosis apparatus; measuring apparatus for automobiles; measuring apparatus for detecting distance between vehicles; automobile image sensors by complementary metal oxide semiconductor (CMOS); image sensors by charge coupled device (CCD); automobile image sensors by charge coupled device (CCD); image sensors by complementary metal oxide semiconductor (CMOS); image sensors for digital cameras; image sensors for video cameras; image sensors for surveillance cameras; image sensors for security cameras; image sensors for video camera modules; image sensors for television conferences; image sensors for movie films; image sensors for micro cameras; antennas for automobiles; remote controls for automobiles; compact disc players; audio disc players; audio disc recorders; multimedia players; digital audio players; mp3 players; record players; portable audio players and recorders; stereo; sound recording machines and 
apparatus; audio-frequency apparatus; amplifiers; woofers; subwoofers; loudspeakers; horns for loudspeakers; cabinets for loudspeakers; racks for audio apparatus and equipment; stereo tuners; equalizers; sound, video and image editing apparatus; apparatus for transmitting or reproduction of sound or images; encoders and/or decoders for audio and video; phonograph records; sound recording carriers; electronic still cameras; video cameras; camcorders; video camera modules; camera modules for automobiles; camera modules for surveillance cameras; camera modules for security cameras; camera modules for movie films; video disc players; DVD-players; DVD recording apparatus; projectors; video projectors; parts and accessories for video projector, namely, stands, ceiling mount kit, transmitters; projection screens; video screens; audio and video receivers; television tuners; television receivers; television sets; radio tuners; radio receivers; telephone sets; portable telephones; smartphones; radiotelephones; radiotelephony sets; radiotelegraphy sets; radio transmitters; repeaters for radio stations; facsimile machines; video telephones; a global positioning system (GPS); satellite navigational apparatus; receivers for satellites; walkie-talkies; wireless communications equipment; transceivers; intercom system; mobile transceivers and parts and accessories thereof; handheld transceivers and parts and accessories thereof; microphones and external speakers for repeaters for radio stations; microphones and external speakers for wireless communications equipment; batteries; antennas and microphones for mobile transceivers and handheld transceivers; chargers for wireless communication device; chargers for handheld transceivers; computers; laptop computers; desktop computers; liquid crystal displays; computer monitors; computer mouse; mouse pads; computer keyboards; printers for use with computers; scanners; computer programs for editing images, sound and video; computer game programs; central processing units; computer memories; large scale integrations (LSI); semi-conductors; integrated circuits; magnetic data carriers; prerecorded magnetic data carriers featuring sound recordings; storage media; blank video tapes; blank audio tapes; blank video discs; blank audio discs; prerecorded video tapes; prerecorded audio tapes; prerecorded video discs; prerecorded audio discs; electronic pens; ac adapters for electronic machines and apparatus; ac adapters for telecommunication machines and apparatus; telecommunication cables; electric wires and cables; headsets not for gaming; headsets, other than for use with game consoles; headsets for use with computers; headsets for telephone; earphones; headphones; ear monitor headphones; battery chargers for electronic machines and apparatus; battery chargers for telecommunication machines and apparatus; electric connectors for telecommunication apparatus; antennas; microphones; remote controls; displays and monitors used with eye-gaze measuring and diagnosis devices for research, laboratory and scientific use; developmental disorder diagnosis apparatus for research, laboratory and scientific use, namely, computer monitors, displays, keyboards, mice and printers; tablet computers; USB flash drives; blank flash memory cards; downloadable music files; downloadable image files; portable power supplies; solar panels; alcohol concentration measuring machines and appliances; alcohol detectors; broadcasting machines and appliances for professional use; television receivers; 
smart television; display monitors; downloadable virtual goods, namely computer programs featuring earphones, headphones, digital audio players, video cameras and digital audio players for automobiles for use online and in online virtual worlds.
82.
IN-VEHICLE SYSTEM AND ASSEMBLING METHOD OF IN-VEHICLE SYSTEM
An in-vehicle system includes: an in-vehicle device that includes a device main unit with a panel provided on a front surface thereof; a cluster panel that is arranged on the panel; a positioning mechanism that mutually positions the panel and the cluster panel; a vehicle mounting bracket that is fixed to the cluster panel and the device main unit, and that is mounted on a vehicle; and multiple mounting holes that are aligned in a vehicle longitudinal direction in the vehicle mounting bracket and that correspond to screw holes provided on a side surface of the device main unit, each mounting hole having a larger diameter than each of the screw holes, and the vertical diameter of each mounting hole being larger the closer the mounting hole is positioned to the vehicle front side.
B60R 11/02 - Autres aménagements pour tenir ou monter des objets pour postes radio, de télévision, téléphones, ou objets similairesDisposition de leur commande
B60R 11/00 - Autres aménagements pour tenir ou monter des objets
B62D 65/14 - Assemblage de sous-ensembles ou de composants avec la caisse ou entre eux, ou positionnement de sous-ensembles ou de composants par rapport à la caisse ou à d'autres sous-ensembles ou d'autres composants les sous-ensembles ou composants étant des accessoires des compartiments pour passagers, p. ex. des sièges, des garnitures, un décor, des tableaux de bord
83.
IMAGE RECOGNITION ASSISTANCE APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An image recognition assistance apparatus according to the present disclosure includes: a recognition result acquisition unit configured to acquire a recognition result of image recognition carried out by an image recognition engine on a target image output by an image output unit using a predetermined set value; and a setting unit configured to determine a set value with which the recognition result meets a predetermined criterion and set the determined set value in the image output unit. Accordingly, by adjusting the target image to be input to the image recognition engine in consideration of the recognition results obtained by the image recognition engine, improvement of recognition accuracy is assisted.
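The setting unit's task can be viewed as a search over candidate set values until the recognition result meets the criterion. The sketch below is illustrative only; the engine call, the candidate values, and the score-based criterion are hypothetical stand-ins.

```python
# Illustrative sketch: search for a set value whose recognition result
# meets a predetermined criterion, then adopt it in the image output unit.
from typing import Callable, Iterable, Optional

def tune_set_value(candidates: Iterable[float],
                   run_recognition: Callable[[float], float],
                   min_score: float) -> Optional[float]:
    """run_recognition(set_value): outputs the target image with that set
    value, feeds it to the image recognition engine, and returns a score.
    Returns the first set value whose score meets the criterion."""
    for value in candidates:
        if run_recognition(value) >= min_score:
            return value
    return None   # no candidate met the predetermined criterion
```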
G06V 10/98 - Détection ou correction d’erreurs, p. ex. en effectuant une deuxième exploration du motif ou par intervention humaineÉvaluation de la qualité des motifs acquis
G06V 10/776 - ValidationÉvaluation des performances
H04N 23/61 - Commande des caméras ou des modules de caméras en fonction des objets reconnus
84.
IMAGING DEVICE AND DISTANCE MEASUREMENT IMAGE RELIABILITY DETECTION METHOD
A distance measurement sensor (10) irradiates a distance measurement target with light and captures reflected light from the distance measurement target to acquire a distance measurement image. An event-based vision sensor (20) images the distance measurement target and detects brightness changes for each pixel. A control unit (30) controls a reset timing and an output timing of the event-based vision sensor in synchronization with an exposure period of the distance measurement sensor. A detection unit (40) detects, as pixels having low reliability, pixels in the distance measurement image for which a brightness change equal to or greater than a predetermined threshold was detected by the event-based vision sensor.
Provided are an image display device, an image display method, and a program with which it is possible to appropriately suppress the occurrence of crosstalk. An image display device according to the present disclosure comprises: a line-of-sight detection unit that detects a user's line-of-sight direction; a rendering unit that causes the display device to render a stereoscopic image in accordance with the user's line-of-sight direction; a light source range identification unit that identifies an unnecessary light source range indicating a range of candidate pixels that can serve as a light source of crosstalk light, that is, a light beam that passes through the position of the stereoscopic image but originates from a light source other than that of the rendered stereoscopic image; a pixel identification unit that identifies, from among the pixels in the unnecessary light source range, light source pixels that serve as the light source of the crosstalk light; and a light beam processing unit that performs prescribed processing on the identified light source pixels or the light beam from the light source pixels to prevent the light beam from the light source pixels from passing through the position of the stereoscopic image.
G02B 30/10 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p. ex. des effets stéréoscopiques en utilisant des méthodes d'imagerie intégrale
G09G 3/20 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice
H04N 13/307 - Reproducteurs d’images pour visionnement sans avoir recours à des lunettes spéciales, c.-à-d. utilisant des affichages autostéréoscopiques utilisant des lentilles du type œil de mouche, p. ex. dispositions de lentilles circulaires
A display device (10) comprises: a display body (12) that includes a phosphor; a light source (30) that outputs excitation light (20) for exciting the phosphor; a first lens (38) and a second lens (40) that condense the excitation light (20) toward a light collection position (22) inside the display body (12); a first drive mechanism (42) that drives the first lens (38) to change the light collection position (22); and a second drive mechanism (44) that drives the second lens (40) to reduce aberration caused by the change of the light collection position (22).
G02B 30/50 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p. ex. des effets stéréoscopiques l’image étant construite à partir d'éléments d'image répartis sur un volume 3D, p. ex. des voxels
G09F 13/20 - Enseignes lumineusesPublicité lumineuse avec des surfaces ou des pièces luminescentes
87.
PROJECTION-TYPE DISPLAY DEVICE AND PROJECTION-TYPE DISPLAY DEVICE CONTROL METHOD
A reentered illumination light intensity calculation unit (311) calculates the intensity of reentered illumination light, that is, first polarization light which, of illumination light including first polarization light and second polarization light, is incident on a reflective liquid crystal display element, is partially reflected without being modulated by the reflective liquid crystal display element, returns to the light source unit (1) side, is reflected there, and reenters the reflective liquid crystal display element. The reentered illumination light intensity calculation unit also calculates a total value by adding together the reentered illumination light intensities of all the pixels constituting a frame. A reference reentered illumination light intensity storage unit (32) stores reference total values for reference frame images from minimum gradation to maximum gradation. A light source control unit (33) selects the reference total value of one of the reference frame images, compares the calculated total value with the selected reference total value, and controls the amount of light emitted by a light source (11) provided in the light source unit (1).
G09G 3/36 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante utilisant des cristaux liquides
G02F 1/133 - Dispositions relatives à la structureExcitation de cellules à cristaux liquidesDispositions relatives aux circuits
G09G 3/20 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice
G09G 3/34 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante
H04N 5/74 - Dispositifs de projection pour reproduction d'image, p. ex. eidophor
88.
CONTRIBUTION DEGREE CALCULATION DEVICE, CONTRIBUTION DEGREE CALCULATION METHOD, AND PROGRAM
Provided are a contribution degree calculation device, a contribution degree calculation method, and a program with which it is possible to appropriately evaluate the contribution of producers of individual pieces of content in composite content. A contribution degree calculation device according to the present disclosure calculates a contribution degree for producers of individual pieces of content in composite content, which is content obtained by superimposing, on original content, one or more pieces of content different from the original content, wherein the contribution degree calculation device comprises: an acquisition unit that acquires evaluation factor information indicating information for calculating a contribution degree for producers of individual pieces of content in the composite content; and a calculation unit that, on the basis of the evaluation factor information, calculates a contribution degree for the producers of the individual pieces of content superimposed on the original content.
This recognition processing device (10A) comprises: a video acquisition unit (12) that acquires a captured video; an object detection unit (14A) that uses a detection model obtained by machine-learning an image of an object to detect an object included in the captured video; a lower end estimation unit (30) that, when an object included in a range overlapping the lower edge of the captured video is detected by the object detection unit (14A), estimates a lower end position of the object that can be positioned below the lower edge of the captured video; and a distance calculation unit (16A) that calculates the distance information of the object using the lower end position estimated by the lower end estimation unit (30).
A projection display device according to the present embodiment includes a light source (101) which emits laser light for use in projection of an image, and has a non-display function of stopping light emission of the light source (101) to temporarily hide the projection of the image. The projection display device comprises: a light source temperature sensor (172) which detects the temperature of the light source (101); a light source heater (171) which heats the light source (101); a control unit (12) which controls the operations of at least the light source (101) and the light source heater (171); and a changeover switch (165) which gives instructions to execute or cancel the non-display function. When the changeover switch (165) gives an instruction to execute the non-display function, the control unit (12) stops light emission of the light source (101), acquires the temperature detected by the light source temperature sensor (172), and controls the light source heater (171) so that the light source (101) maintains the acquired temperature.
G03B 21/14 - Projecteurs ou visionneuses du type par projectionLeurs accessoires Détails
G02F 1/13 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p. ex. commutation, ouverture de porte ou modulationOptique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur basés sur des cristaux liquides, p. ex. cellules d'affichage individuelles à cristaux liquides
G02F 1/133 - Dispositions relatives à la structureExcitation de cellules à cristaux liquidesDispositions relatives aux circuits
G02F 1/1335 - Association structurelle de cellules avec des dispositifs optiques, p. ex. des polariseurs ou des réflecteurs
A head-mounted display includes a combiner, a beam splitter, and a light-shielding unit. The combiner is configured to combine display light with an external scene in front of a user wearing the head-mounted display. The beam splitter is arranged between the combiner and an eye of the user, and is configured to reflect the display light toward the combiner, and to transmit the display light reflected by the combiner. The light-shielding unit shields external light that is to be reflected by the beam splitter and be directed toward the eye of the user.
An image encoding device adapted to segment an image into blocks and encode the image in units of blocks resulting from segmenting the image is provided. A block segmentation unit recursively segments the image into rectangles of a predetermined size to generate a block subject to encoding. A bitstream generation unit encodes block segmentation information of the block subject to encoding. The block segmentation unit includes: a quartering unit that quarters a target block in recursive segmentation in a horizontal direction and a vertical direction to generate four blocks; and a halving unit that halves a target block in recursive segmentation in a horizontal or vertical direction to generate two blocks. When previous recursive segmentation is halving, the halving unit prohibits a target block subject to current recursive segmentation from being segmented in the same direction as a direction in which the block was segmented in the previous recursive segmentation.
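The halving restriction described above depends only on how the target block was produced. The sketch below is illustrative; the encoding of the split history as a direction string (or None for quartering or the root block) is an assumption made for this example.

```python
# Illustrative sketch: a block produced by a previous halving must not be
# halved again in the same direction.
from typing import Optional

HORIZONTAL, VERTICAL = "horizontal", "vertical"

def halving_allowed(direction: str, previous_halving: Optional[str]) -> bool:
    """previous_halving: direction of the halving that produced this block,
    or None if it was produced by quartering (or is the root block)."""
    return previous_halving is None or direction != previous_halving

assert halving_allowed(HORIZONTAL, previous_halving=None)
assert halving_allowed(VERTICAL, previous_halving=HORIZONTAL)
assert not halving_allowed(HORIZONTAL, previous_halving=HORIZONTAL)
```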
H04N 19/119 - Aspects de subdivision adaptative, p. ex. subdivision d’une image en blocs de codage rectangulaires ou non
H04N 19/136 - Caractéristiques ou propriétés du signal vidéo entrant
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/192 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation utilisés pour le codage adaptatif le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation étant itératif ou récursif
H04N 19/196 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation utilisés pour le codage adaptatif étant spécialement adaptés au calcul de paramètres de codage, p. ex. en faisant la moyenne de paramètres de codage calculés antérieurement
H04N 19/46 - Inclusion d’information supplémentaire dans le signal vidéo pendant le processus de compression
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p. ex. liés aux standards de compression
H04N 19/96 - Codage au moyen d'une arborescence, p. ex. codage au moyen d'une arborescence quadratique
93.
LISTENING APPARATUS, CONTROL METHOD OF LISTENING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
On/off of a wireless function of a listening apparatus is automatically controlled. A listening apparatus according to a present embodiment is a listening apparatus that is of a type worn on an ear and that includes at least one of a noise cancelling processing unit and a collected sound processing unit, the listening apparatus including a wireless communication unit configured to be connected to a wireless device, the wireless communication unit being capable of receiving an audio signal from the wireless device, an ear-worn detection unit configured to detect whether the listening apparatus is worn on an ear or not, and a control processing unit configured to turn off the wireless communication unit in a case where it is detected that the listening apparatus is worn on an ear before the wireless communication unit is connected to the wireless device.
A trend determination unit determines whether content registered in a registration unit or an item related to the content is a trend and, when determining that the content or the related item is a trend, further determines whether it has become a trend within a short period of time, that is, within a predetermined time. An auction process unit sets a first auction starting price when the content or the related item is a trend but has not become a trend within the short period of time, sets a second auction starting price higher than the first auction starting price when the content or the related item is a trend and has become a trend within the short period of time, and puts the content up for auction.
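The pricing rule is a two-way branch on the trend determinations. The sketch below is illustrative only; the price values are made up, and the two boolean inputs are assumed to come from the trend determination unit.

```python
# Illustrative sketch: choose the auction starting price from the trend state.
FIRST_STARTING_PRICE = 100    # hypothetical
SECOND_STARTING_PRICE = 300   # hypothetical, higher than the first

def auction_starting_price(is_trend: bool, trended_in_short_period: bool):
    if not is_trend:
        return None   # the abstract only defines starting prices for trending content
    return SECOND_STARTING_PRICE if trended_in_short_period else FIRST_STARTING_PRICE
```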
G06Q 30/0201 - Modélisation du marchéAnalyse du marchéCollecte de données du marché
G06Q 50/00 - Technologies de l’information et de la communication [TIC] spécialement adaptées à la mise en œuvre des procédés d’affaires d’un secteur particulier d’activité économique, p. ex. aux services d’utilité publique ou au tourisme
95.
IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, AND IMAGE ENCODING PROGRAM, AND IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM
An image encoding device adapted to segment an image into blocks and encode the image in units of blocks resulting from segmenting the image is provided. A block segmentation unit recursively segments the image into rectangles of a predetermined size to generate a block subject to encoding. A bitstream generation unit encodes block segmentation information of the block subject to encoding. The block segmentation unit includes: a quartering unit that quarters a target block in recursive segmentation in a horizontal direction and a vertical direction to generate four blocks; and a halving unit that halves a target block in recursive segmentation in a horizontal or vertical direction to generate two blocks. When previous recursive segmentation is halving, the halving unit prohibits a target block subject to current recursive segmentation from being segmented in the same direction as a direction in which the block was segmented in the previous recursive segmentation.
H04N 19/119 - Aspects de subdivision adaptative, p. ex. subdivision d’une image en blocs de codage rectangulaires ou non
H04N 19/136 - Caractéristiques ou propriétés du signal vidéo entrant
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/192 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation utilisés pour le codage adaptatif le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation étant itératif ou récursif
H04N 19/196 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation utilisés pour le codage adaptatif étant spécialement adaptés au calcul de paramètres de codage, p. ex. en faisant la moyenne de paramètres de codage calculés antérieurement
H04N 19/46 - Inclusion d’information supplémentaire dans le signal vidéo pendant le processus de compression
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p. ex. liés aux standards de compression
H04N 19/96 - Codage au moyen d'une arborescence, p. ex. codage au moyen d'une arborescence quadratique
96.
IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, AND IMAGE ENCODING PROGRAM, AND IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM
An image encoding device adapted to segment an image into blocks and encode the image in units of blocks resulting from segmenting the image is provided. A block segmentation unit recursively segments the image into rectangles of a predetermined size to generate a block subject to encoding. A bitstream generation unit encodes block segmentation information of the block subject to encoding. The block segmentation unit includes: a quartering unit that quarters a target block in recursive segmentation in a horizontal direction and a vertical direction to generate four blocks; and a halving unit that halves a target block in recursive segmentation in a horizontal or vertical direction to generate two blocks. When previous recursive segmentation is halving, the halving unit prohibits a target block subject to current recursive segmentation from being segmented in the same direction as a direction in which the block was segmented in the previous recursive segmentation.
H04N 19/119 - Aspects de subdivision adaptative, p. ex. subdivision d’une image en blocs de codage rectangulaires ou non
H04N 19/136 - Caractéristiques ou propriétés du signal vidéo entrant
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/192 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation utilisés pour le codage adaptatif le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation étant itératif ou récursif
H04N 19/196 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par le procédé d’adaptation, l’outil d’adaptation ou le type d’adaptation utilisés pour le codage adaptatif étant spécialement adaptés au calcul de paramètres de codage, p. ex. en faisant la moyenne de paramètres de codage calculés antérieurement
H04N 19/46 - Inclusion d’information supplémentaire dans le signal vidéo pendant le processus de compression
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p. ex. liés aux standards de compression
H04N 19/96 - Codage au moyen d'une arborescence, p. ex. codage au moyen d'une arborescence quadratique
97.
SYNCHRONIZATION DETECTION APPARATUS, SYNCHRONIZATION DETECTION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Provided are a synchronization detection apparatus, a synchronization detection method, and a synchronization detection program that make it possible to follow, in a short time, a frequency deviation of an intermediate frequency after a radio frequency is down-converted. A synchronization detection apparatus according to the present invention includes: a band limitation unit in which, for an externally received signal, each of a plurality of filters having different center frequencies limits the band of the received signal to output a plurality of band-limited signals; a detection and demodulation unit that detects and demodulates each of the plurality of band-limited signals to output a plurality of detected signals; and a synchronization detection unit that performs synchronization detection on each of the plurality of detected signals to output a plurality of correlation values.
H04L 27/148 - Circuits de démodulationCircuits récepteurs avec démodulation utilisant les propriétés spectrales du signal reçu, p. ex. en utilisant des éléments sélectifs de la fréquence ou sensibles à la fréquence utilisant des filtres, y compris des filtres du type PLL
H04L 7/027 - Commande de vitesse ou de phase au moyen des signaux de code reçus, les signaux ne contenant aucune information de synchronisation particulière en extrayant le signal d'horloge ou de synchronisation du spectre du signal reçu, p. ex. en utilisant un circuit résonnant ou passe-bande
98.
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
This information processing device comprises: an image acquisition unit that acquires an image obtained by capturing an object; an image analysis unit that analyzes frequency components of the image and generates high-frequency component information indicating a high-frequency component from among the frequency components; a distance information acquisition unit that acquires distance information regarding the distance to the object; and a determination unit that determines whether the object is flat or not on the basis of the distance information and the high-frequency component information.
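As one possible illustration, the sketch below computes a high-frequency component ratio of the image and the spread of the acquired distances, then combines them into a flatness decision. The FFT-based measure, the thresholds, and in particular the combination rule are assumptions made purely for this example; the abstract does not specify them.

```python
# Illustrative sketch: flatness test from distance spread and high-frequency content.
import numpy as np

def is_flat(image: np.ndarray, distances: np.ndarray,
            hf_cutoff: float = 0.25, hf_threshold: float = 0.05,
            distance_spread_threshold: float = 0.01) -> bool:
    """image: 2-D grayscale array; distances: sampled distances to the object.
    Assumed rule: a flat object shows little distance variation and little
    high-frequency content attributable to 3-D structure."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    high_freq = np.sqrt(fx ** 2 + fy ** 2) > hf_cutoff
    hf_ratio = spectrum[high_freq].sum() / spectrum.sum()
    distance_spread = np.std(distances) / np.mean(distances)
    return hf_ratio < hf_threshold and distance_spread < distance_spread_threshold
```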
H04N 19/85 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo
H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c.-à-d. systèmes dans lesquels le signal vidéo n'est pas diffusé
H04N 19/115 - Sélection de la taille du code pour une unité de codage avant le codage
H04N 21/24 - Surveillance de procédés ou de ressources, p. ex. surveillance de la charge du serveur, de la bande passante disponible ou des requêtes effectuées sur la voie montante
H04N 21/2385 - Allocation de canauxAllocation de bande passante
99.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM
To enhance the sense of sharing a meeting scene in an online meeting that uses a plurality of terminal apparatuses connected through a network. An information processing apparatus according to the present embodiment includes a display control unit configured to display, in an online meeting in which a plurality of users participate by using a plurality of user terminals connected through a network, an icon display area in each of the user terminals, the icon display area displaying a list of the icons of the users, and a virtual image generation unit configured to display a virtual image on the icon of at least one of the plurality of users, the virtual image being common to each of the user terminals.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
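A small Python sketch of the shared-state behaviour described in entry 99, under the assumption of a central server that pushes updates to registered terminals; the names MeetingServer and MeetingState and the callback-based rendering are purely illustrative stand-ins for the display control unit and the virtual image generation unit.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MeetingState:
    """Icon list and per-user virtual images shared by all terminals (assumed model)."""
    participants: List[str] = field(default_factory=list)
    # user id -> identifier of the virtual image shown on that user's icon
    virtual_images: Dict[str, str] = field(default_factory=dict)

class MeetingServer:
    def __init__(self) -> None:
        self.state = MeetingState()
        self.terminals: List[Callable[[MeetingState], None]] = []  # stand-ins for user terminals

    def register_terminal(self, render_callback: Callable[[MeetingState], None]) -> None:
        self.terminals.append(render_callback)
        render_callback(self.state)  # initial icon display area

    def set_virtual_image(self, user_id: str, image_id: str) -> None:
        # Attach the virtual image to one user's icon ...
        self.state.virtual_images[user_id] = image_id
        # ... and push the same state to every terminal so the overlay
        # is displayed in common across the user terminals.
        for render in self.terminals:
            render(self.state)
```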
A deep-layer feature vector extraction unit extracts a low-resolution deep-layer feature vector of an input image. A shallow-layer feature vector extraction unit extracts a high-resolution shallow-layer feature vector of the input image. A concatenation unit concatenates the deep-layer feature vector and the shallow-layer feature vector and outputs a concatenated feature vector. A similarity calculation unit retains a weight matrix for the respective classes and calculates similarities between the concatenated feature vector and the weight matrix of each class. The shallow-layer feature vector extraction unit shares at least one convolutional layer with the deep-layer feature vector extraction unit.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/774 - Generating sets of training patterns; Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. "bagging" or "boosting"
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
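The feature-concatenation classifier described in the preceding abstract can be sketched as follows, here using PyTorch as an assumed framework. The layer sizes, the single shared stem convolution, and the cosine-similarity scoring against the per-class weight matrix are illustrative choices; only the overall structure is taken from the abstract: a shared convolutional layer, a low-resolution deep branch, a high-resolution shallow branch, concatenation, and similarities computed against per-class weight vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepShallowClassifier(nn.Module):
    """Illustrative deep/shallow feature-concatenation classifier (assumed sizes)."""

    def __init__(self, num_classes: int, deep_dim: int = 128, shallow_dim: int = 64):
        super().__init__()
        # Shared convolutional layer(s): used by both extraction units.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Deep-layer branch: further downsampling -> low-resolution, deep features.
        self.deep = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, deep_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Shallow-layer branch: taps the shared stem output at higher resolution.
        self.shallow = nn.Sequential(
            nn.Conv2d(32, shallow_dim, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Per-class weight matrix retained by the similarity calculation unit.
        self.class_weights = nn.Parameter(torch.randn(num_classes, deep_dim + shallow_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared = self.stem(x)
        deep = self.deep(shared).flatten(1)        # deep-layer feature vector
        shallow = self.shallow(shared).flatten(1)  # shallow-layer feature vector
        feat = torch.cat([deep, shallow], dim=1)   # concatenated feature vector
        # Similarities between the concatenated feature and each class weight vector
        # (cosine similarity is an assumption; the abstract does not fix the measure).
        return F.normalize(feat, dim=1) @ F.normalize(self.class_weights, dim=1).T
```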