09 - Scientific and electric apparatus and instruments
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
41 - Education, entertainment, sporting and cultural activities
Goods and services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
Systems and methods are provided for implementing an improved mapping process to help identify disparate information associated with a software application in separately stored files. In this way, the information may remain separate and distinct, often assigned to different teams, devices, and locations, and still be used to create a software application from the disparate information. For example, the system can generate a graph that comprises nodes that identify various information/functions from disparate data sources and edges that identify relationships between this information. Using the graph, the system may receive a query from a user device and generate a response to the query, where the graph can help narrow the search space in determining the response to the query.
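A minimal sketch of the graph idea described above, in Python; the node identifiers, source names, and the keyword-overlap query matching are illustrative assumptions, not details taken from the abstract.

    from collections import defaultdict

    class InfoGraph:
        """Toy graph of information items drawn from separately stored sources."""

        def __init__(self):
            self.nodes = {}                 # node_id -> {"source", "description"}
            self.edges = defaultdict(set)   # node_id -> related node_ids

        def add_node(self, node_id, source, description):
            self.nodes[node_id] = {"source": source, "description": description}

        def add_edge(self, a, b):
            # Relationships are treated as undirected in this sketch.
            self.edges[a].add(b)
            self.edges[b].add(a)

        def answer(self, query, start_id, depth=2):
            """Narrow the search space to nodes reachable from start_id, then keep
            only those whose description mentions a query term."""
            frontier, seen = {start_id}, {start_id}
            for _ in range(depth):
                frontier = {n for f in frontier for n in self.edges[f]} - seen
                seen |= frontier
            terms = query.lower().split()
            return [n for n in seen
                    if any(t in self.nodes[n]["description"].lower() for t in terms)]

    graph = InfoGraph()
    graph.add_node("render_avatar", "gfx_repo", "function that draws the player avatar")
    graph.add_node("avatar_config", "asset_store", "configuration file for the player avatar")
    graph.add_edge("render_avatar", "avatar_config")
    print(graph.answer("avatar configuration", start_id="render_avatar"))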
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural activities
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing on-line computer games; entertainment services, namely, providing computer games accessed and played via mobile and cellular phones and other wireless devices; provision of information relating to electronic computer games provided via the Internet; organizing, conducting and operating video game competitions and tournaments; entertainment services in the nature of arranging of electronic sports and video game contests, games, tournaments and competition; entertainment services, namely, providing a website featuring non-downloadable videos featuring live video game tournaments played by video game players; Sporting and cultural activities.
5.
SYSTEM FOR IDENTIFYING VISUAL ANOMALIES AND CODING ERRORS WITHIN A VIDEO GAME
A visual anomaly detection system can test a video game under test. The testing can involve applying captured video frames to a large language model using dynamically generated prompts. The captured video frames can be obtained directly from an output port of a user computing system enabling the video frames to be applied to the machine learning model without modification and with minimal to no user intervention. Additionally, the systems disclosed herein can control the user computing system hosting the video game under test enabling the test system to react to test results in real-time or near real-time (e.g., within milliseconds, while the video game is executing, before a next action is performed with respect to the video game, and the like) and to modify the testing process as tests are being performed.
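The testing loop might be organised roughly as follows; this is a sketch under stated assumptions, with `capture_frame`, `query_model`, and `control_game` standing in for the capture hardware, the large language model, and the game-control interface (none of these names come from the abstract).

    def build_prompt(game_state):
        # Dynamically generated prompt based on the current test context (illustrative).
        return (f"The player is in scene '{game_state['scene']}'. "
                "Describe any visual anomalies (missing textures, clipping, flickering) "
                "in the attached frame.")

    def detect_anomalies(capture_frame, query_model, control_game, game_state, max_steps=100):
        """Toy loop: capture a frame, ask the model about it, react to the answer."""
        findings = []
        for _ in range(max_steps):
            frame = capture_frame()                          # frame taken from the output port
            report = query_model(build_prompt(game_state), frame)
            if "anomaly" in report.lower():
                findings.append((game_state["scene"], report))
                control_game("pause_and_dump_debug")         # react while the game is running
            else:
                control_game("advance_to_next_test_step")
        return findings

    # Stand-ins so the sketch runs end to end.
    state = {"scene": "main_menu"}
    clean_model = lambda prompt, frame: "frame looks clean"
    print(detect_anomalies(lambda: b"frame-bytes", clean_model, lambda cmd: None, state, max_steps=3))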
A method of generating a three-dimensional (3D) model includes obtaining a set of two-dimensional (2D) images of a scene acquired by one or more cameras from a plurality of camera angles at a plurality of camera positions. Each 2D image corresponds to a respective camera angle and a respective camera position. The method further includes obtaining the respective camera angle and the respective camera position for each 2D image, and generating one or more semantic masks from the set of 2D images. Each semantic mask corresponds to a class of one or more objects in the scene. The method further includes training a neural radiance field (NeRF) model, using the set of 2D images and the one or more semantic masks as a training dataset, to obtain a trained NeRF model. The trained NeRF model is an implicit 3D model of the one or more objects in the scene.
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/70 - Labelling scene content, e.g. by deriving syntactic or semantic representations
A system may perform motion capture using motion capture targets with concave reflector structures. For example, the motion capture target may include a target body and a plurality of tracking markers located on respective portions of the surface of the target body. At least one tracking marker of the plurality of tracking markers may be a concave reflector structure including a tapered hole in the surface of the target body, and at least a portion of a surface of the tapered hole may be reflective.
A system may provide gameplay complexity assistance in gaming. The system may operate, during gameplay of a game including a set of controls for a player of the game, a simulated player model to provide gameplay complexity assistance for the player, including inputting a game state of the game to the simulated player model to cause the simulated player model to generate at least one simulated control corresponding to at least one control of the set of controls for the player and receiving the at least one simulated control input from the simulated player model. The system may then utilize, in the gameplay of the game, the at least one simulated control input from the simulated player model as a player input of the corresponding one of the set of controls of the player.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle, with automatic mapping to assist the player, e.g. automatic braking in a driving game
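One way the described assistance could be wired up is sketched below; the control names, the stand-in model, and the idea of substituting only selected controls are assumptions for illustration.

    def assisted_input(game_state, player_input, simulated_player_model, assist_controls):
        """Blend the player's controls with controls produced by a simulated player model.
        Controls listed in assist_controls are taken from the model; the rest pass through."""
        simulated = simulated_player_model(game_state)      # e.g. {"steer": 0.2, "brake": 1.0}
        merged = dict(player_input)
        for control in assist_controls:
            if control in simulated:
                merged[control] = simulated[control]        # use the model's control as player input
        return merged

    # Stand-in model that brakes whenever a sharp corner is ahead.
    model = lambda state: {"brake": 1.0} if state.get("corner_ahead") else {}
    print(assisted_input({"corner_ahead": True}, {"steer": 0.4, "brake": 0.0}, model, ["brake"]))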
9.
SYSTEMS AND METHODS FOR EYE MODELING AND IRIS TEXTURING
A method for generating a three-dimensional (3D) model of a head is disclosed. One or more images of the head are obtained and the head includes eyes. A parametric model for the eyes that includes a set of parameters is retrieved. Values are assigned for each parameter in the set of parameters of the parametric model for the eyes based on the one or more images. Eye patch areas of areas surrounding the eyes are generated based on the values of the parameters in the set of parameters of the parametric model for the eyes. The 3D model of the head that includes the eyes and the eye patch areas is generated. The eyes are normalized to be spaced a fixed distance apart from one another in the 3D model, and a size of the head in the 3D model is scaled based on the fixed distance between the eyes.
This specification describes a method for generating background audio in a video game. The method is implemented by one or more processors and the method comprises: obtaining, by one or more of the processors, text data comprising text for speech audio that is to be present in the background audio; obtaining, by one or more of the processors, contextual data comprising data descriptive of an environment in the video game; and generating, by one or more of the processors, the background audio based upon processing the text data and the contextual data using one or more machine learning models.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating engine noise dependent on revolutions per minute [RPM] in a driving game, or reverberation against a virtual wall
G10L 15/02 - Feature extraction for speech recognition; selection of recognition units
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/183 - Speech classification or search using natural language modelling according to context, e.g. language models
This specification describes systems, methods and apparatus for generating computer game levels using machine learning. According to a first aspect of this specification, there is described a computer implemented method comprising: extracting, from a known computer game level in a training dataset of known computer game levels for a computer game, a set of level features; processing, using an encoder neural network model, the known computer game level to generate an embedding of the known computer game level; processing, using a decoder neural network model, the embedding of the known game level and the set of level features to generate data indicative of a candidate computer game level for the computer game; determining a value of an objective function based on the data indicative of the candidate computer game level; and updating parameters of the encoder model and/or decoder model based at least in part on the value of the objective function.
A63F 13/60 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
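A single training step of the encoder/decoder pipeline might look like the sketch below; the callables and the toy tile-list representation of a level are placeholders, not the models described in the specification.

    def training_step(level, extract_features, encoder, decoder, objective, update):
        """One optimisation step: extract features, encode the level, decode a candidate,
        score it with the objective, and update parameters."""
        features = extract_features(level)            # hand-crafted level features
        embedding = encoder(level)                    # latent embedding of the known level
        candidate = decoder(embedding, features)      # data describing a candidate level
        loss = objective(candidate, level)            # value of the objective function
        update(loss)                                  # adjust encoder/decoder parameters
        return loss

    # Minimal runnable stand-ins: a "level" is a list of tile ids.
    extract = lambda lvl: {"enemies": lvl.count(2)}
    enc = lambda lvl: sum(lvl) / len(lvl)
    dec = lambda z, feats: [round(z)] * (feats["enemies"] + 3)
    obj = lambda cand, lvl: abs(len(cand) - len(lvl))
    print(training_step([0, 1, 2, 2, 1], extract, enc, dec, obj, update=lambda loss: None))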
This specification describes a method for generating audio for a video game. The method is implemented by one or more processors. The method comprises: obtaining, by one or more of the processors, acoustic feature data comprising a value for one or more audio characteristics; selecting, by one or more of the processors, a first latent embedding from a codebook of latent embeddings based upon processing the acoustic feature data using an acoustic machine learning model; and generating, by one or more of the processors, an output audio sample based upon the selected first latent embedding.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating engine noise dependent on revolutions per minute [RPM] in a driving game, or reverberation against a virtual wall
G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals using source-filter models or psychoacoustic analysis
G10L 19/032 - Quantisation or dequantisation of spectral components
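Selecting a latent embedding from a codebook can be illustrated with a nearest-neighbour lookup; the two-dimensional toy codebook and Euclidean distance below are assumptions, since the abstract does not specify how the selection is scored.

    import math

    def select_codebook_embedding(acoustic_features, codebook):
        """Pick the index of the codebook embedding closest to the feature vector."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(range(len(codebook)), key=lambda i: dist(acoustic_features, codebook[i]))

    codebook = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]]   # toy latent embeddings
    features = [0.45, 0.8]                            # e.g. loudness / pitch style values
    index = select_codebook_embedding(features, codebook)
    print(index, codebook[index])  # the selected embedding would then drive audio generation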
13.
VIDEO GAME TESTING AND GAMEPLAY FEEDBACK USING EYE TRACKING
A device, as implemented by an interactive computing system configured with specific computer-executable instructions, may capture one or more image frames of a video game, receive, from one or more sensors, eye tracking information associated with a user playing the video game, associate the eye tracking information with the one or more image frames, identify at least a first frame based at least in part on the eye tracking information, identify at least one feature of interest within the first frame based on the eye tracking information, and output an indication associated with the at least one feature of interest.
G06F 11/36 - Preventing errors by analysis, debugging or testing of software
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A gaming system may allow a user to capture simulation state data of gameplay in a video game such that, upon occurrence of a cinematic rendering event, cinematic rendered views of the gameplay may be rendered. Specifically, the gaming system may receive simulation state data and determine, based thereon, that a cinematic rendering event occurred. The gaming system may then receive previously stored simulation state data and render and output a plurality of cinematic rendered views based at least in part on a cinematic rendering timeline, the one or more simulation states of the simulation state data, and the one or more prior simulation states of the previously stored simulation state data. The cinematic rendering timeline may include a first shot and a second shot which include different configurations for rendering corresponding portions of the plurality of cinematic rendered views.
A video game animation method comprises generating, for each of one or more entities to be animated, a position sequence for use in animating the movement of the entity along one or more paths in a virtual environment. The position sequence defines a position in the virtual environment at each of a plurality of time steps. Generating the position sequence comprises accessing one or more regions of position data, each region of position data comprising position data items for successive positions along a respective one of the one or more paths.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for the orientation of a character
Systems and methods are provided for performing patch calculation and block scanning on both a current/existing build and a target build at a server or other location that is remote from a client computing device, where patch calculation/generation is conventionally performed. Patch calculation may be performed using scan block sizes that are smaller than what is conventionally used, and is not limited to files (source/target) having the same name. Additionally, adjacent blocks of data representative of binary resources may be concatenated, and block edges can be scanned to determine if still other data/resources could be used for the target build.
G06F 8/658 - Incremental updates; differential updates
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
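A toy delta computation in the spirit of the block scanning described above is shown below; the fixed aligned block size, SHA-1 digests, and copy/add instruction format are illustrative assumptions (the abstract's edge scanning and block concatenation are omitted).

    import hashlib

    def block_hashes(data, block_size):
        """Hash every aligned block of the existing build."""
        return {hashlib.sha1(data[i:i + block_size]).hexdigest(): i
                for i in range(0, len(data), block_size)}

    def compute_patch(existing, target, block_size=4):
        """Reuse blocks already present in the existing build; ship only new data."""
        known = block_hashes(existing, block_size)
        patch = []
        for i in range(0, len(target), block_size):
            block = target[i:i + block_size]
            digest = hashlib.sha1(block).hexdigest()
            if digest in known:
                patch.append(("copy", known[digest], len(block)))  # data the client already has
            else:
                patch.append(("add", block))                       # data that must be downloaded
        return patch

    print(compute_patch(b"ABCDEFGH", b"ABCDXXXXEFGH"))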
A method for matchmaking of game players includes receiving player data associated with a user account, receiving match state data associated with a match of the game, and based on the player data and the match state data, extracting engagement prediction features. The method further includes providing, as an input to an engagement prediction model, the engagement prediction features, and receiving, as an output from the engagement prediction model, a predicted engagement metric, the predicted engagement metric being based on the engagement prediction features. The method further includes providing the predicted engagement metric as an input to a matchmaking system and receiving from the matchmaking system a decision whether to match the user account to the match of the game for gameplay.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players, for building a team or for providing a "buddy list"
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for assessing skills or for ranking players, e.g. for creating a player hall of fame
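The flow from features to a matchmaking decision could be sketched as below; the specific features, the linear stand-in model, and the threshold are assumptions, since the abstract does not define the engagement prediction model.

    def extract_engagement_features(player, match_state):
        # Illustrative feature vector; the real feature set is not specified.
        return [player["recent_sessions"], player["win_rate"], match_state["avg_skill_gap"]]

    def predict_engagement(features, weights=(0.2, 0.5, -0.3), bias=0.1):
        # Stand-in linear model producing a predicted engagement metric.
        return bias + sum(w * f for w, f in zip(weights, features))

    def should_match(player, match_state, threshold=0.5):
        # Matchmaking decision driven by the predicted engagement metric.
        score = predict_engagement(extract_engagement_features(player, match_state))
        return score >= threshold

    print(should_match({"recent_sessions": 3, "win_rate": 0.55}, {"avg_skill_gap": 0.2}))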
18.
GENERATIVE MODEL FOR CANONICAL AND LOCALIZED GAME CONTENT
The present disclosure provides a system for generating gameplay content by a generative modeling system. The system can generate gameplay content via one or more machine-learning models trained using game and player data. The system can add content generated by the one or more machine-learning models to the game and player data and retrain the models using the generated content. The system can also localize generated content based on player locations and language preferences.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/67 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, by adapting to or learning from player actions, e.g. adjusting the skill level or storing successful combat sequences for later reuse
The present disclosure provides a system for customizing virtual entities via a multi-track system. The system can generate a user interface that uses multiple tracks to manage and display virtual entities and virtual objects. The virtual entity can be in-line with a track, and virtual display objects can move along tracks that intersect with the virtual entity track. When a particular virtual display object intersects with the virtual entity track, a three-dimensional representation of an item associated with the particular virtual display object can be rendered with the virtual entity.
A63F 13/63 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, by the player, e.g. with a level editor
20.
FACILITATING COMMUNICATION BETWEEN COMPUTING PLATFORMS
Facilitating communication between computing platforms, including establishing, via a game agnostic communication service (GACS), communication with a first and a second computing platform; receiving i) first game state data associated with a first video game application executing at the first computing platform and ii) second game state data associated with a second video game application executing at the second computing platform; receiving a request to display at the first computing platform a first communication interface of the GACS; generating content for the first communication interface based at least in part on the first or the second game state data; causing to display the generated content via the first communication interface; receiving selection of one of the communication messages to transmit to the second computing platform; transmitting the selected communication message to the second computing platform for display via a second communication interface of the GACS at the second computing platform.
This specification provides a system comprising: one or more computing devices; and one or more storage devices communicatively coupled to the one or more computing devices. The one or more storage devices store instructions that, when executed by the one or more computing devices, cause the one or more computing devices to perform operations comprising: receiving input data derived from speech audio; generating facial animation data, comprising processing the input data and a conditioning input using a machine-learned generative model; generating further animation data, comprising processing the input data using a further machine-learned generative model; and generating animation data for at least a face in a video game using the facial animation data and the further animation data, wherein the animation data animates at least the face in the video game in accordance with speech sounds of the speech audio.
G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique, using neural networks
A system may provide for converting pull-based program code into push-based program code. The system may receive source code comprising pull-based programming language instructions, wherein the pull-based programming language instructions comprise a plurality of nodes connected by a plurality of edges to form a directed graph, and convert the source code into push-based programming language instructions at least in part by traversing the pull-based programming language instructions to determine one or more scopes of the directed graph including respective groups of one or more nodes of the plurality of nodes, the one or more scopes being associated with one or more conditional nodes of the plurality of nodes, and generating the push-based programming language instructions for the source code based on the one or more scopes of the directed graph.
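At its core, turning a pull-based dataflow graph into a push-style evaluation order resembles a topological sort; the sketch below illustrates that core step only and, as an assumption, leaves out the scope handling around conditional nodes described in the abstract.

    from collections import defaultdict, deque

    def push_schedule(edges, num_nodes):
        """Produce a push-style evaluation order for a pull-based dataflow graph.
        `edges` maps a producer node to the consumers that pull from it."""
        indegree = defaultdict(int)
        for src, dests in edges.items():
            for d in dests:
                indegree[d] += 1
        queue = deque(n for n in range(num_nodes) if indegree[n] == 0)
        order = []
        while queue:
            node = queue.popleft()
            order.append(node)
            for d in edges.get(node, ()):
                indegree[d] -= 1
                if indegree[d] == 0:
                    queue.append(d)
        return order

    # Nodes 0 and 1 feed node 2, which feeds node 3.
    print(push_schedule({0: [2], 1: [2], 2: [3]}, num_nodes=4))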
A gaming device may provide for training a machine learning (ML) model on video game execution data. The gaming device may receive video game execution data of a video game including data generated by instrumented code of the video game during execution of the video game, a rendered output of the video game during the execution of the video game, and telemetry data of the video game generated during the execution of the video game. The gaming device may then configure an ML model to at least one of detect or predict a type of event in the execution of the video game using at least the data generated by the instrumented code, the rendered output of the video game, and the telemetry data as training data for the ML model.
The systems and processes described herein can provide dynamic and realistic route generation based on actual route data within the game environment. The system provides for generating a route database for use with a sports simulation game application. The present disclosure also provides for generation of routes during runtime of the game application. The route generation system can help address the problem of generating realistic and lifelike routes based on real life movements of athletes.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for the orientation of a character
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game, e.g. computing the load on a tyre in a car racing game
A63F 13/65 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, automatically by game devices or servers from real-world data, e.g. live measurements in real racing competitions
A63F 13/812 - Ball games, e.g. soccer or baseball
25.
Deep learning system for data-driven skill estimation
Various aspects of the subject technology relate to systems, methods, and machine-readable media for determining player skill for video games. The method includes aggregating a plurality of player statistics for match outcomes from a plurality of video games. The method also includes calculating, for each player in a pool of players, a matchmaking rating for each player based on the plurality of player statistics, the matchmaking rating for each player comprising a predicted number of points each player will contribute to a match. The method also includes selecting, based on the matchmaking rating for each player, players from the pool of players. The method also includes matching the players based on the matchmaking rating for each player, a sum of the matchmaking ratings comprising a total predicted team score for the match.
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for assessing skills or for ranking players, e.g. for creating a player hall of fame
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players, for building a team or for providing a "buddy list"
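The rating arithmetic described above reduces to a per-player point prediction summed over a team; the average-of-past-points stand-in below is an assumption, since the abstract leaves the prediction model open.

    def matchmaking_rating(player_matches):
        # Predicted points a player will contribute; here a plain average of past points.
        points = [m["points"] for m in player_matches]
        return sum(points) / len(points)

    def predicted_team_score(team):
        # Per the abstract, the team prediction is the sum of the players' ratings.
        return sum(matchmaking_rating(p) for p in team)

    team = [[{"points": 12}, {"points": 18}], [{"points": 7}, {"points": 9}]]
    print(predicted_team_score(team))  # (12+18)/2 + (7+9)/2 = 23.0 predicted points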
A gaming system may allow for a user to capture and/or edit simulation state data of gameplay in a video game such that a replay of the gameplay may be rendered and/or shared. The gaming system may receive simulation state data and a request. The simulation state data may include simulation state(s) which include a model and pose state of an avatar corresponding to a player in a game simulation of a video game previously rendered as rendered view(s). The request may request a replay of the simulation state data with modification(s). The gaming system may modify the simulation state data to generate modified simulation state data and render, based on the modified simulation state data, replay view(s) that differ from the previously rendered view(s). The gaming system may then output the replay view(s) to a display of a computing device.
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game, e.g. computing the load on a tyre in a car racing game, using determination of the contact area between game characters or objects, e.g. to avoid a collision between virtual racing cars
27.
SYSTEMS AND METHODS FOR HANDLING BEVELS IN MESH SIMPLIFICATION
A method, device, and computer-readable storage medium for simplifying a mesh including bevels. The method includes: receiving a polygonal mesh representing a three-dimensional (3D) object; identifying a set of edges in the polygonal mesh as bevel edges; performing a mesh simplification operation on the polygonal mesh to generate a simplified mesh, wherein the mesh simplification operation removes at least one edge that includes a vertex of a bevel edge, and wherein two vertices in the polygonal mesh are collapsed to a collapse vertex in the simplified mesh; and updating stored normals of the collapse vertex based on copying stored normals of the two vertices removed from the polygonal mesh to the collapse vertex.
A method, device, and computer-readable storage medium for simplifying a convex hull are disclosed. A first queue of candidate vertices of a convex hull for vertex removal is generated, wherein the candidate vertices are sorted in the first queue by ascending values of a first cost metric associated with removal of the candidate vertex. A second queue of candidate faces of the convex hull for face removal is generated, wherein the candidate faces are sorted in the second queue by ascending values of a second cost metric associated with removal of the candidate face. A simplification operation is performed on the convex hull to generate a simplified version of the convex hull by performing a vertex removal operation on the candidate vertex in the first queue with the lowest first cost metric or performing a face removal operation on the candidate face in the second queue with the lowest second cost metric.
G06F 30/15 - Vehicle, aircraft or watercraft design
G06F 30/23 - Optimisation, verification or simulation of the designed object using finite element methods [FEM] or finite difference methods [FDM]
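The two-queue selection can be sketched with a pair of heaps; the pre-computed costs and the absence of cost re-evaluation after each removal are simplifying assumptions.

    import heapq

    def simplify_hull(vertex_costs, face_costs, removals):
        """Repeatedly remove whichever candidate (vertex or face) is currently cheapest."""
        vq = [(cost, "vertex", vid) for vid, cost in vertex_costs.items()]
        fq = [(cost, "face", fid) for fid, cost in face_costs.items()]
        heapq.heapify(vq)
        heapq.heapify(fq)
        operations = []
        for _ in range(removals):
            best_v = vq[0] if vq else (float("inf"),)
            best_f = fq[0] if fq else (float("inf"),)
            source = vq if best_v[0] <= best_f[0] else fq
            cost, kind, ident = heapq.heappop(source)
            operations.append((kind, ident, cost))
        return operations

    print(simplify_hull({"v3": 0.02, "v7": 0.10}, {"f1": 0.05}, removals=2))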
09 - Scientific and electric apparatus and instruments
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
41 - Education, entertainment, sporting and cultural activities
Goods and services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural activities
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
33.
SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
The present disclosure discloses the use of machine learning to address the process of motion synthesis and generation of intermediate poses for virtual entities. A transformer-based model can be used to generate intermediate poses for an animation based on a set of key frames.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for the orientation of a character
34.
SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
The present disclosure discloses the use of machine learning to address the process of motion synthesis and generation of intermediate poses for virtual entities. A transformer-based model can be used to generate intermediate poses for an animation based on a set of key frames.
Embodiments of systems and methods are disclosed for enabling access to an online game, modifying user progress within the online game, monitoring user interactions with the online game, or adjusting user gameplay with the online game, via multiple platforms. The multiple platforms may include virtual reality platforms and non-virtual reality platforms.
A63F 13/00 - Video games, i.e. games using an electronically generated multi-dimensional display
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
This specification describes a method for testing a user interface of a video game, the method implemented by one or more processors, the method comprising: obtaining, by one or more of the processors, a screenshot of the video game; processing, by one or more of the processors, the screenshot of the video game to detect one or more user interface elements; and performing, by one or more of the processors, one or more actions in the video game based upon the detected one or more user interface elements for testing the user interface of the video game.
G06F 11/36 - Preventing errors by analysis, debugging or testing of software
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, for prompting interaction with the player, e.g. by displaying a game menu
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 20/62 - Text, e.g. licence plates, overlaid texts or captions on TV images
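A sketch of how detected elements might drive the test actions follows; the detector stand-in, the element labels, and the click callback are illustrative assumptions, not the vision model of the specification.

    def exercise_ui(screenshot, detector, click):
        """Click every detected button and report elements that were not acted on."""
        unhandled = []
        for element in detector(screenshot):        # detector returns labelled bounding boxes
            if element["label"] == "button":
                click(element["box"])               # drive the game through its input layer
            else:
                unhandled.append(element)
        return unhandled

    fake_detector = lambda image: [{"label": "button", "box": (10, 10, 80, 40)},
                                   {"label": "text", "box": (0, 0, 200, 20)}]
    print(exercise_ui("menu.png", fake_detector, click=lambda box: None))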
37.
Expressive speech audio generation for video games
This specification describes a computer-implemented method of training a machine-learned speech audio generation system to generate predicted acoustic features for generated speech audio for use in a video game. The training comprises receiving one or more training examples. Each training example comprises: (i) ground-truth acoustic features for speech audio, (ii) speech content data representing speech content of the speech audio, and (iii) speech expression data representing speech expression of the speech audio. Parameters of the machine-learned speech audio generation system are updated by: (i) minimizing a measure of difference between the predicted acoustic features for a training example and the corresponding ground-truth acoustic features of the training example, and (ii) minimizing a measure of difference between the predicted prosodic features for the training example and the corresponding ground-truth prosodic features for the training example.
G10L 13/00 - Speech synthesis; text-to-speech systems
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating engine noise dependent on revolutions per minute [RPM] in a driving game, or reverberation against a virtual wall
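The two minimisation terms in the training objective could be combined as in the sketch below; the mean-squared-error measure and the prosody weighting are assumptions, since the abstract only states that both differences are minimised.

    def mse(pred, target):
        return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

    def total_loss(pred_acoustic, true_acoustic, pred_prosody, true_prosody, prosody_weight=0.5):
        # Acoustic-feature error plus weighted prosodic-feature error.
        return mse(pred_acoustic, true_acoustic) + prosody_weight * mse(pred_prosody, true_prosody)

    print(total_loss([0.1, 0.4], [0.0, 0.5], [1.2], [1.0]))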
The systems and methods described herein provide for an automated photosensitivity detection system (PDS) configured to automatically execute processes for flash detection and pattern detection of a video. PDS outputs an analysis result for each type of pattern detection analysis for the video. The PDS can execute each type of pattern detection analysis independently of the other pattern detection processes. Each pattern detection process is a distinct process that can be calculated without reference to the other processes. The final analysis result can aggregate the results of each detection process executed by the PDS.
The systems and methods described herein provide for an automated photosensitivity detection system (PDS) configured to automatically execute processes for flash detection and pattern detection of a video. PDS outputs an analysis result for each type of pattern detection analysis for the video. The PDS can execute each type of pattern detection analysis independently of the other pattern detection processes. Each pattern detection process is a distinct process that can be calculated without reference to the other processes. The final analysis result can aggregate the results of each detection process executed by the PDS.
G06T 7/136 - Segmentation; edge detection involving thresholding
G06T 7/37 - Determining transform parameters for the alignment of images, i.e. image registration, using transform domain methods
G06T 7/90 - Determination of colour characteristics
40.
AWARENESS-BASED NON-PLAYER CHARACTER DECISION TECHNIQUES
A gaming system may provide for awareness-based decision-making by non-player characters. The gaming system may determine a sensory perspective of a sense of a non-player character (NPC) in a virtual environment of a game simulation, generate perception data of the NPC from the sensory perspective of the sense, input the perception data of the NPC into a detection model associated with the sense, and receive, from the detection model, detection data for a detected item. The gaming system may then generate an awareness-based character decision for the NPC based on the detection data of the detected item.
A63F 13/67 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, by adapting to or learning from player actions, e.g. adjusting the skill level or storing successful combat sequences for later reuse
A63F 13/812 - Ball games, e.g. soccer or baseball
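A toy version of the perception-to-decision flow is sketched below, using sight as the single sense; the visual-range check, the stand-in detection model, and the two decisions are assumptions for illustration.

    def npc_decision(npc, environment, detection_model, visual_range=20.0):
        """Build perception data from the NPC's sensory perspective, run detection,
        and choose an awareness-based decision."""
        perceived = [item for item in environment
                     if abs(item["x"] - npc["x"]) <= visual_range and item["visible"]]
        detections = detection_model(perceived)
        if any(d["type"] == "opponent" for d in detections):
            return "take_cover"
        return "continue_patrol"

    model = lambda items: [{"type": i["kind"]} for i in items]
    environment = [{"x": 12.0, "visible": True, "kind": "opponent"}]
    print(npc_decision({"x": 0.0}, environment, model))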
This specification describes a computer-implemented method of generating context-dependent speech audio in a video game. The method comprises obtaining contextual information relating to a state of the video game. The contextual information is inputted into a prosody prediction module. The prosody prediction module comprises a trained machine learning model which is configured to generate predicted prosodic features based on the contextual information. Input data comprising the predicted prosodic features and speech content data associated with the state of the video game is inputted into a speech audio generation module. An encoded representation of the speech content data dependent on the predicted prosodic features is generated using one or more encoders of the speech audio generation module. Context-dependent speech audio is generated, based on the encoded representation, using a decoder of the speech audio generation module.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating engine noise dependent on revolutions per minute [RPM] in a driving game, or reverberation against a virtual wall
G10L 13/02 - Methods for producing synthetic speech; speech synthesisers
G10L 19/04 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals using source-filter models or psychoacoustic analysis, using predictive techniques
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique, using neural networks
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination
42.
SYSTEM FOR RENDERING SKIN TONE WITHIN A GAME APPLICATION ENVIRONMENT
The present disclosure provides a system for rendering skin tones of virtual entities using dynamic lighting systems within the virtual environment. The dynamic lighting system can be used to modify parameters of light sources within a game environment to increase the range of renderable skin tones of a virtual entity.
A63F 13/60 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable computer software for identifying, monitoring, and reporting cheating in video games; downloadable computer software for ensuring compliance and integrity in video games; downloadable computer software for monitoring and analyzing video game play; downloadable computer software for monitoring and analyzing computer systems; downloadable computer software for monitoring and analyzing video game systems; downloadable computer software for monitoring and managing a gaming community to prevent cheating; downloadable computer software for preventing cheating in video games; downloadable computer software for preventing video game players from utilizing cheat codes; downloadable computer software for preventing video game players from obtaining an unfair advantage by using third-party tools; downloadable computer software for preventing video game players from using unauthorized third-party tools; downloadable anti-cheat game software; downloadable computer software for detecting, eradicating and preventing computer viruses; downloadable computer software for ensuring the security of software applications, games, and video and audio files; downloadable computer software packages for ensuring the security of software applications, games, and video and music files; downloadable computer software for game security and to prevent hacking; downloadable computer software for protecting video and computer games from security breaches Providing temporary use of non-downloadable computer software for identifying, monitoring, and reporting cheating in video games; providing temporary use of non-downloadable computer software for ensuring compliance and integrity in video games; providing temporary use of non-downloadable computer software for monitoring and analyzing video game play; providing temporary use of non-downloadable computer software for monitoring and analyzing computer systems; providing temporary use of non-downloadable computer software for monitoring and analyzing video game systems; providing temporary use of non-downloadable computer software for monitoring and managing a gaming community to prevent cheating; providing temporary use of non-downloadable computer software for preventing cheating in video games; providing temporary use of non-downloadable computer software for preventing video game players from utilizing cheat codes; providing temporary use of non-downloadable computer software for preventing video game players from obtaining an unfair advantage by using third-party tools; providing temporary use of non-downloadable computer software for preventing video game players from using unauthorized third-party tools; providing temporary use of non-downloadable anti-cheat game software; providing temporary use of non-downloadable computer software for detecting, eradicating and preventing computer viruses; providing temporary use of non-downloadable computer software for ensuring the security of software applications, games, and video and audio files; providing temporary use of non-downloadable computer software packages for ensuring the security of software applications, games, and video and music files; providing temporary use of non-downloadable computer software for game security and to prevent hacking; providing temporary use of non-downloadable computer software for protecting video and computer games from security breaches
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural activities
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
45.
Enhanced system for generation and optimization of facial models and animation
Systems and methods are provided for enhanced animation generation based on generative modeling. An example method includes training models based on faces and information associated with persons. The modeling system is trained to reconstruct expressions, textures, and models of persons.
G06T 13/80 - Two-dimensional [2D] animation, e.g. using programmable graphical patterns
G06T 3/4053 - Scaling of whole images or parts of images, e.g. enlarging or shrinking, based on super-resolution, i.e. where the resolution of the obtained image is higher than the sensor resolution
G06T 19/20 - Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
46.
Skin microstructure texture filtering for runtime facial animations
A method of skin microstructure texture filtering for facial animation includes obtaining a plurality of one-dimensional (1D) filtered tiles corresponding to a plurality of filter axis angles and a plurality of filter parameters applied to a neutral tile, and, at runtime, for each pixel representing a region of human skin, determining a principal direction of deformation, a principal filter parameter corresponding to the principal direction of deformation, and a secondary filter parameter corresponding to a secondary direction of deformation orthogonal to the principal direction of deformation, selecting a first 1D filtered tile among the plurality of 1D filtered tiles, the first 1D filtered tile corresponding to the secondary direction of deformation and the secondary filter parameter, and generating a respective two-dimensional (2D) filtered tile by convolving the first 1D filtered tile with a second 1D filter kernel corresponding to the principal direction of deformation and the principal filter parameter.
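The runtime combination of a pre-filtered 1D tile with a second 1D kernel amounts to separable filtering; the sketch below shows that step with a tiny tile, and the axis handling and key format are simplifying assumptions (the real method indexes tiles by filter axis angle and parameter).

    def filter_rows(tile, kernel):
        """Convolve each row of a tile with a 1D kernel (zero padding, same-size output)."""
        half = len(kernel) // 2
        out = []
        for row in tile:
            filtered = []
            for i in range(len(row)):
                acc = 0.0
                for k, w in enumerate(kernel):
                    j = i + k - half
                    if 0 <= j < len(row):
                        acc += w * row[j]
                filtered.append(acc)
            out.append(filtered)
        return out

    def runtime_tile(filtered_tiles, secondary_key, principal_kernel):
        """Pick the pre-filtered tile for the secondary direction/parameter, then apply
        the principal-direction kernel to obtain the 2D-filtered tile."""
        return filter_rows(filtered_tiles[secondary_key], principal_kernel)

    tiles = {("vertical", 0.5): [[0.0, 1.0, 0.0],
                                 [0.0, 1.0, 0.0]]}
    print(runtime_tile(tiles, ("vertical", 0.5), principal_kernel=[0.25, 0.5, 0.25]))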
In response to receiving a user input command for sending a contextually aware communication, a computer system is configured to use game state data to determine a target location that a player is focusing on in a virtual environment in a video game, identify a unit that the player likely wants to communicate about based on at least priorities of unit types and proximities of units to the target location, and select a communication action for performance. Different communication actions can be performed in response to the same user input command when the game state data indicates different game states.
A63F 13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, using indicators, e.g. showing the physical condition of a game character on screen, for tagging characters, objects or locations in the game scene, e.g. displaying a circle around the player-controlled character
A63F 13/23 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between the controller and the game console
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, using indicators, e.g. showing the physical condition of a game character on screen, for displaying an additional top view, e.g. radar screens or maps
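One simple reading of "priorities of unit types and proximities of units to the target location" is a priority-first, distance-second ranking, sketched below; the unit types, priority table, and tie-breaking rule are assumptions.

    import math

    def pick_unit(target, units, type_priority):
        """Choose the unit the player most likely means: prefer higher-priority types,
        break ties by distance to the location the player is focusing on."""
        def score(unit):
            distance = math.dist(target, unit["pos"])
            return (type_priority.get(unit["type"], 0), -distance)
        return max(units, key=score)

    units = [{"type": "enemy", "pos": (5.0, 5.0)},
             {"type": "loot", "pos": (4.0, 4.5)}]
    print(pick_unit((4.0, 4.0), units, {"enemy": 2, "loot": 1}))  # enemy wins despite being farther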
A system is disclosed that is able to combine motion capture data with volumetric capture data to capture player style information for a player. This player style information, or player style data, may be used to modify animation models used by a video game to create a more realistic look and feel for a player being emulated by the video game. This more realistic look and feel can enable the game to replicate the play style of a player. For example, one soccer player may run with his elbows closer to his body and his forearms may swing across his torso, while another soccer player, who is perhaps more muscular, may run with his elbows and arms further from his body, and his forearms may not cross in front of his torso when running.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for the orientation of a character
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
The disclosure provides a video playback system for use within a game application and/or other interactive computing environments. The video playback system can be used to capture gameplay during execution of a game application. The captured gameplay video can be processed and stored within the game application or in a network accessible location.
A63F 13/86 - Watching games played by other players
A63F 13/20 - Input arrangements for video game devices
A63F 13/25 - Output arrangements for video game devices
A63F 13/30 - Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers
A63F 13/32 - Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers using local area network [LAN] connections
A63F 13/33 - Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers using wide area network [WAN] connections
A63F 13/332 - Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers using wide area network [WAN] connections using wireless networks, e.g. cellular phone networks
A63F 13/335 - Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers using wide area network [WAN] connections using the Internet
This specification describes a computer-implemented method of training a machine-learned speech audio generation system for use in video games. The training comprises: receiving one or more training examples. Each training example comprises: (i) ground-truth acoustic features for speech audio, (ii) speech content data representing speech content of the speech audio, and (iii) a ground-truth speaker identifier for a speaker of the speech audio. Parameters of the machine-learned speech audio generation system are updated to: (i) minimize a measure of difference between the predicted acoustic features of a training example and the corresponding ground-truth acoustic features of the training example, (ii) maximize a measure of difference between the first speaker classification for the training example and the corresponding ground-truth speaker identifier of the training example, and (iii) minimize a measure of difference between the second speaker classification for the training example and the corresponding ground-truth speaker identifier of the training example.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating engine noise dependent on revolutions per minute [RPM] in a driving game, or reverberation against a virtual wall
G10L 13/02 - Methods for producing synthetic speech; speech synthesisers
G10L 17/04 - Training, enrolment or model building
This specification describes systems, methods, and apparatus for policy models for selecting an action in a game environment based on persona data, as well as the use of said models. According to one aspect of this specification, there is described a computer-implemented method of controlling an agent in an environment, the method comprising: for a plurality of timesteps in a sequence of timesteps: inputting, into a machine-learned policy model, input data comprising a current state of the environment and an auxiliary input, the auxiliary input indicating a target action style for the agent; processing, by the machine-learned policy model, the input data to select an action for a current timestep; performing, by the agent in the environment, the selected action; and determining, subsequent to the selected action being performed, an update to the current state of the environment.
Various aspects of the subject technology relate to systems, methods, and machine-readable media for preventing rendering of a character in a video game. The method includes receiving an action regarding a first character rendered in a first-person point of view (POV), wherein the POV of the first character is changed from the first-person POV to a third-person POV. The method includes detecting a change in the POV of the first character. The method includes determining that characters are outside the first character's field of view (FOV) in the first-person POV and would be within the FOV of the first character in the third-person POV. The method includes changing the POV of the first character from the first-person POV to the third-person POV. The method includes causing rendering of the video game in the third-person POV of the first character, the rendering preventing rendering of the other characters.
An imitation learning system may learn how to play a video game based on user interactions by a tester or other user of the video game. The imitation learning system may develop an imitation learning model based, at least in part, on the tester's interaction with the video game and the corresponding state of the video game to determine or predict actions that may be performed when interacting with the video game. The imitation learning system may use the imitation learning model to control automated agents that can play additional instances of the video game. Further, as the user continues to interact with the video game during testing, the imitation learning model may continue to be updated. Thus, the interactions by the automated agents with the video game may, over time, closely mimic the interaction by the user, enabling multiple tests of the video game to be performed simultaneously.
A63F 13/60 - Creating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor
G06F 11/36 - Preventing errors by analysis, debugging or testing of software
G06N 3/088 - Non-supervised learning, e.g. competitive learning
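In the simplest possible form, imitation learning from a tester's play can be illustrated with a state-to-action lookup that keeps updating as new interactions arrive; the table-based agent below is an assumption standing in for the learned model of the abstract.

    from collections import defaultdict, Counter

    class ImitationAgent:
        """Records (state, action) pairs from a human tester and replays the most
        common action seen for a state."""

        def __init__(self):
            self.observations = defaultdict(Counter)

        def observe(self, state, action):
            self.observations[state][action] += 1   # keeps learning as the tester plays

        def act(self, state, default="explore"):
            seen = self.observations.get(state)
            return seen.most_common(1)[0][0] if seen else default

    agent = ImitationAgent()
    agent.observe("door_closed", "press_button")
    agent.observe("door_closed", "press_button")
    agent.observe("door_closed", "kick_door")
    print(agent.act("door_closed"), agent.act("lava_pit"))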
Various aspects of the subject technology relate to systems, methods, and machine-readable media for generating insights for video games. The method includes gathering information regarding a player for a plurality of video games, the information comprising at least one of in-world state data, player action data, player progression data, and/or real-world events relevant to each video game. The method also includes tracking events in at least one video game of the plurality of video games, the events comprising an action event or a standby event. The method also includes determining that an event of the tracked events is an action event. The method also includes generating insights regarding the action event based on the information gathered regarding the player, the insights for improving the player's performance in the video game. The method also includes relaying the insights to the player to improve the player's performance in the video game.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing the condition of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle, involving acoustic input signals, e.g. using the results of pitch or rhythm extraction or voice recognition
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players, for building a team or for providing a "buddy list"
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for assessing skills or for ranking players, e.g. for creating a player hall of fame
This specification describes a computing system for generating visual assets for video games. The computing system comprises an image segmentation model, a first 3D generation model, and a second 3D generation model. At least one of the first 3D generation model and the second 3D generation model comprises a machine-learning model. The system is configured to obtain: (i) a plurality of images corresponding to the visual asset, each image showing a different view of an object to be generated in the visual asset, and (ii) orientation data for each image that specifies an orientation of the object in the image. A segmented image is generated for each image. This comprises processing the image using the image segmentation model to segment distinct portions of the image into one or more classes of a predefined set of classes. For each image, 3D shape data is generated for a portion of the object displayed in the image. This comprises processing the segmented image for that image, the orientation data for that image, and style data for the visual asset using the first 3D generation model. 3D shape data is then generated for the visual asset as a whole. This comprises processing the generated 3D shape data of each image using the second 3D generation model.
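A minimal sketch of how the two-stage pipeline above could be orchestrated, with the segmentation model and both 3D generation models passed in as callables. The function names, data shapes, and the stand-in lambdas are assumptions used only to show the flow of data between stages.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ViewInput:
    image: object          # a 2D view of the object
    orientation: float     # assumed: object yaw in this view, in degrees

def build_asset(views: List[ViewInput],
                segment: Callable,      # image segmentation model
                per_view_3d: Callable,  # first 3D generation model
                fuse_3d: Callable,      # second 3D generation model
                style: dict):
    """Segment each view, generate partial 3D shape data per view,
    then fuse the partial shapes into one asset."""
    partial_shapes = []
    for view in views:
        segmented = segment(view.image)                        # per-pixel classes
        partial_shapes.append(per_view_3d(segmented, view.orientation, style))
    return fuse_3d(partial_shapes)                             # final 3D shape data

# Hypothetical usage with stand-in models:
asset = build_asset(
    [ViewInput("front.png", 0.0), ViewInput("side.png", 90.0)],
    segment=lambda img: {"body": img},
    per_view_3d=lambda seg, ori, style: {"view": ori, "verts": []},
    fuse_3d=lambda parts: {"merged_views": [p["view"] for p in parts]},
    style={"theme": "sci-fi"},
)
print(asset)
```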
Various aspects of the subject technology relate to systems, methods, and machine-readable media for rendering audio via a game engine for a game. Various aspects may include determining sound source reverb metrics and listener reverb metrics. Aspects may include determining reverbs within a reverb possibility space for all rooms or spaces of the game rendered by the game engine. Aspects may also include determining sound tuning parameters describing reverb attenuation over distance. Aspects may include calculating acoustic parameters based on the reverb metrics, relative positions, and sound tuning parameters. Aspects may include rendering audio according to a fit of determined reverbs to the acoustic parameters.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
09 - Scientific and electrical apparatus and instruments
41 - Education, entertainment, sporting and cultural activities
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
A player profile management system collects player data from various systems and generates and manages player profiles. A snapshot pipeline of the player profile management system generates a snapshot player profile associated with a player. The player profile management system receives, after generating the snapshot player profile associated with the player, player data associated with the player. An update pipeline of the player profile management system generates, based on the snapshot player profile and the player data associated with the player, an updated player profile associated with the player.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
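The toy class below sketches the snapshot and update pipelines described in the abstract above: the snapshot pipeline materialises a baseline profile, and the update pipeline folds later player data into it. The profile fields and the additive merge rule are assumptions for illustration.

```python
import copy, time

class PlayerProfileManager:
    def __init__(self):
        self.snapshots = {}   # player_id -> snapshot profile

    def run_snapshot_pipeline(self, player_id, collected_data):
        # Build a baseline profile from data collected across systems.
        snapshot = {"player_id": player_id,
                    "as_of": time.time(),
                    "stats": dict(collected_data)}
        self.snapshots[player_id] = snapshot
        return snapshot

    def run_update_pipeline(self, player_id, new_data):
        # Build an updated profile from the snapshot plus data received afterwards.
        updated = copy.deepcopy(self.snapshots[player_id])
        for key, value in new_data.items():
            updated["stats"][key] = updated["stats"].get(key, 0) + value
        updated["as_of"] = time.time()
        return updated

mgr = PlayerProfileManager()
mgr.run_snapshot_pipeline("p1", {"matches": 10, "wins": 4})
print(mgr.run_update_pipeline("p1", {"matches": 2, "wins": 1}))
```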
41 - Education, entertainment, sporting and cultural activities
Goods and services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electrical apparatus and instruments
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
The specification relates to the generation of in-game animation data and the evaluation of in-game animations. According to a first aspect of the present disclosure, there is described a computer-implemented method comprising: inputting, into one or more neural network models, input data comprising one or more current pose markers indicative of a current pose of an in-game object, one or more target markers indicative of a target pose of the in-game object, and an object trajectory of the in-game object; processing, using the one or more neural network models, the input data to generate one or more intermediate pose markers indicative of an intermediate pose of the in-game object positioned between the current pose and the target pose; outputting, from the one or more neural network models, the one or more intermediate pose markers; and generating, using the one or more intermediate pose markers, an intermediate pose of the in-game object, wherein the intermediate pose of the in-game object corresponds to a pose of the in-game object at an intermediate frame of in-game animation between a current frame of in-game animation in which the in-game object is in the current pose and a target frame of in-game animation in which the in-game object is in the target pose.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
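As a simple stand-in for the neural in-betweening model described above, the sketch below blends current and target pose markers to produce markers for an intermediate frame. A real model would be learned; the linear blend, marker names, and the alpha parameter are assumptions for illustration.

```python
def intermediate_pose(current_markers, target_markers, alpha=0.5):
    """Blend current and target pose markers into intermediate-frame markers."""
    return {name: (1 - alpha) * current_markers[name] + alpha * target_markers[name]
            for name in current_markers}

current = {"hip_y": 0.9, "knee_angle": 10.0}
target = {"hip_y": 0.5, "knee_angle": 80.0}
print(intermediate_pose(current, target, alpha=0.5))  # markers for the halfway frame
```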
The present disclosure provides a system for generating and rendering virtual objects, such as mesh particles, using dynamic color blending within the virtual environment. Mesh particles may be divided into portions. For example, a portion of the mesh particle may be a single pixel or a group of pixels. The color of a mesh particle can be dynamically determined for each of its portions.
Systems and methods are provided for enhanced animation generation based on generative control models. An example method includes accessing an autoencoder trained based on character control information generated using motion capture data, the character control information indicating, at least, trajectory information associated with the motion capture data, and the autoencoder being trained to reconstruct, via a latent feature space, the character control information. First character control information associated with a trajectory of an in-game character of an electronic game is obtained. A latent feature representation is generated and the latent feature representation is modified. A control signal is output to a motion prediction network for use in updating a character pose of the in-game character.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
G06N 3/04 - Architecture, e.g. interconnection topology
A collusion detection system may detect collusion between entities participating in online gaming. The collusion detection system may identify a plurality of entities that are associated with, and are opponents within, an instance of an online game, determine social data associated with the plurality of entities, determine in-game behavior data associated with the plurality of entities, and determine, for one or more pairings of the plurality of entities, respective pairwise feature sets based at least in part on the social data and the in-game behavior data. The collusion detection system may then perform anomaly detection on the respective pairwise feature sets and, in response to the anomaly detection detecting one or more anomalous pairwise feature sets, output one or more suspect pairings of the plurality of entities corresponding to the one or more anomalous pairwise feature sets as suspected colluding pairings.
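A minimal sketch of the pairwise-feature and anomaly-detection steps described above, assuming two illustrative features per pairing (an off-game social score and a gap in in-game behavior) and a z-score outlier test in place of whatever detector the system actually uses.

```python
from itertools import combinations
from statistics import mean, pstdev

def pairwise_features(social, behaviour, entities):
    """Build one feature vector per opponent pairing from social and in-game data."""
    feats = {}
    for a, b in combinations(entities, 2):
        feats[(a, b)] = [social.get(frozenset((a, b)), 0.0),   # e.g. friendship score
                         abs(behaviour[a] - behaviour[b])]     # e.g. damage-dealt gap
    return feats

def anomalous_pairings(feats, z_threshold=2.0):
    """Flag pairings whose social-score feature is an outlier (toy anomaly detector)."""
    values = [v[0] for v in feats.values()]
    mu, sigma = mean(values), pstdev(values) or 1.0
    return [pair for pair, v in feats.items() if (v[0] - mu) / sigma > z_threshold]

entities = ["A", "B", "C", "D"]
social = {frozenset(("A", "C")): 0.95}     # A and C interact heavily off-game
behaviour = {"A": 120.0, "B": 400.0, "C": 110.0, "D": 380.0}
print(anomalous_pairings(pairwise_features(social, behaviour, entities)))  # -> [('A', 'C')]
```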
A persona system determines a player persona for a player of a gaming system based on gameplay information for the player and, for example, performs dynamic content generation or additional product recommendations based on the player persona. The persona system may receive a request for content based on a persona of a player and receive gameplay data associated with gameplay of the player in a plurality of games. The persona system may then generate a player persona of the player based on the gameplay data associated with the gameplay of the player in the plurality of games, determine persona-based content based at least in part on a portion of the player persona, and output the persona-based content in response to the request.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
67.
CHARACTER CONTROLLERS USING MOTION VARIATIONAL AUTOENCODERS (MVAES)
Some embodiments herein can include methods and systems for predicting next poses of a character within a virtual gaming environment. The pose prediction system can identify a current pose of a character, generate a Gaussian distribution representing a sample of likely poses based on the current pose, and apply the Gaussian distribution to the decoder. The decoder can be trained to generate a predicted pose based on a Gaussian distribution of likely poses. The system can then render the predicted next pose of the character within the three-dimensional virtual gaming environment. Advantageously, the pose prediction system can apply a decoder that does not include or use the input motion capture data that was used to train the decoder.
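A rough sketch of the sample-then-decode step described above: draw a latent vector from a Gaussian and let a decoder map it, together with the current pose, to a predicted next pose. The toy decoder, the latent dimension, and the joint names are assumptions standing in for the trained motion VAE decoder.

```python
import random

def predict_next_pose(current_pose, decoder, latent_dim=8):
    """Sample a likely-pose latent vector from a Gaussian, then decode it."""
    z = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
    return decoder(current_pose, z)

# Stand-in decoder: nudges each joint by a small latent-driven offset.
def toy_decoder(pose, z):
    return {joint: angle + 0.1 * z[i % len(z)]
            for i, (joint, angle) in enumerate(pose.items())}

pose = {"hip": 12.0, "knee": 45.0, "ankle": 5.0}
print(predict_next_pose(pose, toy_decoder))
```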
In a video game, a player's character can start in a normal state, receive first damage, and change to an incapacitated state. The player's character can be revived from the incapacitated state back to the normal state. The player's character can be changed from the incapacitated state to a preliminarily defeated state, and in response, a player respawn activation item can be generated. The player respawn activation item can be used by the player's teammates to respawn the player's character at one or more respawn locations.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for displaying an additional top view, e.g. radar screens or maps
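The character-state flow in the respawn abstract above maps naturally onto a small state machine. The sketch below follows the described transitions (normal, incapacitated, preliminarily defeated, respawn item generation); the state names, item name, and method API are illustrative assumptions.

```python
# NORMAL -> (first damage) -> INCAPACITATED -> revive -> NORMAL
#                             INCAPACITATED -> defeat -> PRELIM_DEFEATED (+ respawn item)
class PlayerCharacter:
    def __init__(self):
        self.state = "NORMAL"
        self.respawn_item = None

    def take_first_damage(self):
        if self.state == "NORMAL":
            self.state = "INCAPACITATED"

    def revive(self):
        if self.state == "INCAPACITATED":
            self.state = "NORMAL"

    def defeat(self):
        if self.state == "INCAPACITATED":
            self.state = "PRELIM_DEFEATED"
            self.respawn_item = "respawn_activation_item"   # usable by teammates

    def respawn(self, location):
        if self.state == "PRELIM_DEFEATED" and self.respawn_item:
            self.state, self.respawn_item = "NORMAL", None
            return f"respawned at {location}"

pc = PlayerCharacter()
pc.take_first_damage()
pc.defeat()
print(pc.respawn("beacon_3"))   # -> "respawned at beacon_3"
```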
09 - Scientific and electrical apparatus and instruments
35 - Advertising; Business affairs
41 - Education, entertainment, sporting and cultural activities
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Administration of loyalty programs. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
Methods, apparatus and systems are provided for generating an interactive non-player character (NPC) scene for a computer game environment of a video game. Changes are detected in relation to a script associated with the interactive NPC scene. For each NPC, a set of NPC data associated with the interactions that NPC has within the script is generated corresponding to the changes. The generated set of NPC data is processed with an NPC rig associated with that NPC to generate an NPC asset. A camera solver is applied to a region of the computer game environment associated with the script for determining locations of the NPC assets and one or more cameras within the region in relation to the interactive NPC scene. Data representative of each NPC asset and of the determined NPC asset and camera locations is provided for use by a game development engine in generating the interactive NPC scene.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for character orientation
A63F 13/5258 - Changing the parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game character or game object in its viewing frustum, e.g. for tracking a character or a ball
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G10L 13/027 - Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
71.
VERSION AGNOSTIC CENTRALIZED STATE MANAGEMENT IN VIDEO GAMES
A state system providing version-agnostic centralized state management can use node graphs corresponding to virtual entities to maintain a world state across any version of a video game. As the states of virtual entities change, corresponding nodes of the node graph are updated in response to each state change to account for and store the state change. As a data structure referencing, associating, and/or corresponding to the virtual entities themselves, the node graph can facilitate centralized state management for a video game in a version-agnostic manner. Additionally, the state system is configured to validate the node dependencies of a node graph when a corresponding change in state of a corresponding virtual entity occurs during gameplay, to avoid and/or prevent game state errors.
G06F 8/71 - Version control; Configuration management
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
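A minimal sketch of the node-graph idea above: each virtual entity maps to a node, a state change updates that node, and the node's dependencies are validated at update time. The node fields, dependency rule, and error handling are assumptions chosen only to illustrate the mechanism.

```python
class StateNode:
    def __init__(self, name, deps=()):
        self.name, self.deps, self.value = name, list(deps), None

class WorldStateGraph:
    """Toy centralized state store keyed by entity rather than by game version."""

    def __init__(self):
        self.nodes = {}

    def add(self, node):
        self.nodes[node.name] = node

    def set_state(self, name, value):
        # Update the node for the entity, then validate its dependencies.
        node = self.nodes[name]
        node.value = value
        missing = [d for d in node.deps
                   if d not in self.nodes or self.nodes[d].value is None]
        if missing:
            raise ValueError(f"invalid state: {name} depends on unset {missing}")

graph = WorldStateGraph()
graph.add(StateNode("door_3"))
graph.add(StateNode("quest_open_door", deps=["door_3"]))
graph.set_state("door_3", "unlocked")
graph.set_state("quest_open_door", "complete")   # passes: dependency is set
```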
A party system of a gaming environment provides to users of the gaming environment parties of sizes that extend beyond the party size limitations of a video game. An extendable party can dynamically create subset parties of users for gameplay as needed, without requiring the party to be disbanded. Therefore, an extendable party allows a subset of users, or multiples thereof, to enter into gameplay while maintaining the party as a whole. A party system, video game, or gaming environment can be configured to apply one or more rules or policies to a party that limit or alter a feature or function of an extended party, such as to prevent players among extended parties from providing or receiving competitive advantages.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players; for building a team; for providing a "buddy list"
A63F 13/80 - Special adaptations for executing a specific game genre or game mode
73.
SYSTEM FOR MOTION RETARGETING WITH OBJECT INTERACTION
A system may perform animation retargeting, allowing an existing animation to be repurposed for a different skeleton and/or a different environment geometry from that associated with the existing animation. The system may input, to a machine learning (ML) retargeting model, an input animation, a target skeleton, and environment geometry data of an environment for a predicted animation, wherein the ML retargeting model is configured to generate the predicted animation based on the input animation, the target skeleton, and the environment geometry data, and may receive the predicted animation from the ML retargeting model.
Embodiments of the present application provide systems and methods for world prediction within a game application environment. The systems and methods can include a world prediction module for predicting collisions between virtual objects in the game application environment. The world prediction module can use game state data to simulate the virtual objects into future instances. The world prediction module can parse the future instances to find collisions in a farfield representation of the virtual objects and collisions in a nearfield representation of the virtual objects. The world prediction module can use collision information to update a game engine of the game application.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
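The world-prediction abstract above describes simulating objects forward and then scanning the future instances for collisions. The sketch below does this with point objects and a single farfield (bounding-circle) pass; a nearfield pass against detailed geometry would follow in practice. All object data and constants are illustrative assumptions.

```python
def simulate_forward(objects, steps, dt=0.1):
    """Advance simple point objects to produce future instances of the world."""
    futures = []
    for step in range(1, steps + 1):
        t = step * dt
        futures.append({name: (x + vx * t, y + vy * t, r)
                        for name, (x, y, vx, vy, r) in objects.items()})
    return futures

def find_collisions(futures):
    """Farfield pass: bounding-circle overlap per future frame."""
    hits = []
    for i, frame in enumerate(futures):
        names = list(frame)
        for a in range(len(names)):
            for b in range(a + 1, len(names)):
                (xa, ya, ra), (xb, yb, rb) = frame[names[a]], frame[names[b]]
                if (xa - xb) ** 2 + (ya - yb) ** 2 <= (ra + rb) ** 2:
                    hits.append((i, names[a], names[b]))
    return hits

# name -> (x, y, vx, vy, radius)
objs = {"car": (0, 0, 1.0, 0.0, 0.5), "wall": (2, 0, 0.0, 0.0, 0.5)}
print(find_collisions(simulate_forward(objs, steps=15)))
```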
75.
SYSTEM FOR AUTOMATED GENERATION OF FACIAL SHAPES FOR VIRTUAL CHARACTER MODELS
Systems and methods are provided for enhanced face shape generation for virtual entities based on generative modeling techniques. An example method includes training models based on synthetically generated faces and information associated with an authoring system, the models being trained to reconstruct face shapes for virtual entities based on a latent space embedding of a face identity.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
76.
QUALITY ANALYSIS OF VISUAL PROGRAMMING SCRIPTING LANGUAGE USING MACHINE LEARNING TECHNIQUES
A quality analysis tool for visual-programming scripting languages uses machine learning to process changes from visual-programming environments. The quality analysis tool can receive data associated with a code submission via a visual-programming scripting language, process the data to identify features in the data that correspond to previously identified defects, apply a pattern matching algorithm to the identified features, determine a risk prediction based on a learned pattern recognition model associated with a pattern in the features, and transmit a notice of predicted risk. The quality analysis tool can train models for use with visual-programming scripting languages and visual-programming environments.
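A toy sketch of the risk-prediction flow described above: extract simple features from a visual-script change, match them against a table of defect-associated patterns, and emit a risk score. The feature names, pattern table, and weights stand in for the learned pattern-recognition model and are assumptions, not details from the filing.

```python
def extract_features(change):
    """Pull simple signals from a visual-script change (field names are assumed)."""
    nodes = change.get("nodes", [])
    return {"node_count": len(nodes),
            "uses_loop": any(n["type"] == "Loop" for n in nodes),
            "unconnected_pins": change.get("unconnected_pins", 0)}

# Toy "learned" pattern table: predicate over features -> risk weight.
RISK_PATTERNS = [(lambda f: f["unconnected_pins"] > 0, 0.5),
                 (lambda f: f["uses_loop"] and f["node_count"] > 50, 0.3)]

def predict_risk(change):
    feats = extract_features(change)
    score = sum(weight for matches, weight in RISK_PATTERNS if matches(feats))
    return min(score, 1.0)

submission = {"nodes": [{"type": "Loop"}] * 60, "unconnected_pins": 2}
print(f"predicted defect risk: {predict_risk(submission):.2f}")  # -> 0.80
```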
Systems and methods are presented herein for providing an assist indication in an interactive virtual environment. Game data of a game session of a virtual interactive environment is received. Based in part on the game data, a navigation assist used in the game session is identified. An assist indication to render is determined based on the game data and the navigation assist. The assist indication is configured for rendering during runtime. The assist indication is rendered in the virtual interactive environment of the game session.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A method comprises determining positions of render strands based on a simulation model of simulation strands. Each simulation strand corresponds to a render strand. For a first range of values of a metric up to a first threshold value, the simulation model is determined in a first simulation level using a first set of simulation strands. For a second range of values of the metric above a second threshold value, the simulation model is determined in a second simulation level using a subset of the first set of simulation strands. For metric values between the first and second threshold values, a transition between the first and second simulation levels comprises computing the simulation model in the first simulation level. Positions of the render strands during the transition are derived from the first set of simulation strands, having a first weight, and the subset of simulation strands, having a second weight.
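The transition band described above can be read as a weighted blend between the two simulation levels. The sketch below assumes the render position of a strand is a linear mix of the positions it has in the full and reduced simulations, with the blend weight ramping linearly between the two thresholds; the threshold values and the meaning of the metric (e.g. camera distance) are illustrative assumptions.

```python
def blend_weight(metric, t1, t2):
    """Weight of the full strand set: 1.0 below t1, 0.0 above t2, linear between."""
    if metric <= t1:
        return 1.0
    if metric >= t2:
        return 0.0
    return (t2 - metric) / (t2 - t1)

def render_strand_position(full_pos, reduced_pos, metric, t1=10.0, t2=20.0):
    w = blend_weight(metric, t1, t2)
    # Render position is a weighted mix of the two simulation levels.
    return tuple(w * f + (1.0 - w) * r for f, r in zip(full_pos, reduced_pos))

print(render_strand_position((1.0, 2.0, 0.0), (1.2, 1.8, 0.0), metric=15.0))
```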
A method, device, and computer-readable storage medium for generating a proxy mesh are disclosed. The method includes: receiving a reference mesh, wherein the reference mesh comprises a polygonal mesh that is a computer representation of a three-dimensional (3D) object; computing quadrics corresponding to the reference mesh; receiving a second polygonal mesh, wherein the second polygonal mesh comprises a polygonal mesh generated based on the reference mesh; transferring the quadrics corresponding to the reference mesh to the second polygonal mesh; and generating a proxy mesh based on the quadrics corresponding to the reference mesh transferred to the second polygonal mesh.
A gaming system may provide for interactable environment geometry (IEG) detection. The gaming system may detect one or more potential IEG features in an area of a virtual environment of a game including an avatar of a player, determine, for an IEG feature of the one or more potential IEG features, one or more unprocessed potential interactions that are valid for the IEG feature, where determining that an individual unprocessed potential interaction of the one or more unprocessed potential interactions is a valid interaction for the IEG feature is based on corresponding criteria of the individual unprocessed potential interaction, and determine, based at least in part on a position of the avatar in the virtual environment, whether the valid interaction for the IEG feature is available to the avatar.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, by the player, e.g. with a level editor
A63F 13/55 - Controlling game characters or game objects based on the game progress
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using trajectories of game objects, e.g. of a golf ball according to the point of impact
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
81.
SYSTEM AND METHODS FOR AUTOMATED LIGHT RIGGING IN VIRTUAL INTERACTIVE ENVIRONMENTS
An automated light rigging system (ALRS) adjusts, or creates, light rigs so that lighting from light rigs and light objects conforms to, or more closely conforms with, target lux values of a target lux map for one or more regions of a virtual interactive environment. The ALRS can receive light rig data and sample lux values of the virtual interactive environment to determine where a loss or discrepancy of luminance occurs within the virtual interactive environment, based at least in part on the target lux map.
H05B 47/155 - Coordinated control of several light sources
H05B 47/11 - Controlling the light source in response to determined parameters by determining the brightness or colour temperature of ambient light
H05B 47/165 - Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
H05B 47/175 - Controlling the light source by remote control
H05B 47/29 - Circuits providing for replacement of the light source in case of failure
Systems and methods for user interface navigation may include a computing system which causes display of a user interface including a set of first user interface elements. The computing system receives a request to expand a switcher menu including a set of second user interface elements including at least one of the first user interface elements. Each second user interface element, when selected within the switcher menu, is displayed with one or more respective third user interface elements. The computing system causes display of a set of third user interface elements corresponding to one of the second user interface elements, and causes, responsive to a second request to navigate from the switcher menu to one of the set of third user interface elements, the switcher menu to collapse and display the second user interface element with at least some of the third user interface elements.
The systems and methods described herein provide a bakeless keyframe animation solver that enables creation of keyframe poses by manipulation of a skeleton and authors animations through interpolation. A manipulation module enables the manipulation of joints of a skeleton to produce keyframes without the need to bake animations that are driven by a rig. An interpolation module uses the manipulation module to change the kinematic properties (e.g., FK and IK) of one or more joints when interpolating between one or more keyframes to create an animation.
Gameplay API (G-API) calls are embedded by an anomaly system to detect anomalous gameplay within a video game. Anomalous gameplay is detected by identifying anomalous sequences of G-API calls made during gameplay. Anomalous gameplay can correspond to issues that disrupt and/or degrade the user experience of a video game, such as the existence of a bug or exploit, or the use of cheats and/or bots by users of the video game. A machine learning embedding model of the anomaly system is trained to embed G-API calls corresponding to a video game. Once trained, a distance analysis and a distribution analysis are performed by the anomaly system on the embedded G-API calls to detect anomalies among the G-API calls made by the video game. Data corresponding to the detected anomalies can be included in an anomaly detection report generated by the anomaly system for further analysis, such as by video game developers.
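A minimal sketch of the embed-then-analyse idea above, using a normalised bag-of-calls vector as a stand-in for the learned embedding model and a distance-to-centroid test as a stand-in for the distance and distribution analyses. The call vocabulary, sessions, and threshold are illustrative assumptions.

```python
from statistics import mean

def embed_sequence(calls, vocab):
    """Stand-in embedding: normalised count of each known G-API call."""
    return [calls.count(c) / max(len(calls), 1) for c in vocab]

def distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def detect_anomalies(sequences, vocab, threshold=0.4):
    embeddings = [embed_sequence(s, vocab) for s in sequences]
    centroid = [mean(col) for col in zip(*embeddings)]
    return [i for i, e in enumerate(embeddings) if distance(e, centroid) > threshold]

VOCAB = ["move", "fire", "grant_item", "teleport"]
sessions = [["move", "fire", "move"], ["move", "move", "fire"],
            ["grant_item", "teleport", "grant_item"]]      # suspicious sequence
print(detect_anomalies(sessions, VOCAB))                   # -> [2]
```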
An animation system is configured to accessibly curate selectable animations and/or stylized animations based in part on vocal audio data provided by a user during gameplay of a video game application. The vocal audio data is encoded by way of a machine learning model to produce and/or extract feature embeddings corresponding to the utterances in the vocal audio data. The feature embeddings are used in part to create a list of selectable animations and to create stylized animations that can be displayed to the user. In turn, the animation system enables users to use their voice to personalize their gameplay experience.
A body mesh to be collided with a cloth mesh is received, together with collider objects (that correspond to or approximate the body mesh) divided into cells. Polygons of the body mesh are projected onto the surface of the collider objects from a location within the collider object to identify cells of the collider object that overlap the projections of the polygons. A set of cloth features that collide with the collider object is projected onto the surface of the collider object to identify the cells onto which the cloth features are projected. For each cell that includes a projection of a cloth feature, collision tests are performed between the cloth feature and the polygons whose projections also overlap the same cell. Using the collider object as an acceleration structure allows cloth simulation to be performed while reducing the collision tests for each cloth feature to a limited number of polygons.
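A rough sketch of the cell-bucketing idea above: project body polygons and cloth features onto the collider surface, bucket them by cell, and only emit collision-test pairs that share a cell. The 2D grid projection, cell size, and data layout are simplifying assumptions standing in for the collider-object parameterisation described in the filing.

```python
from collections import defaultdict

def cell_of(point, cell_size=1.0):
    """Map a projected 2D surface point onto a grid cell."""
    return (int(point[0] // cell_size), int(point[1] // cell_size))

def bucket_by_cell(items, project):
    buckets = defaultdict(list)
    for item in items:
        buckets[cell_of(project(item))].append(item)
    return buckets

def narrow_phase_pairs(body_polygons, cloth_features, project):
    """Only pair cloth features with body polygons projected into the same cell."""
    poly_cells = bucket_by_cell(body_polygons, project)
    cloth_cells = bucket_by_cell(cloth_features, project)
    pairs = []
    for cell, feats in cloth_cells.items():
        for feat in feats:
            for poly in poly_cells.get(cell, []):
                pairs.append((feat, poly))   # candidates for the exact collision test
    return pairs

project = lambda item: item["centre"]              # toy projection onto the surface
polys = [{"id": "tri0", "centre": (0.2, 0.3)}, {"id": "tri1", "centre": (5.1, 5.4)}]
cloth = [{"id": "v7", "centre": (0.4, 0.1)}]
print([(f["id"], p["id"]) for f, p in narrow_phase_pairs(polys, cloth, project)])
```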
An example method of simulating dribbling ball behavior in interactive video games includes: determining a current spatial position of a simulated ball rolling on a surface of a simulated terrain; determining, based on a slope of the surface of the simulated terrain, a likelihood of ball dribbling; identifying a segment of a path of the simulated ball over the surface from the current spatial position of the simulated ball, such that a dribbling criterion based on the likelihood of ball dribbling is satisfied on the segment of the path; determining, based on a speed of the simulated ball, a dribble-simulating surface angle adjustment range; choosing a dribble-simulating surface angle adjustment value from the dribble-simulating surface angle adjustment range; adjusting, based on the dribble-simulating surface angle adjustment value, a surface normal of a segment of the surface on the path; and determining, based on the adjusted surface normal, a next spatial position of the simulated ball.
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using trajectories of game objects, e.g. of a golf ball according to the point of impact
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/812 - Ball games, e.g. soccer or baseball
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, involving timing of operations, e.g. performing an action within a time slot
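A simplified, one-dimensional sketch of one step of the dribbling loop described above: derive a dribble likelihood from the slope, pick a speed-dependent surface-angle adjustment, perturb the effective slope, and advance the ball. All constants, the likelihood formula, and the 1D treatment of the surface normal are illustrative assumptions.

```python
import random

def simulate_dribble_step(position, speed, slope, dt=0.1, rng=random.Random(0)):
    """One dribble-simulation step for a ball rolling over sloped terrain."""
    x, y = position
    dribble_likelihood = min(1.0, abs(slope) * 2.0)        # steeper slope -> more dribble
    max_adjust = 0.05 * speed                              # speed-dependent adjustment range
    adjust = rng.uniform(-max_adjust, max_adjust) if rng.random() < dribble_likelihood else 0.0
    effective_slope = slope + adjust                       # adjusted surface normal, 1D stand-in
    new_x = x + speed * dt
    new_y = y + speed * dt * effective_slope               # height follows the adjusted surface
    return (new_x, new_y)

pos = (0.0, 0.0)
for _ in range(5):
    pos = simulate_dribble_step(pos, speed=8.0, slope=0.3)
print(pos)
```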
A device may access a feature vector generated based on interactions by a user with a video game. The device may access a cluster map comprising a mapping of user clusters, wherein each location within the cluster map is associated with a set of users whose feature vectors are within a threshold degree of similarity of each other. The cluster map may be generated using a plurality of extracted feature vectors obtained from interaction information. A device may determine a map location within the cluster map associated with the user based at least in part on the feature vector. A device may determine a target map location within the cluster map. A device may determine a guidance action based at least in part on the target map location and the map location associated with the user. A device may execute the guidance action.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
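A small sketch of the cluster-map lookup and guidance step described above: find the cluster location nearest the user's feature vector, then pick an action keyed on the pair of current and target locations. The cluster names, feature vectors, and action table are illustrative assumptions.

```python
import math

def nearest_cluster(feature_vector, cluster_map):
    """cluster_map: location -> representative feature vector for that user cluster."""
    return min(cluster_map, key=lambda loc: math.dist(cluster_map[loc], feature_vector))

def guidance_action(user_features, cluster_map, target_location, actions):
    """Choose an action based on where the user sits versus the target location."""
    user_location = nearest_cluster(user_features, cluster_map)
    return actions.get((user_location, target_location), "no_action")

cluster_map = {"casual": [0.2, 0.1], "competitive": [0.9, 0.8]}
actions = {("casual", "competitive"): "suggest_ranked_tutorial"}
print(guidance_action([0.25, 0.15], cluster_map, "competitive", actions))
```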
Embodiments of the present application provide an interactive computing system with a game development server that can host multiple game editing sessions and allow multiple game developer systems to work on the same game assets at the same time. The game development server can manage some or all change requests from game developers to make changes to the game data.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06F 16/176 - Support for shared access to files; File sharing support
The techniques described herein include using a system for enabling assisted gameplay in a computer game using real-time detection of predefined scene features and mapping of the detected features to recommended actions. For example, the system may generate a scanning query (e.g., a segment cast) toward a target area within a virtual scene, determine a geometric feature based on the scanning query, determine a scene feature based on the geometric feature, determine an action associated with the scene feature, and control an avatar based on the action. Examples of scene features that may have mappings to recommended actions include obstacles within a predicted trajectory of the avatar and transitions in the ground level of the virtual scene.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using trajectories of game objects, e.g. of a golf ball according to the point of impact
A personalization system determines a playstyle associated with a player of a gaming system based on gameplay information for the player and, for example, generates personalized animation for the player based on the player's playstyle. The personalization system can receive gameplay data associated with a playstyle of a player in one or more games and receive persona data associated with the player and the gameplay. The personalization system can generate an animation for the player based on the gameplay data associated with the playstyle of the player in the one or more games, dynamically generate, based at least in part on a portion of the playstyle of the player, content including the personalized animation, wherein the content including the personalized animation is dynamically generated personalized content associated with the player, and transmit the content including the personalized animation for presentation in a game associated with the player.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for character orientation
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, automatically by game devices or servers from real world data, e.g. measurement in live racing competitions
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A video game system and method analyze virtual contact between an avatar and a virtual object within a video game. The point of contact of the virtual contact on the virtual object and/or the intensity of contact of the virtual contact may then be used to determine a subsequent virtual action to be performed within the video game. The virtual action, with any virtual movement thereof, may be carried out in a realistic manner within the video game by determining a virtual trajectory of the motion. The virtual trajectory may be determined using a motion model. The motion model may provide the virtual trajectory of the virtual object based at least in part on one or more parameters of the virtual object, such as a weight parameter. The motion model may be trained using training video clips with realistic motion of virtual objects.
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game, using trajectories of game objects, e.g. of a golf ball according to the point of impact
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
09 - Scientific and electrical apparatus and instruments
41 - Education, entertainment, sporting and cultural activities
Goods and services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
94.
DETECTING HIGH-SKILLED ENTITIES IN LOW-LEVEL MATCHES IN ONLINE GAMES
A high-skilled-low-level detection system may detect high-skilled entities in low-level matches of an online game. The system may identify a plurality of entities that are within a first category of entities eligible to be matched by a matchmaking algorithm. The system may then determine respective feature sets based at least in part on gameplay data associated with the plurality of entities and perform anomaly detection on the respective feature sets. The system may then determine, based on the anomaly detection, an anomalous entity of the plurality of entities and cause the matchmaking algorithm to match the anomalous entity with other entities that are in a second category of entities.
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for assessing skills or for ranking players, e.g. for generating a hall of fame
A method, device, and computer-readable storage medium for generating a shadow mesh. The method includes: receiving a graphics mesh; computing a set of LOD versions for each component of the graphics mesh, where each successive LOD version in the set of LOD versions includes fewer polygons than the preceding LOD version; computing a set of shadow versions for each component of the graphics mesh, where each successive shadow version in the set of shadow versions includes fewer polygons than the preceding shadow version, and each successive shadow version includes vertices that lie within a mesh defined by the preceding shadow version; generating N LOD meshes for the graphics mesh by selecting, for each LOD, a LOD version of each component to include in the LOD mesh; and generating a shadow mesh by selecting a shadow version of each component to include in the shadow mesh.
A method, computer-readable storage medium, and device for generating a master representation of input models. The method comprises: receiving a first base mesh and a second base mesh, wherein the first base mesh has a first topology and is associated with a first set of blendshapes to deform the first base mesh, the second base mesh has a second topology and is associated with a second set of blendshapes to deform the second base mesh, and the second topology is different from the first topology; combining the first topology and the second topology into a combined mesh topology representation; combining the first set of blendshapes and the second set of blendshapes into a combined blendshape representation; and outputting the combined mesh topology representation and the combined blendshape representation as a master representation, wherein the master representation can be queried with a target topology and blendshape.
A spectator system may provide for spectating in online gaming. The spectator system may receive, at a spectator server, game state data from a game simulation server hosting an online game for one or more players, generate one or more spectator game state data corresponding to one or more spectator devices and output the one or more spectator game state data to the spectator devices. The spectator server may further output the game state data to another spectator server.
A computing system may provide functionality for controlling an animated model to perform actions and to perform transitions therebetween. The system may determine, from among a plurality of edges from a first node of a control graph to respective other nodes of the control graph, a selected edge from the first node to a selected node. The system may then determine controls for an animated model in a simulation based at least in part on the selected edge, control data associated with the selected node, a current simulation state of the simulation, and a machine learned algorithm, determine an updated simulation state of the simulation based at least in part on the controls for the animated model, and adapt one or more parameters of the machine learned algorithm based at least in part on the updated simulation state and a desired simulation state.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A video game includes a single player mode where completion of storyline objectives advances the single player storyline. The video game also includes a multiplayer mode where a plurality of players can play on an instance of a multiplayer map. Storyline objectives from the single player mode are selected and made available for completion to players in the multiplayer mode, and the single player storylines can be advanced by players completing respective storyline objectives while playing in the multiplayer mode. Combinations of storyline objectives are selected, from the pending storyline objectives of players connecting to a multiplayer game, for compatibility with the multiplayer maps. Constraints can be used to determine compatibility.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players; for building a team; for providing a "buddy list"
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers, using wide area network [WAN] connections using the Internet
A63F 13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
A63F 13/48 - Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
41 - Education, entertainment, sporting and cultural activities
Goods and services
Providing information on-line relating to computer games; providing a website featuring information regarding automobile and motor sports culture, competitions, and current events featuring automobiles and motor sports; Entertainment services, namely, providing information and news online relating to automobiles and motor sports