Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Armstrong, Calum
Manika, Maria Pilataki
Cockram, Philip
Abstract
A computer-implemented method of training a deep learning model for use in synthesis of a head-related transfer function, HRTF, is disclosed. The method comprises: providing a training dataset comprising a plurality of timbre features, each timbre feature comprising an HRTF measurement of a subject at a particular measurement angle, where the HRTF measurement has been processed to remove localisation perception features of the HRTF; training an autoencoder model, that is conditioned using the measurement angle, to encode the input timbre feature into a latent vector space and reconstruct the input timbre feature from the latent vector space, thereby learning a latent vector space that encodes timbre information independent of the measurement angle, such that the latent vector space is usable to synthesise a timbre component of an HRTF. The method allows for generating a personalised timbre component of an HRTF to provide better personalisation of an HRTF, thereby providing improved binaural audio.
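The conditioning idea in this abstract, concatenating the measurement angle to both the encoder input and the latent code so the latent space is free to encode angle-independent timbre, can be sketched with a toy linear autoencoder. The data generator, dimensions, learning rate, and linear model below are illustrative assumptions, not the patented method:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n=64, d=8):
    # Toy "timbre features": random spectra standing in for notch-free
    # HRTF magnitude responses, each paired with a measurement angle.
    feats = rng.normal(scale=0.5, size=(n, d))
    angles = rng.uniform(-1.0, 1.0, size=(n, 1))  # normalised angle
    return feats, angles

def train_conditional_ae(feats, angles, k=2, lr=0.01, epochs=200):
    # Linear conditional autoencoder: the angle is appended to both the
    # encoder input and the latent code, so reconstruction does not force
    # the latent space itself to carry angle information.
    n, d = feats.shape
    We = rng.normal(scale=0.1, size=(k, d + 1))
    Wd = rng.normal(scale=0.1, size=(d, k + 1))
    losses = []
    for _ in range(epochs):
        x_cond = np.hstack([feats, angles])   # (n, d+1)
        z = x_cond @ We.T                     # (n, k) latent vectors
        z_cond = np.hstack([z, angles])       # (n, k+1)
        x_hat = z_cond @ Wd.T                 # (n, d) reconstruction
        err = x_hat - feats
        losses.append(float((err ** 2).mean()))
        # Manual gradients for the two linear maps (descent direction
        # only; constant factors are absorbed into the learning rate).
        gWd = err.T @ z_cond / n
        gz = err @ Wd[:, :k]
        gWe = gz.T @ x_cond / n
        Wd -= lr * gWd
        We -= lr * gWe
    return We, Wd, losses
```

Training drives the reconstruction loss down while the decoder receives the angle separately, which is the mechanism the abstract relies on for an angle-independent timbre code.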
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
Provided is a signal processing device including a transmission determining section that determines whether or not to transmit an event signal output from a vision sensor which is of an event-driven type and which includes a plurality of sensors constituting a sensor array, on the basis of position information for each of the sensors in the sensor array.
H04N 25/47 - Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data
H04N 25/62 - Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
H04N 25/779 - Circuitry for scanning or addressing the pixel array
3.
HEAD MOUNTED DISPLAY AND INFORMATION PROCESSING METHOD
Methods and apparatus provide for processing information for a head-mounted display for blocking out an outside world from a user's vision when worn by the user to present a video, by carrying out actions comprising: measuring outside world information related to a boundary of an area within which user movement is acceptable; presenting, at a user interface, the outside world information; detecting, based on the outside world information, a change to the boundary; generating, based on the change, notification information about at least one of the boundary or the user movement relative to the boundary; and presenting, at the user interface or a different user interface, the notification information as a notification.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/147 - Digital output to display device using display panels
G08B 21/02 - Alarms for ensuring the safety of persons
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
H04N 5/64 - Constructional details of receivers, e.g. cabinets or dust covers
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
4.
INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, PROGRAM, AND RECORDING MEDIUM
There is provided an information processing device that accepts an input of an emulation setting relating to a function of a display monitor as a target of emulation and executes a display output of an image rendered by executing an application program to a display monitor actually connected, in such a manner as to emulate a state in which a display monitor with a function represented by the accepted emulation setting is virtually connected.
An electronic device and a method for generation of three-dimensional (3D) blend-shapes from 3D scans using a neural network is disclosed. The electronic device acquires a set of 3D scans including a body portion of an object. The electronic device determines a set of segments of the body portion from each 3D scan. The electronic device applies a neural network model on the acquired set of 3D scans. The electronic device determines a set of vertex difference vectors. Each vector of the determined set of vertex difference vectors corresponds to a 3D blend-shape. Each segment of the determined set of segments is configured to move independently in the 3D blend-shape. The electronic device reconstructs a 3D mesh sequence. The electronic device re-trains the neural network model. The re-trained neural network model is configured to determine a set of 3D blend-shapes based on a set of input 3D scans.
An image processing system (S) for generating a display image that is displayed on a display panel (10) provided in a head-mounted display (1) and that is enlarged and viewed through a lens unit (20) which is provided so as to correspond to the display panel (10), the image processing system including: an inclination correction unit (62) for correcting image data input into the display panel (10) on the basis of an inclination correction value relating to the display image in the three-dimensional space; and an image generation unit (63) for generating the display image on the basis of the image data corrected on the basis of the inclination correction value.
H04N 5/66 - Transforming electric information into light information
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/37 - Details of the operation on graphic patterns
G09G 5/373 - Details of the operation on graphic patterns for modifying the size of the graphic pattern
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
The present invention biases a member to be operated to an initial position without using an elastic member. An input device (1) comprises a first magnetic member (30) and a second magnetic member (40) which are accommodated in a housing (10) and which are made of a magnetic material. The first magnetic member (30) attaches, to the input device (1), a lower button (20) via a magnetic force between the first magnetic member (30) and an attachment part (22) which is on the lower button (20) and which is made of a magnetic material. When attached to the input device (1), the lower button (20) can move from an initial position together with the first magnetic member (30) upon receiving an operation by a user, and can be removed from the input device (1) via an operation from outside the housing (10). The second magnetic member (40) attracts the first magnetic member (30) and thereby biases the lower button (20) to the initial position.
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Jones, Michael Lee
Buchanan, Christopher George
Armstrong, Calum
Smith, Alexei Ashton Derek
Manika, Maria Pilataki
Abstract
The application provides a computer-implemented method for simulating a reflected audio path in a virtual environment, the virtual environment comprising a source, a receiver, an obstacle, and a sound-reflective boundary, the reflected audio path associated with reflection at the sound-reflective boundary, and the method comprising: simulating, by an image-source method, the reflected audio path between the source and the receiver, the image-source method comprising: generating, by mirroring the source in the sound-reflective boundary, a mirror image source, and determining a simulated reflected audio path along a line segment between the receiver and the mirror image source; generating, by mirroring the obstacle in the sound-reflective boundary, a mirror image obstacle; performing a line-of-sight check between the receiver and the mirror image source; and, when at least one of the obstacle and the mirror image obstacle lies along the line segment between the mirror image source and the receiver, performing an adjustment on the simulated reflected audio path.
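The geometric steps described above, mirroring the source and the obstacle in the boundary and checking line of sight along the receiver-to-image segment, can be sketched in 2D. The planar boundary at y = 0, the segment-shaped obstacle, and the occlusion gain value are illustrative assumptions, not details from the claim:

```python
import math

def mirror_point(p, boundary_y=0.0):
    # Mirror a 2D point across the horizontal boundary y = boundary_y.
    return (p[0], 2.0 * boundary_y - p[1])

def segments_intersect(p1, p2, q1, q2):
    # Standard orientation-based segment intersection test.
    def orient(a, b, c):
        v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        return 0 if abs(v) < 1e-12 else (1 if v > 0 else -1)
    o1, o2 = orient(p1, p2, q1), orient(p1, p2, q2)
    o3, o4 = orient(q1, q2, p1), orient(q1, q2, p2)
    return o1 != o2 and o3 != o4

def reflected_path(source, receiver, obstacle, boundary_y=0.0,
                   occlusion_gain=0.25):
    # Image-source method: mirror the source (and the obstacle) in the
    # boundary, then trace the segment receiver -> mirror image source.
    img_src = mirror_point(source, boundary_y)
    img_obs = tuple(mirror_point(v, boundary_y) for v in obstacle)
    length = math.dist(receiver, img_src)
    # Line-of-sight check: if either the obstacle or its mirror image
    # lies across the segment, adjust (here: attenuate) the path.
    blocked = (segments_intersect(receiver, img_src, *obstacle) or
               segments_intersect(receiver, img_src, *img_obs))
    gain = occlusion_gain if blocked else 1.0
    return length, gain
```

For a source at (0, 2) and receiver at (4, 2) above a boundary at y = 0, the image source sits at (0, -2) and the reflected path length is the straight-line distance to that image.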
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Armstrong, Calum
Manika, Maria Pilataki
Cockram, Philip
Abstract
A computer-implemented method of synthesising an HRTF is disclosed. The method comprises: providing the HRTF of a subject measured at a particular measurement angle; processing the HRTF to remove localisation perception features of the HRTF, where the processing comprises: removing spectral notches from the measured HRTF, the resulting processed HRTF referred to as the HRTF′; and calculating a subject's HRTF timbre by subtracting a baseline HRTF at the measurement angle from the subject's HRTF, the baseline HRTF comprising a generalised response component such that the HRTF timbre comprises subject-specific variations in the HRTF. The method further comprises using the HRTF timbre to synthesise an HRTF. The method allows for generating a personalised timbre component of an HRTF, thereby providing improved binaural audio.
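The two processing steps in this abstract, notch removal followed by baseline subtraction, can be sketched as follows. The clipping-based notch removal, the dB-domain lists, and the -10 dB floor are illustrative assumptions standing in for whatever notch-removal procedure the method actually uses:

```python
def remove_spectral_notches(mag_db, floor_db=-10.0):
    # Crude notch removal: clip deep narrow dips toward the local
    # neighbourhood level, discarding the localisation cues that
    # spectral notches carry.
    out = list(mag_db)
    for i in range(1, len(out) - 1):
        local = max(mag_db[i - 1], mag_db[i + 1])
        if mag_db[i] < local + floor_db:
            out[i] = local + floor_db
    return out

def hrtf_timbre(subject_mag_db, baseline_mag_db):
    # HRTF' = notch-free subject response; timbre = HRTF' minus the
    # baseline (generalised) response at the same measurement angle,
    # leaving only subject-specific spectral variation.
    hrtf_prime = remove_spectral_notches(subject_mag_db)
    return [s - b for s, b in zip(hrtf_prime, baseline_mag_db)]
```

With a flat baseline, a single deep notch is clipped to the floor and everything else passes through unchanged.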
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
10.
INFORMATION PROCESSING APPARATUS AND DEVICE POSITION ESTIMATION METHOD
A photographed image acquisition unit 212 acquires an image obtained by photographing a device. A sensor data acquisition unit 214 acquires sensor data indicating an angular speed of the device. A position and posture deriving unit 244 derives the position of the device in a three-dimensional space from a position coordinate of the device in the photographed image when the device is included in the photographed image. A part position estimation unit 246 estimates a position of a predetermined part in a body of a user on the basis of the estimated position of the device. The position and posture deriving unit 244 derives, as the position of the device, a position rotated by a rotation amount corresponding to the sensor data with the position of the part estimated by the part position estimation unit 246 used as a rotation center when the device is not included in the photographed image.
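The fallback described above, rotating the last known device position about the estimated body part by an amount derived from the angular-speed data, reduces to a rotation about a pivot. This 2D sketch with an angle of angular_speed * dt is an illustrative simplification of the 3D case:

```python
import math

def rotate_about_pivot(point, pivot, angular_speed, dt):
    # When the device leaves the camera view, advance its position by
    # rotating the last known position about the estimated body part
    # (e.g. an elbow) by angle = angular_speed * dt.
    angle = angular_speed * dt
    c, s = math.cos(angle), math.sin(angle)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + c * dx - s * dy, pivot[1] + s * dx + c * dy)
```

A quarter-turn of a point one unit from the pivot moves it from (1, 0) to approximately (0, 1).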
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
11.
Systems and Methods for Artificial Intelligence (AI)-Driven 2D-to-3D Video Stream Conversion
A system is disclosed for three-dimensional (3D) conversion of a video stream. The system includes an input processor configured to receive an input video stream that includes a first series of video frames. The system also includes a 3D virtual model generator configured to select video frames from the input video stream and generate a 3D virtual model for content depicted in the selected video frames. The system also includes a frame generator configured to generate a second series of video frames for an output video stream depicting content within the 3D virtual model at a specified frame rate. The system also includes an output processor configured to encode and transmit the output video stream to a client computing system.
Techniques are described for an encoder or decoder to adaptively change coding based on a specific user's sensitivity to flickering, or flashing, or blockiness, or other visual artifacts. Alternatively, video may be pre-processed based on the user's sensitivity to suppress artifacts prior to encoding and/or post-processed to suppress artifacts after decoding.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
13.
INFORMATION PROCESSING APPARATUS, CONTROLLING METHOD THEREFOR, AND PROGRAM
An information processing apparatus connected to a display apparatus capable of updating a frame image at a variable refresh rate executes a drawing process of a frame image, transmits the drawn frame image to the display apparatus, measures a drawing time period required for a drawing process of the frame image, and changes an operation condition of the drawing process according to the measured drawing time period.
Viewers of a computer game can send reactions such as graphic-based reactions to another person playing the computer game. The reactions are then used to trigger in-game power-ups such as health power-ups and character ability power-ups. In some examples, meeting certain criteria for the reactions may trigger the power-up, such as a threshold number of reactions being received within a threshold period of time and/or a particular sequence of different reactions being received.
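The threshold-within-a-time-window criterion mentioned above is a sliding-window count. The class below is a minimal sketch; the threshold of 5 reactions in 10 seconds is an assumed example, not a value from the disclosure:

```python
from collections import deque

class ReactionPowerUp:
    # Grants a power-up when at least `threshold` viewer reactions
    # arrive within a sliding window of `window_s` seconds.
    def __init__(self, threshold=5, window_s=10.0):
        self.threshold = threshold
        self.window_s = window_s
        self.times = deque()  # timestamps of reactions in the window

    def on_reaction(self, t):
        # Record the reaction, evict timestamps older than the window,
        # and report whether the power-up criterion is met.
        self.times.append(t)
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        return len(self.times) >= self.threshold
```

A sequence check (a particular ordering of different reaction types) could be layered on top by storing (timestamp, reaction_type) pairs instead of bare timestamps.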
A63F 13/86 - Watching games played by other players
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
15.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
Provided is an information processing apparatus which executes an application program to draw, according to a processing content of this application program, a frame image to be displayed on a screen of a display apparatus capable of updating the frame image at a variable refresh rate and determines a permissible rate range permitted as a refresh rate, according to an operation condition of the display apparatus assumed by the application program, to cause the display apparatus to execute refresh at a variable refresh rate within the determined permissible rate range.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
16.
INFORMATION PROCESSING DEVICE, CONTROL METHOD FOR SAME, AND PROGRAM
Provided is an information processing device comprising: an integrated circuit; and a bus connected to the integrated circuit. The integrated circuit comprises: a bus controller that controls communication via a bus and transitions to any of a plurality of states including an active state in which communication is possible and a power saving state in which communication is restricted; and a monitoring circuit that records history data of communication via the bus at a prescribed time interval. On the basis of the history data of the communication recorded by the monitoring circuit, it is estimated whether communication will occur at a future estimation target time point, and when the estimation result indicates that communication will occur, the bus controller is transitioned to the active state.
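The estimation step can be sketched with the simplest possible history-based predictor: predict communication at the next interval if any activity was seen in the most recent samples. The any-in-window rule and window size are assumptions for illustration; the disclosure does not specify the estimator:

```python
def predict_bus_active(history, window=4):
    # history: list of booleans, one per sampling interval recorded by
    # the monitoring circuit (True = communication occurred).
    # Predict communication at the next interval if any activity was
    # seen in the most recent `window` samples; otherwise the bus
    # controller may remain in the power-saving state.
    return any(history[-window:])
```

A long idle run keeps the bus in power saving, while a single recent transaction pre-emptively wakes it.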
Provided is an information processing device including a processor, the processor acquiring moving image data including sound data, acquiring range-specifying information specifying a reproduction time range satisfying a predetermined condition related to a speaker or a scene by using the sound data included in the moving image data, and using the acquired range-specifying information to execute predetermined guide processing for guiding the reproduction of a portion satisfying the predetermined condition for the acquired moving image data.
Methods and systems for defining a theme for a user to share with other users includes presenting a plurality of images available to a user account of a user, wherein each image includes features to distinctly identify different portions of the image. Select ones of the plurality of images selected by the user are provided to a generative AI, which analyzes the features included in the different portions to determine a theme and generate an output image for the theme. Additional inputs received from user selection of content adjusters provided alongside the output image are provided to the generative AI to further refine the output image. The refined output image is used to define a representative image for the theme and is provided to the user to specify usage of the refined output image during online interactions of the user.
An image generation apparatus generates an adjustment screen 300 that is for allowing a user who is wearing a head-mounted display to adjust an inter-lens distance for the head-mounted display, and causes the head-mounted display to display the adjustment screen 300. In the adjustment screen 300, disposed is a lens image (for example, a left lens image 304a and a right lens image 304b) that indicates a lens in the head-mounted display and also disposed is a pupil image (for example, the left eye image 306a and the right eye image 306b) that indicates a pupil of the user in reference to an eye tracking result.
A method for executing a game by a computing system that uses a central processing unit (CPU) and graphics processing unit (GPU) for generating video frames. A draw call is generated for a video frame by the CPU. At bind time, i.e. writing of the GPU commands by the CPU using a GPU API, asset aware data (AAD) is written to the command buffer, and loading of one or more level of detail (LOD) data from an asset store to system memory is requested. The GPU executes the draw call for the frame using LOD data written to the system memory, the GPU using at least a minimum of LOD data based on the AAD. Additionally, the GPU uses information regarding the LOD load state when executing the draw call, in order to avoid access to LODs not yet loaded.
A wireless protocol for providing smooth roaming when a non-Access Point (non-AP) Multi-Link Device (MLD) roams between Basic Service Sets (BSSs). One or more links of a roaming non-AP MLD can be in the process of communicating latency sensitive traffic during a R-TWT SP of a first BSS while roaming to a target BSS. Negotiation is made with the AP MLD of the target BSS so that upon completion of roaming to the target BSS, one or more links of the roaming non-AP MLD are allowed to use a predetermined/enhanced R-TWT SP of the target BSS without further negotiation.
Sony Music Entertainment, Sony Music Entertainment is a Partnership organized under the laws of Delaware. It is composed of Sony Music Holdings Inc., Corporation, Delaware; USCO Sub LLC, Limited liability company, Delaware ()
40 - Treatment of materials; recycling, air and water treatment
Goods & Services
Clothing, namely, t-shirts, long sleeve t-shirts, sweatshirts, crewneck sweatshirts, hooded sweatshirts, pullovers, jackets, sweatpants, pants, sweat shorts, shorts, and socks; bandanas; face masks being headwear; footwear; hats and headwear Manufacture of apparel to the order and specification of others
25.
INFORMATION PROCESSING APPARATUS, DEVICE SPEED ESTIMATION METHOD, AND DEVICE POSITION ESTIMATION METHOD
A sensor data acquisition unit 214 acquires sensor data indicating an acceleration of a device including a vibrator. A second estimation processing unit 250 estimates a speed of the device on the basis of the sensor data. A vibration determination unit 262 determines whether or not the vibrator is vibrating on the basis of the sensor data. A stationary determination unit 264 determines whether or not the device is stationary on the basis of the sensor data. A second estimation processing unit 250 reduces the estimated speed of the device in a case where it is determined that the vibrator is vibrating and the device is stationary.
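The correction described above, suppressing the speed estimate when accelerometer readings look like vibrator noise around a near-zero mean, can be sketched as follows. The variance and mean thresholds and the 0.5 decay factor are illustrative assumptions, not values from the disclosure:

```python
from statistics import fmean, pstdev

def estimate_speed(accel_samples, dt, prev_speed=0.0,
                   vib_std=2.0, still_mean=0.1):
    # Integrate acceleration to update speed, but when the samples look
    # like vibrator noise (high variance) around a near-zero mean (the
    # device is at rest), reduce the estimate instead of letting the
    # noise accumulate as drift.
    mean_a = fmean(accel_samples)
    vibrating = pstdev(accel_samples) > vib_std
    stationary = abs(mean_a) < still_mean
    if vibrating and stationary:
        return prev_speed * 0.5  # decay spurious speed
    return prev_speed + mean_a * dt
```

Under steady acceleration the speed integrates normally; under zero-mean, high-variance vibration the previous estimate decays instead of growing.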
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
G01P 7/00 - Measuring speed by integrating acceleration
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
26.
INFORMATION PROCESSING APPARATUS AND REPRESENTATIVE COORDINATE DERIVATION METHOD
A first extraction processing unit 234 extracts a plurality of sets of first connected components of eight neighboring pixels from a photographed image. A second extraction processing unit 236 extracts a plurality of sets of second connected components from the first connected components extracted by the first extraction processing unit 234. A representative coordinate derivation unit 238 derives representative coordinates of a marker image on the basis of the pixels of the first connected components extracted by the first extraction processing unit 234 and/or the pixels of the second connected components extracted by the second extraction processing unit 236.
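The core extraction step, 8-neighbour connected components followed by a representative coordinate, can be sketched with a BFS flood fill. Using the pixel centroid as the representative coordinate is an assumption for illustration; the two-stage first/second extraction of the disclosure is not reproduced here:

```python
from collections import deque

def connected_components(grid):
    # Extract 8-neighbour connected components of nonzero pixels from a
    # binary image (list of rows) via BFS flood fill.
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not seen[y][x]:
                q, comp = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w and
                                    grid[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
                comps.append(comp)
    return comps

def representative_coordinates(comp):
    # Representative coordinates of a marker image: the pixel centroid.
    n = len(comp)
    return (sum(p[0] for p in comp) / n, sum(p[1] for p in comp) / n)
```

Diagonally touching pixels land in the same component, which is the point of using 8-neighbour (rather than 4-neighbour) connectivity for small marker blobs.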
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
27.
CONTROL DEVICE FOR CONTROLLING AN INFORMATION PROCESSING DEVICE, A METHOD, A SYSTEM AND A COMPUTER PROGRAM
A control device for controlling an information processing device is provided, the control device comprising: a plurality of input units configured to be operated by a user; a generating unit configured to generate an input signal when at least one of the plurality of input units is operated by the user; an adjustment unit configured to adjust at least one of the plurality of input units; and a control unit configured to control the adjustment unit to restrict operation of the at least one of the input units in accordance with adjustment information indicating an availability of each of the plurality of input units for controlling the information processing device.
An apparatus comprises receiving circuitry to receive user information indicative of an inactivity period for one or more video games previously played by a user, prediction circuitry to predict a mitigation action associated with a respective video game of the one or more video games previously played by the user in dependence on at least an inactivity period for the respective video game and generate video game mitigation information for the mitigation action associated with the respective video game, the prediction circuitry comprising one or more trained machine learning models to predict the mitigation action in dependence on at least the inactivity period for the respective video game, and output circuitry to output the video game mitigation information.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
29.
MEDICAL CONTROL DEVICE AND MEDICAL OBSERVATION SYSTEM
A medical control device includes: a captured image acquisition unit configured to acquire a captured image generated by an image sensor capturing a subject image introduced by an endoscope; a luminance calculation unit configured to calculate a luminance level of the subject image included in the captured image; and a dimming controller configured to control a light amount of irradiation light onto a subject and an electronic shutter of the image sensor based on the luminance level, and execute first dimming control of narrowing the electronic shutter before reducing the light amount as the luminance level increases.
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
A61B 1/04 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
A61B 1/12 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with cooling or rinsing arrangements
A61B 1/227 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for ears, i.e. otoscopes
A61B 1/313 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for introducing through surgical openings, e.g. laparoscopes
According to the present invention, in order to estimate a posture using key points more appropriately, a posture estimation system acquires information indicating a portion constituting an object and hidden by a hand (S204, S402), determines three-dimensional positions of a plurality of key points for estimating the posture of the object on the basis of the information (S208, S405), trains a machine learning model for estimating the positions of the plurality of determined key points in an input image (S203, S407), acquires the estimated positions of key points in an image capturing the object and the hand on the basis of an output produced by the trained machine learning model in response to receiving the image, and determines the estimated posture of the object in a three-dimensional space on the basis of the estimated positions of the key points.
Sony Music Entertainment, Sony Music Entertainment is a Partnership organized under the laws of Delaware. It is composed of Sony Music Holdings Inc., Corporation, Delaware; USCO Sub LLC, Limited liability company, Delaware ()
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
clothing, namely, t-shirts, long sleeve t-shirts, sweatshirts, crewneck sweatshirts, hooded sweatshirts, pullovers, jackets, sweatpants, pants, sweat shorts, shorts, and socks; bandanas; face masks being headwear; footwear; hats and headwear Branding services, namely, consulting, development, management and marketing of brands for businesses and/or individuals; Product merchandising for others; Creative marketing plan development services; Consultation services, namely, creative and strategic consultation regarding development and production of marketing campaigns for others; Providing marketing consulting in the field of social media; On-line customer-based social media brand marketing services; Advertising and marketing services provided by means of indirect methods of marketing communications, namely, social media, search engine marketing, inquiry marketing, internet marketing, mobile marketing, blogging and other forms of passive, sharable or viral communications channels; Promoting the music of others by means of providing online portfolios; Arranging and conducting special events for business purposes; Media relations services; Personal management consulting services for musical performers and entertainment artists; Management of performing and recording artists; Retail clothing stores; Pop-up retail store services featuring clothing Provision of information relating to live performances, road shows, live stage events, theatrical performances, live music concerts and audience participation in such events; music production services; arranging and conducting entertainment events in the nature of concerts, social entertainment events, special events for social entertainment purposes Graphic design; Design of artwork; packaging design; design of music album artwork and covers
Sony Music Entertainment, Sony Music Entertainment is a Partnership organized under the laws of Delaware. It is composed of Sony Music Holdings Inc., Corporation, Delaware; USCO Sub LLC, Limited liability company, Delaware ()
42 - Scientific, technological and industrial services, research and design
Goods & Services
Retail clothing stores; Pop-up retail store services featuring clothing Graphic design; Design of artwork; packaging design; design of music album artwork and covers
Sony Music Entertainment, Sony Music Entertainment is a Partnership organized under the laws of Delaware. It is composed of Sony Music Holdings Inc., Corporation, Delaware; USCO Sub LLC, Limited liability company, Delaware ()
41 - Education, entertainment, sporting and cultural services
Goods & Services
clothing, namely, t-shirts, long sleeve t-shirts, sweatshirts, crewneck sweatshirts, hooded sweatshirts, pullovers, jackets, sweatpants, pants, sweat shorts, shorts, and socks; bandanas; face masks being headwear; footwear; hats and headwear Branding services, namely, consulting, development, management and marketing of brands for businesses and/or individuals; Product merchandising for others; Creative marketing plan development services; Consultation services, namely, creative and strategic consultation regarding development and production of marketing campaigns for others; Providing marketing consulting in the field of social media; On-line customer-based social media brand marketing services; Advertising and marketing services provided by means of indirect methods of marketing communications, namely, social media, search engine marketing, inquiry marketing, internet marketing, mobile marketing, blogging and other forms of passive, sharable or viral communications channels; Promoting the music of others by means of providing online portfolios; Arranging and conducting special events for business purposes; Media relations services; Personal management consulting services for musical performers and entertainment artists; Management of performing and recording artists Provision of information relating to live performances, road shows, live stage events, theatrical performances, live music concerts and audience participation in such events; music production services; arranging and conducting entertainment events in the nature of concerts, social entertainment events, special events for social entertainment purposes
35.
LOCAL ENVIRONMENT SCANNING TO CHARACTERIZE PHYSICAL ENVIRONMENT FOR USE IN VR/AR
A user's environment is scanned and an augmented reality game, such as a treasure hunt, is set up based on the scan. The user needs to use a phone to uncover clues in a game environment that is customized to the user's own personal real-world environment, which is discovered using SLAM or GPS so that a map of furniture can be built. The game hides a virtual object behind a virtualized image of the real-world furniture. Machine learning may be used to train a model on common objects and where interesting hidden spaces could exist. Given the user's inputted data, real-world physical room data and objects are used to determine a likely location to hide a virtual object.
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
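The hiding-spot selection in the entry above can be sketched as a scoring pass over the furniture map built from the scan. The hideability table below stands in for the trained model and is an invented example; a real system would learn these scores from data.

```python
# Hypothetical per-object scores for how likely an "interesting hidden
# space" exists behind each furniture class (assumed values).
FURNITURE_HIDEABILITY = {"sofa": 0.8, "table": 0.4, "bookshelf": 0.9, "lamp": 0.1}

def pick_hiding_spot(room_furniture):
    """Pick the most promising object in the scanned room to hide a
    virtual item behind; unknown objects score zero."""
    scored = [(FURNITURE_HIDEABILITY.get(f, 0.0), f) for f in room_furniture]
    _, best = max(scored)
    return best

spot = pick_hiding_spot(["lamp", "sofa", "table"])
```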
A wireless protocol is disclosed for providing smooth roaming when a non-Access Point (non-AP) Multi-Link Device (MLD) roams between Basic Service Sets (BSSs). One or more links of a roaming non-AP MLD can be in the process of communicating latency-sensitive traffic during an R-TWT SP of a first BSS while roaming to a target BSS. Negotiation is performed with the AP MLD of the target BSS so that, upon completion of roaming to the target BSS, one or more links of the roaming non-AP MLD are allowed to use a predetermined/enhanced R-TWT SP of the target BSS without further negotiation.
Techniques are disclosed for optimizing, based on the particular function to be performed, which LEDs in an HMD to use, the brightness of those LEDs, and the camera exposure. For instance, one set of optimization parameters may be implemented for eye tracking purposes while a different set of optimization parameters may be implemented for eye-based authentication purposes.
A method for cloud gaming. The method including receiving one or more encoded slices of a video frame at a client, wherein the video frame was generated at a server while executing a video game, and encoded by an encoder at the server into the one or more encoded slices. The method including decoding a first encoded slice at a decoder of the client before fully receiving the one or more encoded slices of the video frame.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/50 - Controlling the output signals based on the game progress
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/4385 - Multiplex stream processing, e.g. multiplex stream decrypting
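The decoding idea in the entry above, decoding each slice as it arrives rather than waiting for the full frame, can be sketched as follows. `decode_slice` is a stand-in for a real decoder call, and the queue stands in for the network receive path; both are assumptions for illustration.

```python
import queue

def decode_slice(encoded):
    """Stand-in for a real hardware/software decoder call."""
    return encoded.upper()

def decode_as_received(slice_queue, expected):
    """Decode each encoded slice as soon as it arrives, instead of waiting
    until the whole video frame is received -- the latency-hiding idea in
    the abstract above."""
    decoded = []
    for _ in range(expected):
        decoded.append(decode_slice(slice_queue.get()))
    return decoded

q = queue.Queue()
for s in ["slice0", "slice1", "slice2"]:  # slices arriving over the network
    q.put(s)
frame = decode_as_received(q, expected=3)
```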
39.
USING TIMING SIGNALS TO ADJUST RELATIVE TIMING OF VSYNC SIGNALS BETWEEN CLIENT DEVICES TO SYNCHRONIZE DISPLAY OF VIDEO FRAMES
A method is disclosed including setting, at a plurality of devices, a plurality of VSYNC signals to a plurality of VSYNC frequencies, wherein a corresponding device VSYNC signal of a corresponding device is set to a corresponding device VSYNC frequency. The method including sending a plurality of signals between the plurality of devices, which are analyzed and used to adjust the relative timing between corresponding device VSYNC signals of at least two devices.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/8547 - Content authoring involving timestamps for synchronizing content
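The relative-timing adjustment in the entry above hinges on estimating how far apart two devices' VSYNC signals fire within one refresh period. A minimal sketch, assuming the timestamp exchange between devices (the abstract's signal exchange) has already happened:

```python
def vsync_offset_ms(local_ts, remote_ts, period_ms):
    """Relative VSYNC timing of a remote device versus the local one,
    wrapped into [-period/2, period/2) so the smallest correction is
    chosen. A sketch of the adjustment signal the abstract describes."""
    delta = (remote_ts - local_ts) % period_ms
    if delta >= period_ms / 2:
        delta -= period_ms
    return delta

# Local VSYNC fired at t=100 ms, remote at t=114 ms, 16 ms period (~60 Hz):
offset = vsync_offset_ms(100.0, 114.0, 16.0)
```

A device would then slew its VSYNC phase by `offset` (or a fraction of it per frame) to align displays.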
40.
COMMUNICATIONS DEVICE, INFRASTRUCTURE EQUIPMENT AND METHODS
A communications device transmits data in preconfigured resources of an uplink of a wireless communications network by performing a procedure to determine whether the communications device can transmit signals in the preconfigured resources of the uplink, and if the communications device determines that it can transmit signals in the preconfigured resources, transmitting signals representing the data in the preconfigured resources. The procedure to determine whether the communications device can transmit signals in the preconfigured resources of the uplink includes a transmission parameter confirmation procedure which confirms that a value of one or more transmission parameters to be used for transmitting the signals representing the data can be used for the signals representing the data to be detected by an infrastructure equipment of the wireless communications network.
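The transmission parameter confirmation procedure in the entry above can be sketched as a range check performed before transmitting in the preconfigured uplink resources. The parameter names and allowed ranges here are invented for illustration; real values would come from network configuration.

```python
# Hypothetical allowed ranges per transmission parameter (assumed values).
ALLOWED = {"tx_power_dbm": (0, 23), "mcs": (0, 11)}

def can_use_preconfigured_resources(params):
    """Confirm that every transmission parameter value is one the
    infrastructure equipment can detect, before transmitting in the
    preconfigured uplink resources -- a sketch of the abstract's check."""
    return all(lo <= params.get(name, lo) <= hi
               for name, (lo, hi) in ALLOWED.items())

ok = can_use_preconfigured_resources({"tx_power_dbm": 20, "mcs": 7})
```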
A medical imaging device includes: a plurality of image sensors each configured to capture a subject image to output a pixel signal; and a signal integration unit configured to convert a plurality of the pixel signals output from the plurality of image sensors into pixel signals corresponding to a specific transmission standard.
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
42.
IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM
The present invention provides an image processing system that maintains spatial accuracy and improves time-series stability. The image processing system comprises at least one processor which inputs each of first to n-th input frames (n is a natural number of 2 or greater) to a machine-learning model and acquires each of first to n-th estimation frames (26). The at least one processor: acquires each of first to n-th frames-to-be-processed (20); acquires (n-1)-th cumulative feature information (28) indicating the features, output from the machine-learning model, of the first to (n-1)-th input frames (24); acquires (n-1)-th auxiliary information (30) on the basis of the (n-1)-th cumulative feature information (28); and acquires the n-th input frame (24), which has more per-pixel information elements than the n-th frame-to-be-processed (20), on the basis of the n-th frame-to-be-processed (20) and the (n-1)-th auxiliary information (30).
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
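The recurrent structure in the entry above, where cumulative feature information from earlier frames becomes auxiliary input for the next frame, can be sketched generically. The toy model and auxiliary function below are stand-ins for the machine-learning model; only the data flow matches the abstract.

```python
def process_sequence(frames, model, aux_from):
    """Run frames through `model`, threading cumulative feature
    information from each step into auxiliary input for the next."""
    cumulative = None
    outputs = []
    for frame in frames:
        aux = aux_from(cumulative)          # (n-1)-th auxiliary information
        estimate, cumulative = model(frame, aux)
        outputs.append(estimate)
    return outputs

# Toy stand-ins: the "model" averages the frame value with the hint,
# and the new cumulative feature is simply the latest estimate.
def toy_model(frame, aux):
    estimate = (frame + aux) / 2
    return estimate, estimate

def toy_aux(cumulative):
    return 0.0 if cumulative is None else cumulative

ests = process_sequence([2.0, 4.0, 6.0], toy_model, toy_aux)
```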
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Leung, Jun Yen
Cerrato, Maurizio
Green, Lawrence Martin
Abstract
A controller for a video game system is provided. The controller comprises an exterior case including a handle, the handle comprising a first region and a second region, wherein the handle is configured to be held by a user. The controller further comprises a heat transfer module comprising a cooling region and a heating region, wherein the heat transfer module is configured to transfer heat from the cooling region to the heating region. The cooling region of the heat transfer module is arranged at the first region of the handle, and the heating region of the heat transfer module is arranged at the second region of the handle. This controller locally heats and cools a user when they hold the handle of the controller.
A63F 13/28 - Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
44.
METHOD FOR LOCATION BASED PLAYER FEEDBACK TO IMPROVE GAMES
A system for location-based player feedback for video games may include a data collection module, a pattern recognition module, a localization module and a feedback module. The collection module collects gameplay data for a video game. The pattern recognition module analyzes the collected gameplay data to identify a pattern associated with player difficulty. The localization module associates a game world location with the identified pattern. The feedback module presents a message to players at the game world location associated with the identified pattern requesting feedback. The data collection, pattern recognition, and localization modules may include neural networks trained with machine learning algorithms.
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
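The pattern-recognition and localization steps of the entry above can be approximated, for illustration only, by clustering a difficulty signal per game-world location; the abstract uses trained neural networks where this sketch just counts death events.

```python
from collections import Counter

def difficulty_hotspots(death_locations, threshold):
    """Game-world locations where player deaths cluster -- a stand-in for
    the pattern recognition and localization modules, which the abstract
    implements as trained neural networks rather than a simple count."""
    counts = Counter(death_locations)
    return {loc for loc, n in counts.items() if n >= threshold}

deaths = ["bridge", "cave", "bridge", "bridge", "castle", "cave"]
hotspots = difficulty_hotspots(deaths, threshold=3)  # prompt feedback here
```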
45.
SYSTEMS AND METHODS FOR GENERATING NONPLAYER CHARACTERS ACCORDING TO GAMEPLAY CHARACTERISTICS
Systems and methods for generating nonplayer characters are described. An artificial intelligence (AI) model is trained based on gameplay by one or more users to generate the nonplayer characters. The nonplayer characters have gameplay characteristics similar to those of a game character controlled by one of the users. The AI model is trained to have a percentage of gameplay characteristics learned from gameplay by the one of the users and a percentage of gameplay characteristics from gameplay by another one of the users.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
The rigidity of an input device is ensured. The input device includes an internal structure including a main frame and a reinforcing frame formed by a material having a higher rigidity than the main frame and attached to the main frame. A lower case covers the lower side of the internal structure, and is attached to the internal structure. An upper case covers the upper side of the internal structure, and is attached to at least one of the internal structure and the lower case.
A wearable device equipped with a pair of airflow control units or fan units provided respectively to the ears of a wearer, said wearable device controlling the airflow control units or fan units on the basis of an airflow control instruction received from an information processing device 1, and causing the wearer to sense airflow at the auricle.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
A plurality of power generation elements 30 generate electric power, and a plurality of holding units 32 hold the electric power generated by the plurality of power generation elements 30. A measurement unit 34 measures the power generation amount of each of the power generation elements 30. An information acquisition unit 42 acquires power generation amount information related to the measured power generation amount.
H02K 35/02 - Generators with reciprocating, oscillating or vibrating coil system, magnet, armature or other part of the magnetic circuit with moving magnets and stationary coil systems
A processing device 20 includes the function of inferring the status of an electronic device 10 for executing an application. An acquisition unit 42 acquires a power generation amount of power generated using energy which is generated by the operation of the electronic device 10. An inference unit 44 infers the status of the electronic device 10 on the basis of the acquired power generation amount.
A63F 13/90 - Constructional details or arrangements of video game devices not provided for in groups or , e.g. housing, wiring, connections or cabinets
A game system 1 comprises a management server 12, an information processing device 10, and power companies 16a, 16b. The management server 12 acquires power supply/demand information from the power companies 16a, 16b and identifies an information processing device 10 existing in an area in which the degree of tightness between power supply and demand is a prescribed threshold value or greater. The management server 12 provides a power reduction instruction to the identified information processing device 10. The information processing device 10 operates in a power-saving mode upon acquiring the power reduction instruction.
A medical image processing device 9 comprises an image processing unit 92 for processing pixel signals acquired from the pixels of an imaging element 513. The pixel signals include: a first pixel signal having an unnecessary light component including autofluorescence generated by light from a member forming an optical path for observation when light emitted from a light source device propagates through the optical path for observation, and an observation target fluorescence component emitted from a substance included in an observation target that is excited by the light; and a second pixel signal having at least an unnecessary light component. The image processing unit 92 comprises a signal correction unit 923 that generates a corrected pixel signal on the basis of the first and second pixel signals and a correction coefficient set on the basis of the spectral characteristics of the observation target fluorescence.
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
The present invention makes it possible to appropriately evaluate whether or not a translated sentence corresponds to the movements of the mouth of a character. At least one processor (11) generates a similarity level indicating how similar the movements of the character's mouth are, on the basis of the shape of the mouth corresponding to each of the phonemes (33a to 33c) included in a phoneme string (33) before translation and the shape of the mouth corresponding to each of the phonemes (34a to 34e, 35a, 35b) included in phoneme strings (34, 35) after translation.
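One way to realise the similarity level described above is to map each phoneme to a mouth shape (viseme) and compare the resulting shape sequences. The viseme table below is a small invented example; production tables cover full phoneme inventories.

```python
from difflib import SequenceMatcher

# Hypothetical phoneme -> mouth-shape (viseme) table (illustrative only).
VISEME = {"p": "closed", "b": "closed", "m": "closed",
          "a": "open", "o": "round", "u": "round",
          "t": "teeth", "s": "teeth"}

def mouth_similarity(phonemes_src, phonemes_tgt):
    """Similarity of the mouth-shape sequences implied by two phoneme
    strings, as a ratio in [0, 1] -- one way to realise the similarity
    level described in the abstract."""
    shapes_src = [VISEME.get(p, "neutral") for p in phonemes_src]
    shapes_tgt = [VISEME.get(p, "neutral") for p in phonemes_tgt]
    return SequenceMatcher(None, shapes_src, shapes_tgt).ratio()

# "pat" and "bas" differ as phonemes but share the same viseme sequence:
score = mouth_similarity(["p", "a", "t"], ["b", "a", "s"])
```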
A program according to an embodiment of the present technology causes a computer to execute processing for: logging a cheering activity by a user; rewarding the user with an NFT for performing the cheering activity; and changing a log display at a terminal used by the user before and after a real event related to a cheering target. The present technology can be applied to teams in sports such as baseball, soccer, rugby and basketball, and to a service for certifying a cheering activity with respect to a target who is a famous person such as an individual player of a team, an actor, an idol, or an entertainer.
System, process and device configurations are provided for detecting negative gameplay behavior and for gameplay control. A method can include receiving gameplay data for a plurality of players of a game, wherein the gameplay data includes player input controls for the plurality of players. The process may use the player input controls to detect negative gameplay behavior, such as toxic behavior, including but not limited to a team kill or item/equipment trade fraud. Player input controls may be identified and evaluated using a model. Processes can include warning players based on patterns observed by the model and blocking toxic player input controls meant to harm other players. Processes can include determining game function responses of the game to limit the outcome of the at least one player input control for the game and controlling output of the game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/75 - Enforcing rules, e.g. detecting foul play or generating lists of cheating players
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
An operation device 6 comprises a holding part held by the hand of a user, and an input unit operated by the user. In the operation device 6, a plurality of photovoltaic elements 110 are arranged in a housing, and a power storage unit 112 stores electric power generated by the plurality of photovoltaic elements 110.
This information processing device: sequentially acquires screen data related to details of a screen of a target content having details that change with time, while the target content is being outputted; sequentially acquires model output data corresponding to the screen data related to details of the screen of the target content by using a machine learning model obtained by learning the relationship between the screen data and description information describing details of the screen data in a linguistic manner; and executes prescribed processing on the basis of the acquired model output data.
H04N 5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changingInverse transformation for playback
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Systems and methods of intelligent reporting within online communities are provided. A current communication session associated with a plurality of user devices may be monitored. The current communication session may include a stream of audio-visual content generated in real-time based on interactions between the user devices. A recording trigger may be detected within the current communication session. A recording of a portion of the stream of audio-visual content may be captured in response to the detected trigger event. The recording may be analyzed to attribute one or more sub-portions within the recording to one or more of the user devices. At least one of the sub-portions attributed to an identified one of the user devices may be determined to meet a moderation event. A report regarding the identified user device may be generated that includes the at least one sub-portion that meets the moderation event.
Systems and methods for generating nonplayer characters are described. An artificial intelligence (AI) model is trained based on gameplay by one or more users to generate the nonplayer characters. The nonplayer characters have gameplay characteristics similar to those of a game character controlled by one of the users. The AI model is trained to have a percentage of gameplay characteristics learned from gameplay by the one of the users and a percentage of gameplay characteristics from gameplay by another one of the users.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
41 - Education, entertainment, sporting and cultural services
Goods & Services
ARRANGING AND CONDUCTING BUSINESS SEMINARS FOR BUSINESS ENTREPRENEURS IN THE FIELD OF BUSINESS DEVELOPMENT, BUSINESS MENTORING, BUSINESS COACHING, TRAINING, TECHNICAL ASSISTANCE, AND HOW TO CAPITALIZE NEW BUSINESS ENTERPRISES
An information processing apparatus that controls a display to display an operation target; determines a contact size of an object on the display; and enables or disables an operation input for the operation target based on the contact size.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
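The enable/disable decision in the entry above can be sketched as a single threshold on contact size; the threshold value is an assumption, since the abstract does not give one.

```python
TOUCH_SIZE_LIMIT_MM2 = 120.0  # assumed threshold; the abstract gives none

def operation_enabled(contact_area_mm2):
    """Enable the operation input only for contacts smaller than a limit,
    e.g. to ignore a palm resting on the display while still accepting a
    fingertip -- a sketch of the decision described in the abstract."""
    return contact_area_mm2 <= TOUCH_SIZE_LIMIT_MM2

fingertip = operation_enabled(60.0)   # small contact: input enabled
palm = operation_enabled(450.0)       # large contact: input disabled
```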
Systems and methods of intelligent reporting within online communities are provided. A current communication session associated with a plurality of user devices may be monitored. The current communication session may include a stream of audio-visual content generated in real-time based on interactions between the user devices. A recording trigger may be detected within the current communication session. A recording of a portion of the stream of audio-visual content may be captured in response to the detected trigger event. The recording may be analyzed to attribute one or more sub-portions within the recording to one or more of the user devices. At least one of the sub-portions attributed to an identified one of the user devices may be determined to meet a moderation event. A report regarding the identified user device may be generated that includes the at least one sub-portion that meets the moderation event.
An execution environment of a game application can be changed without hindering the operation of the game application. An input device has function buttons. A processor of the input device receives operations on the function buttons and changes the execution environment of the game application. The function buttons are disposed at a position lower than the upper surface of a right portion of the input device and the upper surface of a left portion thereof.
A button that can be operated promptly according to a necessity of a user is provided to an input device. The input device has a function button. The function button is located rearward of a plurality of operation members provided to the input device, and projects outward from the peripheral edge of an upper cover as viewed in plan of the input device.
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
68.
HEAD-MOUNTED DISPLAY, DISPLAY CONTROL METHOD, AND PROGRAM
A head-mounted display, a display control method, and a program that help a user understand proximity between the user and an object around the user are provided. A display block is arranged in front of the eyes of the user wearing the HMD. In accordance with the proximity between the user and an object around the user, the HMD controls the display block so as to let the user visually recognize a forward direction of the display block.
Tracking a control object on a stage for a virtual production, including: tracking a control object on the stage with a system that tracks at least one camera used in the virtual production; identifying a location in a virtual environment using tracking information of the control object; and placing virtual assets at the identified location in the virtual environment.
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Cerrato, Maurizio
Gupta, Rajeev
Henderson, Christopher William
Villanueva-Barreiro, Marina
Barcias, Jesus Lucas
Conde, Marcos
Sanders, Matthew William
Leonardi, Rosario
Abstract
The present disclosure describes a method and system for adaptively streaming multimedia content. Data relating to the multimedia content is separated into a plurality of components, each of the components corresponding to one or more features of the multimedia content. The plurality of components are prepared for transmission to a client device, wherein a different preparation is applied to each component depending on the one or more features of a respective component. The prepared components are transmitted from the server to the client device.
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Leung, Jun Yen
Cerrato, Maurizio
Green, Lawrence Martin
Abstract
The present disclosure provides a computer implemented method of correcting a physical input to an input device of a controllable device. The method comprises: receiving a physical input from the input device; receiving orientation information from a gyroscope of the input device; providing the physical input and orientation information to a trained machine learning model, wherein the trained machine learning model is configured to output a corrected input based on the physical input and the orientation information; and receiving from the trained machine learning model, a corrected input corresponding to the physical input. Further, the present disclosure provides a computer-implemented method of training a machine learning model to correct a physical input to an input device of a controllable device.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A63F 13/20 - Input arrangements for video game devices
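The correction pipeline described in the abstract above can be pictured as a model that maps raw stick input plus gyroscope orientation to a corrected input. The sketch below is illustrative only: the `InputCorrector` class, the feature layout, and the weight matrix are assumptions standing in for the trained machine learning model, which the abstract does not specify.

```python
import numpy as np

# Hypothetical correction model: real weights would come from training;
# here a hand-written matrix illustrates tilt compensation only.
class InputCorrector:
    def __init__(self, weights):
        self.weights = weights  # (5, 2) matrix mapping features -> corrected x/y

    def correct(self, stick_xy, gyro_rpy):
        # Feature vector: raw stick deflection plus device orientation
        features = np.array([stick_xy[0], stick_xy[1], *gyro_rpy[:3]])
        return features @ self.weights

# Illustrative weights: pass the stick input through and subtract a
# fraction of the roll angle from the x axis to cancel tilt-induced drift.
w = np.zeros((5, 2))
w[0, 0] = 1.0   # stick x -> corrected x
w[1, 1] = 1.0   # stick y -> corrected y
w[2, 0] = -0.1  # roll angle bleeds into x; compensate

corrector = InputCorrector(w)
corrected = corrector.correct((0.52, -0.10), (0.2, 0.0, 0.0))
```

In a real system the weight matrix (or a nonlinear model) would be fitted from paired raw/intended input data, as the training method in the abstract describes.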
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Walker, Andrew William
Suganuma, Atsushi
Armstrong, Calum
O'Sullivan, Damian
Abstract
A computer-implemented method of interacting with a video game system comprising a user input device, the method comprising: determining movement of a user using a sensor; predicting a future actuation of the user input device based on the determined movement, wherein the actuation triggers a game event; outputting an effect based on the predicted actuation of the user input device. This provides accurate, pre-emptive effects and improves the interaction between a video game system and a user.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/218 - Input arrangements for video game devices characterised by their sensors, purposes or types using pressure sensors, e.g. generating a signal proportional to the pressure applied by the player
73.
Varying Just Noticeable Difference (JND) Threshold or Bit Rates for Game Video Encoding Based on Power Saving Requirements
Techniques are described for reducing latency in computer game network streaming using a machine learning (ML) model to determine an optimal bit rate/frame rate/resolution for encoding the video of the computer game to satisfy a just noticeable difference (JND) threshold while minimizing the amount of data being sent.
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
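The selection logic in the abstract above amounts to choosing the cheapest encoding configuration whose predicted quality still clears the JND threshold. The sketch below is a minimal illustration under assumed names: `predict_quality` is a toy stand-in for the trained ML model, and the candidate tuples are invented.

```python
# Toy monotone quality predictor: more bits, frames and pixels -> higher
# score, capped at 1.0. A stand-in for the ML model in the abstract.
def predict_quality(bitrate_mbps, fps, height):
    return min(1.0, 0.2 * bitrate_mbps + 0.002 * fps + 0.0002 * height)

def choose_encoding(candidates, jnd_threshold):
    # Walk candidates in order of data cost (bitrate first) and return the
    # first configuration predicted not to be noticeably degraded.
    for cfg in sorted(candidates, key=lambda c: c[0]):
        if predict_quality(*cfg) >= jnd_threshold:
            return cfg
    return max(candidates, key=lambda c: c[0])  # best effort fallback

cfg = choose_encoding([(2, 30, 720), (4, 60, 1080), (8, 60, 2160)], 0.9)
```

The fallback branch reflects the power-saving angle in the title: if no candidate meets the threshold, the encoder simply sends the best configuration it can afford.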
The presence or absence of an operated member on an input device is enabled to be selected at the discretion of a user. The input device has a supporting member. The supporting member includes a shaft portion, and moves about an axis defined in the shaft portion. A sensor is separated from the axis in a first direction orthogonal to the axis, and outputs a signal corresponding to movement of the supporting member. An operated member extends in a second direction orthogonal to the axis and intersecting the first direction, and projects from a lower case. The operated member is attached to the supporting member so as to move together with the supporting member, and is removable from the supporting member by an operation from the outside of the lower case.
An input device is provided which enables a user to adjust a movable range of a trigger button. The input device includes a stopper member movable between a first position that allows movement of a trigger button in a first range and a second position that abuts against a stopper target portion of the trigger button and limits the movable range of the trigger button to a second range smaller than the first range, and an operation member that engages with the stopper member and can move in a direction different from that of the stopper member. The operation member moves the stopper member between the first position and the second position.
Provided is an attachment device that is able to improve the stability of connection between a cable connector and a connector of an input device and also facilitate the operation for engaging an engagement section with the input device. The attachment device includes a connector retaining section and an engagement member. The engagement member is provided on the connector retaining section, and includes an engagement section for engaging with the input device. The engagement member is movable between an engagement position and an accommodation position. The engagement position is a position where the engagement section is protruded from the connector retaining section. The accommodation position is a position where the engagement section is accommodated in the connector retaining section.
H01R 13/635 - Additional means for facilitating engagement or disengagement of coupling parts, e.g. aligning or guiding means, levers, gas pressure for disengagement only by mechanical pressure, e.g. spring force
77.
DISPLAY CONTROL DEVICE, HEAD-MOUNTED DISPLAY, AND DISPLAY CONTROL METHOD
A stereo camera of a head-mounted display photographs left-viewpoint and right-viewpoint photographed images and the like at a frame rate of 1/Δt as depicted in (a). By using either one of the left-viewpoint and right-viewpoint photographed images, a display control device alternately generates a left-eye or right-eye display image (display image, for example) for each frame at a rate equal to the frame rate used in photographing, and displays the generated display image in a corresponding region of a display panel while not displaying any image in the other region, as depicted in (c).
H04N 13/122 - Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
H04N 13/133 - Equalising the characteristics of different image components, e.g. their average brightness or colour balance
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
An imaging apparatus for an endoscope includes: a coupler; an exterior casing connected to the coupler and extending along a first axis crossing an optical axis of an endoscope, the exterior casing having outer dimensions in a direction along the first axis greater than outer dimensions in a direction along the optical axis of the endoscope; an optical system; and an imaging unit. The exterior casing includes a first exterior part, a second exterior part, and a connector part. The optical system and the imaging unit are housed, in the exterior casing, side by side on the optical axis of the endoscope such that light of an image of a subject guided by the optical system is captured by the imaging unit.
G02B 23/24 - Instruments for viewing the inside of hollow bodies, e.g. fibrescopes
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
A61B 18/00 - Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body
A method of communicating data between a radio network infrastructure and a terminal device in a wireless telecommunications network. The method comprises establishing, at a radio network infrastructure element, that there is data available for communication between the radio network infrastructure and the terminal device, and transmitting a paging message for the terminal device from the radio network infrastructure element. The paging message comprises an indication of an identifier for the terminal device and an indication of a network allocated resource for use in subsequently communicating the data between the radio network infrastructure element and the terminal device. In response to receiving the paging message, the terminal device transmits to the radio network infrastructure element a paging response indicating the terminal device received the paging message, after which the data is communicated between the radio network infrastructure element and the terminal device using the network allocated resource.
Provided is an information processing device comprising: a first integrated circuit; a second integrated circuit; and a first bus and a second bus that independently connect the first integrated circuit and the second integrated circuit to each other. The first integrated circuit includes a first bus controller that controls communication via the first bus and a second bus controller that controls communication via the second bus. The first integrated circuit performs data transmission to the second integrated circuit via the second bus using the second bus controller when communication via the first bus is not usable.
The present invention makes it easier to recognize, in the appearance of an input device, the correspondence between the tilt direction of an operation stick and a direction of instruction to an application. An input device (1) may comprise a top member (40) which has an upper surface that is touched by a user's finger and a base member (50) to which the top member (40) is attached. One from among the base member (50) and an insertion part (42) of the top member (40) may have a plurality of engagement parts (70) that surround the axis (L1) of the insertion part (42), and the other one may have at least one engagement part (80) that engages with at least one of the plurality of engagement parts (70) and that restricts movement of the insertion part (42) in a direction around the axis (L1). A mark (45) that indicates a direction intersecting the axis (L1) may be formed in an outer surface of the top member (40).
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
82.
VARYING JUST NOTICEABLE DIFFERENCE (JND) THRESHOLD OR BIT RATES FOR GAME VIDEO ENCODING BASED ON POWER SAVING REQUIREMENTS
Techniques are described for reducing latency in computer game network streaming using a machine learning (ML) model (402) to determine (606, 608) an optimal bit rate/frame rate/resolution for encoding the video of the computer game to satisfy a just noticeable difference (JND) threshold while minimizing the amount of data being sent.
H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
H04N 19/103 - Selection of coding mode or of prediction mode
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
83.
COORDINATED R-TWT SP SCHEDULING AMONG MULTIPLE ADJACENT BSS ACCORDING TO NON-AP STA NEEDS
A coordinated form of R-TWT SP scheduling taking into account the needs of non-AP STAs in adjacent BSS. A flag in the BSSID field is used to indicate whether the AP represented by the BSSID is a UHR AP. Mechanisms are described to support coordinated R-TWTs and the operation of UHR devices supporting these coordinated R-TWTs. The protocol for UHR AP and UHR non-AP STAs allows for identifying possible interference with the OBSS, directly rescheduling, or performing negotiation with the UHR AP of the OBSS for scheduling the R-TWT SP.
Tracking a control object on a stage for a virtual production, including: tracking a control object on the stage with a system that tracks at least one camera used in the virtual production; identifying a location in a virtual environment using tracking information of the control object; and placing virtual assets at the identified location in the virtual environment.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/04842 - Selection of displayed objects or displayed text elements
Systems and methods for providing gameplay assistance are described. One of the methods includes monitoring gameplay of a user of a game. The monitoring occurs to identify interactive skills of gameplay by the user during a session of the game. The method further includes determining that the interactive skills of gameplay have fallen below a threshold level for progressing the game and initiating gameplay assistance responsive to the interactive skills falling below the threshold level. The gameplay assistance includes a blended bifurcation of user inputs to complete one or more interactive tasks of the game. The blended bifurcation of user inputs includes an amount of assistance inputs that override selected ones of the user inputs. The amount of assistance inputs varies over time during the gameplay of the game to maintain the interactive skills of the gameplay above the threshold level of interactive skills for playing the game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
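The "blended bifurcation" in the abstract above can be read as a per-frame mix of user inputs and assistance inputs, with the assistance share rising as measured skill drops further below the threshold. The sketch below is a minimal interpretation under assumed names; the skill metric, the weighting curve, and the `max_assist` cap are all illustrative, not taken from the patent.

```python
# Assist weight grows with the player's deficit below the skill threshold,
# capped so the user always retains some control of the final input.
def assist_weight(skill, threshold, max_assist=0.8):
    if skill >= threshold:
        return 0.0
    deficit = (threshold - skill) / threshold
    return min(max_assist, deficit)

def blend(user_input, assist_input, weight):
    # Convex combination: weight = 0 is pure user input, 1 is pure assist.
    return (1.0 - weight) * user_input + weight * assist_input

w = assist_weight(skill=0.3, threshold=0.6)
steering = blend(user_input=0.0, assist_input=1.0, weight=w)
```

Varying `w` over time, as the abstract describes, lets assistance fade out once the measured skill climbs back above the threshold.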
86.
CUSTOM CHARACTER CREATION BASED ON BODY MOVEMENT AND USER INPUT DATA
A method for generating a character for use in a video game is provided, including: receiving captured video of a user; analyzing the captured video to identify movements of the user; using the identified movements of the user to define one or more animations of a character; receiving descriptive input generated by the user; determining game-specific constraints for the character in the video game; using the descriptive input and the game-specific constraints to prompt a generative artificial intelligence (AI) to generate visual elements of the character; using the character for gameplay of a video game, wherein using the character includes rendering the character having the generated visual elements and triggering the animations during the gameplay.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
87.
CALIBRATION DEVICE, DISPLAY DEVICE, CALIBRATION METHOD, AND IMAGE DISPLAY METHOD
A head mounted display includes a first display unit for displaying an image of a center part of a display image and a second display unit for displaying an image of the outside thereof, and the images are combined by a half mirror for visual recognition. On the basis of the chromaticities of the first display unit and the second display unit measured by a chromoscope from a position similar to the viewpoint of a user, a calibration device calculates a color conversion matrix by which colors in a common color gamut are visually recognized, and outputs it in association with the display unit.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G01J 3/50 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
The rigidity of an input device is increased while an effect on the external appearance of the input device is suppressed. The input device has a device front portion as well as a right grip and a left grip. An upper case and a lower case of the input device form a part of each of the device front portion, the right grip, and the left grip, and house a frame. The lower case is fixed to at least one of the frame and the upper case by a plurality of screws. A lower cover is attached to the lower surface of the lower case, covers the plurality of screws, and constitutes at least a part of each of the lower surface of the device front portion, the left side surface of the right grip, and the right side surface of the left grip.
There is provided a method for rendering a virtual environment. The method comprises identifying one or more static elements in the virtual environment, determining, for a first frame having a first virtual camera position, a geometry of the static elements in the virtual environment, storing the geometry of the static elements for the first frame, determining, for a second frame having the first virtual camera position, a geometry of at least part of the virtual environment based, at least in part, on the stored geometry of the static elements for the first frame, and determining, for the second frame, lighting for the at least part of the virtual environment at least in part based on the geometry of the at least part of the virtual environment determined for the second frame, to render the at least part of the virtual environment.
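The geometry-reuse idea in the abstract above is essentially a cache keyed by camera position: static-element geometry computed for one frame is reused by later frames at the same viewpoint. The sketch below illustrates that caching step only; the `GeometryCache` class and the string stand-in for a geometry buffer are assumptions, not the patent's implementation.

```python
# Cache static-scene geometry per camera pose so a later frame at the same
# pose reuses the stored result instead of recomputing it.
class GeometryCache:
    def __init__(self):
        self._cache = {}   # camera_pose -> static geometry buffer
        self.recomputes = 0

    def static_geometry(self, camera_pose, compute_fn):
        if camera_pose not in self._cache:
            self.recomputes += 1
            self._cache[camera_pose] = compute_fn(camera_pose)
        return self._cache[camera_pose]

def compute_static(pose):
    # Stand-in for a rasterised geometry pass over the static elements.
    return f"gbuffer@{pose}"

cache = GeometryCache()
frame1 = cache.static_geometry((0, 0, 0), compute_static)  # computed
frame2 = cache.static_geometry((0, 0, 0), compute_static)  # reused
```

Per the abstract, lighting is still evaluated every frame against the (cached or recomputed) geometry, so only the geometry pass is skipped.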
A system for dynamically mixing audio content, the system comprising: a receiving unit configured to receive input audio; an analysis unit configured to analyse the input audio to determine one or more masking patterns; an attenuation unit configured to attenuate one or more channels of the input audio in accordance with the one or more masking patterns, and an output unit configured to output attenuated audio.
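The attenuation unit in the abstract above applies per-channel gain reductions derived from masking analysis. The sketch below shows only that gain step, assuming a masking pattern already computed per channel; the `attenuate` function, the `[0, 1]` masking convention, and the `-12 dB` floor are illustrative choices, not from the patent.

```python
import numpy as np

# Attenuate each channel of a multi-channel mix according to a per-channel
# masking value in [0, 1], where 1 means the channel is fully masked.
def attenuate(channels, masking, floor_db=-12.0):
    # Map the masking amount to a gain, limited at a floor so a heavily
    # masked channel is ducked rather than muted outright.
    floor_gain = 10 ** (floor_db / 20)
    gains = 1.0 - masking * (1.0 - floor_gain)
    return channels * gains[:, None]

mix = np.ones((2, 4))              # two channels, four samples
masking = np.array([0.0, 1.0])     # channel 1 fully masked by channel 0
out = attenuate(mix, masking)
```

An unmasked channel (masking 0) passes through at unity gain; a fully masked one is reduced to the floor gain, here about 0.25 linear.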
Methods and systems for cooperative or coached gameplay in virtual environments are disclosed. Memory may store a content control profile regarding a set of control input associated with an action in a virtual environment of a digital content title. A request may be received from a set of one or more users associated with different source devices regarding cooperative gameplay of the digital content title. At least one virtual avatar may be generated for an interactive session of the digital content title in response to the request. A plurality of control inputs may be received from the plurality of different source devices and combined into a combination set of control inputs. Generating the combination set of control input may be based on the content control profile. Virtual actions associated with the virtual avatar may be controlled within the virtual environment in accordance with the combination set of control inputs.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
92.
SYSTEMS AND METHODS FOR PROVIDING ASSISTANCE TO A USER DURING GAMEPLAY
Systems and methods for providing gameplay assistance are described. One of the methods includes monitoring gameplay of a user of a game. The monitoring occurs to identify interactive skills of gameplay by the user during a session of the game. The method further includes determining that the interactive skills of gameplay have fallen below a threshold level for progressing the game and initiating gameplay assistance responsive to the interactive skills falling below the threshold level. The gameplay assistance includes a blended bifurcation of user inputs to complete one or more interactive tasks of the game. The blended bifurcation of user inputs includes an amount of assistance inputs that override selected ones of the user inputs. The amount of assistance inputs varies over time during the gameplay of the game to maintain the interactive skills of the gameplay above the threshold level of interactive skills for playing the game.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
93.
CUSTOM CHARACTER CREATION BASED ON BODY MOVEMENT AND USER INPUT DATA
A method for generating a character for use in a video game is provided, including: receiving captured video of a user; analyzing the captured video to identify movements of the user; using the identified movements of the user to define one or more animations of a character; receiving descriptive input generated by the user; determining game-specific constraints for the character in the video game; using the descriptive input and the game-specific constraints to prompt a generative artificial intelligence (AI) to generate visual elements of the character; using the character for gameplay of a video game, wherein using the character includes rendering the character having the generated visual elements and triggering the animations during the gameplay.
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A part provided with an operation member in an input device is enabled to be replaced easily. The input device includes a stick unit including an operation stick and a circuit for detecting movement of the operation stick. A main body of the input device has a housing recessed portion that opens upward and rearward and houses the stick unit. The stick unit is attachable to and detachable from the housing recessed portion.
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
An exterior member of an input device is enabled to be removed easily. A cover lock member of the input device has a second engaging portion for engaging with a first engaging portion of an upper cover as an exterior member. The cover lock member is movable between a locking position at which the second engaging portion engages with the first engaging portion and an unlocking position at which the engagement between the second engaging portion and the first engaging portion is released.
A system for rendering two-dimensional, 2D, content in a three-dimensional, 3D, virtual reality environment, comprising: receiving circuitry configured to receive the 2D content, the 2D content being in a 2D format; environment generating circuitry configured to generate the 3D virtual reality environment, wherein the 3D virtual reality environment comprises a virtual surface upon which the 2D content is to be rendered; recognition circuitry configured to recognise one or more regions of interest in the 2D content; mask generating circuitry configured to generate, in dependence upon a location of the virtual surface within the generated 3D virtual reality environment and in dependence upon at least one recognised region of interest in the 2D content, a 3D mask of the generated 3D virtual reality environment, wherein the 3D mask indicates at least one region within the 3D virtual reality environment in which the at least one recognised region of interest is to be rendered; and rendering circuitry configured to render the 3D virtual reality environment for display at a head mounted display, HMD, wherein the rendering circuitry is configured to render the 2D content on the virtual surface, and upscale the at least one region within the 3D virtual reality environment indicated in the 3D mask.
A method and system for generating a customized summary of virtual actions and events. Gameplay data sent over a communication network from a client device of the player engaged in a current activity of the respective interactive content title within a current gameplay session may be monitored. A trigger in the monitored gameplay data is detected and associated with a request for a summary that encapsulates actions and events of past gameplay associated with the trigger. A subset of the actions and events of the past gameplay for the summary is selected based on one or more selected customized tags associated with the trigger. The summary is generated based on the selected subset of the actions and events and provided to the client device for presentation.
G06V 20/40 - Scenes; Scene-specific elements in video content
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
A63F 13/85 - Providing additional services to players
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
98.
INFORMATION PROCESSING DEVICE AND VIDEO EDITING METHOD
A ring buffer records a game video provided by running game software together with time information. When an unlock condition of a trophy, which is a virtual award, is satisfied, a trophy processing section gives the trophy to a user playing a game. A video acquiring section reads, from the ring buffer, the video including the game image from the time when the unlock condition became satisfied, and records the video in a second recording section. A video processing section carries out an editing process on the video recorded in the second recording section.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/86 - Watching games played by other players
G11B 27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
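The ring-buffer capture described in the abstract above can be sketched as follows. This is a minimal illustration under assumed names (`GameVideoRingBuffer`, `extract`); the real device operates on video streams, not strings.

```python
from collections import deque

class GameVideoRingBuffer:
    """Fixed-capacity buffer of (timestamp, frame) pairs; the oldest frames
    are silently overwritten as new ones arrive."""

    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)

    def record(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def extract(self, unlock_time, before, after):
        """Return frames within [unlock_time - before, unlock_time + after],
        i.e. the clip surrounding the moment a trophy was unlocked."""
        return [f for t, f in self.frames
                if unlock_time - before <= t <= unlock_time + after]

buf = GameVideoRingBuffer(capacity=5)
for t in range(10):                 # frames 0..9 recorded; only 5..9 survive
    buf.record(t, f"frame{t}")

# Trophy unlocked at t=8: grab 2 time units before and 1 after.
clip = buf.extract(unlock_time=8, before=2, after=1)
```

The extracted clip would then be handed to the video processing section for editing, while the ring buffer keeps recording.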
99.
NETWORK ARCHITECTURE PROVIDING HIGH SPEED STORAGE ACCESS THROUGH A PCI EXPRESS FABRIC BETWEEN A STREAMING ARRAY AND A DEDICATED STORAGE SERVER
A network architecture includes a streaming array comprising a plurality of compute sleds, wherein each compute sled includes one or more compute nodes. The streaming array includes a network storage and a PCIe fabric configured to provide direct access to the network storage from a plurality of compute nodes of the streaming array. The PCIe fabric includes one or more array-level PCIe switches, wherein each array-level PCIe switch is communicatively coupled to corresponding compute nodes of corresponding compute sleds and communicatively coupled to the network storage. The network storage is shared by the plurality of compute nodes of the streaming array.
G06F 13/42 - Bus transfer protocol, e.g. handshakeSynchronisation
H04L 49/351 - Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
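The topology in the abstract above (compute nodes on sleds, all reaching shared storage through an array-level PCIe switch) can be modelled in a few lines. The class and identifier names here are illustrative assumptions, not terms from the patent.

```python
class PCIeSwitch:
    """An array-level PCIe switch: endpoints attached to it can reach each
    other directly through the fabric."""

    def __init__(self, name):
        self.name = name
        self.links = set()

    def connect(self, endpoint):
        self.links.add(endpoint)

def build_array(num_sleds, nodes_per_sled):
    """Attach every compute node of every sled, plus the shared network
    storage, to a single array-level switch."""
    switch = PCIeSwitch("array-level-switch-0")
    switch.connect("network-storage")
    nodes = []
    for sled in range(num_sleds):
        for node in range(nodes_per_sled):
            node_id = f"sled{sled}-node{node}"
            switch.connect(node_id)
            nodes.append(node_id)
    return switch, nodes

def path_to_storage(switch, node_id):
    """A node has direct storage access iff it and the storage hang off the
    same fabric switch."""
    if node_id in switch.links and "network-storage" in switch.links:
        return [node_id, switch.name, "network-storage"]
    return None

switch, nodes = build_array(num_sleds=2, nodes_per_sled=4)
route = path_to_storage(switch, "sled1-node3")
```

The point of the topology is visible in `path_to_storage`: every node's route to storage is a single hop through the fabric, with no intermediate file server in the data path.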
100.
COORDINATED R-TWT SP SCHEDULING AMONG MULTIPLE ADJACENT BSS ACCORDING TO NON-AP STA NEEDS
A coordinated form of R-TWT SP scheduling that takes into account the needs of non-AP STAs in an adjacent BSS. A flag in the BSSID field is used to indicate whether the AP represented by the BSSID is a UHR AP. Mechanisms are described to support coordinated R-TWTs and the operation of UHR devices supporting these coordinated R-TWTs. The protocol for UHR APs and UHR non-AP STAs allows for identifying possible interference with the OBSS, and for directly rescheduling or negotiating with the UHR AP of the OBSS to schedule the R-TWT SP.
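The "identify interference, then directly reschedule" option in the abstract above amounts to detecting overlap between a proposed service period (SP) and SPs already held by the OBSS, then shifting the proposal past the conflicts. The interval representation and function names below are illustrative assumptions, not the 802.11 encoding.

```python
def overlaps(sp_a, sp_b):
    """SPs are (start, end) tuples in the same time units; half-open overlap test."""
    return sp_a[0] < sp_b[1] and sp_b[0] < sp_a[1]

def schedule_sp(proposed, obss_sps):
    """Shift the proposed R-TWT SP later until it no longer collides with any
    SP already scheduled in the overlapping BSS, preserving its duration."""
    start, end = proposed
    duration = end - start
    for sp in sorted(obss_sps):
        if overlaps((start, start + duration), sp):
            start = sp[1]           # reschedule directly after the conflict
    return (start, start + duration)

obss = [(0, 10), (15, 25)]          # SPs the OBSS has already claimed
granted = schedule_sp(proposed=(5, 12), obss_sps=obss)
```

In practice the alternative path in the abstract, negotiation with the OBSS's UHR AP, would be tried when no conflict-free shift satisfies the non-AP STAs' latency needs.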