Systems and methods are provided for providing a playlist transport bar. The playlist transport bar provides an overlay which graphically represents assets (e.g., programs) of a playlist in a manner that enables a user to simultaneously ascertain a playback position within the playlist and a particular asset. The playlist transport bar may include asset regions which each correspond to an asset in a playlist and a position indication region which may provide information relating to a playback position.
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
2.
METHODS AND SYSTEMS FOR STREAMING MEDIA CONTENT ON MULTIPLE DEVICES
Methods and systems are presented herein for streaming of media content. The methods and systems include receiving a request to stream a media content item; accessing a profile of a user authorized to access the streaming service; determining whether a bonus stream in addition to a default number of streams should be granted based on an analysis of at least one of: a status of the streaming service, a status of the requesting media device, metadata of the media content item, a status of the communication system, the profile, or a status of the currently streaming media device. Related apparatuses, devices, techniques, and articles are also described.
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
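To make the grant logic above concrete, the following Python sketch shows one hypothetical way the listed factors could feed a bonus-stream decision. The factor names, thresholds, and default limit of two streams are assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch of the bonus-stream grant decision described above.
# All signal names and thresholds are assumptions, not the patented method.
from dataclasses import dataclass

@dataclass
class StreamRequest:
    service_healthy: bool           # status of the streaming service
    device_trusted: bool            # status of the requesting media device
    content_is_promoted: bool       # metadata of the media content item
    network_load: float             # status of the communication system (0..1)
    account_in_good_standing: bool  # the profile
    active_streams: int             # status of currently streaming devices

def grant_bonus_stream(req: StreamRequest, default_limit: int = 2) -> bool:
    """Return True if a stream beyond the default number should be granted."""
    if req.active_streams < default_limit:
        return True  # still within the default allowance; no bonus needed
    # The abstract lists several factors that may inform the analysis; this
    # sketch requires a healthy service and spare network capacity.
    return (req.service_healthy
            and req.account_in_good_standing
            and req.network_load < 0.8
            and (req.device_trusted or req.content_is_promoted))

print(grant_bonus_stream(StreamRequest(True, True, False, 0.4, True, 2)))
```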
3.
METHODS AND SYSTEMS FOR RESPONDING TO A NATURAL LANGUAGE QUERY
Systems and methods are provided for responding to a natural language query, e.g., a first natural language query. A first natural language understanding model is used to process the natural language query. A confidence level, e.g., a first confidence level, of the understanding of the natural language query is determined. In response to the confidence level being below a confidence level threshold, the natural language query is reprocessed using a reprocessing module. A response to the first natural language query is generated based on the processing of the natural language query by the first natural language understanding model and the reprocessing of the natural language query.
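The confidence-gated flow described above can be illustrated with a minimal sketch. Both model interfaces and the 0.7 threshold are hypothetical placeholders, not the claimed system.

```python
# Illustrative sketch of confidence-gated reprocessing; the primary model and
# reprocessing module below are stand-ins, and the threshold is assumed.
CONFIDENCE_THRESHOLD = 0.7

def primary_nlu(query: str) -> tuple[str, float]:
    # Stand-in for the first natural language understanding model.
    return ("intent:play_media", 0.55)

def reprocess(query: str) -> tuple[str, float]:
    # Stand-in for the reprocessing module (e.g., a heavier second pass).
    return ("intent:play_media(artist='Queen')", 0.92)

def respond(query: str) -> str:
    intent, confidence = primary_nlu(query)
    if confidence < CONFIDENCE_THRESHOLD:
        intent, confidence = reprocess(query)
    # Per the abstract, the response draws on both the processing and the
    # reprocessing of the query.
    return f"resolved {intent!r} at confidence {confidence:.2f}"

print(respond("play some queen"))
```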
4.
USER DEFINED RULES FOR ASSIGNING DESTINATIONS OF CONTENT
A media guidance application is provided by which users can define rules for assigning user equipment devices as destinations for media content. For example, a user may define a rule by which selected media content having attributes that satisfy a user-defined condition are downloaded, recorded, or streamed to a particular, user-specified user equipment device. The user may define and manage rules using media guidance menus, and may restrict other users from accessing the rules (e.g., parents restricting children).
5.
SYSTEMS AND METHODS FOR AUGMENTED REALITY VIDEO GENERATION
Systems and methods for generating an AR image are described herein. A physical camera is used to capture a video of a physical object in front of a physical background. The system then accesses data defining a virtual environment and selects a first position of a virtual camera in the virtual environment. While capturing the video, the system displays captured video of the physical object, such that the physical background is replaced with a view of the virtual environment from the first position of the virtual camera. In response to detecting a movement of the physical camera, the system selects a second position of the virtual camera in the virtual environment based on the detected movement. The system then displays the captured video of the physical object, wherein the view of the physical background is replaced with a view of the virtual environment from the second position of the virtual camera.
Systems and methods are presented herein for providing a user with a notification, or access to content, based on the user's factual discourse during a conversation with other users. A first user may provide a first statement. A second user may provide a second statement. An application determines the first and the second statement are associated with first and second user profiles, respectively. The application analyzes the elements of each respective statement and determines there is a conflict between the user statements. In response to determining there is a conflict between the respective statements, the application generates a respective search query to verify each respective statement. When the application determines there is an answer that resolves the conflict between the respective statements, the application generates a notification for the users that comprises the answer that resolves the conflict and may include access to content affirming the answer.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination
7.
SYSTEMS AND METHODS FOR RECOMMENDING CONTENT ITEMS BASED ON AN IDENTIFIED POSTURE
Systems and methods are provided for generating a content item recommendation based on an identified posture. An input associated with a content item delivery service is received at a computing device. A capture of a user is received, and a digital representation of the user is generated based on the capture of the user. A posture of the user is determined based on the digital representation of the user, and a content item genre is identified based on the determined posture. A content item recommendation that is based on the identified genre is generated and output.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
8.
SYSTEMS AND METHODS FOR DISAMBIGUATING A VOICE SEARCH QUERY
Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination
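One plausible reading of the signature-matching step is a vector comparison, sketched below; the feature vectors, metadata fields, and the 0.9 match threshold are assumptions, not the patented signature scheme.

```python
# Toy comparison of a voice query's audio signature against the signature
# stored in quotation metadata; vectors and threshold are illustrative.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

query_signature = [0.12, 0.80, 0.31, 0.45]           # from the voice query
quotation_metadata = {
    "content_id": "movie-1138",                       # hypothetical identifier
    "string": "may the odds be ever in your favor",
    "signature": [0.10, 0.82, 0.29, 0.47],            # as spoken in the film
}

# If the spoken query mimics the in-content delivery, surface the content item.
if cosine(query_signature, quotation_metadata["signature"]) > 0.9:
    print("search result:", quotation_metadata["content_id"])
```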
9.
SYSTEMS AND METHODS FOR DECENTRALIZED GENERATION OF A SUMMARY OF A VIRTUAL MEETING
Systems, methods and apparatuses are described for providing a summary associated with a virtual meeting. In response to detecting a break in presence (BIP) at a first computing device for a first user in the virtual meeting, each of one or more second computing devices participating in the virtual meeting and corresponding to at least one second user may be caused to locally monitor reactions of the corresponding at least one second user to the virtual meeting during the BIP. The server may receive one or more parameters associated with the locally monitored reactions and corresponding to a portion of the virtual meeting during the BIP. In response to determining to generate a summary associated with a corresponding portion of the virtual meeting during the BIP, based on the received one or more parameters, the summary may be generated and provided to the first computing device.
Systems and methods for presenting user-selectable options for parental control in response to detecting a triggering action by a user are disclosed. A system generates for output a first content item on a device. The system identifies a first user and a second user in proximity to the device and determines that a first gesture is performed by the first user wherein the first gesture is covering the eyes of the second user. In response to determining that the first gesture is performed, the system presents a selectable option for a user input such as (a) skipping a portion of the first content item; (b) lowering the volume; (c) removing the video of the first content item; or (d) presenting a second content item instead of presenting the first content item. In response to receiving a user input selecting the selectable option, the system performs an action corresponding to the selectable option.
Systems and methods for generating a graphically animated audience are disclosed. Biometric data is captured via a sensor during display of content via a first device. The biometric data is stored in association with metadata for the content, and is mapped to a graphical representation. Based on the mapping of the biometric data to the graphical representation and the metadata, a graphical animation is generated for display in synchronization with displaying of the content via a second device.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
12.
SUPPLEMENTAL AUDIO GENERATION SYSTEM IN AN AUDIO-ONLY MODE
Systems and methods for generating supplemental audio for an audio-only mode are disclosed. For example, a system generates for output a content item that includes video and audio. In response to determining that an audio-only mode is activated, the system determines that a portion of the content item is not suitable to play in the audio-only mode. In response to determining that the portion of the content item is not suitable to play in the audio-only mode, the system generates for output supplemental audio associated with the content item during the portion of the content item.
H04N 21/439 - Processing of audio elementary streams
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
Methods and systems are provided for streaming a media asset, comprising a plurality of portions, with an adaptive bitrate transcoder. A server receives, from a client device, a first request for a first portion of the plurality of portions to be transcoded at a first bitrate. The server then starts to transcode the plurality of portions at the requested first bitrate to generate a plurality of corresponding transcoded portions. The server updates a header of a transcoded portion to include: 1) a transcode latency value; and 2) a count value indicating a number of available pre-transcoded portions of the media asset at the time the first request was received. The server then transmits the transcoded portion to the client device. The client device then determines a second bitrate based on the transcode latency value included in the header of the transcoded portion corresponding to the first portion.
H04N 21/462 - Content or additional data management e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
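The client-side rate decision can be sketched as follows, assuming hypothetical header field names and a simple step-down rule; the abstract specifies neither.

```python
# Hedged sketch of the client's next-bitrate choice from the two header
# values named in the abstract; field names and thresholds are assumptions.
def choose_next_bitrate(header: dict, current_bitrate: int,
                        ladder=(1_000_000, 3_000_000, 6_000_000)) -> int:
    latency = header["transcode_latency_ms"]       # 1) transcode latency value
    pretranscoded = header["pretranscoded_count"]  # 2) pre-transcoded portions
    # If the server is transcoding slowly and has little ready-made content,
    # step down the bitrate ladder; otherwise try the next rung up.
    if latency > 500 and pretranscoded < 3:
        candidates = [b for b in ladder if b < current_bitrate]
        return max(candidates, default=ladder[0])
    candidates = [b for b in ladder if b > current_bitrate]
    return min(candidates, default=current_bitrate)

print(choose_next_bitrate({"transcode_latency_ms": 800,
                           "pretranscoded_count": 1}, 3_000_000))
```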
14.
SYSTEMS AND METHODS FOR PROVIDING A SLOW MOTION VIDEO STREAM CONCURRENTLY WITH A NORMAL-SPEED VIDEO STREAM UPON DETECTION OF AN EVENT
Methods and systems for providing a video stream along with a slow motion video showing a particular event depicted in the video stream are described herein. The method includes generating a first video stream and generating a second video stream, which is a slow motion video stream, from the first video stream by modifying a playback speed of the first video stream. The method includes monitoring content of the first video stream to identify an event trigger of a predefined set of event triggers. Each event trigger indicates a presence in the first video stream of an event that is to be generated for display using the second video stream. The method includes determining, based on the identifying of the event trigger, to transmit the second video stream along with the first video stream, and simultaneously transmitting both the first video stream and the second video stream.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G06F 16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/2365 - Multiplexing of several video streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
Systems and methods are presented for filtering unwanted sounds from a media asset. Voice profiles of a first character and a second character are generated based on a first voice signal and a second voice signal received from the media device during a presentation. The user provides a selection to avoid a certain sound or voice in association with the second character. During a presentation of the media asset, a second audio segment is analyzed to determine, based on the voice profile of the second character, whether the second voice signal includes the voice of the second character. If so, the second voice signal output characteristics are adjusted to reduce the sound.
H04N 21/439 - Processing of audio elementary streams
G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
16.
SYSTEMS AND METHODS FOR IMPROVING MEDIA CONTENT PLAYBACK AT A PLAYBACK DEVICE
Systems and methods for improving audio playback at a playback device are described herein. In some embodiments, a system transitions from causing playback of a radio broadcast stream to causing playback of device music, such as in response to determining that a quality of the radio broadcast stream is below a threshold value. In some embodiments, a system selects songs to play based on device preferences of a plurality of different media devices. In some embodiments, a system selects a device from which to retrieve songs for playback based on one or more rules.
Systems and methods are described for generating and presenting content recommendations to new users during or immediately after the onboarding process, before any history of the new user's viewed content is available. A machine learning or other model may be trained to determine clusters of content genre values corresponding to genres of content watched by viewers. Clusters are thus associated with popular groupings of content genres viewed by many users. Clusters representing popular groupings of content genres may be selected for new users, and content corresponding to the selected clusters may be recommended to the new users as part of their onboarding process. A sufficient amount of content may be selected to fully populate any content recommendation portion of a new user onboarding page.
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/482 - End-user interface for program selection
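A compact k-means routine can stand in for the trained clustering model described above; the genre vectors, k = 2, and the seed are illustrative only.

```python
# Minimal k-means over genre-preference vectors, standing in for the trained
# clustering model in the abstract; data and parameters are invented.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each viewer vector to its nearest centroid.
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        # Recompute centroids; keep the old one if a group empties out.
        centroids = [
            tuple(sum(d) / len(g) for d in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

# Each vector: fraction of viewing time in (drama, comedy, sports, news).
viewers = [(0.7, 0.2, 0.0, 0.1), (0.6, 0.3, 0.1, 0.0),
           (0.1, 0.1, 0.8, 0.0), (0.0, 0.2, 0.7, 0.1)]
for c in kmeans(viewers, k=2):
    print("popular genre grouping:", [round(x, 2) for x in c])
```

Each resulting centroid is a popular grouping of genres; content matching the selected grouping would populate the onboarding page.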
18.
METHODS AND SYSTEMS FOR IMPLEMENTING A LOCKED MODE FOR VIEWING MEDIA ASSETS
Methods and systems are provided for an interactive media guidance application having a locked mode for viewing media assets. In the locked mode, the interactive media guidance application may provide media assets suited to a certain audience. The interactive media guidance application may determine suitable media assets for the locked mode based on media assets viewed by other users having characteristics similar to the user of the interactive media guidance application. In the locked mode, the interactive media guidance application may allow access to only certain media assets and/or limit the time period for which the media assets are presented.
H04N 21/454 - Content filtering, e.g. blocking advertisements
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/475 - End-user interface for inputting end-user data, e.g. PIN [Personal Identification Number] or preference data
H04N 21/482 - End-user interface for program selection
H04N 21/6543 - Transmission by server directed to the client for forcing some client operations, e.g. recording
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
19.
SYSTEMS AND METHODS TO CURATE NOTIFICATIONS FROM UNSUBSCRIBED SOCIAL MEDIA ACCOUNTS
Methods and systems for curating notifications from unfollowed accounts are described herein. The system tracks that a first account previously followed a second account and subsequently unfollowed the second account. The system identifies an interest of the first account and monitors the activities of the second account for activity that matches the interest. If there is a match between the interest and an activity of the second account, the system notifies the first account of the activity. These methods and systems provide the user with relevant information from unfollowed accounts.
G06F 16/9536 - Search customisation based on social or collaborative filtering
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
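A minimal sketch of the interest-to-activity matching follows; the data shapes and tag-overlap rule are assumptions for illustration.

```python
# Illustrative matching of a tracked interest against activity of an account
# the user previously followed and then unfollowed; data shapes are mocked.
unfollow_log = {"alice": {"brand_x"}}            # alice once followed brand_x
interests = {"alice": {"sneakers", "running"}}   # identified interests

def notify_on_match(user: str, account: str, activity_tags: set[str]) -> None:
    # Only monitor accounts the user previously followed and unfollowed.
    if account in unfollow_log.get(user, set()):
        overlap = interests.get(user, set()) & activity_tags
        if overlap:
            print(f"notify {user}: {account} posted about {sorted(overlap)}")

notify_on_match("alice", "brand_x", {"sneakers", "sale"})
```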
20.
SYSTEMS AND METHODS FOR PROVIDING BINGE-WATCHING RECOMMENDATIONS
Systems and methods are provided for generating and presenting content series recommendations to a particular user who has just completed binge-watching a particular content series. The recommendations are based on content series consumed by other users who have also consumed the content series just completed by the user and who share behavioral attributes with the user.
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
Systems and associated methods are described for providing content recommendations. The system accesses content item consumption data for a plurality of users subscribed to a media service. Then, the system determines that a first subset of the plurality of users has unsubscribed from the media service and that a second subset of the plurality of users has not unsubscribed from the media service. The system identifies a time slot typical for the first subset of users and atypical for the second subset of users based on content item consumption data of the first subset of users and content item consumption data of the second subset of users. In response to determining that a user is consuming a first content item at the identified time slot, the system generates for display a recommendation for a second content item that is scheduled for a different time slot.
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
G06Q 30/0201 - Market modelling; Market analysis; Collecting market data
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/458 - Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules
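One hypothetical way to identify a time slot typical for the unsubscribed subset and atypical for the retained subset is a histogram-share difference, sketched below with mocked hour-level data.

```python
# Illustrative identification of a churn-associated viewing slot; hour-level
# histograms and the share-difference criterion are assumptions.
churned_views = [20, 21, 21, 22, 21, 20]    # viewing hours, unsubscribed users
retained_views = [19, 19, 20, 18, 19, 20]   # viewing hours, retained users

def slot_share(hours: list[int], slot: int) -> float:
    return hours.count(slot) / len(hours)

# Pick the hour maximally typical for churned and atypical for retained users.
candidate_slots = set(churned_views) | set(retained_views)
risk_slot = max(candidate_slots,
                key=lambda h: slot_share(churned_views, h)
                              - slot_share(retained_views, h))
print("recommend alternative-slot content when viewing at hour", risk_slot)
```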
22.
METHODS AND SYSTEMS TO PROVIDE A PLAYLIST FOR SIMULTANEOUS PRESENTATION OF A PLURALITY OF MEDIA ASSETS
Systems and methods are described herein for generating a playlist for a simultaneous presentation of a plurality of media assets. The system retrieves a user preference associated with a user profile and receives a selection of a first media asset and a second media asset from the plurality of media assets for presentation on a user device. The system parses the respective audio streams of the first media asset and the second media asset to identify one or more preferred audio segments based on the user preference and generates the playlist of the identified one or more preferred audio segments. Based on the generated audio playlist, the system generates, for presentation on the user device, the video stream of each of the first media asset and the second media asset together with the playlist of the identified one or more preferred audio segments.
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/439 - Processing of audio elementary streams
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/482 - End-user interface for program selection
Embodiments of the present disclosure include systems for and methods of playing media based on local and remote data. A method falling within the disclosure includes: storing in a particular entry of a data structure stored in the local memory of the device: a media title, metadata, and an assigned identifier of the media streaming application via which a content item is received at the device; receiving a search request for the media title; searching the internet for a web page via which media associated with the media title can be played; identifying a particular entry in the data structure that comprises the media title to access the metadata stored in the data structure; comparing the metadata in the data structure and the metadata of the web page; and launching the media streaming application or opening a web application to play the media.
G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation; using light without selection of wavelength, e.g. sensing reflected white light
24.
METHODS AND SYSTEMS FOR SELECTING A 3D OBJECT FOR DISPLAY IN AN EXTENDED REALITY ENVIRONMENT
Systems and methods are described for selecting a 3D object for display in an extended reality environment. A space in an extended reality environment is determined for placement of a 3D object. A set of space parameters is determined, comprising an amount of memory available for generating the display of the extended reality environment and an amount of computing power available for generating the display of the extended reality environment. The 3D object is selected for display in the space based on the amount of memory and the amount of computing power available.
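A minimal sketch of the selection step follows, assuming a hypothetical table of object variants and treating the two space parameters as hard budgets.

```python
# Hedged sketch of choosing a 3D asset variant that fits the measured memory
# and compute budgets; the variant table is purely illustrative.
variants = [
    {"name": "hi_poly",  "mem_mb": 512, "gflops": 8.0},
    {"name": "mid_poly", "mem_mb": 128, "gflops": 2.0},
    {"name": "low_poly", "mem_mb": 32,  "gflops": 0.5},
]

def select_object(mem_available_mb: float, gflops_available: float) -> dict:
    fitting = [v for v in variants
               if v["mem_mb"] <= mem_available_mb
               and v["gflops"] <= gflops_available]
    # Prefer the richest variant that still fits both budgets; fall back to
    # the cheapest variant if nothing fits.
    return max(fitting, key=lambda v: v["mem_mb"], default=variants[-1])

print(select_object(mem_available_mb=200, gflops_available=3.0)["name"])
```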
Systems and methods are described for identifying a plurality of candidate interactive sessions for a user with a user profile to join, each candidate interactive session being associated with a plurality of user profiles. A digital representation of the user may be generated, and the digital representation of the user may be caused to join each of the plurality of candidate interactive sessions. The systems and methods may monitor, in each candidate interactive session, behavior of digital representations of each of the plurality of user profiles associated with the candidate interactive session in relation to the digital representation of the user. The systems and methods may generate, based on the monitoring, a social inclusivity score for each of the plurality of candidate interactive sessions. A recommended interactive session may be selected and provided based on the corresponding social inclusivity score for each candidate interactive session.
H04L 67/131 - Protocols for games, networked simulations or virtual reality
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
26.
Ecosystem for NFT Trading in Public Media Distribution Platforms
A computer-implemented method and an apparatus are provided for presenting, to an advertiser, an option to purchase an NFT based on a scene of a media asset. One example computer-implemented method includes obtaining, from a first source, a scene of a media asset, determining that the scene comprises a product, obtaining, from a second source, a non-fungible token (NFT) based on the scene, matching the NFT to an advertiser based on the product, and presenting an option to purchase the matched NFT to the advertiser.
Systems, methods and apparatuses are described for providing a summary associated with a virtual meeting. In response to detecting a break in presence (BIP) at a first computing device for a first user in the virtual meeting, each of one or more second computing devices participating in the virtual meeting and corresponding to at least one second user may be caused to locally monitor reactions of the corresponding at least one second user to the virtual meeting during the BIP. The server may receive one or more parameters associated with the locally monitored reactions and corresponding to a portion of the virtual meeting during the BIP. In response to determining to generate a summary associated with a corresponding portion of the virtual meeting during the BIP, based on the received one or more parameters, the summary may be generated and provided to the first computing device.
Systems and methods are described for identifying a plurality of candidate interactive sessions for a user with a user profile to join, each candidate interactive session being associated with a plurality of user profiles. A digital representation of the user may be generated, and the digital representation of the user may be caused to join each of the plurality of candidate interactive sessions. The systems and methods may monitor, in each candidate interactive session, behavior of digital representations of each of the plurality of user profiles associated with the candidate interactive session in relation to the digital representation of the user. The systems and methods may generate, based on the monitoring, a social inclusivity score for each of the plurality of candidate interactive sessions. A recommended interactive session may be selected and provided based on the corresponding social inclusivity score for each candidate interactive session.
H04L 67/131 - Protocols for games, networked simulations or virtual reality
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
29.
SYSTEMS AND METHODS FOR ENABLING NON-FUNGIBLE TOKENS (NFTs) IN A VIRTUAL/METAVERSE ENVIRONMENT
Systems and methods are disclosed for enabling non-fungible tokens (NFTs) in a virtual or metaverse environment by connecting an NFT marketplace to the virtual or metaverse environment and providing tools to display, sell, broker, and trade NFTs based on matching of user interests. The methods generate a weighted taxonomy based on NFTs displayed in the virtual environment. A new user's interest in the NFTs is determined in multiple ways, including whether the new user owns NFTs that share characteristics with the taxonomy-associated NFTs. A match is determined based on the new user's interests in relation to the weighted taxonomy. Upon a match, guidance is provided for avatars of the new user and other NFT owners to virtually meet. Separate servers, locations, and ingress points may be determined to facilitate meetings and buy/sell discussions. If an NFT sale/trade is executed, the sale/trade is recorded in the blockchain.
G06Q 20/12 - Payment architectures specially adapted for electronic shopping systems
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 40/04 - Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
30.
SYSTEMS AND METHODS FOR ENHANCING GROUP MEDIA SESSION INTERACTIONS
Systems and methods are provided for enabling enhanced group media session interactions. A group session for consuming a media content item is initiated between first and second computing devices, and a portion of the media content item is received at the computing devices. A reaction of a first user is captured based at least in part on receiving the portion of the media content item. A trigger condition is identified, and it is determined that the captured reaction satisfies the trigger condition. In response to determining that the captured reaction satisfies the trigger condition, a prompt that is based on the portion of the media content item and the captured reaction is generated. A computing device is identified, and at least one of the portion of the media content item, the captured reaction, or the prompt is transmitted to the identified computing device and is generated for output.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04L 65/1089 - In-session procedures by adding mediaIn-session procedures by removing media
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
31.
SYSTEMS AND METHODS FOR IMPROVING GROUPCAST MEDIA STREAMING USING METRIC INFORMATION IN DEVICE-TO-DEVICE COMMUNICATIONS
Systems and methods are provided for improving communications between computing devices. A content item is received at a first computing device, and a sidelink channel is initiated between the first computing device and a second computing device. A first portion of the content item is transmitted from the first computing device to the second computing device via the sidelink channel. Feedback is generated, based on a condition of the sidelink channel, at the second computing device, and the feedback is transmitted from the second computing device to the first computing device. An action to perform is identified based on the feedback, and the action is performed.
H04W 4/06 - Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
A computer-implemented method and a system are provided for amending sent text-based messages. One example computer-implemented method includes obtaining, from a source, a text-based message and receiving, at a user device, an inquiry of a portion of the text-based message. The computer-implemented method further includes requesting, from a network, data based on the inquiry of the portion of the text-based message, amending at least the portion of the text-based message based on the data, and presenting the amended portion of the text-based message at the user device.
H04M 1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
33.
MULTI-CAMERA MULTIVIEW IMAGING WITH FAST AND ACCURATE SYNCHRONIZATION
There is provided a method comprising: receiving a communication from one or more devices capable of recording content, determining, using a wireless communication transceiver, a geographical location of the one or more devices, determining an orientation of the one or more devices, receiving content capturing an event and recorded on the one or more devices, storing the content capturing an event and recorded on the one or more devices, and creating, from a collection of recordings comprising at least the stored content capturing an event and recorded on the one or more devices, a single representation of the event by combining segments of the collection of recordings.
Systems and methods are described for determining a position of a user device in a field of view of a user in an XR environment. One or more display elements are generated for display in the XR environment relative to the position of the user device in the field of view. Each display element comprises a user interface of an executable application for controlling the user device.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Systems and methods for encoding/decoding a 3D image are provided. The system accesses image data that comprises texture data and a depth map. The system decomposes the depth map into a plurality of component depth maps (CDMs) for a plurality of depth ranges, wherein each component depth map corresponds to a focal plane of a multiple focal plane (MFP) decomposition of the image data. The system generates a plurality of encoded CDM data streams for the plurality of depth ranges, wherein each respective CDM data stream is based at least in part on a respective CDM. The system then transmits the plurality of encoded CDM data streams to a client device to cause the client device to: (a) reconstruct the depth map, and (b) generate for display or for further processing an image based on the reconstructed depth map.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
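The per-range decomposition can be sketched with numpy; the three equal-width depth ranges and zero-fill outside each range are assumptions, not the claimed decomposition.

```python
# Sketch of decomposing a depth map into per-range component depth maps
# (CDMs), one per focal plane; range edges and zero-fill are assumptions.
import numpy as np

def decompose_depth(depth: np.ndarray, n_planes: int = 3) -> list[np.ndarray]:
    edges = np.linspace(float(depth.min()), float(depth.max()), n_planes + 1)
    cdms = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # Half-open ranges, closed on the far end for the last plane, so
        # every depth value lands in exactly one CDM.
        mask = (depth >= lo) & ((depth <= hi) if i == n_planes - 1
                                else (depth < hi))
        cdms.append(np.where(mask, depth, 0.0))
    return cdms  # each CDM corresponds to one focal plane of the MFP stack

depth_map = np.array([[0.1, 0.5], [0.9, 0.3]])
for i, cdm in enumerate(decompose_depth(depth_map)):
    print(f"CDM {i} (focal plane {i}):")
    print(cdm)
```

Summing the CDMs recovers the original depth map, which is how a client could reconstruct it from the decoded streams.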
36.
Systems and Methods to Provide Otherwise Obscured Information to a User
Systems and methods are presented for enhancing media or information consumption. An example method includes identifying a movement of a first user, identifying an object referenced by the movement of the first user external to the first user, and in response to determining that the object is at least partially obstructed from a field of view of a second user, generating, for display on a display that is within the field of view of the second user, a view of the object.
B60R 1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
Methods and systems are described for conversion of imagery and video for three-dimensional (3D) displays, four-dimensional experiences, next-generation user interfaces, virtual reality, augmented reality, mixed reality experiences, and interactive experiences into imagery and video suitable for a two-dimensional (2D) display. A 2D display is configured to generate a 3D-like effect. 3D images are analyzed and represented by parameters including movement, depth, motion, shadow, focus, sharpness, intensity, and color. Using the parameters, the 3D images are converted to 2D images that include the 3D-like effect. The 2D images are presented to users to generate feedback. The feedback informs changes to the conversion. Artificial intelligence systems, including neural networks, are trained for improving the conversion. Models are developed for improving the conversion. Related apparatuses, devices, techniques, and articles are also described.
Systems and methods are disclosed for enabling non-fungible tokens (NFTs) in a virtual or metaverse environment by connecting an NFT marketplace to the virtual or metaverse environment and providing tools to display, sell, broker, and trade NFTs based on matching of user interests. The methods generate a weighted taxonomy based on NFTs displayed in the virtual environment. A new user's interest in the NFTs is determined in multiple ways, including whether the new user owns NFTs that share characteristics with the taxonomy-associated NFTs. A match is determined based on the new user's interests in relation to the weighted taxonomy. Upon a match, guidance is provided for avatars of the new user and other NFT owners to virtually meet. Separate servers, locations, and ingress points may be determined to facilitate meetings and buy/sell discussions. If an NFT sale/trade is executed, the sale/trade is recorded in the blockchain.
There is provided a method comprising: receiving a communication from one or more devices capable of recording content, determining, using a wireless communication transceiver, a geographical location of the one or more devices, determining an orientation of the one or more devices, receiving content capturing an event and recorded on the one or more devices, storing the content capturing an event and recorded on the one or more devices, and creating, from a collection of recordings comprising at least the stored content capturing an event and recorded on the one or more devices, a single representation of the event by combining segments of the collection of recordings.
A host server of a digital platform, such as a virtual world or augmented reality platform, receives a request to graphically indicate an affiliation between a user of an avatar and an organization or other entity. The host server queries an authenticating server to authenticate the affiliation between the user and the entity. Accordingly, the host server generates for display a logo, or other indication, to indicate the authenticated affiliation. Other users of the digital platform can learn whether the user of the avatar is actually affiliated with the entity. Access to digital spaces, virtual objects and some interactions of the avatar may be controlled according to an access policy of the entity.
Systems and methods for encoding/decoding a 3D image are provided. The system decomposes a depth map into a plurality of component depth maps (CDMs) for a plurality of depth ranges, wherein each component depth map corresponds to a focal plane of a multiple focal plane (MFP) decomposition of the image data. The system generates a plurality of component depth map focal planes (CDMFPs) by combining each respective CDM with the depth map. The system scales data in each CDMFP by a respective scaling factor. The system generates for transmission a plurality of encoded scaled CDMFP data streams for the plurality of depth ranges, wherein each respective scaled CDMFP data stream is based at least in part on a respective scaled CDMFP.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Systems and methods are described for encrypting and decrypting data in a distributed storage environment. Such systems and methods for encryption may divide a data payload into slices, the slices including a first slice and a subsequent slice, employ a content encryption key and an initialization vector, encrypt the first slice using the content encryption key and the initialization vector, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the unencrypted content of the first slice, and encrypt the subsequent slice using the subsequent initialization vector and the content encryption key. The systems and methods may then generate a list of the encrypted slices into which the data payload has been divided, and publish, to a secure storage location, the slice list, the content encryption key and the initialization vector for the first slice in the slice list, with the slices output to the distributed storage environment. Systems and methods for decryption may receive, from a secure storage location, a slice list, a content encryption key, and an initialization vector, and determine the encrypted slices to be received from the distributed storage environment. The systems and methods may receive, from the distributed storage environment, at least the encrypted first slice and the encrypted subsequent slice, decrypt the first slice using the content encryption key and the initialization vector to generate a decrypted first slice, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the decrypted first slice, decrypt the subsequent slice using the subsequent initialization vector and the content encryption key, and combine the first slice and the subsequent slice into a data payload.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
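The IV-chaining scheme lends itself to a self-contained toy, shown below. The SHA-256-based XOR keystream stands in for a real cipher (e.g., AES) and is not secure; only the derivation of each subsequent initialization vector from the prior IV and the prior slice's plaintext follows the abstract.

```python
# Toy of the described IV chaining: each subsequent IV derives from the prior
# IV plus the prior slice's *unencrypted* content. The XOR keystream is a
# stand-in cipher for illustration, not a secure implementation.
import hashlib

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_slices(payload: bytes, key: bytes, iv: bytes, slice_size: int = 16):
    slices = [payload[i:i + slice_size]
              for i in range(0, len(payload), slice_size)]
    encrypted, slice_list = [], []
    for i, plain in enumerate(slices):
        cipher = bytes(p ^ k
                       for p, k in zip(plain, keystream(key, iv, len(plain))))
        encrypted.append(cipher)
        slice_list.append(f"slice-{i}")  # hypothetical slice identifier
        # Subsequent IV derives from the current IV and the plaintext slice.
        iv = hashlib.sha256(iv + plain).digest()[:16]
    return slice_list, encrypted

slice_list, enc = encrypt_slices(b"a data payload split into slices",
                                 b"k" * 16, b"i" * 16)
print(slice_list, [c.hex() for c in enc])
```

Decryption mirrors this loop: regenerate the keystream from the published key and first IV, XOR to recover each plaintext slice, then derive the next IV from the recovered slice, exactly as the abstract describes.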
Systems and methods are described for encrypting and decrypting data in a distributed storage environment. Such systems and methods for encryption may divide a data payload into slices, the slices including a first slice and a subsequent slice, employ a content encryption key and an initialization vector, encrypt the first slice using the content encryption key and the initialization vector, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the unencrypted content of the first slice, and encrypt the subsequent slice using the subsequent initialization vector and the content encryption key. The systems and methods may then generate a list of the encrypted slices into which the data payload has been divided, and publish, to a secure storage location, the slice list, the content encryption key and the initialization vector for the first slice in the slice list, with the slices output to the distributed storage environment. Systems and methods for decryption may receive, from a secure storage location, a slice list, a content encryption key, and an initialization vector, and determine the encrypted slices to be received from the distributed storage environment. The systems and methods may receive, from the distributed storage environment, at least the encrypted first slice and the encrypted subsequent slice, decrypt the first slice using the content encryption key and the initialization vector to generate a decrypted first slice, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the decrypted first slice, decrypt the subsequent slice using the subsequent initialization vector and the content encryption key, and combine the first slice and the subsequent slice into a data payload.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
44.
MULTI-FORMAT REPRESENTATION AND CODING OF VISUAL INFORMATION
Systems and methods are provided for using multi-format representation and coding of visual information. The system accesses image data that comprises texture data and a depth map; decomposes the depth map into a plurality of component depth maps (CDMs); and generates multiple focal planes (MFPs) comprising a plurality of focal planes. Each respective focal plane is based on the texture data and a respective CDM of the plurality of CDMs. The system selects a data subset including one or more of: (a) the texture data, (b) the depth map, (c) the plurality of CDMs, or (d) the plurality of focal planes; generates encoded data based on the selected data subset; and transmits, over a communication network, the encoded data to a client device to cause the client device to generate for display or for further processing an image based on the encoded data.
Systems and methods are provided herein for detecting key words provided by ancillary devices and acquiring virtual objects based on the detected key words. This may be accomplished by a system displaying an augmented reality view to a user and detecting a received message. The system can determine whether a portion of the message corresponds to an augmented reality object. In response to detecting that the portion of the message corresponds to the augmented reality object, the system can display the augmented reality object in the augmented reality view in a first format. The first format can be based on the environment around the user.
Systems and methods are provided for enabling payments in an extended reality environment. A virtual space is mapped to a physical space at an extended reality device, and a virtual payment location is identified in the virtual space, where the virtual payment location corresponds to a location in the physical space. A collision with the virtual payment location is detected and a payment is initiated based on the collision. A payment request is transmitted from the extended reality device, and confirmation of the payment is received at the extended reality device. Confirmation of the payment is generated for output.
G06Q 20/30 - Payment architectures, schemes or protocols characterised by the use of specific devices
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentialsReview and approval of payers, e.g. check of credit lines or negative lists
Systems and methods are provided for estimating quality of experience (QoE) for a media stream. The systems and methods comprise receiving a first window of frames of the media stream, receiving a second window of frames of the media stream, measuring a plurality of metrics relating to the first and the second windows of frames, aggregating the plurality of metrics for each window of frames, and determining a window quality of experience value based on the aggregated plurality of metrics.
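A windowed aggregation like the one described might look as follows; the three per-frame metrics and their weights are invented for the sketch.

```python
# Sketch of windowed QoE: per-frame metrics are averaged within each window
# and combined into a window score. Metric names and weights are assumptions.
WEIGHTS = {"sharpness": 0.5, "stall": -0.3, "bitrate": 0.2}

def window_qoe(frames: list[dict]) -> float:
    # Aggregate each metric across the window's frames (here: mean)...
    agg = {m: sum(f[m] for f in frames) / len(frames) for m in WEIGHTS}
    # ...then combine the aggregates into one window QoE value.
    return sum(WEIGHTS[m] * agg[m] for m in WEIGHTS)

window_1 = [{"sharpness": 0.9, "stall": 0.0, "bitrate": 0.8},
            {"sharpness": 0.8, "stall": 0.1, "bitrate": 0.8}]
window_2 = [{"sharpness": 0.6, "stall": 0.4, "bitrate": 0.5},
            {"sharpness": 0.5, "stall": 0.5, "bitrate": 0.5}]
for i, w in enumerate((window_1, window_2), start=1):
    print(f"window {i} QoE: {window_qoe(w):.3f}")
```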
Systems and methods for conceptualizing an initial object by using a reference object are described. The methods identify an initial object such as a virtual object or a live object. Characteristic(s) of the initial object may be obtained and used in a search query to search for a reference object with which a user has interacted. In some instances, the characteristic(s) may be obtained upon determining an interest in the initial object. Reference objects that include the characteristic(s) and have a user interaction may be identified and scored. The reference objects, if more than one is identified, may be ranked based on the score, and a reference object may be selected for display. The display may include both the selected reference object and the initial object, and provide context to the initial object such that the user may be able to relate to it based on their interactions with the reference object.
Systems and methods are described for configuring adaptive streaming of content items (e.g., extended reality experiences, or any videos, including 360° videos), and selecting a version of the content item based on desired content comfort rating(s) which may be determined based on monitoring discomfort trends of a user. A determination is made of whether the discomfort trend exceeds a threshold, and if so, a version of the same content item that is rated for the desired discomfort rating is used, where the selected version is more comfortable for the user than the originally scheduled content item. Alternatively, the user's actual discomfort is measured during consumption of content item and used to select a version of the content that is more comfortable. In a live setting, specific enhancement of the content item can be selected, such as view from a specific camera, an angle of the camera, zoomed in/out image, etc.
Systems and methods are disclosed herein for temporally predictive coding of three-dimensional (3D) dynamic point cloud attributes. A first frame and a second frame of point cloud data are accessed. The point cloud data points include 3D spatial coordinates and one or more graphic attributes. A block tree data structure comprising a plurality of blocks is generated based on a tree partitioning of the second frame of point cloud data. Matching block pairs between the first frame and the second frame are identified from the plurality of blocks based on block-wise searching. Frequency-domain projections are generated for each matching block pair via a graph Fourier transform (GFT) algorithm. A bitstream of motion-compensated residuals is generated based on differences in the frequency-domain projections for each matching block pair.
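A compact numpy sketch of the frequency-domain residual step follows: attributes of a matched block pair are projected onto the eigenbasis of the block's graph Laplacian (a graph Fourier transform) and differenced. The radius-graph construction and the mocked block pair are simplifying assumptions; block-tree partitioning, matching, and entropy coding are omitted.

```python
# Sketch of GFT-based residuals for one matched block pair; graph
# construction and the example data are illustrative assumptions.
import numpy as np

def graph_fourier_basis(points: np.ndarray, radius: float = 1.5) -> np.ndarray:
    # Connect points closer than `radius` to form the block's graph.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adjacency = ((dists < radius) & (dists > 0)).astype(float)
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    # GFT basis = eigenvectors of the graph Laplacian.
    _, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs

# A matched block pair: same 3D coordinates, attributes changed over time.
coords = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
attrs_t0 = np.array([0.2, 0.4, 0.6, 0.8])      # e.g., luminance in frame 1
attrs_t1 = np.array([0.25, 0.45, 0.55, 0.85])  # frame 2, motion-compensated

basis = graph_fourier_basis(coords)
residual = basis.T @ attrs_t1 - basis.T @ attrs_t0  # frequency-domain residual
print("coefficients to entropy-code:", np.round(residual, 3))
```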
A method and an apparatus are provided for assigning users to virtual world servers based on social connectedness. One example method includes receiving a request to connect a client device to one of a plurality of virtual world servers and accessing social network connectivity data of a user account associated with the client device. The method further includes identifying a plurality of other user accounts based on the social network connectivity data, ranking the plurality of virtual world servers based on connections with devices associated with the plurality of other user accounts, and connecting the client device to a virtual world server of the plurality of virtual world servers based on the ranking.
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
G06F 16/908 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
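The ranking step reduces to counting the requesting user's social connections already present on each server, as in this mocked sketch.

```python
# Illustrative ranking of virtual world servers by social connectedness;
# the friend set and server rosters are mocked for the example.
friends = {"u2", "u3", "u4"}  # from social network connectivity data
servers = {
    "world-a": {"u2", "u9"},
    "world-b": {"u3", "u4", "u7"},
    "world-c": {"u8"},
}

# Rank servers by overlap with the user's connections, then connect to the top.
ranked = sorted(servers, key=lambda s: len(servers[s] & friends), reverse=True)
print("connect client to:", ranked[0], "| ranking:", ranked)
```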
52.
SYSTEMS AND METHODS FOR CONFIGURING ADAPTIVE STREAMING OF CONTENT ITEMS BASED ON USER COMFORT LEVELS
Systems and methods are described for configuring adaptive streaming of content items (e.g., extended reality experiences, or any videos, including 360° videos), and selecting a version of the content item based on desired content comfort rating(s), which may be determined based on monitoring discomfort trends of a user. A determination is made of whether the discomfort trend exceeds a threshold, and if so, a version of the same content item that is rated for the desired comfort rating is used, where the selected version is more comfortable for the user than the originally scheduled content item. Alternatively, the user's actual discomfort is measured during consumption of the content item and used to select a version of the content that is more comfortable. In a live setting, a specific enhancement of the content item can be selected, such as a view from a specific camera, an angle of the camera, a zoomed-in/out image, etc.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
H04N 13/178 - Metadata, e.g. disparity information
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
Systems and methods are described for determining an amount of time for charging a vehicle battery and selecting one or more media content items for display on a user device based on the amount of time for charging the vehicle battery. A level of driving autonomy of a vehicle is determined. An audio and/or video setting of the media content item is adjusted based on the level of driving autonomy.
B60L 58/12 - Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles for monitoring or controlling batteries responding to state of charge [SoC]
B60L 53/62 - Monitoring or controlling charging stations in response to charging parameters, e.g. current, voltage or electrical charge
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
Systems and methods are described for determining an amount of time for charging a vehicle battery and selecting one or more media content items for display on a user device based on the amount of time for charging the vehicle battery. A level of driving autonomy of a vehicle is determined. An audio and/or video setting of the media content item is adjusted based on the level of driving autonomy.
A virtual reality play area is defined. Movement of an object in the vicinity of the play area may then be detected. Based on the movement of the object, it may be determined whether the object is projected to enter the play area. If the object is projected to enter the play area, a representation of the object is generated for display to the user.
G08B 21/02 - Alarms for ensuring the safety of persons
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G08B 7/06 - Signalling systems according to more than one of groups G08B 3/00-G08B 6/00; Personal calling systems according to more than one of groups G08B 3/00-G08B 6/00 using electric transmission
G08C 17/02 - Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
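A compact sketch of the projection test described in the entry above, assuming 2D positions, linear extrapolation of the object's last observed motion, and a rectangular play area; a production system would track in 3D with richer motion models:

def projected_to_enter(positions, play_area, horizon_s=2.0, dt_s=0.1):
    """positions: the object's last two (x, y) samples, dt_s apart;
    play_area: (xmin, ymin, xmax, ymax). Returns True if the extrapolated
    path crosses the play area within horizon_s seconds."""
    (x0, y0), (x1, y1) = positions
    vx, vy = (x1 - x0) / dt_s, (y1 - y0) / dt_s
    xmin, ymin, xmax, ymax = play_area
    for step in range(1, int(horizon_s / dt_s) + 1):
        x, y = x1 + vx * step * dt_s, y1 + vy * step * dt_s
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True   # projected to enter: generate the object's representation
    return False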
56.
SYSTEMS AND METHODS FOR ENABLING A VIRTUAL ASSISTANT IN DIFFERENT ENVIRONMENTS
Systems and methods are provided for enabling the protection of user privacy when adding a virtual assistant to a conference. A conference is initiated between a first computing device and at least a second computing device and a virtual assistant is added to the conference. At the virtual assistant, it is identified that the virtual assistant is in the conference and a guest mode is activated in response. A query is received at the virtual assistant and based on the query and the guest mode, an action is identified. The identified action is performed via the virtual assistant.
Systems and methods are described for generating a virtual reality (VR) environment comprising an interactive object, wherein the interactive object is associated with a service provider and is generated based on a user profile associated with a current VR session in the VR environment. The systems and methods may detect user input in association with one or more options associated with the interactive object, and, based on the detecting, cause an action to be performed in association with the user profile and the service provider associated with the interactive object, wherein the action comprises accessing a service provided by the service provider, the service being external to the VR environment.
Methods and systems for video compression at scene changes provide an improved, low-latency interactive experience in cloud computing environments. Exemplary use cases include all forms of cloud gaming including cloud-enabled interactive sporting events, e-sports, fantasy sports, gaming, and enhancements. Improvements in performance and experience are achieved with at least one of an extreme low latency rate controller, an extreme low latency rate controller method, frame partitioning at scene changes, preventive (relatively early) termination of encoding at scene changes, or interactive signaling between a decoder and an encoder. Related apparatuses, devices, techniques, and articles are also described.
H04N 19/146 - Data rate or code amount at the encoder output
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding; the unit being an image region, e.g. an object; the region being a picture, frame or field
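One way the "preventive termination of encoding at scene changes" idea above could be sketched, assuming a hypothetical encoder object with abort and intra-refresh hooks and a simple luma-difference scene-change detector; none of these names come from the entry itself:

import numpy as np

SCENE_CHANGE_THRESHOLD = 30.0   # mean absolute luma difference; tuned empirically

def encode_with_scene_cut_handling(encoder, prev_luma, curr_luma):
    """Flag a scene change from the raw luma difference; on a cut, abort the
    in-flight encode early and restart the new scene on an intra frame."""
    if prev_luma is not None:
        mad = float(np.mean(np.abs(curr_luma.astype(np.int16) -
                                   prev_luma.astype(np.int16))))
        if mad > SCENE_CHANGE_THRESHOLD:
            encoder.abort_inflight_frame()   # hypothetical hook: early termination
            encoder.force_intra_refresh()    # hypothetical hook: new GOP at the cut
    return encoder.encode(curr_luma)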
Systems and methods are provided for using a Multi Focal Plane (MFP) prediction in predictive coding. The system detects a camera viewpoint change between a current frame from a current camera viewpoint to a previous frame from a previous camera viewpoint, decomposes a reconstructed previous frame to a plurality of focal planes, adjusts the plurality of focal planes from the previous camera viewpoint to correspond with the current camera viewpoint, generates an MFP prediction by summing pixel values of the adjusted plurality of focal planes along a plurality of optical axes from the current camera viewpoint, determines an MFP prediction error between the MFP prediction and the current frame, quantizes and codes the MFP prediction error, and transmits, to a receiver over a communication network, the camera viewpoint change and the coded quantized MFP prediction error for reconstruction of the current frame and display of the 3D scene.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 19/103 - Selection of coding mode or of prediction mode
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
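A toy illustration of the MFP prediction loop described in the entry above, assuming purely horizontal camera motion, hard depth-bin decomposition, and a parallax shift proportional to inverse depth; the real system operates on calibrated viewpoints and optical axes:

import numpy as np

def mfp_predict(prev_frame, depth_map, camera_dx, plane_depths=(1.0, 2.0, 4.0, 8.0)):
    """Decompose the reconstructed previous frame into focal planes by depth,
    shift each plane for the viewpoint change (nearer planes shift more), and
    sum the planes back into a predicted current frame."""
    prediction = np.zeros_like(prev_frame, dtype=np.float64)
    edges = [0.0, *plane_depths]
    for near, far in zip(edges[:-1], edges[1:]):
        mask = (depth_map > near) & (depth_map <= far)
        plane = np.where(mask, prev_frame, 0.0)     # one focal plane of the frame
        shift = int(round(camera_dx / far))         # parallax ~ 1 / depth
        plane = np.roll(plane, shift, axis=1)       # adjust plane to new viewpoint
        prediction += plane                         # sum along the optical axes
    return prediction

# residual = current_frame - mfp_predict(...); the residual is quantized and coded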
Systems and methods are provided for using a Multiple Depth Plane (MDP) prediction in predictive coding. The system detects a camera viewpoint change between a current frame and a previous frame, decomposes a reconstructed depth map of the previous frame to a plurality of depth planes, adjusts the plurality of depth planes from a previous camera viewpoint to correspond with a current camera viewpoint, generates an MDP prediction by summing pixel values of the adjusted plurality of depth planes along a plurality of optical axes from the current camera viewpoint, determines an MDP prediction error between the MDP prediction and a depth map of the current frame, quantizes and codes the MDP prediction error, and transmits, to a receiver over a communication network, the camera viewpoint change and the coded quantized MDP prediction error for reconstruction of a depth map of the current frame.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/166 - Feedback from the receiver or from the transmission channel concerning the amount of transmission errors, e.g. bit error rate [BER]
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding; the unit being an image region, e.g. an object; the region being a picture, frame or field
H04N 19/89 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
61.
SLANTED BURIED DIFFRACTIVE GRATINGS FOR OPTICAL ELEMENTS OF AUGMENTED REALITY AND VIRTUAL REALITY DISPLAYS
Head-mounted displays (HMDs) or other suitable optical equipment with waveguides comprising one or more slanted buried diffractive gratings, and methods for fabricating said waveguides, are described herein. In an embodiment, an HMD comprises an optical element and an image source that provides an image beam to the optical element. The optical element may comprise a first flat surface, a second flat surface, and a buried diffractive grating disposed between the first surface and the second surface. The buried diffractive grating may be positioned in a slanted arrangement at a particular angle relative to the first flat surface and the second flat surface.
Systems and methods are provided for creating enhanced VR content. The system generates, for display on a user device, a view of a virtual 3D environment. The system detects that a location in the view of the virtual 3D environment matches one or more criteria. In response to detecting that the location in the virtual 3D environment matches the one or more criteria, the system automatically stores an image of the virtual 3D environment.
Systems and methods for mitigating cybersickness caused by the display of content, such as a 360° video or a virtual reality experience, are disclosed. The methods measure biometrics of a user to determine a cybersickness score. The score is associated with a cybersickness severity level. A determination is made whether the user's cybersickness severity level exceeds a threshold, and, if so, mitigation or remedial actions are automatically performed. The mitigation options range from altering content, changing device configuration, and automating home automation devices to automating body electronics worn by the user. The type of mitigation option selected is based on the user's cybersickness severity level. The methods also determine demographics of a plurality of users who encountered cybersickness due to engagement with the content. A match between the user's demographics and the plurality of users is determined, and mitigation options are selected on the basis of the match.
A61M 21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
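A schematic sketch of the score-to-mitigation mapping described in the entry above; the biometric weighting, the severity bands, and the mitigation table are invented for illustration only:

def cybersickness_score(heart_rate_delta, postural_sway, blink_rate):
    """Combine biometric signals into a single score; weights are placeholders."""
    return 0.5 * heart_rate_delta + 0.3 * postural_sway + 0.2 * blink_rate

MITIGATIONS = [   # (minimum severity level, action), ordered mildest first
    (1, "alter content (reduce FOV / motion)"),
    (2, "change device configuration (lower persistence)"),
    (3, "trigger home automation (lights, fan)"),
    (4, "activate body-worn electronics"),
]

def select_mitigation(score, threshold=40.0):
    """No action below the threshold; above it, escalate with severity."""
    if score <= threshold:
        return None
    severity = min(4, 1 + int((score - threshold) // 20))   # map score to level 1-4
    return [action for level, action in MITIGATIONS if level <= severity][-1]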
64.
SYSTEMS AND METHODS FOR EMULATING A USER DEVICE IN A VIRTUAL ENVIRONMENT
A user device associated with a user interacting with a virtual environment is identified. Using an emulation application, a virtual instance of the user device is launched. User preferences for the user device, retrieved from user data, are then applied to the virtual instance of the user device. A graphical representation of the virtual instance of the user device is then generated for presentation to the user within the virtual environment.
Systems and methods for mitigating cybersickness caused by the display of content, such as a 360° video or a virtual reality experience, are disclosed. The methods measure biometrics of a user to determine a cybersickness score. The score is associated with a cybersickness severity level. A determination is made whether the user's cybersickness severity level exceeds a threshold, and, if so, mitigation or remedial actions are automatically performed. The mitigation options range from altering content, changing device configuration, and automating home automation devices to automating body electronics worn by the user. The type of mitigation option selected is based on the user's cybersickness severity level. The methods also determine demographics of a plurality of users who encountered cybersickness due to engagement with the content. A match between the user's demographics and the plurality of users is determined, and mitigation options are selected on the basis of the match.
A61M 21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
66.
SYSTEMS AND METHODS FOR NAVIGATING AN EXTENDED REALITY HISTORY
A plurality of snapshots from XR sessions are retrieved. A plurality of entities within the plurality of snapshots are identified. Based on the identified plurality of entities, a plurality of salient snapshots is identified. The plurality of snapshots is partitioned into contiguous clusters, with each cluster containing a salient snapshot. The salient snapshots are generated for presentation to the user and, in response to selection of a salient snapshot, a subset of the plurality of entities from within a cluster containing the selected salient snapshot is generated for presentation to the user. In response to selection of a presented entity of the presented subset of the plurality of entities, snapshots including the selected entity are generated for presentation. In response to selection of a snapshot, an XR scene corresponding to the selected snapshot is generated for presentation.
Systems and methods are described for providing a pace indicator in an extended reality environment. First route data of a first route is determined, wherein the first route data comprises a pace of a first user moving along the first route. Second route data of a second route is determined, wherein the second route data comprises a pace of a second user moving along the second route. A pace indicator is provided to the first user moving along the first route based on the first route data and the second route data, wherein the pace indicator comprises an avatar moving along the first route in an extended reality environment, the avatar representing the second user moving along the second route.
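A small sketch of the pace comparison that would drive such an avatar, assuming pace is stored as seconds per kilometre and the avatar is placed along the route by distance covered; the representation and units are assumptions:

def distance_along_route_km(elapsed_s, pace_s_per_km):
    """Distance covered after elapsed_s seconds at the given pace."""
    return elapsed_s / pace_s_per_km

def avatar_lead_km(elapsed_s, first_user_pace, second_user_pace):
    """Positive result: the avatar (second user's recorded pace) is ahead."""
    return (distance_along_route_km(elapsed_s, second_user_pace) -
            distance_along_route_km(elapsed_s, first_user_pace))

# e.g. after 600 s, a 300 s/km avatar leads a 330 s/km runner by ~0.18 km
print(avatar_lead_km(600, 330, 300))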
Systems and methods are described for modifying a media guidance application. Such systems and methods may aid a user in selecting media content for viewing which may be of particular interest to them. Such systems and methods may receive programming information from one or more program guide sources, generate a media guidance application for display based upon the received programming information, receive behavior information from at least one further source, and generate parameters for modifying the media guidance application in response to the behavior information. The systems and methods may then modify the media guidance application based upon the generated parameters and display the modified media guidance application.
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/462 - Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
Systems, methods and apparatuses are described herein for encoding image data comprising two-dimensional (2D) perspective images that exhibit parallax for presentation on a three-dimensional (3D) display. The image data may be accessed and encoded by generating a group of pictures (GOP) that comprises the 2D perspective images and ordering the 2D perspective images within the GOP in a particular order based on a set of evaluated metrics derived from content of the plurality of 2D perspective images or based on characteristics associated with equipment used to capture the plurality of 2D perspective images. The encoded image data may be transmitted for display.
Systems and methods for bandwidth-adaptive light field video transmission on mobile and portable devices are disclosed. An upstream bandwidth is estimated. A request for a service tier for capture and transmission of light field content is received, wherein the light field content comprises an image array of a plurality of sub-aperture images. When the requested service tier is greater than the estimated upstream bandwidth, a reduced service tier is determined based on the estimated upstream bandwidth. A number of sub-aperture images comprising a reduced image array is determined based on the reduced service tier. The image array is reduced to the reduced image array based on feature saliency and adjacency of sub-aperture images. Resources corresponding to the reduced service tier are provided for capture and transmission of the reduced image array.
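A greedy sketch of the saliency- and adjacency-based reduction described above; the scoring, the adjacency bonus, and the data shapes are illustrative assumptions:

def reduce_image_array(saliency, keep):
    """saliency: {(row, col): score} for each sub-aperture view;
    keep: image budget for the reduced tier (assumed <= len(saliency)).
    Keeps the most salient views, preferring views adjacent to kept ones."""
    kept = {max(saliency, key=saliency.get)}          # seed with most salient view
    while len(kept) < keep:
        def gain(view):
            r, c = view
            adjacent = any(abs(r - kr) + abs(c - kc) == 1 for kr, kc in kept)
            return saliency[view] + (0.5 if adjacent else 0.0)
        kept.add(max((v for v in saliency if v not in kept), key=gain))
    return sorted(kept)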
Systems and methods for broadcasting images identifying a destination device as it appears in an environment for content transfer are disclosed. Systems include a first device which selects a profile image of itself as it appears in its environment and embeds the profile image in its identification profile. The first device transmits the identification profile over a network during a discovery phase and the identification profile is received by a second device located in proximity to the first device. The identification profile is verified as corresponding to the first device by comparing the profile image to a real-time image of the first device. When the profile image substantially matches the real-time image of the first device, pairing between the first device and second device is initiated or content is sent over the network from the second device to the first device.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 9/451 - Execution arrangements for user interfaces
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
Systems and methods are described for modifying a media guidance application. Such systems and methods may aid a user in selecting media content for viewing which may be of particular interest to them. Such systems and methods may receive programming information from one or more program guide sources, generate a media guidance application for display based upon the received programming information, receive behavior information from at least one further source, and generate parameters for modifying the media guidance application in response to the behavior information. The systems and methods may then modify the media guidance application based upon the generated parameters and display the modified media guidance application.
Systems and methods are described for modifying a media guidance application. Such systems and methods may aid a user in selecting media content for viewing which may be of particular interest to them. Such systems and methods may receive programming information from one or more program guide sources, generate a media guidance application for display based upon the received programming information, receive behavior information from at least one further source, and generate parameters for modifying the media guidance application in response to the behavior information. The systems and methods may then modify the media guidance application based upon the generated parameters and display the modified media guidance application.
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/482 - End-user interface for program selection
H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
74.
ATTRIBUTE-BASED CONTENT RECOMMENDATIONS INCLUDING MOVIE RECOMMENDATIONS BASED ON METADATA
Improved content recommendations are generated based on a knowledge graph of a content item, which is based on an attribute of the content item, metadata regarding the content item, a viewing history, and user preferences determined by analysis and selected by a user. An option for selecting attributes of interest from a plurality of attributes is generated for display. A content recommendation based on the selected attributes is generated and displayed in a user interface, which changes as user preference selections change. As a result, a user quickly identifies and consumes a customized list of content items related to the user's favorite actor, character, title, depicted object, depicted setting, actual setting, type of action, type of interaction, genre, release date, release decade, director, MPAA rating, critical rating, plot origin point, plot end point, and the like. Related apparatuses, devices, techniques, and articles are also described.
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/482 - End-user interface for program selection
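A toy sketch of attribute-driven ranking in the spirit of the entry above, reducing the knowledge-graph scoring to simple attribute-set overlap; the catalog and attribute names are fabricated examples:

def recommend(catalog, selected_attributes, top_n=5):
    """catalog: list of {'title': str, 'attributes': set}; returns the titles
    that best match the user's selected attributes of interest."""
    def score(item):
        return len(item["attributes"] & selected_attributes)
    ranked = sorted(catalog, key=score, reverse=True)
    return [item["title"] for item in ranked[:top_n] if score(item) > 0]

picks = recommend(
    [{"title": "Heist Night", "attributes": {"thriller", "1990s", "ensemble"}},
     {"title": "Quiet Farm", "attributes": {"drama", "rural"}}],
    {"thriller", "1990s"})                              # -> ["Heist Night"]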
75.
FIELD OF VISION AUDIO CONTROL FOR PHYSICAL OR MIX OF PHYSICAL AND EXTENDED REALITY MEDIA DISPLAYS IN A SPATIALLY MAPPED SPACE
Systems and methods for controlling the volume of content displayed on displays, such as physical and extended reality displays, based on the pose of an extended reality (XR) headset, or the gaze therefrom, are disclosed. The methods spatially map displays and audio devices on which the content is to be outputted. The methods also monitor the 6DOF pose of the XR headset worn by the user to consume the displayed content. Based on a user's current pose or gaze, the methods determine a field of view (FOV) from the XR headset and the displays that fall within the FOV. The volume of the displays is controlled based on where the display is located relative to the pose or gaze. The volume of a display that is within a threshold angle of the gaze is increased, and the volume of other displays is minimized or muted, and/or the content is displayed as closed captioning.
Systems and methods are described for selecting a 3D object for display in an extended reality environment. A space in an extended reality environment is determined for placement of a 3D object. A set of space parameters are determined comprising: an amount of memory available for generating the display of the extended reality environment and an amount of computing power available for generating the display of the extended reality environment. The 3D object is selected for display in the space based on the amount of memory and the amount of computing power available.
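A minimal sketch of the budget-constrained selection above, assuming the 3D object is available at several levels of detail with known memory and compute costs; the table values are invented:

LODS = [   # (name, memory in MB, compute in GFLOPS), highest fidelity first
    ("high",   512, 4.0),
    ("medium", 128, 1.5),
    ("low",     32, 0.4),
]

def select_object_version(mem_available_mb, compute_available_gflops):
    """Pick the highest-fidelity version that fits both measured budgets."""
    for name, mem_mb, compute_gflops in LODS:
        if mem_mb <= mem_available_mb and compute_gflops <= compute_available_gflops:
            return name
    return None   # nothing fits: skip placement in this space

print(select_object_version(200, 2.0))   # -> "medium"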
Systems and methods are provided for dynamically adjusting a personal boundary of an avatar in an XR environment. The system identifies a first avatar in an extended reality (XR) environment based on rule data stored in a storage. In response to the system detecting that the first avatar has entered a portion of the XR environment at a communicable distance from a second avatar, the system performs the following steps. The system determines an offensiveness rating of the first avatar. The system retrieves, from the storage, an offensiveness tolerance of the second avatar. The system compares the offensiveness rating of the first avatar and the offensiveness tolerance of the second avatar. In response to determining, based on the comparing, that the offensiveness rating of the first avatar exceeds the offensiveness tolerance of the second avatar, the system automatically censors one or more messages from the first avatar to the second avatar.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 19/00 - Manipulating 3D models or images for computer graphics
G10L 15/18 - Speech classification or search using natural language modelling
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
78.
METHODS AND SYSTEM FOR PARAPHRASING COMMUNICATIONS
Systems and methods for paraphrasing communications are disclosed. A first communication input is received and a context of the first communication input is determined. Based on the context of the first communication input, a plurality of linguistic elements is selected and a plurality of paraphrasing pairs is identified, each pair having one of the plurality of linguistic elements and a paraphrasing candidate of the linguistic element. The paraphrasing candidate is based on an emotional state of a sender of the first communication input, and at least one of the plurality of paraphrasing pairs is displayed to the sender for selection.
An augmented reality (AR) system captures an image of a physical environment. The AR system identifies an object in the captured image to serve as an anchor point. The AR system calculates a distance between the identified object and an AR display device that comprises left and right displays. The AR system identifies a virtual object associated with the anchor point. The AR system then generates for simultaneous display: (a) a first separate image of the virtual object on the left display of the AR device, and (b) a second separate image of the virtual object on the right display of the AR device, such that the apparent distance of the virtual object in the composite image formed by the first separate image and the second separate image is set to the calculated distance between the identified object and the AR display device.
An augmented reality (AR) system captures an image of a physical environment. The AR system identifies an object in the captured image to serve as an anchor point. The AR system calculates a distance between the identified object and an AR display device that comprises left and right displays. The AR system identifies a virtual object associated with the anchor point. The AR system then generates for simultaneous display: (a) a first separate image of the virtual object on the left display of the AR device, and (b) a second separate image of the virtual object on the right display of the AR device, such that the apparent distance of the virtual object in the composite image formed by the first separate image and the second separate image is set to the calculated distance between the identified object and the AR display device.
Systems and methods are provided for generating a soundmoji for output. A content item is generated for output at a computing device, and a first input associated with the selection of a soundmoji menu is received. One or more soundmojis are generated for output, and a second input associated with the selection of a first soundmoji of the one or more soundmojis is received. A first timestamp of the content item associated with the selection of the first soundmoji is identified. An indication of a second timestamp of the content item and a second soundmoji is received, and a user interface element associated with the content item is updated to indicate the second soundmoji when the content item is being generated for output at the second timestamp.
An augmented reality (AR) system captures an image of a physical environment. The AR system identifies an object in the captured image to serve as an anchor point. The AR system calculates a distance between the identified object and an AR display device that comprises left and right displays. The AR system identifies a virtual object associated with the anchor point. The AR system then generates for simultaneous display: (a) a first separate image of the virtual object on the left display of the AR device, and (b) a second separate image of the virtual object on the right display of the AR device, such that the apparent distance of the virtual object in the composite image formed by the first separate image and the second separate image is set to the calculated distance between the identified object and the AR display device.
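A back-of-the-envelope sketch of how the left/right image offsets could follow from the measured anchor distance under a simple pinhole stereo model; the baseline and focal-length values are placeholders, not device parameters from the entry:

def disparity_px(anchor_distance_m, baseline_m=0.063, focal_px=1400.0):
    """Pixel disparity that makes the virtual object appear at the anchor distance."""
    return baseline_m * focal_px / anchor_distance_m

def eye_image_offsets(anchor_distance_m):
    """Opposite horizontal shifts applied to the left and right display images."""
    d = disparity_px(anchor_distance_m)
    return -d / 2.0, d / 2.0

# at 2 m: disparity ~44 px, so each eye's image shifts ~22 px in opposite directions
print(eye_image_offsets(2.0))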
An AR display compensates for excessive light levels in a location in which the AR display is being used. AR objects are rendered for display on the AR display. Light levels in a location at which an AR object is being rendered for display are monitored. If the light level in the location exceeds a threshold light level, a light source in the location is identified and light emissions from the identified light source are mitigated.
Positions of AR objects being rendered for display on the AR display are identified. A light level in an area in which an AR object is positioned is then detected and compared to a threshold light level. If the detected light level exceeds the threshold light level, display of the AR object is modified or the AR object is repositioned to a second position at which the light level is at or below the threshold light level.
G09G 5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
85.
SYSTEMS AND METHODS FOR CONTROLLING NETWORK DEVICES IN AN AUGMENTED REALITY ENVIRONMENT
Systems and methods are described herein for controlling network devices in an augmented reality environment. A user may point a second network device at a first network device to determine a network activity of the first network device. The second network device may display a user control interface to enable the user to control the network activity of the first network device (e.g., via a pinch gesture control). In response to receiving the user input, the second network device causes the modification of the network activity based on the user input.
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
86.
METHODS AND SYSTEMS FOR DISPLAYING A MEDIA CONTENT ITEM
Systems and methods are described for encoding and displaying a media content item. An encoded media content item having a partitioning structure is generated. The partitioning structure comprises multiple partitioned areas configured to, when decoded, generate display of the media content item in a first format, and a partition boundary defining one of the partitioned areas configured to, when decoded, generate display of the media content item in a second format.
A sidelink connection is created between each device of a plurality of devices. A first device connected to a content source retrieves a manifest file for the media from the content source. The first device then notifies other devices, including a second device not connected to the content source, that the manifest file is available from the first device. Based on connection metrics of each device, it is determined which of the devices has the highest quality connection to the content source. If, for example, the first device is determined to have the highest quality connection, then the first device retrieves a segment of the media from the content source, stores the segment in a cache of the first device, and delivers the segment to other devices in response to requests for the segment received from each device.
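A condensed sketch of the "best connection fetches for the group" rule above; the metric dictionary, the download callable, and the cache are assumptions standing in for real sidelink and HTTP machinery:

def pick_fetcher(connection_metrics):
    """connection_metrics: {device_id: measured throughput (Mbps) to the source}.
    The device with the best connection downloads on behalf of the group."""
    return max(connection_metrics, key=connection_metrics.get)

def serve_segment(fetcher, devices, segment_url, cache, download):
    """The fetcher downloads and caches the segment once, then the cached copy
    is delivered over sidelink to every other device that requests it."""
    data = download(segment_url)
    cache[segment_url] = data
    return {device: data for device in devices if device != fetcher}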
Systems and methods are provided for improving live streaming. A content item comprising a plurality of segments is received at a computing device at a first time. The content item is stored, and it is identified that a first segment of the plurality of segments is below a quality threshold. The first segment is processed to improve the quality of the first segment, and the content item is updated with the improved-quality first segment. At a second time, a request to access the content item is received, and the updated version of the content item is transmitted in response to the request.
H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
89.
Systems and methods for media delivery in cooperative device-to-device communications
A sidelink connection is created between each device of a plurality of devices. If the sidelink connection quality is not sufficient to transmit a first version of a segment encoded at a first quality level, a lower quality version of the segment may also be retrieved. The lower quality version of the segment may be retrieved by a different device than the device that retrieved the first version of the segment. If the segment is requested from a first device by a second device and the sidelink connection between the first device and the second device is not sufficient to transmit the first version of the segment, the first device may cause transmission of the lower quality version of the segment retrieved by a third device to the second device.
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs; involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
90.
Ecosystem for NFT trading in public media distribution platforms
A computer-implemented method and an apparatus are provided for presenting an option to purchase an NFT based on a scene of a media asset to an advertiser. One example computer-implemented method includes obtaining, from a first source, a scene of a media asset, determining that the scene comprises a product, obtaining, from a second source, a non-fungible token (NFT) based on the scene, matching the NFT to an advertiser based on the product, and presenting an option to purchase the matched NFT to the advertiser.
Systems and methods are disclosed to mitigate stalling of streaming content due to rebuffering so that, e.g., the content consumer does not experience gaps in playback. In some embodiments, by buffering streaming content simultaneously at two bitrate levels—e.g., one of the lowest bitrates and a better-quality bitrate, within the bandwidth limitations—rebuffering-caused gaps in playback of a higher quality (HQ) stream may be filled with a lower quality (LQ) stream. For instance, client-side dual buffers may store n segments from the HQ stream during a given time and a multiple of n number of segments from the LQ stream, thus allowing for many of the LQ segments to be output if the HQ stream is rebuffering. If a segment of content is beginning to be played back as an LQ segment, there is no reason to buffer the same segment from the HQ stream. Moreover, after a segment of content is played back (or decoded) as either HQ or LQ, the corresponding HQ segment and/or LQ segment may be discarded from the dual buffer, e.g., to create buffer space for upcoming segments.
H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
H04L 65/752 - Media network packet handling adapting media to network capabilities
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
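A minimal sketch of the client-side dual-buffer fallback described in the entry above, with segment-indexed dictionaries standing in for the HQ and LQ buffers; a real player would also manage fetch scheduling and skip HQ fetches for segments already begun as LQ:

def next_segment(index, hq_buffer, lq_buffer):
    """hq_buffer / lq_buffer: {segment_index: bytes}. Play HQ when available,
    fall back to the deeper LQ buffer during HQ rebuffering, and drop both
    copies of a segment once it has been played to free buffer space."""
    if index in hq_buffer:
        data, quality = hq_buffer[index], "HQ"
    elif index in lq_buffer:
        data, quality = lq_buffer[index], "LQ"   # fill the gap instead of stalling
    else:
        return None, "stall"                     # neither buffer holds the segment
    hq_buffer.pop(index, None)
    lq_buffer.pop(index, None)
    return data, quality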
92.
METRIC-DRIVEN IMPROVEMENTS OF VIRTUAL REALITY EXPERIENCES
Systems and methods for obtaining metrics relating to an extended reality experience and using the obtained metrics to perform remedial actions, such as managing user motion sickness, determining user performance relating to a designed game difficulty, and performing home automation are disclosed. The methods include determining a starting and ending checkpoint in an extended reality experience. Data from a plurality of users as they navigate between the determined checkpoints is obtained and used to determine a metric, such as a median, average, or other representative data. The current user's navigation through the same checkpoints is monitored and compared with the metric. The results from the comparison are used to enhance the extended reality experience, which includes customizing the experience for motion sickness, game difficulty level, and home automation.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
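A sketch of the checkpoint comparison above: the current user's time between two checkpoints is measured against the cohort median to trigger an adjustment. The 1.5x band and the adjustment labels are assumptions:

import statistics

def compare_to_cohort(user_seconds, cohort_seconds):
    """Compare one user's checkpoint-to-checkpoint time against the cohort."""
    median = statistics.median(cohort_seconds)
    if user_seconds > 1.5 * median:
        return "ease difficulty / add comfort options"   # user struggling
    if user_seconds < median / 1.5:
        return "raise difficulty"                        # user well ahead of design
    return "no change"

print(compare_to_cohort(95, [50, 60, 62, 70, 80]))       # -> ease difficulty ...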
93.
System and method for preprocessing of focal planes data for rendering
Systems and methods for rendering a 3D image are provided. The system receives texture and depth data (a depth map) for an image. The system generates, based on the image data, a plurality of folded focal plane matrices. For each respective folded focal plane matrix, the system preprocesses pixel values in that matrix to generate a respective preprocessed matrix, wherein the respective preprocessed matrix clusters together pixel values of the respective folded focal plane matrix based on the depth data for the image. The system generates phase functions based on a plurality of the preprocessed matrices. The system configures a spatial light modulator (SLM) device in accordance with the generated phase functions. The system then provides the plurality of the preprocessed matrices as input to the SLM device to generate for display a 3D representation of the received image data.
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
H04N 13/312 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers the parallax barriers being placed behind the display panel, e.g. between backlight and spatial light modulator [SLM]
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
94.
SYSTEMS AND METHODS FOR COMPLETING PAYMENT TRANSACTIONS INITIATED THROUGH A FIRST DEVICE USING A SECOND DEVICE
A payment transaction is initiated for a user, based on a voice command, on a public voice-activated device. A user device associated with the user is identified. A transaction identifier is generated and transmitted to the identified user device. Once the user has entered their banking or credit card information to use for payment, a payment token is received from the user device. The transaction is then completed using the payment token. The payment token may be generated from a local digital wallet on the user device, or from a server-based digital wallet.
G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices using wireless devices
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
G06Q 20/42 - Confirmation, e.g. check or permission by the legal debtor of payment
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
95.
STATIC AND DYNAMIC NETWORK DEVICE AND SERVICE INVENTORIES AS A MECHANISM TO GENERATE PERSONALIZED AD PROFILES
Systems and methods are disclosed for a network inventory service (NIS). An exemplary NIS may profile a private network for personalized advertising by recording a list of devices connected to the private network, recording traffic data for the network, and generating a profile associated with the network based on the recorded list of devices and the recorded traffic data. The NIS may be notified, e.g., of a device within the network or of an advertising avail, may request, from an advertising service, a targeted advertisement based on the profile, and may cause the targeted advertisement to be inserted in the advertising avail. In some embodiments, an NIS may work with an inventory profile service (IPS) and/or an ad presentation service (APS). An IPS may deliver rules for analyzing new devices and/or network traffic, and an APS may communicate with the NIS along with various advertiser networks to supply personalized ads.
Systems and methods for reducing a number of focal planes used to display a three-dimensional object are disclosed herein. In an embodiment, data defining a three-dimensional image according to a first plurality of focal planes are received. Pixel luminance values from the first plurality of focal planes are mapped to a second plurality of focal planes comprising fewer focal planes than the first plurality of focal planes. Data is stored identifying initial focal distances of the mapped pixel luminance values in the first plurality of focal planes. The second plurality of focal planes are then displayed on a near-eye device which uses the data identifying initial focal distances of the mapped pixel luminance values to adjust a wavelength of light produced by the second plurality of focal planes to cause the pixels to appear at their original focal distances.
Systems and methods are provided for adapting playout of a plurality of media items. One example method includes receiving one or more inputs representing a conversation between an audience of two or more people experiencing the playout of the plurality of media items, processing the one or more inputs to determine a level of engagement of the audience with the playout of at least one of the plurality of media items, and adapting playout of the at least one of the plurality of media items before the scheduled start of the next media item in the schedule to take account of the inputs representing the conversation.
Systems and methods are provided for enabling a smart automatic skip mode during playback of a content item. A content item is generated for output at a first time at a computing device, and input associated with navigating the content item is received. Metadata associated with a plurality of segments of the content item is identified and, based on the input and the metadata, a segment to skip is identified. The segment to skip is skipped, and the content item is generated for output at a second time.
A content provision system is disclosed. The advent of potential interactivity in advertisements and other content items means that the time for which those advertisements absorb the attention of the user cannot be known in advance. This presents a challenge when the interactive advertisements or other content items are to be accommodated in a scheduled slot for such items. To address this challenge, the duration of interaction of each interactive content item is estimated, statistically measured or modelled in advance of the scheduled slot, and at least one of the interactive items is provided in the slot at a time which accords with the duration of interaction. Where there are a plurality of interactive content items for inclusion in the slot, the interactive content items can be ordered such that those having a longer duration of interaction are provided closer to the start of the slot.
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
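A sketch of the scheduling rule from the entry above: longer estimated interactions are placed earlier in the slot, with the duration estimates assumed to come from prior measurement or modelling:

def order_for_slot(items):
    """items: [(item_id, estimated_interaction_s)]; longest interactions first."""
    return sorted(items, key=lambda item: item[1], reverse=True)

def fits_in_slot(items, slot_s):
    """Check that the expected total interaction time fits the scheduled slot."""
    return sum(duration for _, duration in items) <= slot_s

slot = order_for_slot([("ad-a", 20), ("ad-b", 45), ("ad-c", 30)])
# -> [("ad-b", 45), ("ad-c", 30), ("ad-a", 20)]
print(fits_in_slot(slot, 120))   # -> True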
100.
SYSTEMS AND METHODS FOR CUSTOMIZING A MEDIA PROFILE PAGE
Systems and methods are provided for customizing a profile page. A first user profile and a second user profile are accessed by a computing device. First and second pluralities of content items associated with the first and second users are identified. Based on the first and second user profiles, first and second subsets of content items of the first and second plurality of content items are selected. For each content item of the first subset of content items, an image associated with the content item is identified. For each content item of the second subset of content items, an image associated with the content item is identified. Based on the identified images, first and second image collages are generated for the first and second user profiles. The first and second image collages and first and second indicators corresponding to the first and second user profiles are generated for display.
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/482 - End-user interface for program selection