A server system transmits, from an application executing on a virtual client device and through a remote physical client device, a request for a manifest. The server system receives a manifest received by, and forwarded from, the remote physical client device. The server system determines whether the server system is authorized to modify the received manifest. In response to determining that the server system is authorized to modify the received manifest, the server system requests additional content to modify the received manifest. The server system modifies listed content in the received manifest to generate an updated manifest. The server system sends the updated manifest to the application at the server system. The application processes the updated manifest. The server system sends, to the remote physical client device, an instruction to request the additional content.
H04N 21/2747 - Remote storage of video programs received via the downstream path, e.g. from the server
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
The server initializes, for a third-party application executing on the server, an entirety of available GPU memory of a client device, including pre-allocating a plurality of blocks of GPU memory. During execution of the third-party application, the server receives a first request from the third-party application to store first data in the GPU memory of the client device, and, in response to the first request, frees a portion of a respective pre-allocated block of the plurality of pre-allocated blocks of GPU memory and stores the first data in the portion of the respective pre-allocated block. The server pre-allocates a new block of GPU memory of the client device, the new block comprising a complementary portion of the respective pre-allocated block such that, after pre-allocating the new block of GPU memory, the entirety of available GPU memory of the client device remains allocated.
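A minimal sketch of the bookkeeping described above, assuming a simple block/offset model; the class name, block size, and policy for choosing a block are illustrative and not taken from the abstract:

```python
# Sketch: keep the client's GPU memory fully pre-allocated while carving out
# space for application data. Sizes and names are illustrative placeholders.

class GpuMemoryModel:
    def __init__(self, total_bytes, block_bytes):
        # Pre-allocate the entirety of available GPU memory as fixed-size blocks.
        self.block_bytes = block_bytes
        self.blocks = [block_bytes] * (total_bytes // block_bytes)  # free bytes per block

    def store(self, data_bytes):
        """Free a portion of a pre-allocated block, store the data there, then
        pre-allocate a new block from the complementary portion so the whole
        GPU memory stays allocated."""
        for i, free in enumerate(self.blocks):
            if free >= data_bytes:
                complementary = free - data_bytes
                self.blocks[i] = 0  # block i now holds the application data
                if complementary > 0:
                    self.blocks.append(complementary)  # new pre-allocated block
                return i, data_bytes
        raise MemoryError("no pre-allocated block large enough")

model = GpuMemoryModel(total_bytes=64 * 2**20, block_bytes=4 * 2**20)
print(model.store(1 * 2**20))   # -> (0, 1048576); the 3 MiB remainder is re-pre-allocated
```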
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
3.
SYSTEMS AND METHODS OF IMAGE REMOTING USING A SHARED IMAGE CACHE
A server system executing a third-party application accesses an image asset and provides a modified version of the image asset to the third-party application to be processed by the third-party application. The server system receives, from the third-party application, an indication that the modified version of the image asset has been down-scaled by the third-party application during processing. In response to receiving the indication that the modified version of the image asset has been down-scaled by the third-party application, the server system determines that the image asset is to be down-scaled at the server system. The server system down-scales the image asset and transmits the down-scaled version of the image asset to be stored in a shared image cache. The server system transmits the down-scaled version of the image asset to the first client device for display.
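One possible reading of that flow, sketched in Python; the cache keying, the nearest-neighbour resize, and the helper names are assumptions rather than details from the abstract:

```python
# Sketch: when the third-party app reports it down-scaled the asset, down-scale
# at the server and publish the result to a shared image cache keyed by
# (asset_id, target_size), so later requests reuse the cached version.

shared_image_cache = {}

def downscale(pixels, src_w, src_h, dst_w, dst_h):
    # Nearest-neighbour down-scale of a row-major pixel list (placeholder).
    return [pixels[(y * src_h // dst_h) * src_w + (x * src_w // dst_w)]
            for y in range(dst_h) for x in range(dst_w)]

def on_downscale_indication(asset_id, pixels, src_size, dst_size):
    key = (asset_id, dst_size)
    if key not in shared_image_cache:
        shared_image_cache[key] = downscale(pixels, *src_size, *dst_size)
    return shared_image_cache[key]   # transmitted to the client for display

thumb = on_downscale_indication("asset-42", list(range(16)), (4, 4), (2, 2))
# thumb == [0, 2, 8, 10]; the same cached entry is reused on later requests
```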
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
4.
SYSTEMS AND METHODS OF MODIFYING MANIFESTS FOR APPLICATIONS
A server system transmits, from an application executing on a virtual client device and through a remote physical client device, a request for a manifest. The server system receives a manifest received by, and forwarded from, the remote physical client device. The server system determines whether the server system is authorized to modify the received manifest. In response to determining that the server system is authorized to modify the received manifest, the server system requests additional content to modify the received manifest. The server system modifies listed content in the received manifest to generate an updated manifest. The server system sends the updated manifest to the application at the server system. The application processes the updated manifest. The server system sends, to the remote physical client device, an instruction to request the additional content.
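A minimal sketch of the manifest-rewriting step, assuming an HLS-style playlist and an illustrative additional-content URI (neither format nor URI is specified by the abstract):

```python
# Sketch: insert additional content into a received HLS-style manifest and
# return the updated manifest (for the virtual application) together with the
# URI the remote physical client is instructed to request.

def modify_manifest(manifest_text, extra_uri="https://example.com/ad/seg1.ts",
                    extra_duration=6.0):
    lines = manifest_text.rstrip().splitlines()
    insert_at = next(i for i, l in enumerate(lines) if l.startswith("#EXTINF"))
    updated = (lines[:insert_at]
               + [f"#EXTINF:{extra_duration:.1f},", extra_uri]
               + lines[insert_at:])
    return "\n".join(updated) + "\n", extra_uri

received = "#EXTM3U\n#EXT-X-TARGETDURATION:6\n#EXTINF:6.0,\nhttps://cdn.example.com/seg1.ts\n"
updated_manifest, fetch_uri = modify_manifest(received)
```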
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
G06F 9/451 - Execution arrangements for user interfaces
H04N 21/482 - End-user interface for program selection
The server system hosts one or more virtual client devices executing one or more virtual applications, each virtual client device corresponding to a remote physical client device. The server system receives, from a first remote physical client device, a signal of a characteristic of media detected by a physical component of the first remote physical client device. The server system, in response to receiving the signal of the characteristic of the media, determines, based on the characteristic of the media, an instruction for adjusting the media detected by the physical component of the first remote physical client device and transmits, to the client device, the instruction for adjusting the media at the first remote physical client device.
The server system receives, from a respective remote physical client device, a digest of a segment of video content received by the respective remote physical client device, the segment of video content including a plurality of frames of video content. In response to receiving the digest, the server system sends a playback command to the respective remote physical device to playback one or more of the plurality of frames of video content in the segment. The plurality of frames of video content in the segment have a frame rate. The server system determines a graphical processing unit (GPU) overlay instruction for overlaying content of a frame buffer with a respective portion of the segment of video content and sends, asynchronously from the frame rate of the plurality of frames of video content, the GPU overlay instruction to the respective remote physical client device.
A system and method for providing content-dependent location information based upon video frame information. In response to a user command, video frame data is captured from the content being viewed and analyzed with respect to a location information database. The analysis ideally leverages artificial intelligence and/or machine learning processes. The resulting content-dependent location information is provided to the requesting user graphically and/or audibly.
G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A server computing device hosts one or more virtual machines. A first virtual machine corresponding to a first client device receives a first media stream that includes first content corresponding to a plurality of frames of video data and generates a first digest segment that corresponds to the first media stream. The first digest segment includes a representation of the plurality of frames but does not include the video data. The first virtual machine stores the first digest segment in a cache at the server system. A second virtual machine corresponding to a second client device receives a playback position of the first media stream playing at the second client device and uses the playback position from the second client device and the first digest segment stored in the cache to perform processing to recreate a representation of the playback of the first media stream on the second client device.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/232 - Content retrieval operation within server, e.g. reading video streams from disk arrays
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/6547 - Transmission by server directed to the client comprising parameters, e.g. for client setup
9.
SYSTEMS AND METHODS OF ALTERNATIVE NETWORKED APPLICATION SERVICES
A server computing device hosts one or more virtual machines. A first virtual machine corresponding to a first client device receives a first media stream that includes first content corresponding to a plurality of frames of video data and generates a first digest segment that corresponds to the first media stream. The first digest segment includes a representation of the plurality of frames but does not include the video data. The first virtual machine stores the first digest segment in a cache at the server system. A second virtual machine corresponding to a second client device receives a playback position of the first media stream playing at the second client device and uses the playback position from the second client device and the first digest segment stored in the cache to perform processing to recreate a representation of the playback of the first media stream on the second client device.
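One way to picture a "digest segment" that represents the frames without carrying the video payload is sketched below; the particular field set (timestamp, frame type, size) is an assumption:

```python
# Sketch: build a digest of a media segment that keeps per-frame metadata but
# drops the encoded video payload itself. The digest, not the video data, is
# what gets cached at the server.

def make_digest(segment_frames):
    """segment_frames: iterable of dicts such as
    {"pts": 90000, "type": "I", "payload": b"..."} (the payload is discarded)."""
    return {
        "frame_count": len(segment_frames),
        "frames": [{"pts": f["pts"], "type": f["type"], "size": len(f["payload"])}
                   for f in segment_frames],
    }

digest = make_digest([{"pts": 0, "type": "I", "payload": b"\x00" * 4096},
                      {"pts": 3000, "type": "P", "payload": b"\x00" * 512}])
```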
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/232 - Content retrieval operation within server, e.g. reading video streams from disk arrays
A server system generates a model of a first memory architecture of a client device, the model of the first memory architecture including a GPU memory portion and a CPU memory portion. The server system receives a representation of a first image asset, and stores a first texture image corresponding to the first image asset in the GPU memory portion of the model at the server system. The first texture image is stored in the GPU memory portion of the client device. The server system determines, using the model, that the GPU memory portion at the client device needs to be reallocated. The server system identifies, using the model, one or more texture images that are stored in the GPU memory portion at the client device to evict and transmits an instruction, to the client device, to evict the one or more texture images from the GPU memory portion.
A client device receives a first image frame from a server, stores the first image frame, and generates a first modified image that corresponds to the first image frame. The client transmits, to a remote device, the generated first modified image. The remote device uses the first modified image to determine an instruction for displaying a second image frame. The client receives, from the remote device, the instruction for displaying the second image frame. In response to receiving the instruction, the client device displays, on a display communicatively coupled to the client device, the second image frame.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
12.
Method to preserve video data obfuscation for video frames
A method and video decoder system using the method are provided for identifying video frames in an encoded or encrypted video stream without performing decoding or decryption. The method includes: receiving a video data stream comprised of a plurality of transport stream (TS) packets; detecting a first video frame in the video data stream, wherein detection of the first video frame includes registering a last checked position at the start of the video data stream, examining bytes in a next TS packet to identify a predetermined pattern indicating a network abstraction layer (NAL) unit, repeating the examining step until two TS packets have been identified that include an NAL unit, wherein the last checked position is updated after each examining step, and identifying a video frame based on a position of the NAL unit identified in the two TS packets; and repeating the detecting step for a plurality of additional video frames in the video data stream.
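A rough sketch of the scanning loop follows; the 188-byte packet size and the 0x000001 start-code pattern are standard MPEG-TS/H.26x facts, while the frame-boundary bookkeeping here is a simplification of the claimed method:

```python
TS_PACKET_SIZE = 188                 # standard MPEG transport stream packet size
NAL_START_CODE = b"\x00\x00\x01"     # predetermined pattern indicating a NAL unit

def find_frame_boundaries(ts_stream: bytes):
    """Yield pairs of byte offsets of TS packets containing a NAL start code;
    a video frame is taken to span between two such packets (simplified)."""
    hits = []
    last_checked = 0                              # registered last-checked position
    while last_checked + TS_PACKET_SIZE <= len(ts_stream):
        packet = ts_stream[last_checked:last_checked + TS_PACKET_SIZE]
        if NAL_START_CODE in packet:
            hits.append(last_checked)
            if len(hits) >= 2:
                yield (hits[-2], hits[-1])        # a frame identified between the packets
        last_checked += TS_PACKET_SIZE            # update after each examining step
```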
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
H04N 7/10 - Adaptations for transmission by electrical cable
H04N 21/4408 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network
13.
Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks
A server system determines, for a group of user sessions assigned to a single modulator, that an aggregate bandwidth for a first frame time exceeds a specified budget for the modulator. The user sessions comprise data in a plurality of classes, each class having a respective priority. In response to a determination that the aggregate bandwidth exceeds a specified budget, the server system allocates a portion of the aggregate bandwidth, including allocating a first portion of the data for a first user session in the group of user sessions and allocating a second portion of the data for a second user session in the group of user sessions, where both the first portion and the second portion are allocated in accordance with the class priorities. The server system transmits the allocated portions of the data for the group of user sessions through the modulator during the first frame time.
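A toy sketch of granting a frame time's budget across sessions by class priority; the class names and the greedy highest-priority-first policy are assumptions consistent with, but not dictated by, the abstract:

```python
# Sketch: when aggregate demand exceeds the modulator budget for a frame time,
# grant bandwidth class by class (highest priority first) across all sessions.

CLASS_PRIORITY = ["audio", "video", "ui"]   # highest to lowest (illustrative)

def allocate_frame_time(sessions, budget_bits):
    """sessions: {session_id: {"audio": bits, "video": bits, "ui": bits}}"""
    remaining = budget_bits
    grants = {sid: {} for sid in sessions}
    for cls in CLASS_PRIORITY:
        for sid, demand in sessions.items():
            grant = min(demand.get(cls, 0), remaining)
            grants[sid][cls] = grant
            remaining -= grant
    return grants   # portions transmitted through the modulator this frame time

demo = allocate_frame_time(
    {"s1": {"audio": 200, "video": 5000, "ui": 800},
     "s2": {"audio": 200, "video": 6000, "ui": 400}},
    budget_bits=8000)
# All audio is granted; video is truncated to fit; UI data waits for a later frame time.
```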
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/2368 - Multiplexing of audio and video streams
H04N 21/2381 - Adapting the multiplex stream to a specific network, e.g. an IP [Internet Protocol] network
H04N 21/61 - Network physical structure; Signal processing
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging bet
H04L 12/851 - Traffic type related actions, e.g. QoS or priority
14.
Class-based intelligent multiplexing over unmanaged networks
An electronic device sends a content stream, via an unmanaged network, toward a client device and monitors the capacity of the unmanaged network. The device determines whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity. The upcoming portion of the content stream includes video content and user-interface data. In response to a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity, when the user-interface data is not the result of a user interaction: the device prioritizes a frame rate of the video content over latency for the user-interface data, and in accordance with a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity, sends ahead one or more frames of the video content in the upcoming portion, and delays the user-interface data in the upcoming portion.
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging bet
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/233 - Processing of audio elementary streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/6373 - Control signals issued by the client directed to the server or network components for rate control
H04N 21/6379 - Control signals issued by the client directed to the server or network components directed to server directed to encoder
15.
Multiple-mode system and method for providing user selectable video content
A method of providing audiovisual content to a client device configured to be coupled to a display is provided. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from the server toward the client device, according to the determined transmission mode, for display on the display.
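The four-factor mode decision could be pictured roughly as follows; the mode names and the exact branching are illustrative assumptions, not the claimed decision logic:

```python
# Sketch: choose a transmission mode from (i) client decoding capabilities,
# (ii) the item's encoding format, (iii) full- vs partial-screen display, and
# (iv) whether the client can overlay image data onto a video stream.

def choose_transmission_mode(client_codecs, item_codec, full_screen, client_can_overlay):
    if item_codec in client_codecs and full_screen:
        return "passthrough"            # send the stream as-is
    if item_codec in client_codecs and client_can_overlay:
        return "overlay-at-client"      # send video plus separate image data
    return "composite-at-server"        # server composites/transcodes before sending

mode = choose_transmission_mode({"h264", "hevc"}, "h264",
                                full_screen=False, client_can_overlay=True)
```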
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
16.
SYSTEM AND METHOD FOR CONSTRUCTING A PLANE FOR PLANAR PREDICTION
A system and method of defining a plane for planar coding in JVET in which first and second lines can be defined based upon pixels in left-adjacent and top-adjacent coding units. In some embodiments, the least squares method can be employed to define the relevant lines. One point along each of the lines can then be identified and the y-intercepts of the two lines can be averaged to obtain a third point. The three points can then be used to identify and define a plane for planar coding in JVET.
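A numeric sketch of the three-point plane construction; treating pixel values as heights over (x, y), using numpy.polyfit for the least-squares lines, and the particular choice of the far-end points are assumptions for illustration:

```python
import numpy as np

def planar_prediction_plane(left_col, top_row):
    """left_col: reconstructed pixels of the left-adjacent column (index = y).
    top_row:  reconstructed pixels of the top-adjacent row   (index = x).
    Returns (a, b, c) such that predicted(x, y) = a*x + b*y + c."""
    ys = np.arange(len(left_col))
    xs = np.arange(len(top_row))
    # Least-squares lines through the two neighbouring pixel sets.
    slope_l, intercept_l = np.polyfit(ys, left_col, 1)
    slope_t, intercept_t = np.polyfit(xs, top_row, 1)
    # One point along each line, plus the averaged y-intercepts as a third point.
    p1 = np.array([0.0, ys[-1], slope_l * ys[-1] + intercept_l])   # on the left line
    p2 = np.array([xs[-1], 0.0, slope_t * xs[-1] + intercept_t])   # on the top line
    p3 = np.array([0.0, 0.0, (intercept_l + intercept_t) / 2.0])   # averaged intercepts
    # Plane through the three points: normal = (p2 - p1) x (p3 - p1).
    n = np.cross(p2 - p1, p3 - p1)
    a, b = -n[0] / n[2], -n[1] / n[2]
    return a, b, p3[2]

a, b, c = planar_prediction_plane(left_col=[100, 102, 104, 106], top_row=[100, 99, 98, 97])
# -> a = -1, b = 2, c = 100, i.e. predicted(x, y) = -x + 2*y + 100
```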
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/196 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
H04N 19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
17.
SYSTEMS AND METHODS OF ORCHESTRATED NETWORKED APPLICATION SERVICES
A server computing device receives, from a client device, a digest segment generated by the client device. The digest segment corresponds to a first media stream segment received by the client device, and the digest segment includes a representation of the first media stream segment. The server computing devices determines, using the digest segment, a playback command that corresponds to the first media stream segment and transmits, to the client device, the playback command.
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/232 - Content retrieval operation within server, e.g. reading video streams from disk arrays
18.
SYSTEMS AND METHODS FOR VIRTUAL SET-TOP SUPPORT OF AN HTML CLIENT
A server remote from a client device executes an HTML-based virtual client application. Using the HTML-based virtual client application, the server renders an image corresponding to a video frame. The rendered image includes HTML commands. The server generates an HTML wrapper for the rendered image. Generating the HTML wrapper includes converting the HTML commands to HTML primitives that are selected from a subset of available HTML commands. The server sends the HTML wrapper to the client device to be processed by an HTML-based application on the client device to enable the image to be displayed.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 11/36 - Preventing errors by testing or debugging of software
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
19.
Class-based intelligent multiplexing over unmanaged networks
A method of adapting content-stream bandwidth includes generating a content stream for transmission over an unmanaged network with varying capacity and sending the content stream toward a client device. The method includes monitoring the capacity of the unmanaged network and determining whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity. The upcoming portion of the content stream includes video content and user-interface data. The method further includes, in response to a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity, prioritizing low latency for the user-interface data over maintaining a frame rate of the video content when the user-interface data is the result of a user interaction and reducing a size of the upcoming portion of the content stream in accordance with the prioritizing. The reducing comprises decreasing the frame rate of the video content.
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging bet
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/233 - Processing of audio elementary streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/6373 - Control signals issued by the client directed to the server or network components for rate control
H04N 21/6379 - Control signals issued by the client directed to the server or network components directed to server directed to encoder
20.
Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks
A server system assigns a group of user sessions to a single modulator. The user sessions comprise data in a plurality of classes, each class having a respective priority. The plurality of classes includes, in order of priority from highest priority to lowest priority, audio data, video data, and user-interface graphical elements. The server system determines that an aggregate bandwidth for a first frame time exceeds a specified budget for the modulator. In response to determining that the aggregate bandwidth for the first frame time exceeds the specified budget, the server system transmits an allocated portion of the data for the group of user sessions through the modulator onto a channel corresponding to the modulator during the first frame time in accordance with the class priorities.
H04N 21/2365 - Multiplexing of several video streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/2368 - Multiplexing of audio and video streams
H04N 21/2381 - Adapting the multiplex stream to a specific network, e.g. an IP [Internet Protocol] network
H04N 21/61 - Network physical structure; Signal processing
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging bet
H04L 12/851 - Traffic type related actions, e.g. QoS or priority
21.
Scalable video coding using reference and scaled reference layer offsets
A process for determining the selection of filters and input samples is provided for scalable video coding. The process provides for re-sampling using video data obtained from an encoder or decoder process of a base layer (BL) in a multi-layer system to improve quality in Scalable High Efficiency Video Coding (SHVC). In order to accommodate other applications such as interlace/progressive scalability and to increase the resolution of the alignment between layers, it is proposed that the phase offset adjustment parameters be signaled.
H04N 11/02 - Colour television systems with bandwidth reduction
H04N 19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/80 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/187 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
22.
A METHOD TO ENCODE VIDEO WITH CLOSE MULTIPLE SCENE CHANGES
A video encoding method is provided for the case in which three scenes are separated by two closely spaced scene changes. For scene changes spaced farther apart than a threshold, the scene changes are encoded with I-frames in the normal fashion. If the spacing is less than the threshold, the method examines the complexities of the first, second, and third scenes to determine how to encode the scene changes. To compare complexities, the process begins by using X1, X2, and X3 to denote respectively the complexities of the first, the second, and the third scenes. If the absolute difference of X1 and X2 is higher than a first threshold and the absolute difference of X2 and X3 is higher than a second threshold, the first scene change is more significant than the second scene change; in that case the process encodes the first scene change as an I-frame and picks a quantization parameter (QP) based on a complexity blended from the complexity of scene 2 (X2) and scene 3 (X3).
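The threshold comparison can be pictured as a small decision routine; the threshold values, the blending weight, the QP mapping, and the fallback branch are placeholders, not values from the abstract:

```python
# Sketch: decide how to encode two closely spaced scene changes from the
# complexities X1, X2, X3 of the three scenes involved.

def qp_from_complexity(x):
    # Placeholder mapping: higher complexity -> higher QP, clamped to [18, 45].
    return max(18, min(45, int(18 + x / 2)))

def encode_close_scene_changes(x1, x2, x3, spacing, spacing_threshold,
                               t1=10.0, t2=10.0, blend=0.5):
    if spacing > spacing_threshold:
        return {"first_change": "I-frame", "second_change": "I-frame"}  # normal case
    if abs(x1 - x2) > t1 and abs(x2 - x3) > t2:
        # First change is the more significant one: encode it as an I-frame and
        # pick a QP from a blend of the second and third scene complexities.
        blended = blend * x2 + (1.0 - blend) * x3
        return {"first_change": "I-frame", "qp": qp_from_complexity(blended)}
    return {"first_change": "P-frame", "second_change": "I-frame"}      # placeholder branch

print(encode_close_scene_changes(x1=40, x2=12, x3=30, spacing=3, spacing_threshold=8))
# -> {'first_change': 'I-frame', 'qp': 28}
```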
H04N 19/142 - Detection of scene cut or scene change
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/87 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
23.
MANAGING DEEP AND SHALLOW BUFFERS IN A THIN-CLIENT DEVICE OF A DIGITAL MEDIA DISTRIBUTION NETWORK
A client device receives, from a server, first content directed to a first buffer in the client device and second content directed to a second buffer in the client device. The second buffer is deeper than the first buffer. The client device buffers the first content in the first buffer and buffers the second content in the second buffer. At least a portion of the second content is buffered in the second buffer simultaneously with buffering the first content in the first buffer. The client device selects between the first content in the first buffer and the second content in the second buffer, and provides the selected content for display.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
24.
SECURE BRIDGING OF THIRD-PARTY DIGITAL RIGHTS MANAGEMENT TO LOCAL SECURITY
Encrypted content from a content provider is received at a central location of a multichannel video programming distributor (MVPD). The content provider is distinct from the MVPD. The content is decrypted and processed in a virtual set-top application associated with a set-top of a customer of the MVPD. The set-top of the customer is located in a customer premises remote from the central location. The processed content is provided over a secure data link to a conditional-access encoder at the central location. The conditional-access encoder encrypts the processed content, which is then transmitted to the set-top of the customer.
H04N 21/2347 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving video stream encryption
H04N 21/4408 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network
H04N 7/167 - Systems rendering the television signal unintelligible and subsequently intelligible
H04N 5/445 - Receiver circuitry for displaying additional information
H04L 9/18 - Encryption by serially and continuously modifying data stream elements, e.g. stream cipher systems
25.
WIRELESS SETUP PROCEDURE ENABLING MODIFICATION OF WIRELESS CREDENTIALS
Methods, systems, and computer readable media may be operable to facilitate an overwrite of wireless credentials with user-input credentials. An extended wireless setup may be initiated at an access point when a predetermined input is received. During the extended wireless setup period, the access point may request updated wireless credentials from a user via a direct message or through a web page interface. The access point may overwrite currently used wireless credentials with the updated wireless credentials received from user input, and the access point may use the updated wireless credentials for establishing future wireless connections with one or more stations.
A method of generating a blended output including an interactive user interface and one or more supplemental images. At a client device, a video stream containing an interactive user interface is received from a server using a first data communications channel configured to communicate video content and a command is transmitted to the server that relates to a user input received through the interactive user interface. In response to the transmitting, an updated user interface is received using the first data communications channel, and one or more supplemental images are received using a second data communications channel. Each supplemental image is associated with a corresponding transparency coefficient. The updated user interface and the one or more supplemental images are blended according to the transparency coefficient for each supplemental image to generate a blended output and the blended output is transmitted toward the display device for display thereon.
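The per-image blend can be sketched directly; the numpy representation and the scalar alpha per supplemental image are assumptions about how the transparency coefficient is applied:

```python
import numpy as np

def blend_output(ui_frame, supplemental_images):
    """ui_frame: HxWx3 array (decoded frame of the interactive user interface).
    supplemental_images: list of (HxWx3 array, alpha) pairs, where alpha is the
    transparency coefficient associated with that supplemental image."""
    out = ui_frame.astype(float)
    for image, alpha in supplemental_images:
        out = (1.0 - alpha) * out + alpha * image.astype(float)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)   # blended output for display

ui = np.full((2, 2, 3), 100, dtype=np.uint8)
logo = np.full((2, 2, 3), 250, dtype=np.uint8)
blended = blend_output(ui, [(logo, 0.25)])   # 0.75*100 + 0.25*250 = 137.5 per channel
```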
H04N 21/8545 - Content authoring for generating interactive applications
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/462 - Content or additional data management e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabi
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
Methods, systems, and computer readable media can be operable to facilitate the output of communications to additional emergency contacts upon the occurrence of an alarm triggering event. A central device may be configured with one or more data profiles associated with one or more emergency contacts, and each data profile may include emergency contact information associated with one or more methods for communicating a message to each respective emergency contact. When the device identifies an alert trigger within one or more communications passing through the device, the device may output one or more emergency messages to one or more of the emergency contacts according to the one or more stored data profiles.
Methods, systems, and computer readable media can be operable to output troubleshooting and/or setup information associated with a device from a server within the device. In embodiments, troubleshooting and/or setup information is output from within the device to a subscriber when the device detects that an issue or failure exists with the device's connection to a network. A data or service request received from a subscriber can be rerouted to a server within the receiving device, and troubleshooting and/or setup information can be output as a result.
A method is performed at a client device distinct from an application server. In the method, a first key is stored in a secure store of the client device. A wrapped second key is received from the application server. The first key is retrieved from the secure store and used to unwrap the second key. Encrypted media content is received from the application server, decrypted using the unwrapped second key, and decoded for playback.
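A minimal sketch of the two-key flow using the pyca/cryptography package; the choice of AES key wrap for the wrapped second key and AES-GCM for the media content is an assumption, since the abstract does not name ciphers:

```python
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

first_key = os.urandom(16)                                # held in the client's secure store
second_key = os.urandom(16)                               # content key held by the app server
wrapped_second_key = aes_key_wrap(first_key, second_key)  # what the server sends

nonce = os.urandom(12)
encrypted_media = AESGCM(second_key).encrypt(nonce, b"media segment bytes", None)

# Client side: retrieve the first key, unwrap the second key, then decrypt and decode.
unwrapped = aes_key_unwrap(first_key, wrapped_second_key)
clear_media = AESGCM(unwrapped).decrypt(nonce, encrypted_media, None)
assert clear_media == b"media segment bytes"
```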
Particular embodiments provide a system to determine ad segments in a video asset to enable subsequent ad replacement in video programs. The system is included in a multiple service operator (MSO) system that broadcasts video programs via a broadcast schedule. The MSO may not know the location of the ad segments in the video asset. To determine the ad segments, the MSO uses a classifier to classify video program segments and advertisements in the video asset. The classifier may be integrated with an nDVR system. By integrating with the nDVR system, particular embodiments may determine user behavior information, such as trick play commands, from the nDVR system. The classifier may use the user behavior information to detect ad segments in the video asset. In one embodiment, the classifier may fuse outputs from different detectors to detect and validate ad segments in the video program.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2747 - Remote storage of video programs received via the downstream path, e.g. from the server
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/658 - Transmission by the client directed to the server
The methods, systems, and apparatuses described in this disclosure enable the identification of an alarm condition and the termination of a connection between an MTA and one or more telephony devices. An alarm condition can be identified at an MTA through the monitoring of feedback received from an alarm interface, and the MTA can respond to the identification of the alarm condition by terminating a connection to a telephony device for which communications are not routed through a corresponding alarm interface. An interface between the MTA and alarm interface may route communications to and from a telephony network through a first pair of wires and may receive feedback from the alarm interface through a second pair of wires.
Particular embodiments use aggregation logic that reduces the noise in a daisy-chained optical return signal aggregation of multiple nodes. The aggregation logic determines when an input to a transmitter/receiver is not used and disables or turns off that input. Further, in the case of daisy-chaining, a service group aggregation signal (e.g., RF signals) from the customer premise equipment (CPEs) serviced by a respective node are presented to the channel "A" port of a digital return transmitter/receiver. However, internal to the transmitter/receiver, aggregation logic auto-senses what optical return signals have already been aggregated up to that point in the daisy chain and can then intelligently place the service group aggregation signal onto one of the digital return transmitter channels. In one embodiment, if there are two return channels, A and B, whichever of these channels has seen fewer aggregations up to this point, will receive the service group aggregation signal.
A method for operating a power supply circuit of a communications device involves operating the power supply circuit in a first operating mode in which a switching frequency of the power supply circuit is higher than an operating frequency band of the communications device and switching the power supply circuit to operate in a second operating mode in response to connecting the power supply circuit to a battery by decreasing the switching frequency of the power supply circuit.
A system and method for characterizing the sensitivity of image data to compression. After a video signal is transformed to the frequency domain, statistical data regarding a video signal or frame of a video signal can be calculated. In one alternate, a contour map of the original signal can be calculated and the parameters of the contour map can be recorded. The same signal can be compressed and then upscaled and a second contour map can be calculated and the parameters of the second contour map can be recorded. Based on the difference between the first and second contour maps, a sensitivity of the video to compression can be determined.
An analyzer analyzes portions of a logical data stream including data content received from a source. Based on analyzing the data content (e.g., data content formatted according to Moving Picture Experts Group (MPEG)) received from the source, the analyzer generates metadata associated with multiple analyzed portions of the logical data stream. The metadata supports manipulation of how the logical data stream is presented when at least a portion of the data content of the logical data stream is later presented to a receiver for playback in a mode different from the original content (e.g., playback that includes fast forwarding, rewinding, and/or pausing).
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
36.
DETECTING OF GRAPHICAL OBJECTS TO IDENTIFY VIDEO DEMARCATIONS
Particular embodiments analyze logos found in a video program to determine video demarcations in the video program. For example, a video demarcation may be content that marks ("marker content") a transition from a first video content type to a second video content type. Marker content may be used so the user knows that a transition is occurring. Particular embodiments analyze the logos found in a video program to determine the video demarcations in the video. The video is first analyzed to determine logos in the video program. Once these logos are determined, particular embodiments may re-analyze the video program to identify marker frames that include the marker content that signals the transitions to different video content types. The marker frames may be determined without any prior knowledge of the marker content. Then, particular embodiments may use the marker frames to determine video segments.
A method of encoding digital content is provided that allows for adaptive joint bitrate allocation that allocates bits for audio and video. The method includes: determining an overall transport stream bitrate, determining a target audio bitrate for each audio stream based on its complexity, determining a portion of the overall transport stream bitrate available for video streams by subtracting the sum of the target audio bitrates from the overall transport stream bitrate, allocating a target video bitrate for each video stream out of the portion of the overall transport stream bitrate available for video streams, encoding audio streams at the target audio bitrates, encoding video streams at the target video bitrates, and combining the audio streams and video streams with a multiplexor into a transport stream.
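The allocation arithmetic can be sketched as follows; the complexity-proportional split of the video pool and the per-unit audio rate are illustrative policies, not figures from the abstract:

```python
# Sketch: subtract the audio targets from the overall transport stream bitrate
# and split the remainder across the video streams (here, proportionally to
# their complexities, which is one possible policy).

def allocate_bitrates(overall_ts_bps, audio_complexities, video_complexities,
                      audio_bps_per_unit=32_000):
    audio_targets = [c * audio_bps_per_unit for c in audio_complexities]
    video_pool = overall_ts_bps - sum(audio_targets)
    total_vc = sum(video_complexities)
    video_targets = [video_pool * c / total_vc for c in video_complexities]
    return audio_targets, video_targets

audio, video = allocate_bitrates(10_000_000, [2, 3], [5, 10, 5])
# audio -> [64000, 96000]; the 9,840,000 bps video pool is split 1:2:1
```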
A method and apparatus for processing picture slices is disclosed. The method determines if the slice of the current picture excludes any predictive coding derived from another picture. If the slice of the current picture is designated to exclude any predictive coding derived from another picture, a flag is set to a first logic state, and if the slice of the current picture is not designated to exclude any predictive coding derived from another picture, the flag is set to a second logic state. Further, at least a portion of predicted weight processing of the slice of the current picture is bypassed according to the logic state of the flag.
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
39.
WIRELESS VIDEO PERFORMANCE SELF-MONITORING AND ALERT SYSTEM
Methods, systems, and computer readable media can be operable to facilitate a self-monitoring of the performance of a wireless video service that is provided by a customer premise equipment (CPE) device. A CPE device such as an access point may periodically or continuously retrieve wireless video performance parameters associated with one or more devices receiving a wireless video service from the CPE device and/or one or more wireless links used by the CPE device to deliver wireless video services. The CPE device may consolidate retrieved parameters into a wireless video performance index and may compare the wireless video performance index to a threshold range. If the video performance index lies outside of the threshold range, the CPE device may output an alert to a device controlled by a content provider. The alert may provide a notification of an issue with the delivery of wireless video services by the CPE device.
Methods, systems, and computer readable media can be operable to facilitate playback manipulation based upon a received notification. A client device such as a set-top box may receive information associated with a notification, wherein the notification comprises a reminder or action that is to be completed. The client device may output information associated with the notification to a display that is being used to present content to a viewer, wherein the output of information includes an identification of the reminder or requested action. The reminder or action may be associated with a predetermined duration of time within which the action is to be completed. If the action is not completed within the predetermined duration of time, the client device may manipulate playback and output of the content to the display until a confirmation of the action being completed is received by the client device.
H04M 19/04 - Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/41 - Structure of client; Structure of client peripherals
H04N 21/436 - Interfacing a local distribution network, e.g. communicating with another STB or inside the home
41.
AUTOMATIC CONFIGURATION OF A WIRELESS DISTRIBUTION SYSTEM EXTENDED NETWORK
Methods, systems, and computer readable media may be operable to facilitate the automatic configuration of a network device within a wireless distribution system (WDS) extended network. Upon the boot of a network device such as a network extender, the network device may search for an access point through the transmission and reception of wireless communications. Once an access point is found, the network device may attempt to connect to the access point and may self-configure as either a station or a station operating as an access point. The network device may make the determination whether to operate as an access point based upon one or more network and/or device parameters associated with the identified access point, and may switch between station and station-access point modes based upon the link connecting the network device to the access point.
A method is provided for obfuscating program code to prevent unauthorized users from accessing video. The method includes receiving an original program code that provides functionality. The original program code is transformed into obfuscated program code defining a randomized branch encoded version of the original program code. The obfuscated program code is then stored, and a processor receiving input video data flow uses the obfuscated program code to generate an output data flow.
Methods, systems, and computer readable media can be operable to facilitate the generation of a user interface displaying the devices associated with a local network. A client device may retrieve information associated with one or more devices associated with a common central device, local network, and/or subscriber. The client device may generate a user interface including one or more device objects organized along an ellipsoidal wireframe, wherein each device object represents an identified device. The user interface may include device identification and/or status information associated with each displayed device. Devices displayed within the user interface may be filtered based upon one or more parameters selected by a user. The client device may update and rearrange the displayed device objects based upon navigation commands received from a user via a control device.
Methods, systems, and apparatuses can be operable to facilitate program change operations on a multimedia device that can access both QAM video delivery and IP video delivery. A multimedia device can request content from multiple sources, choosing the first available source for a requesting media player. The multimedia device can shift from an IP unicast source to a QAM broadcast source seamlessly when both are available.
H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/61 - Network physical structure; Signal processing
45.
REDUCING START-UP DELAY IN STREAMING MEDIA SESSIONS
A method is provided for delivering a streaming media asset to a client device. For the method, a request is received over a communication network from a client device for playing a media asset in accordance with a streaming media technique. Prior to fully authorizing the client device to play the media asset, the client device is provided with access to a first cryptographic key that decrypts a subset of the media asset so that the client device is able to render the subset of the media asset before completion of the authorization. The subset of the media asset is less than all of the media asset. Subsequent to successfully fully authorizing the client device to play the media asset, the client is provided with access to at least one additional cryptographic key that decrypts a remainder of the media asset.
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/2389 - Multiplex stream processing, e.g. multiplex stream encrypting
H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
H04N 21/4385 - Multiplex stream processing, e.g. multiplex stream decrypting
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 7/167 - Systems rendering the television signal unintelligible and subsequently intelligible
H04N 21/4405 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving video stream decryption
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/2347 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving video stream encryption
H04N 21/61 - Network physical structure; Signal processing
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
Particular embodiments automatically identify and track a logo that appears in video content. For example, particular embodiments can track a branding logo's position and size without any prior knowledge about the logo, such as the position, type, structure, and content of the logo. In one embodiment, a heat map is used that accumulates a frequency of short-term logos that are detected in the video content. The heat map is then used to identify a branding logo in the video content.
Particular embodiments detect a solid color frame, such as a black frame, that may include visible content other than the solid color in a portion of the frame. These frames may conventionally not be detected as a solid color frame because of the visible content in the portion of the frame. However, these solid color frames may be functional black or white frames, in that the solid color frames are performing the function of the solid color frame even though the frames include the visible content. The visible content may be content that is always displayed on the screen even while the video content is transitioning to an advertisement. Particular embodiments use techniques to detect the functional solid color frames even when visible content appears in the solid color frames. Particular embodiments use color layout information and edge distribution information to detect solid color frames.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
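As a rough illustration of the approach summarized in the abstract above, a functional black frame might be flagged by combining a luma histogram test (color layout) with a simple edge count (edge distribution) while ignoring a region known to hold persistent visible content. The thresholds, the overlay mask, and the horizontal-gradient edge test below are illustrative assumptions, not details taken from the filing.

```python
# Hypothetical functional solid-color (black) frame detector.
# frame: 2-D list of luma values (0-255); overlay_mask: same shape, True where
# persistent on-screen content (e.g. a channel bug) is allowed to remain visible.

def is_functional_black_frame(frame, overlay_mask,
                              dark_threshold=24,      # luma below this counts as "black"
                              dark_ratio=0.97,        # fraction of non-overlay pixels that must be dark
                              edge_threshold=32,      # luma step that counts as an edge
                              max_edge_ratio=0.01):   # allowed fraction of edge pixels outside overlay
    rows, cols = len(frame), len(frame[0])
    considered = dark = edges = 0
    for r in range(rows):
        for c in range(cols):
            if overlay_mask[r][c]:
                continue                      # skip the region holding persistent visible content
            considered += 1
            if frame[r][c] < dark_threshold:
                dark += 1
            # crude horizontal-gradient edge test (edge-distribution proxy)
            if c + 1 < cols and not overlay_mask[r][c + 1]:
                if abs(frame[r][c] - frame[r][c + 1]) > edge_threshold:
                    edges += 1
    if considered == 0:
        return False
    return dark / considered >= dark_ratio and edges / considered <= max_edge_ratio
```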
Methods, systems, and computer readable media can be operable to facilitate the detection of a closed caption standard and corresponding extraction of closed caption data according to the detected closed caption standard. A set-top box (STB) may receive content from multiple different networks and/or service providers, and closed caption data may be formatted differently in content received from the various networks and/or service providers. The STB may identify a relevant standard with which to process closed caption data in a received content stream based upon an identification of the network and/or provider from which the content stream is received, and the STB may extract and render closed caption data using a process associated with the identified network and/or provider.
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/61 - Network physical structure; Signal processing
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
A system is provided for improving video quality and compression efficiency during encoding by detecting video segments having film grain approaching the "Red Lady" problem. The system detects when film grain approaches the level of the "Red Lady" problem by measuring frame-by-frame temporal differences (ME scores). From the ME scores, two key indicators are identified: (1) the average temporal difference indicates an intermediate motion level that is higher than in frames of non-noisy video; and (2) the fluctuation of the temporal differences between frames in a group is very small. When these indicators identify high-film-grain video, a signal is provided to an encoder, which allocates fewer bits to I frames and more bits to P and B frames than it does for video without comparable film grain.
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
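For illustration only, the two indicators could be evaluated over a group of per-frame ME scores roughly as sketched below; the motion-band limits, fluctuation cap, and bit-shift fraction are assumed values rather than figures from the filing.

```python
from statistics import mean, pstdev

# Hypothetical film-grain indicator over a group of per-frame temporal-difference
# (ME) scores. Flags the group when the average difference sits in an elevated
# "intermediate motion" band and the frame-to-frame fluctuation is very small.

def is_high_grain_group(me_scores,
                        low_motion=20.0, high_motion=60.0,  # illustrative band limits
                        max_fluctuation=2.5):               # illustrative std-dev cap
    avg = mean(me_scores)
    fluctuation = pstdev(me_scores)
    return low_motion < avg < high_motion and fluctuation < max_fluctuation

# A rate-control hook might then shift bits from I frames toward P and B frames:
def adjust_bit_budget(i_bits, pb_bits, high_grain, shift=0.15):
    if not high_grain:
        return i_bits, pb_bits
    moved = i_bits * shift
    return i_bits - moved, pb_bits + moved
```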
50.
USING MOTION COMPENSATED TEMPORAL FILTER (MCTF) STATISTICS FOR SCENE CHANGE DETECTION WHEN A FADE, DISSOLVE OR CUT OCCURS
A method is provided for better detecting a scene change so that a prediction can be provided to an encoder, enabling more efficient encoding. The method uses a Motion Compensated Temporal Filter (MCTF) that provides motion estimation and is located prior to an encoder. The MCTF provides a Motion Compensated Residual (MCR) used to detect the scene change transition. When a scene is relatively stable, the MCR score is also relatively stable. However, when a scene transition is in process, the MCR score behavior changes. Algorithmically, the MCR score is used by comparing its sliding mean to its sliding median. This comparison highlights the transition points. In the case of a scene cut, the MCR score exhibits a distinct spike. In the case of a fade or dissolve, the MCR score exhibits a transitional period of degradation followed by recovery. By implementing the above detection using the MCR, the location of the I-pictures in the downstream encoding process can be accurately determined for the encoder.
H04N 19/87 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/142 - Detection of scene cut or scene change
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/615 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding using motion compensated temporal filtering [MCTF]
H04N 19/146 - Data rate or code amount at the encoder output
H04N 19/194 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive involving only two passes
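A minimal sketch of the sliding-mean-versus-sliding-median comparison described above follows; the window length, ratio thresholds, and fade run length are illustrative assumptions rather than values from the filing.

```python
from statistics import mean, median

# Hypothetical scene-transition detector over a sequence of MCR scores.
# A cut shows up as a brief, large divergence between sliding mean and sliding
# median (the mean chases the spike, the median does not); a fade or dissolve
# shows up as a sustained, moderate divergence.

def detect_transitions(mcr_scores, window=15, cut_ratio=2.0, fade_ratio=1.3, fade_len=5):
    events = []
    run = 0
    for i in range(window, len(mcr_scores)):
        recent = mcr_scores[i - window + 1: i + 1]   # sliding window ending at frame i
        m, med = mean(recent), median(recent)
        if med <= 0:
            continue
        ratio = m / med
        if ratio > cut_ratio:
            events.append((i, "cut"))
            run = 0
        elif ratio > fade_ratio:
            run += 1
            if run == fade_len:
                events.append((i, "fade_or_dissolve"))
        else:
            run = 0
    return events   # frame indices where the downstream encoder could place I-pictures
```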
51.
A METHOD FOR EFFICIENT PROCESSING OF BTP ENABLED MPEG4 STREAM
A system is provided for providing a trickplay operation using a digital video recorder (DVR) when the video includes Broadcom Transport Packets (BTPs) designed for MPEG-2, but in an MPEG-4 video stream. In a first implementation, to enable trickplay to function properly with MPEG-4 video, the BTP descriptors included with each group of data frames are disabled so that a single descriptor provided without BTPs, as would otherwise be provided in MPEG-4, is all that remains. In a second implementation, the five descriptors for MPEG-4 are combined into a single descriptor. In a third implementation, the pace of decoding of the MPEG-4 descriptors is increased so that the speed of decoding all five descriptors is comparable to the pace of decoding a single MPEG-2 descriptor.
Methods and systems are described for adaptively transmitting streaming data to a client. In one embodiment, the method comprises receiving, in a server, a request for a data asset from the client, transcoding at least a segment of the data asset according to initial transcoding parameters, transmitting a first fragment of the transcoded segment of the data asset from the server to the client over a communication channel, generating an estimate of a bandwidth of the communications channel at least in part from information acknowledging reception of at least the first fragment of the transcoded segment of the data asset by the client, generating adaptive transcoding parameters at least in part from an estimate of a bandwidth of the communications channel, the estimate generated at the server, transcoding a further segment of the data asset according to the adaptive transcoding parameters, and transmitting the further segment of the data asset.
H04L 1/00 - Arrangements for detecting or preventing errors in the information received
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
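One plausible shape for the server-side estimation described above is an exponentially weighted throughput estimate updated on each fragment acknowledgement, which then drives the next transcoding bitrate; the smoothing and safety factors below are assumptions for illustration.

```python
# Hypothetical server-side bandwidth estimator: each time the client acknowledges
# a fragment, the throughput sample (bits / elapsed time) is folded into an
# exponentially weighted estimate, which then sets the next transcoding bitrate.

class AdaptiveTranscodeController:
    def __init__(self, initial_bitrate_bps=2_000_000, smoothing=0.3, safety=0.8):
        self.bitrate_bps = initial_bitrate_bps
        self.estimate_bps = None
        self.smoothing = smoothing   # weight given to the newest sample
        self.safety = safety         # keep the target below the raw estimate

    def on_fragment_ack(self, fragment_bytes, send_time, ack_time):
        elapsed = max(ack_time - send_time, 1e-3)
        sample_bps = fragment_bytes * 8 / elapsed
        if self.estimate_bps is None:
            self.estimate_bps = sample_bps
        else:
            self.estimate_bps = (self.smoothing * sample_bps
                                 + (1 - self.smoothing) * self.estimate_bps)
        self.bitrate_bps = int(self.estimate_bps * self.safety)
        return self.bitrate_bps   # the further segment is transcoded at this rate
```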
53.
Carriage systems encoding or decoding JPEG 2000 video
A system configured to decode video data in a packetized elementary stream (PES) including frames of image data. The system includes a processor configured to receive a transport stream including control information associated with the image data including video metadata parameters associated with application specific functions applicable to the image data. The processor is also configured to receive the PES including the frames of image data in video access units. The processor is configured to retrieve and decode the retrieved video access units using the control information to form a signal including the frames of image data. The system also includes a storage device configured to store the frames of image data and the control information.
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
H04N 7/52 - Systems for transmission of a pulse code modulated with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 19/467 - Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
54.
SYSTEMS AND METHODS FOR INTERLEAVING VIDEO STREAMS ON A CLIENT DEVICE
A method of displaying video embedded in a user interface is performed at an electronic device such as a server system or client device. The method includes obtaining user-interface frames having a first placeholder for a first video window and obtaining source video frames having a first video stream in the first video window. The source video frames and the user-interface frames are interleaved to form an output video stream, which is provided for decoding and display.
A method of displaying video embedded in a user interface is performed at an electronic device such as a server system or client device. The method includes obtaining user-interface frames having a first placeholder for a first video window and obtaining source video frames having a first video stream in the first video window. The source video frames and the user-interface frames are interleaved to form an output video stream, which is provided for decoding and display.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/2389 - Multiplex stream processing, e.g. multiplex stream encrypting
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/462 - Content or additional data management e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabi
56.
MULTIPLE STREAM VIDEO COMPRESSION IN MULTIPLE BITRATE VIDEO ENCODING
Methods, systems, and computer readable media can be operable to reduce the number of video streams or increase the quality of delivered video with the same number of video streams in a multiple bitrate video encoding. Multiple video resolutions and/or frame rates can be combined into a single stream. In embodiments, optimal segments from a plurality of input streams can be selected for inclusion in an output stream based upon a range of acceptable quantization parameter values for the output stream and a quality characteristic associated with the optimal input stream.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/2365 - Multiplexing of several video streams
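The per-segment selection might, for example, look like the sketch below, where each candidate segment carries an average quantization parameter and a quality score; the field names and the fallback behaviour are illustrative assumptions, not details from the filing.

```python
# Hypothetical per-segment selection across several pre-encoded input streams.
# The selector keeps the highest-quality candidate whose average QP stays inside
# the acceptable range for the single output stream.

def select_output_segments(candidates_per_segment, qp_min=18, qp_max=34):
    """candidates_per_segment: list (one entry per segment) of lists of dicts like
    {"stream": "720p30", "avg_qp": 27.5, "quality": 0.91}."""
    output = []
    for candidates in candidates_per_segment:
        allowed = [c for c in candidates if qp_min <= c["avg_qp"] <= qp_max]
        pool = allowed if allowed else candidates   # fall back if nothing fits the QP range
        output.append(max(pool, key=lambda c: c["quality"]))
    return output
```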
57.
DETECTION OF FAILURES IN ADVERTISEMENT REPLACEMENT
Methods of monitoring segment replacement within a multimedia stream are provided. A multimedia stream having a replacement segment spliced therein is evaluated by extracting at least one of video, text, and audio features from the multimedia stream adjacent a beginning or ending of the replacement segment, and the extracted features are analyzed to detect if a residual of a segment replaced by the replacement segment exists within the multimedia stream. Methods of ad replacement and a system for performing the above methods are also disclosed.
G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
G11B 27/36 - Monitoring, i.e. supervising the progress of recording or reproducing
Modern audio and video content typically provides multiple programming options, such as multiple alternate video versions, multiple language options, and possibly also closed captioning or subtitles in multiple languages. Gateway conditioned media streaming provides systems and methods for conditioning multimedia content according to the preferences of a recipient client device, such that the device receives the preferred video, audio, and/or closed captioning automatically and regardless of the application used to play the content. When a gateway server receives a request for content, the gateway server identifies the requesting client device from recorded information, and uses the recorded preferences to modify the content stream according to those preferences. The modified content stream is then sent to the requesting client device.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
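A minimal sketch of the gateway-side conditioning step, assuming a simple per-device preference table and a flat track model (both hypothetical), could look like this:

```python
# Hypothetical gateway conditioning: look up the requesting device's recorded
# preferences and keep only the matching video, audio, and caption tracks before
# the stream is forwarded. Field names and the track model are illustrative.

DEVICE_PREFERENCES = {
    "living-room-stb": {"video": "main", "audio_lang": "es", "captions_lang": "es"},
    "tablet-01":       {"video": "main", "audio_lang": "en", "captions_lang": None},
}

def condition_stream(device_id, tracks):
    """tracks: list of dicts like {"kind": "audio", "lang": "en", "variant": "main"}."""
    prefs = DEVICE_PREFERENCES.get(device_id)
    if prefs is None:
        return tracks                                  # unknown device: pass through unchanged
    kept = []
    for t in tracks:
        if t["kind"] == "video" and t.get("variant") == prefs["video"]:
            kept.append(t)
        elif t["kind"] == "audio" and t.get("lang") == prefs["audio_lang"]:
            kept.append(t)
        elif t["kind"] == "captions" and t.get("lang") == prefs["captions_lang"]:
            kept.append(t)
    return kept
```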
59.
PROCESSING SEGMENTS OF CLOSED-CAPTION TEXT USING EXTERNAL SOURCES
Particular embodiments provide supplemental content that may be related to video content that a user is watching. A segment of closed-caption text from closed-captions for the video content is determined. A first set of information, such as terms, may be extracted from the segment of closed-caption text. Particular embodiments use an external source that is determined from a set of external sources. To determine the supplemental content, particular embodiments may extract a second set of information from the external source. Because the external source may be more robust and include more text than the segment of closed-caption text, the second set of information may include terms that better represent the segment of closed-caption text. Particular embodiments thus use the second set of information to determine supplemental content for the video content, and can provide the supplemental content to a user watching the video content.
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/462 - Content or additional data management e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabi
H04N 21/4722 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for requesting additional data associated with the content
Methods, systems, and computer readable media can be operable to facilitate the detection and management of a filler region during trickplay of a content stream. A filler region at the edge of a targeted advertisement segment may be detected based on the presence of one or more consecutive I-frames. When a filler region is detected during a trickplay operation, a corrective method may be applied during trickplay of the filler region. The corrective method may include a modification of a frame skip count (FSC) and/or frame repeat count (FRC) during trickplay of the filler region, or may include skipping over the processing and output of filler frames during trickplay of the filler region. An index file may be modified to include an indication of a designation of one or more frames as either a filler frame or non-filler frame.
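As a rough sketch of the detection and correction described above, filler regions could be located as runs of consecutive I-frames, with the frame skip count (FSC) raised and frame repeat count (FRC) lowered inside them; the run length and count values are illustrative assumptions.

```python
# Hypothetical filler-region handling during trick play.

def detect_filler_regions(frame_types, min_run=3):
    """frame_types: list like ["I", "P", "B", "I", "I", "I", ...]."""
    regions, start = [], None
    for i, ftype in enumerate(frame_types + ["END"]):   # sentinel closes a trailing run
        if ftype == "I":
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                regions.append((start, i - 1))
            start = None
    return regions

def trickplay_counts(index, filler_regions, base_fsc=2, base_frc=2):
    """Return (FSC, FRC) to use at a given frame index during trick play."""
    in_filler = any(lo <= index <= hi for lo, hi in filler_regions)
    if in_filler:
        return base_fsc * 4, 1     # skip more and repeat less to hurry past the filler
    return base_fsc, base_frc
```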
Improving the quality of service in multiple service tiers, particularly during periods of Internet congestion, may be accomplished by scaling back the bitrate from a maximum rate to a minimum reserved rate in a non-linear fashion. The use of an intermediate value for scaling back enables a quick drop off of high maximum rates. In embodiments, the shape of a non-linear curve may be configured and controlled to make the drop-off steeper for heavy users.
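One way to picture such a non-linear scale-back is a power-law curve between the maximum rate and the minimum reserved rate, optionally clamped by an intermediate value once congestion begins; the exponent and clamp below are illustrative assumptions, not values from the filing.

```python
# Hypothetical non-linear scale-back of a subscriber's rate during congestion.
# A shape exponent > 1 makes the drop-off steeper near the top, so very high
# maximum rates fall off quickly; an intermediate value caps the rate during the
# first stage of congestion.

def scaled_rate(max_rate, min_reserved_rate, congestion, intermediate=None, shape=2.0):
    """congestion in [0, 1]: 0 = no congestion, 1 = fully congested."""
    congestion = min(max(congestion, 0.0), 1.0)
    rate = min_reserved_rate + (max_rate - min_reserved_rate) * (1.0 - congestion) ** shape
    if intermediate is not None and congestion > 0:
        rate = min(rate, intermediate)     # early clamp toward the intermediate value
    return max(rate, min_reserved_rate)

# Example: scaled_rate(100e6, 5e6, 0.3, intermediate=60e6) drops a 100 Mbps tier
# well below the linear value for the same congestion level.
```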
Methods, systems, and computer readable media may be operable to facilitate the automatic configuration of a network extender with network parameters. An access point may identify a network extender and may determine whether the identified network extender is configured for an automatic configuration of network parameters based upon device description information retrieved during the identification of the network extender. The access point may output a configuration message to the identified network extender, the configuration message including one or more parameters associated with a network provided by the access point, and the network extender may apply the one or more parameters. The access point may periodically or conditionally provide the network extender with updates to the network parameters.
A method of transmitting media content is provided that provides for a significantly reduced chunk size. The method includes receiving one or more adaptive transport streams into a memory buffer at an HTTP streamer from a media preparation unit. The received transport streams include a plurality of switchable segments each comprising one or more delivery chunks, the switchable segments being marked with segment boundary points and the delivery chunks being marked with chunk boundary points. One or more of the delivery chunks are then transmitted from a requested switchable segment to a requesting client device until a terminating segment boundary point is reached, wherein each delivery chunk is independently decodable, and a client device can begin decoding and rendering received delivery chunks even when the HTTP streamer has not yet completely received the entire requested switchable segment from the media preparation unit.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
Power consumption levels of a network device can be adjusted based upon traffic flow at the device. A network device can recognize a situation where the traffic flow associated with a CPE device reaches a level that can be supported by a smaller channel set, and when this situation arises, the CPE device can request and receive an updated, smaller channel set. In response to receiving the smaller channel set, the CPE device can operate using fewer resources, thereby reducing power consumption at the CPE device. When traffic level at the CPE device warrants, the CPE device can request and receive a new or updated, larger channel set.
H04L 12/923 - Dynamic resource allocation, e.g. in-call renegotiation requested by the user or upon changing network conditions requested by the network initiated by the network
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
65.
UPSTREAM INTERFERENCE ELIMINATING TRANSMISSION OF DIGITAL BASEBAND SIGNAL IN AN OPTICAL NETWORK
Particular embodiments provide a method for delivering data in the upstream direction without the need for upstream radio frequency (RF) modulation. For example, in some embodiments, an optical network may reach to a gateway associated with a user device. The gateway may receive digital baseband data from the user device in the upstream direction. The gateway can then send the digital baseband data through the optical network without modulating the digital baseband signal via radio frequency. At the headend, because no modulation is performed in the upstream direction, there is no need for de-modulation in the headend. In one embodiment, a scheduler-based approach is used to avoid instances of optical beat interference in the upstream direction, so that only one upstream device that might interfere with other devices is able to send data at a time.
Methods, systems, and computer readable media can be operable to facilitate packet bridging based upon a host device address. An access point may identify a source or destination address associated with a received packet, wherein the address identifies an associated host device. When the destination address of a downstream data packet matches an address associated with the access point, the access point may route the packet to a destination within a local area network (LAN). When the source address of an upstream data packet identifies a host device for which communications are to be bridged, the access point may bridge the data packet to a wide area network (WAN). An access point may identify a multicast or broadcast downstream data packet, and the access point may flood the packet to both a routing routine and a bridging routine. The bridging determination may be made by a dual-layer WAN or LAN interface.
Methods, systems, and computer readable media can be operable to facilitate an analysis and control of video quality of experience (VQoE) of services delivered to one or more client devices. A content version segment may be selected for delivery to a client device based upon an estimation of the video quality experienced by the client device and the bandwidth available for delivering content to the client device. Video quality estimation may be based upon information associated with the encoding of a media stream coupled with one or more parameters of the client device receiving the media stream. Video quality estimation for one or more client devices may be aggregated and displayed to a service operator and/or may be used to inform content selection decisions in an adaptive bit-rate delivery method.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
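A minimal sketch of combining a per-version quality estimate with available bandwidth when choosing the next segment version is shown below; the version fields, the display-height heuristic, and the fallback rule are assumptions for illustration only.

```python
# Hypothetical per-request version selection driven by estimated video quality
# and the bandwidth currently available to the client. Among versions the client
# can sustain, the one with the best estimated quality on that device is chosen.

def select_version(versions, available_bps, device_height):
    """versions: list of dicts like
    {"name": "1080p", "bitrate_bps": 6_000_000, "height": 1080, "base_quality": 0.95}.
    device_height: vertical resolution of the client's display."""
    sustainable = [v for v in versions if v["bitrate_bps"] <= available_bps]
    if not sustainable:                       # always deliver something
        sustainable = [min(versions, key=lambda v: v["bitrate_bps"])]

    def estimated_quality(v):
        # resolution above the display adds no perceived quality on that client
        useful = min(v["height"], device_height) / device_height
        return v["base_quality"] * useful

    return max(sustainable, key=estimated_quality)
```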
Methods, systems, and computer readable media can be operable to facilitate the use of a station as a proxy for scanning one or more wireless channels. Upon a determination that a currently utilized wireless channel has become impaired, an access point may identify one or more idle wireless stations and may request that the one or more idle wireless stations perform a scan of one or more other wireless channels. The identified wireless station(s) may perform a scan of one or more other wireless channels and may provide an indication of the current level of congestion on each respective wireless channel to the access point. Based on the indication of the current congestion levels of each wireless channel, the access point may determine whether a more advantageous channel is available. If a more advantageous channel is available, the access point may tune to the more advantageous channel.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
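The access point's decision could, for instance, aggregate the congestion reports returned by the idle stations and retune only when a clearly better channel exists; the report format and switching margin below are assumed for illustration.

```python
# Hypothetical access-point decision logic: idle stations each report a congestion
# score per scanned channel; the AP averages the reports and switches only when a
# meaningfully less-congested channel is available.

def choose_channel(current_channel, current_congestion, station_reports, margin=0.15):
    """station_reports: list of dicts mapping channel -> congestion in [0, 1]."""
    totals, counts = {}, {}
    for report in station_reports:
        for channel, congestion in report.items():
            totals[channel] = totals.get(channel, 0.0) + congestion
            counts[channel] = counts.get(channel, 0) + 1
    if not totals:
        return current_channel
    averages = {ch: totals[ch] / counts[ch] for ch in totals}
    best = min(averages, key=averages.get)
    # switch only if the best candidate is meaningfully less congested than the current channel
    if averages[best] + margin < current_congestion:
        return best
    return current_channel
```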
Methods, systems, and computer readable media can be operable to encode an input video stream into one or more output streams by using information obtained from a first transcoding of the input video stream. During a first encoding of an input video stream, pre-processing, motion estimation, mode decision, and other information can be collected or buffered and can be re-used to encode the input video stream to multiple output video streams at different bitrates, resolutions, and/or frame rates. Motion estimation, macroblock mode decision, and pre-processing data can be manipulated and re-used during the encoding of an input video stream at various resolutions. Data can be re-computed at a new resolution or bitrate to improve visual quality.
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 19/192 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
71.
METHOD AND APPARATUS FOR LOCALIZED MANAGEMENT OF FEATURE LICENSES
Methods and systems are provided for managing feature licenses for pools or groups of devices. In an embodiment, a method of licensing features for a device in a license pool or group includes receiving, at the device, a license capacity request; determining, based on the reply to the license capacity request, if the device in the license pool or group is compliant with the feature license configuration; if the device is noncompliant with the feature license configuration: transmitting a generate license request message having a desired feature license configuration; receiving a feature license request from the device; and updating the noncompliant device with a compliant feature license.
Methods, systems, and computer readable media may be operable to facilitate the management of connections between one or more client devices and an access point over one or more service sets. An access point may maintain a list of client devices that have successfully associated with a private service set broadcast from the access point, and when a client device from the list attempts to connect to a public service set broadcast from the access point, the access point may deny the client device's attempt to connect to the public service set. Attempts by the client device to join the public service set may be denied for a predetermined number of attempts or a predetermined period of time. Denying an attempt to connect to a public service set may provide a client device with more opportunities to connect to a private service set broadcast from a corresponding access point.
A method is provided for distributing video content by a network operator. The method includes receiving media streams including programs having a prescribed duration; assigning a traffic index for each program contained within the media streams, each traffic index reflecting the volume of traffic expected to be associated with the program by client devices; selecting a predetermined number of programs to provide to client devices based at least in part on the traffic index for each program; encoding said selected programs as individual adaptive bitrate streams; and streaming said encoded individual adaptive bitrate streams to client devices over the network as a managed bundle, wherein the adaptive bitrate streams in the bundle are multicast simultaneously to the client devices.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
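For illustration, the bundle selection might simply rank programs by their traffic index and keep the top N for ABR encoding and simultaneous multicast; the field names and rendition list are hypothetical.

```python
# Hypothetical bundle selection: each program carries a traffic index reflecting
# its expected demand, and the top-N programs are encoded as ABR streams and
# multicast together as one managed bundle.

def build_managed_bundle(programs, bundle_size):
    """programs: list of dicts like {"id": "news-9pm", "traffic_index": 0.82}."""
    ranked = sorted(programs, key=lambda p: p["traffic_index"], reverse=True)
    selected = ranked[:bundle_size]
    return {
        "programs": [p["id"] for p in selected],
        # each selected program would be encoded as adaptive-bitrate renditions
        "renditions_per_program": ["1080p", "720p", "480p"],
        "delivery": "simultaneous multicast",
    }
```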
Disclosed are methods and systems for a transcoding device to provide sets of video streams or profiles having different encoding parameters for transmitting the sets of video streams to a media device. In an embodiment, a method for transmitting video streams for a media program from a transcoding device to a media device includes receiving, by the transcoding device, video data; generating, by the transcoding device, a plurality of profiles from the video data, each profile representing a video stream; performing analysis on the generated plurality of profiles to identify similar profiles; reducing the number of profiles to provide a distinct set of profiles; and transmitting the distinct set of profiles from the transcoding device to the media device.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H01F 41/04 - Apparatus or processes specially adapted for manufacturing or assembling magnets, inductances or transformers; Apparatus or processes specially adapted for manufacturing materials characterised by their magnetic properties for manufacturing cores, coils or magnets for manufacturing coils
H03H 7/46 - Networks for connecting several sources or loads, working on different frequencies or frequency bands, to a common load or source
H04B 1/00 - TRANSMISSION - Details of transmission systems not characterised by the medium used for transmission
Systems, devices, and methods for hybrid anti-clipping in optical links in hybrid fiber-coaxial (HFC) networks are disclosed. A hybrid anti-clipping circuit can be included in both the uplink and downlink paths of the HFC network to avoid driving the laser in the optical link above a clipping threshold. The anti-clipping circuit can compare the average, or RMS, input power level and the power envelope of a RF input signal to a clipping threshold associated with the particular laser module being used. If the average power is above the clipping threshold, then the input signal can be attenuated proportionally to avoid clipping. If peaks in the power envelope are above the clipping threshold, then the bias current of the laser module can be adjusted to avoid clipping. Accordingly, the modes of anti-clipping circuit operation include applying attenuation to the input signal and/or adjusting the laser module bias current.
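The two operating modes could be driven by control logic along these lines, where average power triggers attenuation and envelope peaks trigger a bias adjustment; the units and step sizes are illustrative assumptions, not values from the filing.

```python
# Hypothetical control logic for the hybrid anti-clipping circuit: attenuate when
# the average (RMS) power exceeds the laser's clipping threshold, and raise the
# bias current when only envelope peaks exceed it.

def anti_clipping_action(rms_power_dbm, envelope_peak_dbm, clip_threshold_dbm):
    if rms_power_dbm > clip_threshold_dbm:
        # attenuate proportionally so the average power falls back under the threshold
        return {"mode": "attenuate", "attenuation_db": rms_power_dbm - clip_threshold_dbm}
    if envelope_peak_dbm > clip_threshold_dbm:
        # headroom for peaks comes from nudging the laser bias current upward
        return {"mode": "adjust_bias",
                "bias_step_ma": 0.5 * (envelope_peak_dbm - clip_threshold_dbm)}
    return {"mode": "none"}
```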
A method and apparatus for encoding three-dimensional ("3D") video includes receiving a left-eye interlaced frame and a corresponding right-eye interlaced frame of a 3D video. An amount of interlacing exhibited by at least one of the left-eye interlaced frame and the corresponding right-eye interlaced frame is determined. A frame packing format to be used for packing the left-eye interlaced frame and the corresponding right-eye interlaced frame into a 3D frame is selected based on the amount of interlacing that is determined. The left-eye interlaced frame and the corresponding right-eye interlaced frame are formatted into a 3D frame using the selected frame packing format. Illustrative frame packing formats that may be employed include a side-by-side format and a top-and-bottom format.
A video processing system enhances quality of an overlay image, such as a logo, text, game scores, or other areas forming a region of interest (ROI) in a video stream. The system separately enhances the video quality of the ROI, particularly when screen size is reduced. The data enhancement can be accomplished at decoding, with metadata provided alongside the video data so that the ROI can be enhanced separately from the video. To improve legibility, the ROI enhancer can increase contrast, brightness, hue, saturation, and bit density of the ROI. The ROI enhancer can operate down to a pixel-by-pixel level. The ROI enhancer may use stored reference picture templates to enhance a current ROI based on a comparison. When the ROI includes text, a minimum reduction size for the ROI relative to the remaining video can be identified so that the ROI is not reduced below human perceptibility.
G06K 9/34 - Segmentation of touching or overlapping patterns in the image field
G06K 9/32 - Aligning or centering of the image pick-up or image-field
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
80.
A METHOD FOR USING A DECODER OR LOOK-AHEAD ENCODER TO CONTROL AN ADAPTIVE PRE-FILTER
An adaptive video pre-filter system is provided that uses a blend of both spatially neighboring pixels and motion compensated neighboring pixels to produce a filtered output that has reduced pixel noise to drive a primary encoder. In one embodiment, the pre-filter is used with a look-ahead encoder that provides a complexity input control to a pre-filter, enabling the pre-filter to provide a filtered video signal to a primary encoder. A complexity model is provided between the look-ahead encoder and the pre-filter to enable the filtering strength to be increased or decreased depending upon the complexity of the input signal. In a further embodiment, the look-ahead encoder is replaced with a decoder to provide complexity values. In some embodiments, a delay buffer is provided to buffer the complexity values between the complexity model and the pre-filter, and the video frames delivered to the pre-filter are buffered with the same delay to smooth the filtering in the pre-filter.
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
A method of manufacture can produce an apparatus for reflecting light from a light source to an icon, wherein the icon is integrated with the apparatus. An extrusion in the shape of the icon can be molded onto one end of the apparatus. In embodiments, the extrusion can extend into a panel cut-out, wherein the panel cut-out is the same shape as the extrusion.
G09F 9/305 - Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements being the ends of optical fibres
The application relates to a station, such as a set-top box, which is operable to communicate wirelessly with an access point. Before such a communication is possible, a wireless connection between the station and the access point has to be established, e.g. through a protected setup sequence, comprising a scan of available wireless channels, the exchange of key messages between the station and the access point, and the installation of a key at the station. It is common that such a station does not have a dedicated physical button for initiating this setup. Users are therefore forced to connect the station to a display device, such as a television or a computer. The present application overcomes this inconvenience by providing a physical button (210) in the station or a remote control (260) by which the setup can be initiated. Furthermore, each stage of the setup procedure is visually indicated at the station by either a text output or a blinking light (250), whereby the blinking frequency is specific for the stage of the setup procedure.
Methods and systems are described for adaptively transmitting streaming data to a client. In one embodiment, the method comprises receiving, in a server, a request for a data asset from the client, transcoding at least a segment of the data asset according to initial transcoding parameters, transmitting a first fragment of the transcoded segment of the data asset from the server to the client over a communication channel, generating an estimate of a bandwidth of the communications channel at least in part from information acknowledging reception of at least the first fragment of the transcoded segment of the data asset by the client, generating adaptive transcoding parameters at least in part from an estimate of a bandwidth of the communications channel, the estimate generated at the server, transcoding a further segment of the data asset according to the adaptive transcoding parameters, and transmitting the further segment of the data asset.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
A method of generating a trick-play stream is provided that includes providing a master trick-play stream having a plurality of groups of pictures, wherein each group of pictures comprises a leading intra-coded frame and a plurality of inter-coded frames, and frames within each group of pictures are encoded with a temporally scalable hierarchical encoding relationship, deriving a trick-play stream from the master trick-play stream for a particular temporal resolution by skipping a consistent pattern of frames from each group of pictures that are not needed to decode other frames at the particular temporal resolution according to the temporally scalable hierarchical encoding relationship, and providing the trick-play stream to a client device, wherein the trick-play stream is packaged to appear to the client device as a standards-compliant adaptive bitrate stream.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/61 - Network physical structure; Signal processing
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 19/31 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
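A rough sketch of deriving a lower-rate trick-play stream from a temporally hierarchical master GOP follows; the frame/layer representation and the dyadic layer pattern in the comment are assumptions for illustration.

```python
# Hypothetical derivation of a reduced-rate trick-play GOP from a master stream
# whose GOPs use a temporal hierarchy. Frames are kept only up to the temporal
# layer needed for the requested resolution, so the skip pattern is the same in
# every GOP and no kept frame depends on a dropped one.

def derive_trickplay_gop(gop_frames, target_layer):
    """gop_frames: list of dicts like {"poc": 0, "type": "I", "temporal_layer": 0}.
    target_layer: highest temporal layer to retain (0 keeps only the leading
    intra-coded frame and any other layer-0 frames)."""
    return [f for f in gop_frames if f["temporal_layer"] <= target_layer]

# Example: an 8-frame GOP with layers [0, 3, 2, 3, 1, 3, 2, 3] and target_layer=1
# keeps the frames at positions 0 and 4, i.e. every GOP contributes the same
# consistent pattern of retained frames.
```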
85.
METHOD AND SYSTEM FOR GENERATING REFERENCES TO RELATED VIDEO
A method of generating references to related videos is provided. Closed caption text of a primary video is analyzed to identify at least one keyword contained within the closed caption text and a separate pre-determined listing of keywords. A keyword identified within the closed caption text and a context thereof is compared to keyword-context pairings provided within the listing. Information of a reference video related to the primary video is obtained by taking actions required by a rule in the listing associated with a matched keyword-context pairing when the keyword identified from the primary video and the context thereof is determined to match one of the keyword-context pairings in the listing. An annotation of the reference video relative to the primary video is created. A video processing electronic device and at least one non-transitory computer readable storage medium having computer program instructions stored thereon for performing the method are provided.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
A method of identifying a representative image of a video stream is provided. Similarity between video frames of a primary video stream relative to video frames of a different secondary video stream having similar content is evaluated and a video frame from the primary video stream having a greatest extent of similarity relative to a video frame of the secondary video stream is identified. The identified video frame is selected as an image representative of the primary video stream and may be used as an informative thumbnail image for the primary video stream. A video processing electronic device and at least one non-transitory computer readable storage medium having computer program instructions stored thereon for performing the method are also provided.
A method of detecting frames in a video that demarcate a pre-determined type of video segment within the video is provided. The method includes identifying visually distinctive candidate marker frames within the video, grouping the candidate marker frames into a plurality of groups based on visual similarity, computing a collective score for each of the groups based on temporal proximity of each of the candidate marker frames within the group to related events occurring within the video, and selecting at least one of the groups based on the collective proximity scores as marker frames that demarcate the pre-determined type of video segment. A video processing electronic device and at least one non-transitory computer readable storage medium having computer program instructions stored thereon for performing the method are also provided.
A method and system are provided for signing data such as code images. In one embodiment, the method comprises receiving, from a requestor, a request to sign the data according to a requested configuration selected from a first configuration, in which the data is for use with any of a set of devices, and a second configuration, in which the data is for use only with a subset of the set of devices; modifying the data according to the requested configuration; generating a data signature using the modified data; and transmitting the generated data signature to the requestor. Another embodiment is evidenced by a processor having a memory storing instructions for performing the foregoing operations.
A pivotable fan assembly includes a mounting frame, a panel, and a bracket. The panel can be coupled to the mounting frame at a first edge. The panel can pivot about the mounting frame between a first position and an angularly displaced second position. At least one fan assembly can be coupled to the bracket, which extends distally from the panel with a first pair of adjacent sides of the fan assembly bounded by the panel and the bracket and a second pair of adjacent sides of the fan assembly unbounded and exposed. When attached to a chassis cover, the panel can pivot to expose the fan assembly for tool-less replacement.
A method is provided for decoding an encoded video stream on a processor having a plurality of processing cores. The method includes receiving and examining a video stream to identify any macroscopic constructs present therein that support parallel processing. Decoding of the video stream is divided into a plurality of decoding functions. The plurality of decoding functions is scheduled for decoding the video stream in a dynamic manner based on availability of any macroscopic constructs that have been identified and then based on a number of bytes used to encode each block into which each picture of the video stream is partitioned. Each of the decoding functions is dispatched to the plurality of processing cores in accordance with the scheduling.
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
91.
ACCELERATION OF CONTEXT ADAPTIVE BINARY ARITHMETIC CODING (CABAC) IN VIDEO CODECS
A method is provided for determining a context-index when performing Context-based Adaptive Binary Arithmetic Coding (CABAC) for video compression or decompression. The method includes initializing to an initialized value each of a plurality of context-indexes of chosen syntax elements associated with a given block (e.g., a macroblock). The context-index of dependent neighboring blocks of the given block is evaluated. The dependent neighboring blocks are blocks that have a context-index that depends on coding of a current bin position. The context-index of the dependent neighboring blocks is updated if and only if their context-index changes from the initialized values.
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
H04N 19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
92.
REFERENCE LAYER OFFSET PARAMETERS FOR INTER-LAYER PREDICTION IN SCALABLE VIDEO CODING
A process for determining the selection of filters and input samples is provided for scalable video coding. The process provides for re-sampling using video data obtained from an encoder or decoder process of a base layer (BL) in a multi-layer system to improve quality in Scalable High Efficiency Video Coding (SHVC). In order to provide better alignment between layers, it is proposed that reference layer offset adjustment parameters be signaled.
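A simplified Python sketch of how signaled reference layer offsets could enter the inter-layer sample-position mapping is shown below; the variable names are illustrative and the arithmetic deliberately omits the fixed-point precision and phase handling used by the SHVC specification.

    # Illustrative mapping of an enhancement-layer sample to the reference layer,
    # with signalled left/top offsets shifting the effective reference region.
    def map_to_reference_layer(x_el, y_el, el_width, el_height,
                               rl_width, rl_height,
                               rl_left_offset=0, rl_top_offset=0):
        ref_w = rl_width - rl_left_offset
        ref_h = rl_height - rl_top_offset
        x_ref = rl_left_offset + x_el * ref_w / el_width
        y_ref = rl_top_offset + y_el * ref_h / el_height
        return x_ref, y_ref

    print(map_to_reference_layer(100, 50, 1920, 1080, 960, 540,
                                 rl_left_offset=8, rl_top_offset=4))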
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/187 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
93.
SIGNALING AND SELECTION FOR THE ENHANCEMENT OF LAYERS IN SCALABLE VIDEO
A method of signaling individual layers in a transport stream is provided. The method includes: determining a plurality of layers in a transport stream, wherein each layer includes a respective transport stream parameter setting; determining an additional layer for the plurality of layers in the transport stream, wherein the additional layer enhances one or more of the plurality of layers, including a base layer, and the respective transport stream parameter settings for the plurality of layers do not take the additional layer into account; and determining an additional transport stream parameter setting for the additional layer, the additional transport stream parameter setting specifying a relationship between the additional layer and at least a portion of the plurality of layers, wherein the additional transport stream parameter setting is used to decode the additional layer and the at least a portion of the plurality of layers.
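A hypothetical Python sketch of the signaling relationship follows (the LayerParams structure, PID values, and pids_to_decode helper are assumptions used only to illustrate how an additional layer's parameter setting can record its dependencies on existing layers):

    # Illustrative: an extra layer is signalled with its own parameter setting
    # that names the layers it depends on, so a decoder can pick the PIDs it needs.
    from dataclasses import dataclass, field

    @dataclass
    class LayerParams:
        pid: int                         # transport stream packet identifier
        depends_on: list = field(default_factory=list)

    layers = {"base": LayerParams(pid=0x100),
              "enh1": LayerParams(pid=0x101, depends_on=["base"])}

    # Additional layer signalled later, without rewriting existing settings.
    layers["enh2"] = LayerParams(pid=0x102, depends_on=["base", "enh1"])

    def pids_to_decode(target, layers):
        # Resolve the set of PIDs needed to decode 'target' and its dependencies.
        needed, stack = set(), [target]
        while stack:
            name = stack.pop()
            if name not in needed:
                needed.add(name)
                stack.extend(layers[name].depends_on)
        return sorted(layers[n].pid for n in needed)

    print(pids_to_decode("enh2", layers))   # -> [256, 257, 258]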
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/2362 - Generation or processing of SI [Service Information]
94.
INDIVIDUAL BUFFER MANAGEMENT IN TRANSPORT OF SCALABLE VIDEO
A method is provided to determine buffer parameter settings for a plurality of layers in a transport stream. Each layer includes a respective transport stream buffer parameter setting. Then, the method provides respective transport stream buffer parameter settings to individual transport stream buffers for respective layers in the plurality of layers. Then, the method buffers the respective layers in the individual transport stream buffers according to the respective transport stream buffer parameter settings. After buffering, the method combines the respective layers to form a combined bit stream.
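A hedged Python sketch of per-layer buffering is given below; the LayerBuffer class, the buffer sizes, and the 188-byte stand-in packets are assumptions chosen only to show each layer being buffered under its own parameter setting before the layers are combined.

    # Illustrative: each layer gets its own buffer sized by its own setting,
    # then the buffered layers are merged into a combined bit stream.
    from collections import deque

    class LayerBuffer:
        def __init__(self, max_bytes):
            self.max_bytes, self.used, self.q = max_bytes, 0, deque()
        def push(self, packet):
            if self.used + len(packet) > self.max_bytes:
                raise OverflowError("layer buffer overflow")
            self.q.append(packet)
            self.used += len(packet)
        def drain(self):
            while self.q:
                yield self.q.popleft()

    buffers = {"base": LayerBuffer(4096), "enh": LayerBuffer(1024)}
    buffers["base"].push(b"\x00" * 188)   # one TS packet of the base layer
    buffers["enh"].push(b"\x01" * 188)    # one TS packet of the enhancement layer

    combined = b"".join(pkt for name in ("base", "enh")
                        for pkt in buffers[name].drain())
    print(len(combined))                  # -> 376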
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
95.
METHOD AND APPARATUS FOR EMBEDDING SECRET INFORMATION IN DIGITAL CERTIFICATES
A method and system are provided for embedding cryptographically modified versions of a secret in digital certificates, for use in authenticating devices and in providing services subject to conditional access requirements.
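As a loose illustration of the general idea only (not the patented scheme; the dictionary standing in for a certificate extension and the SHA-256 transformation are assumptions), a one-way modification of a device secret can be carried in a certificate and later checked against a device that proves knowledge of the original secret:

    # Illustrative sketch: the secret itself never appears in the certificate,
    # only a cryptographically modified (hashed) form of it.
    import hashlib, os

    device_secret = os.urandom(32)                         # provisioned secret
    modified = hashlib.sha256(device_secret).hexdigest()   # modified form

    # Stand-in for a certificate carrying the value in a custom extension.
    certificate = {"subject": "device-1234",
                   "extensions": {"secret-digest": modified}}

    def device_authenticates(cert, presented_secret):
        digest = hashlib.sha256(presented_secret).hexdigest()
        return digest == cert["extensions"]["secret-digest"]

    print(device_authenticates(certificate, device_secret))   # True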
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
G06F 21/33 - User authentication using certificates
A method for processing a plurality of multilayer bit streams includes receiving a plurality of multilayer bit streams, each having a base layer and at least one enhancement layer. One or more of the enhancement layers are extracted, in whole or in part, from at least one of the multilayer bit streams so that the plurality of multilayer bit streams are collectively reduced in their total bandwidth. Each of the multilayer bit streams is rewritten as a single layer bit stream. The single layer bit streams are multiplexed to form a multiplexed single layer bit stream.
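The reduce-rewrite-multiplex flow can be pictured with the following hedged Python sketch; the stream layout, the keep_enh_layers parameter, and the round-robin multiplexer are stand-ins rather than the claimed processing.

    # Illustrative: drop enhancement layers to cut total bandwidth, flatten what
    # remains into single-layer streams, and multiplex the results.
    def reduce_and_rewrite(stream, keep_enh_layers=0):
        # 'stream' is a hypothetical dict {"base": [...], "enh": [[...], ...]}.
        kept = [stream["base"]] + stream["enh"][:keep_enh_layers]
        return [unit for layer in kept for unit in layer]

    def multiplex(single_layer_streams):
        # Simple round-robin multiplex of the rewritten streams.
        out, longest = [], max(len(s) for s in single_layer_streams)
        for i in range(longest):
            for s in single_layer_streams:
                if i < len(s):
                    out.append(s[i])
        return out

    streams = [{"base": ["b0", "b1"], "enh": [["e0", "e1"]]},
               {"base": ["B0", "B1"], "enh": [["E0", "E1"]]}]
    rewritten = [reduce_and_rewrite(s, keep_enh_layers=0) for s in streams]
    print(multiplex(rewritten))   # -> ['b0', 'B0', 'b1', 'B1']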
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
H04N 21/2365 - Multiplexing of several video streams
H04N 19/187 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
H04N 19/66 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience involving data partitioning, i.e. separation of data into packets or partitions according to importance
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
97.
PROVISIONING DRM CREDENTIALS ON A CLIENT DEVICE USING AN UPDATE SERVER
A method of provisioning DRM credentials on a client device includes receiving DRM credentials at an update server from a key generation system, the DRM credentials having been encrypted by the key generation system; receiving a DRM credential request from a client device, the DRM credential request comprising a digital signature, a device class certificate, and an authorization token; authenticating the DRM credential request by validating the digital signature and the device class certificate; extracting and validating the authorization token; and providing the DRM credentials to the client device.
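A hedged Python sketch of the server-side checks follows; a real deployment would validate an X.509 device class certificate and an asymmetric signature, whereas the HMAC, the trusted-class set, and the token list here are simplifying assumptions.

    # Illustrative update-server flow: verify signature, device class, and token,
    # then return the still-encrypted credentials received from the key system.
    import hmac, hashlib

    TRUSTED_DEVICE_CLASSES = {"stb-gen3"}
    VALID_TOKENS = {"token-abc"}
    ENCRYPTED_CREDENTIALS = b"<credentials encrypted by the key generation system>"

    def handle_credential_request(request, device_key):
        expected = hmac.new(device_key, request["body"], hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, request["signature"]):
            raise PermissionError("bad signature")
        if request["device_class"] not in TRUSTED_DEVICE_CLASSES:
            raise PermissionError("untrusted device class certificate")
        if request["authorization_token"] not in VALID_TOKENS:
            raise PermissionError("invalid authorization token")
        return ENCRYPTED_CREDENTIALS   # forwarded as received, still encrypted

    key = b"shared-device-key"
    body = b"give-me-drm-credentials"
    req = {"body": body,
           "signature": hmac.new(key, body, hashlib.sha256).hexdigest(),
           "device_class": "stb-gen3",
           "authorization_token": "token-abc"}
    print(handle_credential_request(req, key))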
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
H04N 21/6334 - Control signals issued by server directed to the network components or client directed to client for authorisation, e.g. by transmitting a key
Methods, systems, and computer readable media can be operable to facilitate the provisioning of a user interface with video frame tiles. Specific content sources or pieces of content may be identified according to various parameters, and media renderings of the associated content may be generated. The media renderings may be processed at a device receiving the content, and a user interface including one or more of the media renderings may be generated. The media renderings may be organized within the user interface as individual video frame tiles.
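A brief Python sketch of the tiling step is given below; the tile fields, rendering URLs, and grid layout are illustrative assumptions about how media renderings might be organized into video frame tiles within a user interface.

    # Illustrative: pair each identified content item with its media rendering
    # and lay the renderings out as rows of video frame tiles.
    def build_tile_grid(content_items, columns=3):
        tiles = [{"title": item["title"], "rendering": item["rendering_url"]}
                 for item in content_items]
        return [tiles[i:i + columns] for i in range(0, len(tiles), columns)]

    items = [{"title": f"Channel {n}", "rendering_url": f"/renders/ch{n}.jpg"}
             for n in range(1, 6)]
    for row in build_tile_grid(items):
        print([t["title"] for t in row])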
H04N 21/482 - End-user interface for program selection
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/462 - Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 5/445 - Receiver circuitry for displaying additional information
H04N 21/475 - End-user interface for inputting end-user data, e.g. PIN [Personal Identification Number] or preference data
99.
REFERENCE LAYER AND SCALED REFERENCE LAYER OFFSETS FOR SCALABLE VIDEO CODING
A process for determining the selection of filters and input samples is provided for scalable video coding. The process provides for re-sampling using video data obtained from an encoder or decoder process of a base layer (BL) in a multi-layer system to improve quality in Scalable High Efficiency Video Coding (SHVC). It is proposed that a single scaled reference layer offset be derived from two scaled reference layer offset parameters, and vice-versa. It is also proposed that a single scaled reference layer offset or a single reference layer offset be derived from a combination of a scaled reference layer offset parameter and a reference layer offset parameter.
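The derivation relationship can be illustrated with the following Python arithmetic; the exact combination is specified by the SHVC text, so the left/right split chosen here is only an assumed example of deriving a single combined value from two offset parameters and vice versa.

    # Illustrative only: a single horizontal extent derived from two offset
    # parameters, and one possible inverse split back into a pair of offsets.
    def region_width(pic_width, left_offset, right_offset):
        return pic_width - left_offset - right_offset

    def split_offsets(pic_width, width, left_offset=0):
        # Fix the left offset and derive the right offset from the single value.
        return left_offset, pic_width - width - left_offset

    w = region_width(1920, left_offset=16, right_offset=8)      # -> 1896
    print(w, split_offsets(1920, w, left_offset=16))            # -> 1896 (16, 8)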
H04N 19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/80 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
100.
ERROR RECOVERY FOR VIDEO DELIVERY VIA A SEGMENTATION PROCESS
A client device may receive encoded video via a transport stream based on a video coding protocol. When errors occur in receiving the encoded video, the client device may use an Internet Protocol (IP) connection to recover from the error. For example, an encoder may insert markers inband in the transport stream, and a segmenter then segments the video using the markers. The content remains in the form of a continuous transport stream that is compatible with existing transport stream delivery mechanisms. When an error occurs, the client device can determine a locator for a segment that can be used to recover from the error and request the segment from a server through the IP connection. The server sends the segment to the client device at the transport stream layer, without adding another protocol layer to encapsulate the segment.
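A hedged Python sketch of the recovery path follows; the marker positions, segment locators, and helper names are assumptions used to show segments being cut at inband markers and an errored packet being mapped back to the segment to request over IP.

    # Illustrative: split the transport stream at marker positions, keep a
    # locator per segment, and fetch the segment containing an errored packet.
    def segment_stream(ts_packets, marker_indices):
        bounds = list(marker_indices) + [len(ts_packets)]
        segments, start = {}, 0
        for n, end in enumerate(bounds):
            segments[f"/segments/{n}.ts"] = ts_packets[start:end]
            start = end
        return segments

    def recover(segments, error_packet_index, marker_indices):
        # Map the errored packet back to the segment that contains it.
        seg_no = sum(1 for m in marker_indices if m <= error_packet_index)
        locator = f"/segments/{seg_no}.ts"
        return locator, segments[locator]   # served as plain TS, no extra layer

    packets = [f"pkt{i}" for i in range(10)]
    segs = segment_stream(packets, marker_indices=[4, 8])
    print(recover(segs, error_packet_index=6, marker_indices=[4, 8]))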
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
H04N 21/462 - Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
H04N 21/63 - Control signaling between client, server and network components; Network processes for video distribution between server and clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing