A verifiable multi-factor authentication scheme uses an authentication service. An authentication request is received from an organization, the request having been generated in response to receipt of an access request from a user. The user has an associated public-private key pair. The organization provides the authentication request together with a first nonce. In response to receiving the authentication request and the first nonce, the service generates a second nonce, and then it sends the first and second nonces to the user. Thereafter, the service receives a data string, the data string having been generated by the user's client applying the user's private key over the nonces. Using the user's public key, the service attempts to verify that the data string includes the nonces. If it does, the authentication service provides the authentication decision in response to the authentication request, together with a proof that the user approved the authentication request.
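The following is a minimal sketch of the two-nonce signing and verification flow described above, using Ed25519 via the third-party `cryptography` package purely for illustration; the key type and message layout are assumptions, not details from the patent.

```python
# Illustrative sketch of the nonce-signing/verification flow (not the patented protocol).
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# User enrolls with a public-private key pair; the service holds the public key.
user_private_key = Ed25519PrivateKey.generate()
user_public_key = user_private_key.public_key()

# Organization supplies a first nonce; the service adds a second nonce.
first_nonce = os.urandom(16)     # accompanies the organization's authentication request
second_nonce = os.urandom(16)    # generated by the authentication service

# User side: sign the concatenated nonces with the private key to produce the data string.
data_string = user_private_key.sign(first_nonce + second_nonce)

# Service side: verify that the data string covers both nonces using the user's public key.
try:
    user_public_key.verify(data_string, first_nonce + second_nonce)
    decision = ("approved", data_string)   # the signature doubles as proof of user approval
except InvalidSignature:
    decision = ("denied", None)
print(decision[0])
```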
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
2.
FAST, SECURE, AND SCALABLE DATA STORE AT THE EDGE FOR CONNECTING NETWORK ENABLED DEVICES
A distributed computing system provides a distributed data store for network enabled devices at the edge. The distributed database is partitioned such that each node in the system has its own partition and some number of followers that replicate the data in the partition. The data in the partition is typically used in providing services to network enabled devices from the edge. The set of data for a particular network enabled device is owned by the node to which the network enabled device connects. Ownership of the data (and the data itself) may move around the distributed computing system to different nodes, e.g., for load balancing, fault-resilience, and/or due to device movement. Security/health checks are enforced at the edge as part of a process of transferring data ownership, thereby providing a mechanism to mitigate compromised or malfunctioning network enabled devices.
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
An account protection service to prevent user login or other protected endpoint request abuse. In one embodiment, the service collects user recognition data, preferably for each login attempt (e.g. data about the connection, session, and other relevant context), and it constructs a true user profile for each such user over time, preferably using the recognition data from successful logins. The profile evolves as additional recognition data is collected from successful logins. The profile is a model of what the user “looks like” to the system. For a subsequent login attempt, the system then calculates a true user score. This score represents how well the current user recognition data matches the model represented by the true user profile. The user recognition service is used to drive policy decisions and enforcement capabilities.
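As a hedged sketch of the scoring idea, the snippet below maintains a "true user profile" as a running mean of recognition feature vectors from successful logins and scores a new attempt by cosine similarity against that profile. The feature layout and similarity measure are illustrative assumptions, not the service's actual model.

```python
# Illustrative sketch: evolve a per-user profile from successful logins and compute a
# "true user score" for a subsequent login attempt. Feature names are hypothetical.
import math

class TrueUserProfile:
    def __init__(self, dim: int):
        self.mean = [0.0] * dim
        self.count = 0

    def update(self, features: list[float]) -> None:
        """Evolve the profile with recognition data from a successful login."""
        self.count += 1
        self.mean = [m + (f - m) / self.count for m, f in zip(self.mean, features)]

    def true_user_score(self, features: list[float]) -> float:
        """How well the current recognition data matches the profile (cosine similarity)."""
        dot = sum(m * f for m, f in zip(self.mean, features))
        norm = math.sqrt(sum(m * m for m in self.mean)) * math.sqrt(sum(f * f for f in features))
        return dot / norm if norm else 0.0

profile = TrueUserProfile(dim=3)
profile.update([0.9, 0.1, 0.4])   # e.g., connection, session, and other context signals
profile.update([0.8, 0.2, 0.5])
print(round(profile.true_user_score([0.85, 0.15, 0.45]), 3))
```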
An enhanced server-side Adaptive Bitrate Streaming (ABR) of source content. The ABR switching logic is located in association with a server, and this logic also receives telemetry data as measured by the client. The client receives a single manifest that comprises a set of encoded entries each associated with a segment of the source content and comprising a first portion encoding, as a set of options, each of the multiple bitrates, and a second portion that, for each of the multiple bitrate options, encodes a size of the segment associated therewith. In operation, the client media player makes a request for a portion of the source content, and that request includes one of the encoded entries. In response, the server-side ABR switching logic determines whether to switch delivery of the source content from an existing first bitrate to a second bitrate. If so, the requested portion is delivered to the client at the second bitrate.
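A short sketch of the kind of manifest entry the abstract describes follows: each entry lists the available bitrate options for a segment and the segment size for each option, and a server-side rule uses client-reported throughput to decide whether to switch. The field names and the size-based feasibility rule are illustrative assumptions.

```python
# Hedged sketch of a manifest entry and a server-side bitrate decision (illustrative only).
from dataclasses import dataclass

@dataclass
class ManifestEntry:
    segment_index: int
    bitrate_options: list[int]          # first portion: available bitrates (bps)
    sizes_by_bitrate: dict[int, int]    # second portion: segment size (bytes) per option

def choose_bitrate(entry: ManifestEntry, client_throughput_bps: float,
                   segment_duration_s: float) -> int:
    """Pick the highest bitrate whose segment can be delivered within the segment duration."""
    feasible = [b for b in entry.bitrate_options
                if entry.sizes_by_bitrate[b] * 8 <= client_throughput_bps * segment_duration_s]
    return max(feasible) if feasible else min(entry.bitrate_options)

entry = ManifestEntry(
    segment_index=42,
    bitrate_options=[1_000_000, 3_000_000, 6_000_000],
    sizes_by_bitrate={1_000_000: 750_000, 3_000_000: 2_250_000, 6_000_000: 4_500_000},
)
current_bitrate = 6_000_000
# Client telemetry reports ~4 Mbps; the server-side logic may switch down to 3 Mbps.
new_bitrate = choose_bitrate(entry, client_throughput_bps=4_000_000, segment_duration_s=6.0)
print("switch" if new_bitrate != current_bitrate else "stay", new_bitrate)
```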
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/6379 - Control signals issued by the client directed to the server or network components directed to server directed to encoder
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
An enhanced server-side Adaptive Bitrate Streaming (ABR) of source content. The ABR switching logic is located in association with a server, and this logic also receives telemetry data as measured by the client. The client receives a single manifest that comprises a set of encoded entries each associated with a segment of the source content and comprising a first portion encoding, as a set of options, each of the multiple bitrates, and a second portion that, for each of the multiple bitrate options, encodes a size of the segment associated therewith. In operation, the client media player makes a request for a portion of the source content, and that request includes one of the encoded entries. In response, the server-side ABR switching logic determines whether to switch delivery of the source content from an existing first bitrate to a second bitrate. If so, the requested portion is delivered to the client at the second bitrate.
H04L 65/61 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
6.
Establishing On Demand Connections To Intermediary Nodes With Advance Information For Performance Improvement
An agent deployed within a private network creates on-demand connections to an intermediary node outside the private network. When a client contacts the intermediary node for an application or, more generally, any service available from within the private network, the intermediary node signals the agent to create the on-demand connection outbound to the intermediary. Advance information may be included in the signal to accelerate the establishment of the on-demand connection and/or the transmission of responsive data to the client.
Systems and methods for querying large amounts of data are disclosed. Several different versions of a data feed are provided, ranging from a full set of data to various other versions that are smaller or faster to query (e.g., sampled versions, aggregations, sketches). ... A machine learning model is trained on features of input queries run against the various versions of the data feed and the corresponding results. The trained model is then applied to a new query to choose, automatically, which version of the data feed to apply the query against. That is, the system can select which version of the data feed to use when executing the given query, optimizing speed and/or compute costs while providing an appropriate level of accuracy for the given query.
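A hedged sketch of the routing step: train a simple classifier on features of past queries and the feed version that served them well, then apply it to a new query. The feature set and the scikit-learn model choice are illustrative assumptions, not the patent's specific design.

```python
# Illustrative sketch: route a query to one of several data feed versions using a model
# trained on prior query features and outcomes.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [time_range_hours, num_group_by_columns, needs_exact_counts]
X = [
    [1, 1, 0], [2, 2, 0], [24, 1, 0], [24, 3, 1],
    [168, 2, 0], [168, 5, 1], [720, 1, 0], [720, 4, 1],
]
# Labels name the feed version that produced acceptable accuracy at the lowest cost.
y = ["sampled", "sampled", "sketch", "full",
     "aggregated", "full", "sketch", "full"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

new_query_features = [[24, 2, 0]]                 # features of an incoming query
print(model.predict(new_query_features)[0])       # chosen data feed version
```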
Website phishing detection is enabled using a siamese neural network. One twin receives a query image associated with a website page. The other twin receives a subset of a set of reference website images together with positive (phishing) examples that were used to train the networks, the subset of reference website images having been determined by applying an identifier associated with a brand of interest. The operation of applying the identifier significantly reduces the relevant search space for the inferencing task. If the inferencing determines a sufficient likelihood that the website page is a phishing page, control signaling is generated to control a system to take a given mitigation action.
A multi-factor authentication scheme uses an MFA authentication service and a browser extensionless phish-proof method to facilitate an MFA workflow. Phish-proof MFA verifies that the browser the user is in front of is actually visiting the authentic (real) site and not a phished site. This is achieved by only allowing MFA to be initiated from a user trusted browser by verifying its authenticity through a signing operation using a key only it possesses, and then also verifying that the verified browser is visiting the authentic site. In a preferred embodiment, this latter check is carried out using an iframe postMessage owning domain check. In a variant embodiment, the browser is verified to be visiting the authentic site through an origin header check. By using the iframe-based or ORIGIN header-based check, the solution does not require a physical security key (such as a USB authenticator) or any browser extension or plug-in.
Improved security inspections for API traffic are disclosed. A data obfuscation process is applied to structured data in a request or response body to obfuscate the content while retaining the structural aspects thereof. The resulting sanitized version of the structured data is sent for analysis. For example a machine learning component is trained on such sanitized data to develop a signature or model that detects anomalous interactions with the API. The retained structure contains signals useful for pattern recognition and anomaly detection. The signature or model is preferably developed for a specific API endpoint. Then, a detection engine can be deployed to assess subsequent API traffic for the API endpoint, with such subsequent live traffic being similarly obfuscated by the system before being assessed. The teachings hereof can be used to block attacks or other malicious activities directed against API endpoints.
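The snippet below is an illustrative sketch of structure-preserving obfuscation: it walks a JSON body and replaces values with type-preserving placeholders while keeping keys, nesting, and array shapes, so an anomaly model can still learn structural patterns. The placeholder scheme is an assumption, not the patented obfuscation process.

```python
# Hedged sketch: obfuscate API request/response content while retaining its structure.
import json

def obfuscate(node):
    if isinstance(node, dict):
        return {k: obfuscate(v) for k, v in node.items()}
    if isinstance(node, list):
        return [obfuscate(v) for v in node]
    if isinstance(node, bool):               # checked before int (bool is a subclass of int)
        return True                          # keep type, drop value
    if isinstance(node, (int, float)):
        return 0                             # numeric content removed, type retained
    if isinstance(node, str):
        return "s" * min(len(node), 8)       # retain rough length, drop content
    return None

request_body = {"user": {"email": "alice@example.com", "age": 42},
                "items": [{"sku": "A-1", "qty": 2}], "admin": False}
print(json.dumps(obfuscate(request_body)))   # sanitized version sent for analysis
```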
It is often important that a server's responses to a set of client requests are coherent with one another, but if the client's requests are spread over time, that may not occur. In accordance with the teaching of this patent document, a client is able to communicate with a server to achieve coherency. A client can send a request (e.g., an HTTP request for a given resource) with a data preservation directive. The data preservation directive causes the server to initiate a server-side process to preserve the state of underlying server-side data upon which the response relies (or will rely). Also, a client can send a request with an attribute requesting the response be coherent with respect to some date-time or other reference point. This attribute thus asks the server to ensure coherency in the response to the client.
H04L 67/1029 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
12.
REAL-TIME DETECTION OF SITE PHISHING USING MESSAGE PASSING NEURAL NETWORKS (MPNN) ON DIRECTED GRAPHS
Website phishing detection is enabled using a Message Passing Neural Network (MPNN) that scores requested HTML with a likelihood of being a phishing website. The technique leverages the assumption that the HTML in a phishing website often presents anomalous structure or features when compared with an analogous benign website. Once a phishing site is detected, a given mitigation action is then taken.
Website phishing detection is enabled using a Message Passing Neural Network (MPNN) that scores requested HTML with a likelihood of being a phishing website. The technique leverages the assumption that the HTML in a phishing website often presents anomalous structure or features when compared with an analogous benign website. Once a phishing site is detected, a given mitigation action is then taken.
A technique to detect and mitigate anomalous Application Programming Interface (API) behavior associated with an application having a set of APIs is described. Across one or more sessions during a time period, and in response to receiving a set of one or more transactions directed to the application, a behavioral graph is generated. The graph comprises a set of vertices, an associated set of edges, and a set of weights representing frequency of observation of one or more behaviors, wherein a behavior is denoted by an edge between a pair of connected vertices, wherein the edge depicts at least one interdependent relationship between first and second APIs of the set of APIs. One or more low weight edges are filtered from the behavioral graph to generate a decision graph. The decision graph is then used to detect that one or more new transactions represent anomalous behavior. In response to detecting that a given new transaction represents anomalous behavior, an action is taken to protect the application.
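A minimal sketch of the graph-based flow follows: count observed API-to-API transitions per session as weighted edges, drop low-weight edges to form a decision graph, and flag transactions whose transitions fall outside that graph. The threshold and sample data are illustrative assumptions.

```python
# Illustrative sketch of behavioral-graph construction and low-weight edge filtering.
from collections import Counter

observed_sessions = [
    ["login", "get_profile", "list_orders"],
    ["login", "get_profile", "list_orders"],
    ["login", "get_profile", "get_settings"],
    ["login", "export_all_data"],            # rarely observed behavior
]

# Behavioral graph: edge weight = frequency of observing API_a followed by API_b.
edges = Counter()
for session in observed_sessions:
    for a, b in zip(session, session[1:]):
        edges[(a, b)] += 1

MIN_WEIGHT = 2
decision_graph = {edge for edge, weight in edges.items() if weight >= MIN_WEIGHT}

def is_anomalous(transactions: list[str]) -> bool:
    return any((a, b) not in decision_graph for a, b in zip(transactions, transactions[1:]))

print(is_anomalous(["login", "get_profile", "list_orders"]))   # False: known behavior
print(is_anomalous(["login", "export_all_data"]))              # True: edge was filtered out
```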
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A technique to detect and mitigate anomalous Application Programming Interface (API) behavior associated with an application having a set of APIs is described. Across one or more sessions during a time period, and in response to receiving a set of one or more transactions directed to the application, a behavioral graph is generated. The graph comprises a set of vertices, an associated set of edges, and a set of weights representing frequency of observation of one or more behaviors, wherein a behavior is denoted by an edge between a pair of connected vertices, wherein the edge depicts at least one interdependent relationship between first and second APIs of the set of APIs. One or more low weight edges are filtered from the behavioral graph to generate a decision graph. The decision graph is then used to detect that one or more new transactions represent anomalous behavior. In response to detecting that a given new transaction represents anomalous behavior, an action is taken to protect the application.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
16.
NETWORK SECURITY ANALYSIS SYSTEM WITH REINFORCEMENT LEARNING FOR SELECTING DOMAINS TO SCAN
This document describes, among other things, network security systems that incorporate a feedback loop so as to automatically and dynamically adjust the scope of network traffic that is subject to inspection. Risky traffic can be sent for inspection; risky traffic that is demonstrated to have a high rate of threats can be outright blocked without further inspection; traffic that is causing errors due to protocol incompatibility, or that should not be inspected for regulatory or other reasons, can be flagged so it bypasses the security inspection system. The system can operate on a domain-by-domain basis, an IP address basis, or otherwise.
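Below is a hedged sketch of the feedback loop on a per-domain basis: keep inspecting risky traffic, block domains with a very high observed threat rate outright, and bypass domains causing protocol errors or flagged for regulatory reasons. The counters and thresholds are illustrative assumptions.

```python
# Illustrative sketch: adjust inspection scope per domain based on observed outcomes.
from dataclasses import dataclass

@dataclass
class DomainStats:
    inspected: int = 0
    threats: int = 0
    errors: int = 0
    bypass_flag: bool = False   # e.g., regulatory or protocol-compatibility exemption

    @property
    def threat_rate(self) -> float:
        return self.threats / self.inspected if self.inspected else 0.0

def route(domain: str, stats: dict[str, DomainStats]) -> str:
    s = stats.setdefault(domain, DomainStats())
    if s.bypass_flag or (s.inspected >= 50 and s.errors / s.inspected > 0.2):
        return "bypass"                      # skip the security inspection system
    if s.inspected >= 100 and s.threat_rate > 0.9:
        return "block"                       # demonstrated high threat rate: block outright
    return "inspect"                         # default: send risky traffic for inspection

stats: dict[str, DomainStats] = {"evil.example": DomainStats(inspected=200, threats=195)}
print(route("evil.example", stats), route("new.example", stats))
```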
This disclosure describes a bot detection system that leverages deep learning to facilitate bot detection and mitigation, and that works even when an attacker changes an attack script. The approach herein provides for a system that rapidly and automatically (without human intervention) retrains on new, updated or modified attack vectors.
A messaging channel is embedded directly into a media stream. Messages delivered via the embedded messaging channel are extracted at a client media player. In lieu of embedding the message data in the media stream, a coordination index is injected, and the message data is sent separately and merged into the media stream downstream (at the media player) based on the index. In one example embodiment, multiple data streams (each potentially with different content intended for a particular “type” or class of user) are transmitted alongside the video stream in which the coordination index has been injected into a video frame. Based on a user's service level, a particular one of the multiple data streams is released when the sequence number appears in the video frame, and the data in that stream is associated with the media.
Among other things, this document describes systems, devices, and methods for improving the delivery and performance of web pages authored to produce virtual reality (VR) or augmented reality (AR) experiences. In some embodiments, such web pages are analyzed. This analysis may be initiated at the request of a content server that receives a client request for the HTML. The analysis may involve, asynchronous to the client request, loading the page into a non-user-facing browser environment and allowing the VR or AR scene to execute, even including executing animation routines for a predetermined period of time. Certain characteristics of the scene and of objects are thereby captured. Based on this information, an object list ordered by loading priority is prepared. Consulting this information in response to subsequent requests for the page, a content server can implement server push, early hints and/or other delivery enhancements.
G06F 40/143 - Markup, e.g. Standard Generalized Markup Language [SGML] or Document Type Definition [DTD]
This document describes techniques for rotating keys used to tokenize data stored in a streaming data store where data is stored for a maximum time [W]. In some embodiments, a data layer of such a data store can encrypt arriving original data values twice. The original data value is first encrypted with a first key, producing a first token. The original data value is encrypted with a second key, producing a second token. Each encrypted token can be stored separately in the data store. A field may be associated with two database columns, one holding the value encrypted with the first key and the second holding the value encrypted with the second key. Keys are rotated after time [K], which is at least equal to and preferably longer than [W]. Rotation can involve discarding the older key and generating a new key so that two keys are still used.
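A minimal sketch of the double-encryption-and-rotation idea follows, using Fernet purely for illustration: each arriving value is encrypted under two active keys, the two tokens are stored in separate columns, and rotation discards the older key and introduces a fresh one after time [K] >= [W]. Column names and key handling are assumptions.

```python
# Illustrative sketch of dual-token storage and key rotation for a [W]-bounded data store.
from cryptography.fernet import Fernet

key_a, key_b = Fernet.generate_key(), Fernet.generate_key()

def tokenize(value: bytes, keys: tuple[bytes, bytes]) -> dict[str, bytes]:
    """Produce the two stored columns for one field value."""
    return {"col_key1": Fernet(keys[0]).encrypt(value),
            "col_key2": Fernet(keys[1]).encrypt(value)}

row = tokenize(b"original data value", (key_a, key_b))

# After time [K], rotate: drop the older key (key_a) and generate key_c. Values written
# before rotation remain readable via key_b until they age out of the [W]-bounded store.
key_c = Fernet.generate_key()
active_keys = (key_b, key_c)

print(Fernet(key_b).decrypt(row["col_key2"]))   # b'original data value'
```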
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
21.
Client Entity Validation with Session Tokens Derived From Underlying Communication Service Values
The generation and use of session tokens in a computer networking environment is disclosed. Such session tokens can be used in a variety of ways, such as to validate client identity and entitlement to resources, for security assessment, or in other trust establishment mechanisms. Preferably, the session token generation algorithm incorporates one or more non-ephemeral value(s) that are established for a given communication session between two hosts. To validate a token presented by a client, for example, a server can check it against the session values actually in use to communicate with the client.
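As an illustrative sketch, the snippet below derives a session token by keyed-hashing non-ephemeral values of the underlying communication session and validates a presented token by recomputing it from the session values actually in use. The specific session values, key handling, and HMAC construction are assumptions, not the disclosed algorithm.

```python
# Hedged sketch: session tokens derived from non-ephemeral communication-session values.
import hashlib
import hmac

SERVER_SECRET = b"server-side secret used only for token derivation (example)"

def derive_session_token(session_values: dict[str, str]) -> str:
    # Non-ephemeral per-session values, e.g. negotiated TLS parameters or connection IDs.
    material = "|".join(f"{k}={session_values[k]}" for k in sorted(session_values))
    return hmac.new(SERVER_SECRET, material.encode(), hashlib.sha256).hexdigest()

def validate(presented_token: str, observed_session_values: dict[str, str]) -> bool:
    expected = derive_session_token(observed_session_values)
    return hmac.compare_digest(presented_token, expected)

session = {"tls_session_id": "ab12cd34", "cipher_suite": "TLS_AES_128_GCM_SHA256"}
token = derive_session_token(session)
print(validate(token, session))                                      # True: matches session in use
print(validate(token, {**session, "tls_session_id": "ffff0000"}))    # False: different session
```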
A method of “warm” migrating a virtual machine (VM) on a source host to a target virtual machine on a destination host. The method begins by mirroring contents of disk onto a target disk associated with the target VM. Transfer of the RAM contents is then initiated. Unlike live migration strategies where data transfer occurs at a high rate, the RAM contents are transferred at a low transfer rate. While the contents of the RAM are being transferred, a shutdown of the virtual machine is initiated. This operation flushes to disk all of the remaining RAM contents. Before the shutdown completes, those remaining contents, now on disk, are mirrored to the target disk. Once that mirroring is finished, the shutdown of the virtual machine is completed, and this shutdown is mirrored at the destination host. To complete the warm migration, the target virtual machine is then booted from the target disk.
A method of "warm" migrating a virtual machine (VM) on a source host to a target virtual machine on a destination host. The method begins by mirroring contents of disk onto a target disk associated with the target VM. Transfer of the RAM contents is then initiated. Unlike live migration strategies where data transfer occurs at a high rate, the RAM contents are transferred at a low transfer rate. While the contents of the RAM are being transferred, a shutdown of the virtual machine is initiated. This operation flushes to disk all of the remaining RAM contents. Before the shutdown completes, those remaining contents, now on disk, are mirrored to the target disk. Once that mirroring is finished, the shutdown of the virtual machine is completed, and this shutdown is mirrored at the destination host. To complete the warm migration, the target virtual machine is then booted from the target disk.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
24.
Efficient congestion control in a tunneled network
A method of congestion control implemented by a sender over a network link that includes a router having a queue. During a first state, information is received from a receiver. The information comprises an estimated maximum bandwidth for the link, a one-way transit time for traffic over the link, and an indication whether the network link is congested. In response to the link being congested, the sender transitions to a second state. While in the second state, a sending rate of packets is reduced, in part to attempt to drain the queue of data packets contributed by the sender. The sender transitions to a third state when the sender estimates that the queue has been drained of the data packets it contributed. During the third state, the sending rate is increased until either the sender transitions back to the first state, or receives a new indication that the link is congested.
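The following is a compact sketch of the three-state sender described above. The state names, the drain estimate, and the rate adjustments are illustrative assumptions; they are not the patented algorithm.

```python
# Illustrative three-state congestion-control sender driven by receiver feedback.
class Sender:
    def __init__(self, rate_bps: float):
        self.state = "STEADY"          # first state: normal sending, consume feedback
        self.rate_bps = rate_bps
        self.queued_bytes = 0.0        # sender's estimate of its own data sitting in the queue

    def on_feedback(self, est_max_bw_bps: float, one_way_transit_s: float, congested: bool):
        if self.state == "STEADY":
            if congested:
                self.state = "DRAIN"                          # second state
                self.rate_bps = 0.5 * est_max_bw_bps          # reduce rate to drain our share
        elif self.state == "DRAIN":
            # Queue shrinks while we send below the bottleneck rate.
            self.queued_bytes = max(0.0, self.queued_bytes -
                                    (est_max_bw_bps - self.rate_bps) / 8 * one_way_transit_s)
            if self.queued_bytes == 0.0:
                self.state = "PROBE"                          # third state
        elif self.state == "PROBE":
            if congested:
                self.state = "DRAIN"                          # new congestion indication
            elif self.rate_bps >= est_max_bw_bps:
                self.state = "STEADY"                         # back to the first state
            else:
                self.rate_bps *= 1.1                          # increase sending rate

s = Sender(rate_bps=8_000_000)
s.queued_bytes = 50_000
s.on_feedback(10_000_000, 0.02, congested=True)    # STEADY -> DRAIN, rate reduced
s.on_feedback(10_000_000, 0.02, congested=False)   # still draining in this example
print(s.state, int(s.rate_bps))
```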
This disclosure describes a technique to fingerprint TLS connection information to facilitate bot detection. The notion is referred to herein as “TLS fingerprinting.” Preferably, TLS fingerprinting herein comprises combining different parameters from the initial “Hello” packet sent by the client. In one embodiment, the different parameters from the Hello packet that are used to create the fingerprint (the “TLS signature”) are: record layer version, client version, ordered TLS extensions, ordered cipher list, ordered elliptic curve list, and ordered signature algorithms list. Preferably, the edge server persists the TLS signature for the duration of a session.
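A sketch of the fingerprinting idea: concatenate the listed ClientHello parameters in a fixed order and hash the result into a compact signature. The sample field values and the use of SHA-256 are illustrative assumptions; the abstract specifies only which parameters are combined.

```python
# Hedged sketch: build a TLS signature from ordered ClientHello parameters.
import hashlib

def tls_fingerprint(record_layer_version: str, client_version: str,
                    extensions: list[int], ciphers: list[int],
                    elliptic_curves: list[int], sig_algorithms: list[int]) -> str:
    parts = [
        record_layer_version,
        client_version,
        "-".join(map(str, extensions)),        # ordered TLS extensions
        "-".join(map(str, ciphers)),           # ordered cipher list
        "-".join(map(str, elliptic_curves)),   # ordered elliptic curve list
        "-".join(map(str, sig_algorithms)),    # ordered signature algorithms list
    ]
    return hashlib.sha256(",".join(parts).encode()).hexdigest()[:16]

signature = tls_fingerprint("0x0301", "0x0303",
                            extensions=[0, 10, 11, 13, 16],
                            ciphers=[4865, 4866, 4867, 49195],
                            elliptic_curves=[29, 23, 24],
                            sig_algorithms=[1027, 2052, 1025])
print(signature)   # persisted by the edge server for the duration of the session
```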
A server in a content delivery network (CDN) can examine API traffic and extract therefrom content that can be optimized before it is served to a client. The server can apply content location instructions to a given API message to find such content therein. Upon finding an instance of such content, the server can verify the identity of the content by applying a set of content verification instructions. If verification succeeds, the server can retrieve an optimized version of the identified content and swap it into the API message for the original version. If an optimized version is not available, the server can initiate an optimization process so that next time the optimized version will be available. In some embodiments, an analysis service can assist by observing traffic from an API endpoint over time, detecting the format of API messages and producing the content location and verification instructions.
A method executes upon receiving data (email, IP address) associated with an account registration. In response, an encoding is applied to the data to generate a node vector. The node vector indexes a database of such node vectors that the system maintains (from prior registrations). The database potentially includes one or more node vector(s) that may have a given similarity to the encoded node vector. To determine whether there are such vectors present, a set of k-nearest neighbors to the encoded node vector are then obtained from the database. This set of k-nearest neighbors, together with the encoded node vector, comprises a virtual graph that is then fed as a graph input to a Graph Neural Network (GNN) previously trained on a set of training data. The GNN generates a probability that the virtual graph represents new-account fraud (NAF). If the probability exceeds a configurable threshold, the system outputs an indication that the registration is potentially fraudulent, and a mitigation action is taken.
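Below is a hedged sketch of the retrieval-and-graph step only: encode the registration data into a node vector, pull the k nearest stored vectors, and assemble the small "virtual graph" that a previously trained GNN would score. The placeholder encoder, the star-graph layout, and all data are assumptions; the GNN itself is omitted.

```python
# Illustrative sketch: k-nearest-neighbor retrieval and virtual graph assembly.
import numpy as np

rng = np.random.default_rng(0)
stored_vectors = rng.normal(size=(1000, 32))        # node vectors from prior registrations

def encode_registration(email: str, ip: str) -> np.ndarray:
    # Placeholder encoder: a real system would use a learned embedding of email/IP features.
    seed = abs(hash((email, ip))) % (2**32)
    return np.random.default_rng(seed).normal(size=32)

def build_virtual_graph(query_vec: np.ndarray, k: int = 5):
    dists = np.linalg.norm(stored_vectors - query_vec, axis=1)
    neighbor_ids = np.argsort(dists)[:k]            # k-nearest neighbors in the database
    nodes = [query_vec] + [stored_vectors[i] for i in neighbor_ids]
    edges = [(0, j) for j in range(1, k + 1)]       # star graph: query node linked to neighbors
    return nodes, edges

query = encode_registration("new.user@example.com", "203.0.113.7")
nodes, edges = build_virtual_graph(query)
print(len(nodes), edges)                            # graph input for the trained GNN
```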
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
28.
REAL-TIME DETECTION OF ONLINE NEW-ACCOUNT CREATION FRAUD USING GRAPH-BASED NEURAL NETWORK MODELING
A method executes upon receiving data (email, IP address) associated with an account registration. In response, an encoding is applied to the data to generate a node vector. The node vector indexes a database of such node vectors that the system maintains (from prior registrations). The database potentially includes one or more node vector(s) that may have a given similarity to the encoded node vector. To determine whether there are such vectors present, a set of k-nearest neighbors to the encoded node vector are then obtained from the database. This set of k-nearest neighbors, together with the encoded node vector, comprises a virtual graph that is then fed as a graph input to a Graph Neural Network (GNN) previously trained on a set of training data. The GNN generates a probability that the virtual graph represents new-account fraud (NAF). If the probability exceeds a configurable threshold, the system outputs an indication that the registration is potentially fraudulent, and a mitigation action is taken.
Systems and methods for obfuscating data. The technology herein can be used to produce an obfuscated output that exhibits no easily discernible pattern, making it difficult to identify or to filter using regular expressions, signature matching or other pattern matching. The output nevertheless can be reversed and the original data recovered by an intended recipient with a relatively low cost of processing, making it suitable for low-powered devices. The obfuscation is stateless and does not require encryption.
An overlay network is augmented to provide more efficient data storage by processing a dataset of high dimension into an equivalent dataset of lower dimension, wherein the data reduction reduces the amount of actual physical data but not necessarily its informational value. Data to be processed (dimensionally-reduced) is received by an ingestion layer and supplied to a learning-based storage reduction application that implements the data reduction technique. The application applies a data reduction algorithm and stores the resulting dimensionally-reduced data sets in the native data storage or third party cloud. To recover the original higher-dimensional data, an associated reverse algorithm is implemented. In general, the application converts an N dimensional data set to a K dimensional data set, where K < N.
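The abstract does not name the reduction algorithm, so the sketch below uses PCA purely as one well-known N-to-K reduction with an associated reverse transform; the dimensions and data are illustrative.

```python
# Illustrative sketch: reduce an N-dimensional dataset to K dimensions and recover an
# approximation of the original via the associated reverse (inverse) transform.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
high_dim = rng.normal(size=(500, 64))     # N = 64 dimensional dataset to be ingested

K = 16
reducer = PCA(n_components=K).fit(high_dim)
low_dim = reducer.transform(high_dim)     # stored form: K < N, less physical data

# Reverse algorithm: approximate recovery of the original higher-dimensional data.
recovered = reducer.inverse_transform(low_dim)
print(low_dim.shape, recovered.shape,
      round(float(np.mean((high_dim - recovered) ** 2)), 4))   # reconstruction error
```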
A method, apparatus and computer program product for real-time new account fraud detection and prevention. The technique leverages machine learning. In this approach, first and second computational branches of a machine learning model are trained jointly on a corpus of emails. Following training, an arbitrary email is received. The arbitrary email is then applied through the computational branches of the machine learning model. The first branch has an attention layer, and the second branch has a convolutional layer. The outputs of the branches are aggregated into an output that is then applied through another self-attention layer to generate a score. Based on the score, the arbitrary email is characterized. If the email is characterized as fraudulent, a mitigation action is taken.
A method, apparatus and computer program product for real-time new account fraud detection and prevention. The technique leverages machine learning. In this approach, first and second computational branches of a machine learning model are trained jointly on a corpus of emails. Following training, an arbitrary email is received. The arbitrary email is then applied through the computational branches of the machine learning model. The first branch has an attention layer, and the second branch has a convolutional layer. The outputs of the branches are aggregated into an output that is then applied through another self-attention layer to generate a score. Based on the score, the arbitrary email is characterized. If the email is characterized as fraudulent, a mitigation action is taken.
A method for dynamic creation of an extensible wireless network, using a set of drones that individually support server processes. The drones interact with one another, exchanging information, type of coverage, type and amount of throughput, location, etc. A control node connects to a wired network. The node operates a leader election protocol, captures state information from the drones, and positions/re-positions the drones as necessary. Drones are flown into position and then engaged as necessary to stretch or adapt the coverage. Each drone's power utilization is monitored and its coverage area modified as necessary to optimize power utilization. The control node performs drone-based coverage/power utilization computations, and attempts to apply the appropriate location assignments to provide maximum network coverage (extensibility) while also preserving drone-specific power (battery) utilization. The approach herein can be used to supplement existing networks during events, migrations of populations during work hours, etc.
A physical object having a programmable, electronically readable tag can be identified and tracked in a given third party system with the aid of an identity services platform. When the owner of the object is about to place it in the custody of a third party system, the owner can use a client device to instruct the identity services platform to generate a nonce, for programming into the object's tag. Devices in the third party system read and use the nonce to identify and track the object and to make decisions about how it is handled. When the object exits from the control of the third party system for return to the owner, the identity services platform is asked to provide a proof of ownership to the third party system, which enables accurate return of the object to its proper owner.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G06K 19/077 - Constructional details, e.g. mounting of circuits in the carrier
Among other things, this document describes systems, devices, and methods for improving the delivery and performance of web pages authored to produce virtual reality (VR) or augmented reality (AR) experiences. In some embodiments, such web pages are analyzed. This analysis may be initiated at the request of a content server that receives a client request for the HTML. The analysis may involve, asynchronous to the client request, loading the page into a non-user-facing browser environment and allowing the VR or AR scene to execute, even including executing animation routines for a predetermined period of time. Certain characteristics of the scene and of objects are thereby captured. Based on this information, an object list ordered by loading priority is prepared. Consulting this information in response to subsequent requests for the page, a content server can implement server push, early hints and/or other delivery enhancements.
A technique to cache content securely within edge network environments, even within portions of that network that might be considered less secure than what a customer desires, while still providing the acceleration and off-loading benefits of the edge network. The approach ensures that customer confidential data (whether content, keys, etc.) are not exposed either in transit or at rest. In this approach, only encrypted copies of the customer's content objects are maintained within the portion of the edge network, but without any need to manage the encryption keys. To take full advantage of the secure content caching technique, preferably the encrypted content (or portions thereof) are pre-positioned within the edge network portion to improve performance of secure content delivery from the environment.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A plurality of WiFi-enabled devices that are physically proximate to one another form an ad hoc mesh network, which is associated with an overlay network, such as a content delivery network. A typical WiFi device is a WiFi router that comprises addressable data storage, together with control software operative to configure the device seamlessly into the WiFi mesh network formed by the device and one or more physically-proximate devices. The addressable data storage across multiple such devices comprises a distributed or “mesh-assisted” cache that is managed by the overlay network. The WiFi mesh network thus provides bandwidth that is leveraged by the overlay network to provide distribution of content, e.g., content that has been off-loaded for delivery (by content providers) to the CDN. Other devices that may be leveraged include set-top boxes and IPTV devices.
H04L 67/1087 - Peer-to-peer [P2P] networks using cross-functional networking aspects
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
H04W 4/70 - Services for machine-to-machine communication [M2M] or machine type communication [MTC]
38.
High performance distributed system of record with cryptographic service support
A high-performance distributed ledger and transaction computing network fabric over which large numbers of transactions (involving the transformation, conversion or transfer of information or value) are processed concurrently in a scalable, reliable, secure and efficient manner. In one embodiment, the computing network fabric or “core” is configured to support a distributed blockchain network that organizes data in a manner that allows communication, processing and storage of blocks of the chain to be performed concurrently, with little synchronization, at very high performance and low latency, even when the transactions themselves originate from distant sources. This data organization relies on segmenting a transaction space within autonomous but cooperating computing nodes that are configured as a processing mesh. Each computing node typically is functionally-equivalent to all other nodes in the core. The nodes operate on blocks independently from one another while still maintaining a consistent and logically-complete view of the blockchain as a whole. According to another feature, secure transaction processing is facilitated by storing cryptographic key materials in secure and trusted computing environments associated with the computing nodes to facilitate construction of trust chains for transaction requests and their associated responses.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/12 - Transmitting and receiving encryption devices synchronised or initially set up in a particular manner
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
H04L 9/14 - Arrangements for secret or secure communications; Network security protocols using a plurality of keys or algorithms
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
39.
Uniquely identifying and securely communicating with an appliance in an uncontrolled network
A service consumer that utilizes a cloud-based access service provided by a service provider has associated therewith a network that is not capable of being controlled by the service provider. An enterprise connector is supported in this uncontrolled network, preferably as an appliance-based solution. According to this disclosure, the enterprise configures an appliance and then deploys it in the uncontrolled network. To this end, an appliance is required to proceed through a multi-stage approval protocol before it is accepted as a “connector” and is thus enabled for secure communication with the service provider. The multiple stages include a “first contact” (back to the service) stage, an undergoing approval stage, a re-generating identity material stage, and a final approved and configured stage. Unless the appliance passes through these stages, the appliance is not permitted to interact with the service as a connector. As an additional aspect, the service provides various protections for addressing scenarios wherein entities masquerade as approved appliances.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Improved technology for managing the caching of objects that are rarely requested by clients. A cache system can be configured to assess a class of objects (such as objects associated with a particular domain) for cacheability, based on traffic observations. If the maximum possible cache offloading for the class of objects falls below a threshold level, which indicates a high proportion of non-cacheable or “single-hitter” content, then cache admission logic is configured to admit objects only after multiple client requests during a time period (usually the object's time in cache, or eviction age). Otherwise, the cache admission logic may operate to admit objects to the cache after the first client request, assuming the object meets cacheability criteria. The technological improvements disclosed herein can be used to improve cache utilization, for example by preventing single-hitter objects from pushing out multi-hit objects (the objects that get hits after being added to cache).
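The snippet below sketches the admission idea: estimate the maximum possible offload for a class of objects from observed traffic, and if it falls below a threshold, require a repeat request within the eviction age before admitting an object. The threshold, the offload estimate, and the data structures are illustrative assumptions.

```python
# Illustrative sketch of offload-aware cache admission logic.
import time

OFFLOAD_THRESHOLD = 0.5
eviction_age_s = 3600.0
first_seen: dict[str, float] = {}

def max_possible_offload(requests_total: int, requests_to_repeated_objects: int) -> float:
    # Even a perfect cache can only offload requests for objects seen more than once.
    return requests_to_repeated_objects / requests_total if requests_total else 0.0

def admit(url: str, class_offload_estimate: float) -> bool:
    if class_offload_estimate >= OFFLOAD_THRESHOLD:
        return True                              # mostly cacheable class: admit on first request
    now = time.time()
    prior = first_seen.get(url)
    first_seen[url] = now
    return prior is not None and now - prior <= eviction_age_s   # admit only on a repeat request

offload = max_possible_offload(requests_total=1000, requests_to_repeated_objects=200)
print(admit("/rare/object", offload), admit("/rare/object", offload))   # False, then True
```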
This disclosure describes a technique to determine whether a client computing device accessing an API is masquerading its device type (i.e., pretending to be a device that it is not). To this end, and according to this disclosure, the client performs certain processing requested by the server to reveal its actual processing capabilities and thereby its true device type, whereupon—once the server learns the true nature of the client device—it can take appropriate actions to mitigate or prevent further damage. During the API transaction, the server returns information to the client device that causes the client device to perform certain computations or actions. The resulting activity is captured on the client computing device and then transmitted back to the server, which then analyzes the data to inform its decision about the true client device type. Thus, when the server detects the true client device type (as opposed to the device type that the device is masquerading to be), it can take appropriate action to defend the site.
A system for enterprise collaboration is associated with an overlay network, such as a content delivery network (CDN). The overlay network comprises machines capable of ingress, forwarding and broadcasting traffic, together with a mapping infrastructure. The system comprises a front-end application, a back-end application, and a set of one or more APIs through which the front-end application interacts with the back-end application. The front-end application is a web or mobile application component that provides one or more collaboration functions. The back-end application comprises a signaling component that maintains state information about each participant in a collaboration, a connectivity component that manages connections routed through the overlay network, and a multiplexing component that manages a multi-peer collaboration session to enable an end user peer to access other peers' media streams through the overlay network rather than directly from another peer. Peers preferably communicate with the platform using WebRTC. A collaboration manager component enables users to configure, manage and control their collaboration sessions.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
A high-performance distributed ledger and transaction computing network fabric over which large numbers of transactions are processed concurrently in a scalable, reliable, secure and efficient manner. In one embodiment, the computing network core is configured to support a distributed blockchain network that organizes data in a manner that allows communication, processing and storage of blocks of the chain to be performed concurrently at very high performance and low latency, even when the transactions themselves originate from distant sources. This data organization relies on segmenting a transaction space within autonomous but cooperating computing nodes that are configured as a processing mesh. The system also provides for confidence-based consensus. A configuration system is provided to enable configuration updates to be securely implemented across various subsets of the computing nodes.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
An account protection service to prevent user login or other protected endpoint request abuse. In one embodiment, the service collects user recognition data, preferably for each login attempt (e.g. data about the connection, session, and other relevant context), and it constructs a true user profile for each such user over time, preferably using the recognition data from successful logins. The profile evolves as additional recognition data is collected from successful logins. The profile is a model of what the user “looks like” to the system. For a subsequent login attempt, the system then calculates a true user score. This score represents how well the current user recognition data matches the model represented by the true user profile. The user recognition service is used to drive policy decisions and enforcement capabilities. Preferably, user recognition works in association with bot detection in a combined solution.
A multi-factor authentication scheme uses an MFA authentication service and a browser extensionless phish-proof method to facilitate an MFA workflow. Phish-proof MFA verifies that the browser the user is in front of is actually visiting the authentic (real) site and not a phished site. This is achieved by only allowing MFA to be initiated from a user trusted browser by verifying its authenticity through a signing operation using a key only it possesses, and then also verifying that the verified browser is visiting the authentic site. In a preferred embodiment, this latter check is carried out using an iframe postMessage owning domain check. In a variant embodiment, the browser is verified to be visiting the authentic site through an origin header check. By using the iframe-based or ORIGIN header-based check, the solution does not require a physical security key (such as a USB authenticator) or any browser extension or plug-in.
A client application sends DNS requests to a threat protection service when a mobile device operating the client application is operating off-network. The application is configured to detect network conditions and automatically configure an appropriate system-wide DNS resolution setting. Preferably, DNS requests from the client identify the customer and the device to threat protection (TP) service resolvers without introducing a publicly-visible customer or device identifier to the DNS requests or responses. The TP system then applies the correct policy to DNS requests coming from off-network clients. In particular, the resolver recognizes the customer for requests coming from off-net clients and applies the customer's policy to such requests. The resolver is configured to log the customer and the device associated with requests from the TP off-net client. Request logs from the TP resolver are provided to a cloud security intelligence platform for threat intelligence analytics and customer visible reporting.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Improvements to internet cache protocols are disclosed. In certain embodiments, a client-facing proxy server can query peer servers to determine whether they have a copy of an object that the proxy server needs. The peer servers can respond based on object information that the peer servers stored about objects they have in cache, where the peers recorded such object information previously when ingesting the objects into their cache and stored it separately from the objects for fast access (e.g. in RAM vs. on disk). This information can be expressed in a compact way using just a few object flags, and enables the peer server to quickly respond and with detail about the status of objects they hold. The proxy server can make an intelligent decision about which peer to use, and indeed whether to use a peer at all.
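A sketch of the compact per-object metadata idea follows: a few flags recorded at ingest time and kept in fast storage let a peer answer "do you have this object, and in what state?" without touching disk, and the querying proxy uses those answers to choose a peer. The flag names and selection rule are illustrative assumptions.

```python
# Illustrative sketch of compact object flags and peer selection.
from enum import Flag, auto

class ObjectFlags(Flag):
    NONE = 0
    PRESENT = auto()       # object bytes are in cache
    COMPLETE = auto()      # fully downloaded, not a partial object
    FRESH = auto()         # within its freshness lifetime
    ON_DISK_ONLY = auto()  # present but would require a disk read

# What each peer recorded about the object when it ingested it (kept in RAM for fast access).
peer_responses = {
    "peer-a": ObjectFlags.PRESENT | ObjectFlags.COMPLETE | ObjectFlags.FRESH,
    "peer-b": ObjectFlags.PRESENT | ObjectFlags.ON_DISK_ONLY,
    "peer-c": ObjectFlags.NONE,
}

def choose_peer(responses: dict[str, ObjectFlags]) -> str | None:
    """Prefer a peer holding a complete, fresh copy; otherwise use no peer at all."""
    wanted = ObjectFlags.PRESENT | ObjectFlags.COMPLETE | ObjectFlags.FRESH
    for peer, flags in responses.items():
        if flags & wanted == wanted:
            return peer
    return None

print(choose_peer(peer_responses))   # 'peer-a'
```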
A multi-factor authentication scheme uses an MFA authentication service and a browser extensionless phish-proof method to facilitate an MFA workflow. Phish-proof MFA verifies that the browser the user is in front of is actually visiting the authentic (real) site and not a phished site. This is achieved by only allowing MFA to be initiated from a user trusted browser by verifying its authenticity through a signing operation using a key only it possesses, and then also verifying that the verified browser is visiting the authentic site. In a preferred embodiment, this latter check is carried out using an iframe postMessage owning domain check. In a variant embodiment, the browser is verified to be visiting the authentic site through an origin header check. By using the iframe-based or ORIGIN header-based check, the solution does not require a physical security key (such as a USB authenticator) or any browser extension or plug-in.
A set of transaction handling computing elements comprises a network core that receives and processes transaction requests into an append-only immutable chain of data blocks, wherein a data block is a collection of transactions, and wherein an Unspent Transaction Output (UTXO) data structure supporting the immutable chain of data blocks is an output from a finalized transaction. Typically, the UTXO data structure consists essentially of an address and a value. In this approach, at least one UTXO data structure is configured to include information either in addition to or in lieu of the address and value, thereby defining a Transaction Output (TXO). A TXO may have a variety of types, and one type includes an attribute that encodes data. In response to receipt of a request to process a transaction, the set of transaction handling computing elements are executed to process the transaction into a block using at least the information in the TXO.
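A minimal sketch of the data-structure distinction drawn above: a classic UTXO holding just an address and a value, and a TXO variant that can additionally (or instead) carry a typed attribute encoding arbitrary data. The field and type names are illustrative assumptions.

```python
# Illustrative sketch of a TXO that generalizes the address/value UTXO.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class TXOType(Enum):
    VALUE_TRANSFER = "value_transfer"   # behaves like a plain UTXO
    DATA_CARRIER = "data_carrier"       # the attribute encodes data rather than value

@dataclass(frozen=True)
class TXO:
    txo_type: TXOType
    address: Optional[str] = None
    value: Optional[int] = None
    data: Optional[bytes] = None        # the attribute that encodes data, when present

utxo_like = TXO(TXOType.VALUE_TRANSFER, address="addr-example", value=5_000)
data_txo = TXO(TXOType.DATA_CARRIER, data=b'{"ticket": "seat 12A"}')
print(utxo_like.value, data_txo.data)
```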
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
G06Q 30/0226 - Incentive systems for frequent usage, e.g. frequent flyer miles programs or point systems
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
A method of traffic forwarding and disambiguation through the use of local proxies and addresses. The technique leverages DNS to on-ramp traffic to a local proxy. The local proxy runs on the end user's device. According to a first embodiment, DNS is used to remap what would normally be a wide range of IP addresses to localhost based on 127.0.0.0/8 listening sockets, where the system can then listen for connections and data. In a second embodiment, a localhost proxy based on a TUN/TAP interface (or other packet interception method) with a user-defined CIDR range to which the local DNS server drives traffic is used. Requests on that local proxy are annotated (by adding data to the upstream connection).
A multi-tenant service platform provides network services, such as content delivery, edge compute, and/or media streaming, on behalf of, or directly for, a given tenant. The service platform offers a policy layer enabling each tenant to specify levels of acceptable performance degradation that the platform may incur so that the platform can use electricity with desirable characteristics to service client requests associated with that tenant. Service nodes in the platform (e.g., edge servers) enforce the policy layer at the time of a service request. Preferably, the ‘quality’ of the electricity is a measurement of the source of the energy, e.g., whether it is sourced from high-carbon fossil fuels (low-quality) or low-carbon renewables (high-quality). If the desired quality of electricity cannot be achieved, the node can resort to using less electricity to handle the request, which is achieved in a variety of ways.
This document describes techniques for rotating keys used to tokenize data stored in a streaming data store where data is stored for a maximum time [W]. In some embodiments, a data layer of such a data store can encrypt arriving original data values twice. The original data value is first encrypted with a first key, producing a first token. The original data value is encrypted with a second key, producing a second token. Each encrypted token can be stored separately in the data store. A field may be associated with two database columns, one holding the value encrypted with the first key and the second holding the value encrypted with the second key. Keys are rotated after time [K], which is at least equal to and preferably longer than [W]. Rotation can involve discarding the older key and generating a new key so that two keys are still used.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
53.
High performance distributed system of record with key management
A high-performance distributed ledger and transaction computing network fabric over which large numbers of transactions are processed concurrently in a scalable, reliable, secure and efficient manner. In one embodiment, the computing network fabric or “core” is configured to support a distributed blockchain network that organizes data in a manner that allows communication, processing and storage of blocks of the chain to be performed concurrently, with little synchronization, at very high performance and low latency, even when the transactions themselves originate from distant sources. This data organization relies on segmenting a transaction space within autonomous but cooperating computing nodes that are configured as a processing mesh. Secure transaction processing is facilitated by storing cryptographic key materials in secure and trusted computing environments associated with the computing nodes to facilitate construction of mining proofs during the validation of a block.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
A set of transaction handling computing elements comprises a network core that receives and processes transaction requests into an append-only immutable chain of data blocks, wherein a data block is a collection of transactions, and wherein an Unspent Transaction Output (UTXO) data structure supporting the immutable chain of data blocks is an output from a finalized transaction. Typically, the UTXO data structure consists essentially of an address and a value. In this approach, at least one UTXO data structure is configured to include information either in addition to or in lieu of the address and value, thereby defining a Transaction Output (TXO). A TXO may have a variety of types, and one type includes an attribute that encodes data. In response to receipt of a request to process a transaction, the set of transaction handling computing elements are executed to process the transaction into a block using at least the information in the TXO.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
G06Q 30/0226 - Incentive systems for frequent usage, e.g. frequent flyer miles programs or point systems
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
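A toy data model can make the UTXO-versus-TXO distinction above concrete; the field names and the example "data-record" type in this Python sketch are illustrative assumptions only.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class UTXO:
        # Classic unspent output: essentially just an address and a value.
        address: str
        value: int

    @dataclass(frozen=True)
    class TXO(UTXO):
        # Generalized transaction output: may carry information in addition to,
        # or in lieu of, the address/value pair.
        txo_type: str = "value-transfer"      # e.g. "value-transfer", "data-record"
        attribute: Optional[bytes] = None     # for data-encoding TXO types

    def process_output(out: UTXO) -> str:
        if isinstance(out, TXO) and out.txo_type == "data-record":
            return f"embed {len(out.attribute or b'')} bytes of attribute data in the block"
        return f"credit {out.value} to {out.address}"

    print(process_output(UTXO("addr1", 50)))
    print(process_output(TXO("addr2", 0, txo_type="data-record", attribute=b"\x01\x02\x03")))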
This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a CDN. Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
This disclosure provides for embedding a messaging channel directly into a media stream, where messages delivered via the embedded messaging channel are then extracted at a client media player. An advantage of embedding a message is that it can be done at a single ingest point, after which the message passes transparently through a CDN architecture, effectively achieving message replication using the native CDN media delivery infrastructure.
A server in a content delivery network (CDN) can examine API traffic and extract therefrom content that can be optimized before it is served to a client. The server can apply content location instructions to a given API message to find such content therein. Upon finding an instance of such content, the server can verify the identity of the content by applying a set of content verification instructions. If verification succeeds, the server can retrieve an optimized version of the identified content and swap it into the API message for the original version. If an optimized version is not available, the server can initiate an optimization process so that next time the optimized version will be available. In some embodiments, an analysis service can assist by observing traffic from an API endpoint over time, detecting the format of API messages and producing the content location and verification instructions.
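The locate/verify/swap flow described above lends itself to a short sketch; in the Python below, the dotted-path "content location instructions", the SHA-256-based "verification instructions", and the optimized-content cache are assumptions, not the claimed format.

    import hashlib, json

    OPTIMIZED = {}   # sha256 of original content -> optimized replacement

    def locate(message: dict, path: str):
        parts = path.split(".")
        node = message
        for part in parts[:-1]:
            node = node[part]
        return node, parts[-1]

    def optimize_api_message(raw: bytes, location: str, expected_sha256: str) -> bytes:
        msg = json.loads(raw)
        parent, key = locate(msg, location)
        content = parent[key]
        digest = hashlib.sha256(content.encode()).hexdigest()
        if digest != expected_sha256:
            return raw                               # verification failed: serve unchanged
        if digest in OPTIMIZED:
            parent[key] = OPTIMIZED[digest]          # swap in the optimized version
        else:
            schedule_optimization(digest, content)   # so it is available next time
        return json.dumps(msg).encode()

    def schedule_optimization(digest: str, content: str) -> None:
        OPTIMIZED[digest] = content.strip()          # placeholder "optimization"

    body = json.dumps({"product": {"description": "  lots of whitespace  "}}).encode()
    sha = hashlib.sha256("  lots of whitespace  ".encode()).hexdigest()
    optimize_api_message(body, "product.description", sha)           # first pass: schedules
    print(optimize_api_message(body, "product.description", sha))    # second pass: swapped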
Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
H04W 36/12 - Reselecting a serving backbone network switching or routing node
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
59.
SYNCHRONIZING INDEPENDENT MEDIA AND DATA STREAMS USING MEDIA STREAM SYNCHRONIZATION POINTS
A messaging channel is embedded directly into a media stream. Messages delivered via the embedded messaging channel are extracted at a client media player. According to a variant embodiment, and in lieu of embedding all of the message data in the media stream, only a coordination index is injected, and the message data is sent separately and merged into the media stream downstream (at the client media player) based on the coordination index. In one example embodiment, multiple data streams (each potentially with different content intended for a particular "type" or class of user) are transmitted alongside the video stream in which the coordination index (e.g., a sequence number) has been injected into a video frame. Based on a user's service level, a particular one of the multiple data streams is released when the sequence number appears in the video frame, and the data in that stream is associated with the media.
H04L 67/61 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
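A client-side sketch of the coordination-index variant in the entry above might look as follows; the frame/message shapes, the service-level gating, and the buffering strategy are assumptions.

    from collections import defaultdict

    pending = defaultdict(list)   # coordination index -> data messages awaiting that frame

    def on_data_message(seq: int, service_level: str, payload: str) -> None:
        # Data streams arrive independently of the video; buffer by coordination index.
        pending[seq].append((service_level, payload))

    def on_video_frame(frame_seq: int, user_level: str) -> None:
        # When the injected sequence number appears in a decoded frame, release only
        # the buffered messages whose stream matches this user's service level.
        for level, payload in pending.pop(frame_seq, []):
            if level == user_level:
                print(f"frame {frame_seq}: show '{payload}'")

    on_data_message(42, "premium", "bonus camera angle available")
    on_data_message(42, "basic", "score update")
    on_video_frame(42, "premium")     # only the premium-stream message is surfaced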
60.
Traffic delivery using anycast and end user-based mapping in an overlay network
An overlay network is enhanced to provide traffic delivery using anycast and end user mapping. An anycast IP address is associated with sets of forwarding machines positioned in the overlay network. These locations correspond with IP addresses for zero rated billing traffic. In response to receipt at a forwarding machine of a packet, the machine issues an end user mapping request to the mapping mechanism. The mapping request has an IP address associated with the client from which the end user request originates. The mapping mechanism resolves the request and provides a response to the request. The response is an IP address associated with a set of server machines distinct from the forwarding machine. The forwarding machine encapsulates the packet and proxies the connection to the identified server. The server receives the connection, decapsulates the request, and processes the packet. The server machine responds to the requesting client directly.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
A method of delivering a media stream in a network having first and second media servers each capable of delivering segmented media content to a requesting media client. The network provides for HTTP-based delivery of segmented media, and the media client is supported on a client-side device. The method begins by associating the media client with the first media server. As the first server receives from the media client requests for media content segments, request times for a given number of the most-recent segments requested are used to generate a prediction, by the first server, of when the media client has transitioned from a start-up or buffering state to a steady state. In response to a new segment request being received, and upon the first server predicting that the media client has completed a transition to steady state, the new segment request is redirected to the second media server.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
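One plausible (assumed) way to implement the prediction in the entry above is to compare the client's recent inter-request intervals against the segment duration; the window size and threshold in this Python sketch are illustrative, not claimed values.

    from collections import deque

    class SteadyStateDetector:
        def __init__(self, segment_duration_s: float, window: int = 5, threshold: float = 0.8):
            self.segment_duration = segment_duration_s
            self.times = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, request_time_s: float) -> bool:
            """Record a segment request time; return True once steady state is predicted."""
            self.times.append(request_time_s)
            if len(self.times) < self.times.maxlen:
                return False                      # not enough history yet
            gaps = [b - a for a, b in zip(self.times, list(self.times)[1:])]
            mean_gap = sum(gaps) / len(gaps)
            # In start-up the player fetches back-to-back (small gaps); at steady state
            # it fetches roughly one segment per segment duration.
            return mean_gap >= self.threshold * self.segment_duration

    d = SteadyStateDetector(segment_duration_s=4.0)
    for t in [0.0, 0.3, 0.7, 1.1, 4.9, 8.8, 12.9, 16.9]:
        if d.observe(t):
            print(f"redirect next request to second media server at t={t}")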
This patent document describes failure recovery technologies for the processing of streaming data, also referred to as pipelined data. The technologies described herein have particular applicability in distributed computing systems that are required to process streams of data and provide at-most-once and/or exactly-once service levels. In a preferred embodiment, a system comprises many nodes configured in a network topology, such as a hierarchical tree structure. Data is generated at leaf nodes. Intermediate nodes process the streaming data in a pipelined fashion, sending aggregated or otherwise combined data from the source data streams towards the root. To reduce overhead and provide locally handled failure recovery, system nodes transfer data using a protocol that controls which node owns the data for purposes of failure recovery as it moves through the network.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
H04L 1/18 - Automatic repetition systems, e.g. Van Duuren systems
H04L 1/1867 - Arrangements specially adapted for the transmitter end
H04L 69/10 - Streamlined, light-weight or high-speed protocols, e.g. express transfer protocol [XTP] or byte stream
H04L 41/0654 - Management of faults, events, alarms or notifications using network fault recovery
63.
Certificate authority (CA) security model in an overlay network supporting a branch appliance
A method to generate a trusted certificate on an endpoint appliance located in an untrusted network, wherein client devices are configured to trust a first Certificate Authority (CA) that is administered by the untrusted network. In this approach, an overlay network is configured between the endpoint appliance and an origin server associated with the endpoint appliance. The overlay comprises an edge machine located proximate the endpoint appliance, and an associated key management service. A second CA is configured in association with the key management service to receive a second certificate signed by the first CA. A third CA is configured in association with the edge machine to receive a third certificate signed by the second CA. In response to a request from the appliance, a server certificate signed by the third CA is dynamically generated and provided to the appliance. A client device receiving the server certificate from the endpoint appliance trusts the server certificate as if the server certificate originated from the first CA, thereby enabling the endpoint appliance to terminate a secure information flow received at the endpoint appliance.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A mechanism to facilitate a virtual private network (VPN)-as-a-service, preferably within the context of an overlay IP routing mechanism implemented within an overlay network. A network-as-a-service customer operates endpoints that are desired to be connected to one another securely and privately using the overlay IP (OIP) routing mechanism. The overlay provides delivery of packets end-to-end between overlay network appliances positioned at the endpoints. During such delivery, the appliances are configured such that the data portion of each packet has a distinct encryption context from the encryption context of the TCP/IP portion of the packet. By establishing and maintaining these distinct encryption contexts, the overlay network can decrypt and access the TCP/IP flow. This enables the overlay network provider to apply one or more TCP optimizations. At the same time, the separate encryption contexts ensure the data portion of each packet is never available in the clear at any point during transport.
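The two-context idea above can be sketched as follows using the third-party cryptography package; the keys, packet framing, and optimizer hook are assumptions, not the overlay's real wire format.

    from cryptography.fernet import Fernet

    end_to_end = Fernet(Fernet.generate_key())   # known only to the two customer endpoints
    overlay = Fernet(Fernet.generate_key())      # shared with the overlay network appliances

    def appliance_encapsulate(tcp_ip_headers: bytes, data: bytes):
        # Data portion and TCP/IP portion get distinct encryption contexts.
        return overlay.encrypt(tcp_ip_headers), end_to_end.encrypt(data)

    def overlay_node_process(enc_headers: bytes, enc_data: bytes):
        headers = overlay.decrypt(enc_headers)        # overlay CAN see the TCP/IP flow...
        headers = apply_tcp_optimizations(headers)    # ...so it can optimize it in transit
        return overlay.encrypt(headers), enc_data     # ...but the data portion stays opaque

    def apply_tcp_optimizations(headers: bytes) -> bytes:
        return headers  # placeholder for e.g. window scaling / retransmission tuning

    h, d = appliance_encapsulate(b"SRC:10.0.0.1 DST:10.0.0.2", b"confidential payload")
    h, d = overlay_node_process(h, d)
    print(end_to_end.decrypt(d))   # only the receiving appliance recovers the payload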
Methods and systems for malicious non-human user detection on computing devices are described. The method includes collecting, by a processing device, raw data corresponding to a user action, converting, by the processing device, the raw data to features, wherein the features represent characteristics of a human user or a malicious code acting as if it were the human user, and comparing, by the processing device, at least one of the features against a corresponding portion of a characteristic model to differentiate the human user from the malicious code acting as if it were the human user.
Among other things, this document describes systems, methods and devices for performance testing and dynamic placement of computing tasks in a distributed computing environment. In embodiments, a given client request is forwarded up a hierarchy of nodes, or across tiers in the hierarchy. A particular computing node in the system self-determines to perform a computing task to generate (or help generate) particular content for a response to the client. The computing node injects its identifier into the response indicating that it performed those tasks; the identifier is transmitted to the client with particular content. The client runs code that assesses the performance of the system from the client's perspective, e.g., in servicing the request, and beacons this performance data, along with the aforementioned identifier, to a system intelligence component. The performance information may be used to dynamically place and improve the placement of the computing task(s).
This document describes among other things, network security systems that incorporate a feedback loop so as to automatically and dynamically adjust the scope of network traffic that is subject to inspection. Risky traffic can be sent for inspection; risky traffic that is demonstrated to have high rate of threats can be outright blocked without further inspection; traffic that is causing errors due to protocol incompatibility or should not be inspected for regulatory or other reasons can be flagged so it bypasses the security inspection system. The system can operate on a domain by domain basis, IP address basis, or otherwise.
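A minimal sketch of such a feedback loop, with assumed thresholds and per-domain granularity, might look like the Python below; the counters and the three dispositions are illustrative.

    from collections import defaultdict

    stats = defaultdict(lambda: {"inspected": 0, "threats": 0, "errors": 0})

    def disposition(domain: str) -> str:
        s = stats[domain]
        if s["inspected"] < 100:
            return "inspect"                      # not enough evidence yet
        if s["errors"] / s["inspected"] > 0.05:
            return "bypass"                       # protocol-incompatible or exempt traffic
        if s["threats"] / s["inspected"] > 0.50:
            return "block"                        # demonstrated high threat rate
        return "inspect"

    def record_inspection(domain: str, was_threat: bool, had_error: bool) -> None:
        s = stats[domain]
        s["inspected"] += 1
        s["threats"] += int(was_threat)
        s["errors"] += int(had_error)

    for _ in range(120):
        record_inspection("bad.example", was_threat=True, had_error=False)
    print(disposition("bad.example"))   # -> block (skip further inspection)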
A distributed computing system provides a distributed data store for network enabled devices at the edge. The distributed database is partitioned such that each node in the system has its own partition and some number of followers that replicate the data in the partition. The data in the partition is typically used in providing services to network enabled devices from the edge. The set of data for a particular network enabled device is owned by the node to which the network enabled device connects. Ownership of the data (and the data itself) may move around the distributed computing system to different nodes, e.g., for load balancing, fault-resilience, and/or due to device movement. Security/health checks are enforced at the edge as part of a process of transferring data ownership, thereby providing a mechanism to mitigate compromised or malfunctioning network enabled devices.
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
69.
FAST, SECURE, AND SCALABLE DATA STORE AT THE EDGE FOR CONNECTING NETWORK ENABLED DEVICES
A distributed computing system provides a distributed data store for network enabled devices at the edge. The distributed database is partitioned such that each node in the system has its own partition and some number of followers that replicate the data in the partition. The data in the partition is typically used in providing services to network enabled devices from the edge. The set of data for a particular network enabled device is owned by the node to which the network enabled device connects. Ownership of the data (and the data itself) may move around the distributed computing system to different nodes, e.g., for load balancing, fault-resilience, and/or due to device movement. Security/health checks are enforced at the edge as part of a process of transferring data ownership, thereby providing a mechanism to mitigate compromised or malfunctioning network enabled devices.
An analysis system automates IP address structure discovery by deep analysis of sample IPv6 addresses using a set of computational methods, namely, information-theoretic analysis, machine learning, and statistical modeling. The system receives a sample set of IP addresses, computes entropies, discovers and mines address segments, builds a network model of address segment inter-dependencies, and provides a graphical display with various plots and tools to enable a network analyst to navigate and explore the exposed IPv6 address structure. The structural information is then applied as input to applications that include: (a) identifying homogeneous groups of client addresses, e.g., to assist in mapping clients to content in a CDN; (b) supporting network situational awareness efforts, e.g., in cyber defense; (c) selecting candidate targets for active measurements, e.g., traceroutes campaigns, vulnerability assessments, or reachability surveys; and (d) remotely assessing a network's addressing plan and address assignment policy.
H04L 41/142 - Network analysis or design using statistical or mathematical methods
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directory access protocols using domain name system [DNS]
H04L 61/5092 - Address allocation by self-assignment, e.g. picking addresses at random and testing if they are already in use
H04L 101/659 - Internet protocol version 6 [IPv6] addresses
H04L 101/686 - Types of network addresses using dual-stack hosts, e.g. in Internet protocol version 4 [IPv4]/Internet protocol version 6 [IPv6] networks
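The information-theoretic step in the entry above can be illustrated with per-nibble Shannon entropy over a small, synthetic sample; the segmentation and modeling built on top of these entropies are not shown, and the sample addresses are made up.

    import ipaddress, math
    from collections import Counter

    def nibbles(addr: str) -> str:
        # Expand to the full 32-hex-nibble form, dropping the colons.
        return ipaddress.IPv6Address(addr).exploded.replace(":", "")

    def per_nibble_entropy(sample):
        rows = [nibbles(a) for a in sample]
        entropies = []
        for i in range(32):
            counts = Counter(row[i] for row in rows)
            total = sum(counts.values())
            entropies.append(-sum(c / total * math.log2(c / total) for c in counts.values()))
        return entropies

    sample = ["2001:db8:0:1::10", "2001:db8:0:2::3f", "2001:db8:0:3::7a", "2001:db8:0:1::9c"]
    # Low-entropy positions suggest fixed/structural segments; high-entropy positions
    # suggest counters or pseudo-random interface identifiers.
    print([round(e, 2) for e in per_nibble_entropy(sample)])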
71.
Mapping internet routing with anycast and utilizing such maps for deploying and operating anycast points of presence (PoPs)
Generally, aspects of the invention involve creating a data structure (a map) that reflects routing of Internet traffic to Anycast prefixes. Assume, for example, that each Anycast prefix is associated with two or more deployments (Points of Presence or PoPs) that can provide a service such as DNS, content delivery (e.g., via proxy servers, as in a CDN), distributed network storage, compute, or otherwise. The map is built in such a way as to identify portions of the Internet (e.g., in IP address space) that are consistently routed with one another, i.e., always to the same PoP as one another, regardless of how the Anycast prefixes are deployed. Aspects of the invention also involve the use of this map, once created. The map can be applied in a variety of ways to assist and/or improve the operation of Anycast deployments and thus represents an improvement to computer networking technology.
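One assumed way to build such a map is to fingerprint each client prefix by the PoP it reaches under every probed Anycast configuration and group prefixes with identical fingerprints; the measurement table in this Python sketch is synthetic.

    from collections import defaultdict

    # measurements[client prefix][anycast prefix] = PoP observed to answer for that prefix
    measurements = {
        "203.0.113.0/24":  {"A": "PoP-Frankfurt", "B": "PoP-Frankfurt"},
        "198.51.100.0/24": {"A": "PoP-Frankfurt", "B": "PoP-Frankfurt"},
        "192.0.2.0/24":    {"A": "PoP-Tokyo",     "B": "PoP-Osaka"},
    }

    def build_catchment_map(meas: dict) -> dict:
        groups = defaultdict(list)
        for prefix, by_deployment in meas.items():
            signature = tuple(sorted(by_deployment.items()))   # routing-behaviour fingerprint
            groups[signature].append(prefix)
        return dict(groups)

    # Prefixes sharing a signature are consistently routed with one another,
    # regardless of how the Anycast prefixes are deployed.
    for signature, prefixes in build_catchment_map(measurements).items():
        print(signature, "->", prefixes)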
Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 43/067 - Generation of reports using time frame reporting
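A data-sink-owner policy with one monitored traffic class and a sliding one-minute window might be sketched as below; the schema, window length, and throttle action are assumptions.

    import time
    from collections import deque

    class OverloadPolicy:
        def __init__(self, traffic_class: str, max_events_per_min: int, action: str = "throttle"):
            self.traffic_class = traffic_class
            self.max_events_per_min = max_events_per_min
            self.action = action
            self.window = deque()

        def admit(self, event_class: str, now=None) -> bool:
            if event_class != self.traffic_class:
                return True                               # not a monitored class
            now = time.time() if now is None else now
            while self.window and now - self.window[0] > 60:
                self.window.popleft()                     # slide the one-minute window
            if len(self.window) >= self.max_events_per_min:
                return False                              # condition triggered: throttle
            self.window.append(now)
            return True

    policy = OverloadPolicy("edge-logs", max_events_per_min=3)
    print([policy.admit("edge-logs", now=t) for t in (0, 1, 2, 3, 70)])
    # -> [True, True, True, False, True]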
73.
Intermediary handling of identity services to guard against client side attack vectors
This document describes, among other things, security hardening techniques that guard against certain client-side attack vectors. These techniques generally involve the use of an intermediary that detects and handles identity service transactions on behalf of a client. In one embodiment, the intermediary establishes a resource domain session with the client in order to provide the client with desired resource domain content or services from a resource domain host. The intermediary detects when the resource domain host invokes a federated identity service as a condition of client access. The intermediary handles the identity transaction in the identity domain on behalf of the client within the client's resource domain session. Upon successful authentication and/or authorization with an IdP, the intermediary connects the results of the identity services domain transaction to the resource domain.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A proxy server is augmented with the capability of taking transient possession of a received entity for purposes of serving consuming devices. This capability supplements destination forwarding and/or origin server transactions performed by the proxy server. This capability enables several entity transfer modes, including a rendezvous service, in which the proxy server can (if invoked by a client) fulfill a client's request with an entity that the proxy server receives from a producing device contemporaneous with (or shortly after) the request for that entity. It also enables server-to-server transfers with synchronous or asynchronous destination forwarding behavior. It also enables a mode in which clients can request different representations of entities, e.g., from either the near-channel (e.g., the version stored at the proxy server) or a far-channel (e.g., at origin server). The teachings hereof are compatible with, although not limited to, conventional HTTP messaging protocols, including GET, POST and PUT methods.
This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a CDN. Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.
A server interacts with a bot detection service to provide bot detection as a requesting client interacts with the server. In an asynchronous mode, the server injects into a page a data collection script configured to record interactions at the requesting client, to collect sensor data about the interactions, and to send the collected sensor data to the server. After the client receives the page, the sensor data is collected and forwarded to the server through a series of posts. The server forwards the posts to the detection service. During this data collection, the server also may receive a request from the client for a protected endpoint. When this occurs, and in a synchronous mode, the server issues a query to the detection service to obtain a threat score based in part on the collected sensor data that has been received and forwarded by the server. Based on the threat score returned, the server then determines whether the request for the endpoint should be forwarded onward for handling.
A server interacts with a bot detection service to provide bot detection as a requesting client interacts with the server. In an asynchronous mode, the server injects into a page a data collection script configured to record interactions at the requesting client, to collect sensor data about the interactions, and to send the collected sensor data to the server. After the client receives the page, the sensor data is collected and forwarded to the server through a series of posts. The server forwards the posts to the detection service. During this data collection, the server also may receive a request from the client for a protected endpoint. When this occurs, and in a synchronous mode, the server issues a query to the detection service to obtain a threat score based in part on the collected sensor data that has been received and forwarded by the server. Based on the threat score returned, the server then determines whether the request for the endpoint should be forwarded onward for handling.
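The asynchronous-collection / synchronous-scoring split described in the two entries above can be sketched server-side as follows; the detection-service stand-ins, the score threshold, and the session keying are assumptions.

    from collections import defaultdict

    sensor_buffer = defaultdict(list)   # session id -> sensor data posts (asynchronous path)

    def on_sensor_post(session_id: str, sensor_data: dict) -> None:
        sensor_buffer[session_id].append(sensor_data)
        detection_service_ingest(session_id, sensor_data)    # forward as it arrives

    def on_protected_endpoint_request(session_id: str, request: dict):
        score = detection_service_score(session_id)          # synchronous query
        if score < 0.7:
            return forward_to_origin(request)
        return {"status": 403, "reason": "bot suspected"}

    # Stand-ins for the external detection service and origin (assumptions, not real APIs):
    _scores = defaultdict(float)
    def detection_service_ingest(sid, data):
        _scores[sid] += 0.4 if data.get("no_mouse_events") else 0.0
    def detection_service_score(sid):
        return min(_scores[sid], 1.0)
    def forward_to_origin(request):
        return {"status": 200}

    on_sensor_post("s1", {"no_mouse_events": True})
    on_sensor_post("s1", {"no_mouse_events": True})
    print(on_protected_endpoint_request("s1", {"path": "/login"}))   # -> 403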
An end-to-end verifiable multi-factor authentication scheme uses an authentication service. An authentication request is received from an organization, the request having been generated at the organization in response to its receipt of an access request from a user. The user has an associated public-private key pair. The organization provides the authentication request together with a first nonce. In response to receiving the authentication request and the first nonce, the authentication service generates a second nonce, and then it sends the first and second nonces to the user. Thereafter, the service receives a data string, the data string having been generated by the client applying its private key over the first and second nonces. Using the user's public key, the service attempts to verify that the data string includes the first and second nonces. If it does, the authentication service provides the authentication decision in response to the authentication request, together with a proof that the user approved the authentication request.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
An end-to-end verifiable multi-factor authentication scheme uses an authentication service. An authentication request is received from an organization, the request having been generated at the organization in response to its receipt of an access request from a user. The user has an associated public-private key pair. The organization provides the authentication request together with a first nonce. In response to receiving the authentication request and the first nonce, the authentication service generates a second nonce, and then it sends the first and second nonces to the user. Thereafter, the service receives a data string, the data string having been generated by the client applying its private key over the first and second nonces. Using the user's public key, the service attempts to verify that the data string includes the first and second nonces. If it does, the authentication service provides the authentication decision in response to the authentication request, together with a proof that the user approved the authentication request.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
Improved technology for managing the caching of objects that are rarely requested by clients. A cache system can be configured to assess a class of objects (such as objects associated with a particular domain) for cacheability, based on traffic observations. If the maximum possible cache offloading for the class of objects falls below a threshold level, which indicates a high proportion of non-cacheable or “single-hitter” content, then cache admission logic is configured to admit objects only after multiple client requests during a time period (usually the object's time in cache, or eviction age). Otherwise, the cache admission logic may operate to admit objects to the cache after the first client request, assuming the object meets cacheability criteria. The technological improvements disclosed herein can be used to improve cache utilization, for example by preventing single-hitter objects from pushing out multi-hit objects (the objects that get hits after being added to cache).
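A sketch of that class-level admission decision, with an assumed offload threshold and distinct-client counting standing in for "multiple client requests":

    from collections import defaultdict

    OFFLOAD_THRESHOLD = 0.30
    seen_in_window = defaultdict(set)    # object URL -> distinct clients seen this eviction-age window

    def max_possible_offload(class_stats: dict) -> float:
        # Requests for objects asked for more than once are the only ones a cache
        # could ever serve from memory, so they bound the achievable offload.
        repeat = class_stats["requests"] - class_stats["single_hit_objects"]
        return repeat / class_stats["requests"] if class_stats["requests"] else 0.0

    def should_admit(url: str, client: str, class_stats: dict) -> bool:
        if max_possible_offload(class_stats) >= OFFLOAD_THRESHOLD:
            return True                                    # normal: admit on first request
        seen_in_window[url].add(client)
        return len(seen_in_window[url]) >= 2               # single-hitter-heavy class: wait

    stats = {"requests": 1000, "single_hit_objects": 850}  # mostly one-and-done objects
    print(should_admit("/img/a.png", "client-1", stats))   # False: first request only
    print(should_admit("/img/a.png", "client-2", stats))   # True: second distinct client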
This document relates to a CDN balancing mitigation system. An implementing CDN can deploy systems and techniques to monitor the domains of content provider customers with an active DNS scanner and detect which are using other CDNs on the same domain. This information can be used as an input signal for identifying and implementing adjustments to CDN configuration. Both automated and semi-automated adjustments are possible. The system can issue configuration adjustments or recommendations to the implementing CDN's servers or to its personnel. These might include “above-SLA” treatments intended to divert traffic to the implementing CDN. The effectiveness can be measured with the multi-CDN balance subsequently observed. The scanning and adjustment workflow can be permanent, temporary, or cycled. Treatments may include a variety of things, such as more cache storage, routing to less loaded servers, and so forth.
H04L 61/30 - Managing network names, e.g. use of aliases or nicknames
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1029 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
82.
Secure transfer of data between programs executing on the same end-user device
It is often necessary to securely transfer data, such as authenticators or authorization tokens, between programs running on the same end-user device. The teachings hereof enable the pairing of two programs executing on a given end-user device and then the transfer of data from one program to the other. In an embodiment, a first program connects to a server and sends encrypted data elements. A second program intercepts the connection and/or the encrypted data elements. The second program tunnels the encrypted data elements (which remain opaque to the second program at this point) to a server, using an encapsulating protocol. This enables the server to receive the data elements sent by the first program, decrypt them, and provide them to the second program via return message using control fields of the encapsulating protocol. Once set up, the tunneling arrangement enables bidirectional data transfer.
The methods and system described herein automatically generate network router access control entries (ACEs) that are used to filter internet traffic and more specifically to block malicious traffic. The rules are generated by an ACE engine that processes incoming internet packets and examines existing ACEs and a statistical profile of the captured packets to produce one or more recommended ACEs with a quantified measure of confidence. Preferably, a recommended ACE is identified in real time during the attack, and preferably selected from a library of pre-authored ACEs. It is then deployed automatically or alternatively sent to system personnel for review and confirmation.
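The recommendation step above might be sketched as profiling captured packets against a hypothetical library of pre-authored ACEs and reporting coverage as the confidence measure; the library, packet shape, and scoring rule are assumptions.

    import ipaddress
    from collections import Counter

    ACE_LIBRARY = {
        "deny udp 198.51.100.0/24 any eq 53": ipaddress.ip_network("198.51.100.0/24"),
        "deny udp 203.0.113.0/24 any eq 53":  ipaddress.ip_network("203.0.113.0/24"),
    }

    def recommend_ace(packets):
        matches = Counter()
        for pkt in packets:
            src = ipaddress.ip_address(pkt["src"])
            for ace, net in ACE_LIBRARY.items():
                if src in net:
                    matches[ace] += 1
        if not matches:
            return None
        ace, hits = matches.most_common(1)[0]
        return ace, hits / len(packets)          # confidence = share of attack traffic covered

    attack = [{"src": f"198.51.100.{i}"} for i in range(1, 90)] + [{"src": "192.0.2.7"}] * 10
    print(recommend_ace(attack))   # ('deny udp 198.51.100.0/24 any eq 53', ~0.90)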
A payment network comprises ledger services, and associated wallet services. To provide wallet services resiliency, multiple active wallet replicas are used to enable the system (1) to rely on collision detection and blockchain idempotency to produce a single correct outcome, and (2) to implement various collision avoidance techniques. Using a ledger services idempotency feature, multiple actors form independent valid intents and know that no more than one intent will get finalized on the ledger. In a variant embodiment, replicas implement processing delays and utilize so-called "intent" messages. By adding the delays, decision logic is biased towards one intent. The intent messages are used to intercede before a wallet handles a same original upstream message and forms a different intent. Seeing the replica's intent, the wallet can adopt the same intent and proceed with downstream processing. After adopting intent, preferably a wallet also informs its replicas of its intent.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
This document describes systems, devices, and methods for testing the integration of a content provider's origin infrastructure with a content delivery network (CDN). In embodiments, the teachings hereof enable a content provider's developer to rapidly and flexibly create test environments that send test traffic through the same CDN hardware and software that handle (or at least have the ability to handle) production traffic, but in isolation from that production traffic and from each other. Furthermore, in embodiments, the teachings hereof enable the content provider to specify an arbitrary test origin behind its corporate firewall with which the CDN should communicate.
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
86.
High performance distributed system of record with wallet services resiliency
A payment network comprises ledger services, and associated wallet services. To provide wallet services resiliency, multiple active wallet replicas are used to enable the system (1) to rely on collision detection and blockchain idempotency to produce a single correct outcome, and (2) to implement various collision avoidance techniques. Using a ledger services idempotency feature, multiple actors form independent valid intents and know that no more than one intent will get finalized on the ledger. In a variant embodiment, replicas implement processing delays and utilize so-called “intent” messages. By adding the delays, decision logic is biased towards one intent. The intent messages are used to intercede before a wallet handles a same original upstream message and forms a different intent. Seeing the replica's intent, the wallet can adopt the same intent and proceed with downstream processing. After adopting intent, preferably a wallet also informs its replicas of its intent.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
The techniques herein provide for enhanced overlay network-based transport of traffic, such as IPsec traffic, e.g., to and from customer branch office locations, facilitated through the use of the Internet-based overlay routing infrastructure. This disclosure describes a method of providing integrity protection for traffic on the overlay network.
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 45/64 - Routing or path finding of packets in data switching networks using an overlay routing layer
H04N 21/6334 - Control signals issued by server directed to the network components or client directed to client for authorisation, e.g. by transmitting a key
88.
Synchronizing independent media and data streams using media stream synchronization points
A messaging channel is embedded directly into a media stream. Messages delivered via the embedded messaging channel are extracted at a client media player. According to a variant embodiment, and in lieu of embedding all of the message data in the media stream, only a coordination index is injected, and the message data is sent separately and merged into the media stream downstream (at the client media player) based on the coordination index. In one example embodiment, multiple data streams (each potentially with different content intended for a particular “type” or class of user) are transmitted alongside the video stream in which the coordination index (e.g., a sequence number) has been injected into a video frame. Based on a user's service level, a particular one of the multiple data streams is released when the sequence number appears in the video frame, and the data in that stream is associated with the media.
A method of detecting bots, preferably in an operating environment supported by a content delivery network (CDN) that comprises a shared infrastructure of distributed edge servers from which CDN customer content is delivered to requesting end users (clients). The method begins as clients interact with the edge servers. As such interactions occur, transaction data is collected. The transaction data is mined against a set of “primitive” or “compound” features sets to generate a database of information. In particular, preferably the database comprises one or more data structures, wherein a given data structure associates a feature value with its relative percentage occurrence across the collected transaction data. Thereafter, and upon receipt of a new transaction request, primitive or compound feature set data derived from the new transaction request are compared against the database. Based on the comparison, an end user client associated with the new transaction request is then characterized, e.g., as being associated with a human user, or a bot.
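The feature-frequency database in the entry above can be sketched as a per-feature value-occurrence table, with a new request characterized by how common its feature values are; the features and cutoff in this Python sketch are illustrative assumptions.

    from collections import Counter, defaultdict

    class FeatureDB:
        def __init__(self):
            self.counts = defaultdict(Counter)    # feature name -> value -> occurrences
            self.total = 0

        def ingest(self, transaction: dict) -> None:
            self.total += 1
            for name, value in transaction.items():
                self.counts[name][value] += 1

        def typicality(self, transaction: dict) -> float:
            # Average relative occurrence of this request's feature values (0..1).
            shares = [self.counts[n][v] / self.total for n, v in transaction.items()]
            return sum(shares) / len(shares)

    db = FeatureDB()
    for _ in range(95):
        db.ingest({"header_order": "UA,Accept,Cookie", "tls_fingerprint": "fp-common"})
    for _ in range(5):
        db.ingest({"header_order": "Accept,UA", "tls_fingerprint": "fp-rare"})

    score = db.typicality({"header_order": "Accept,UA", "tls_fingerprint": "fp-rare"})
    print("likely bot" if score < 0.10 else "likely human", round(score, 2))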
A server interacts with a bot detection service to provide bot detection as a requesting client interacts with the server. In an asynchronous mode, the server injects into a page a data collection script configured to record interactions at the requesting client, to collect sensor data about the interactions, and to send the collected sensor data to the server. After the client receives the page, the sensor data is collected and forwarded to the server through a series of posts. The server forwards the posts to the detection service. During this data collection, the server also may receive a request from the client for a protected endpoint. When this occurs, and in a synchronous mode, the server issues a query to the detection service to obtain a threat score based in part on the collected sensor data that has been received and forwarded by the server. Based on the threat score returned, the server then determines whether the request for the endpoint should be forwarded onward for handling.
A server interacts with a bot detection service to provide bot detection as a requesting client interacts with the server. In an asynchronous mode, the server injects into a page a data collection script configured to record interactions at the requesting client, to collect sensor data about the interactions, and to send the collected sensor data to the server. After the client receives the page, the sensor data is collected and forwarded to the server through a series of posts. The server forwards the posts to the detection service. During this data collection, the server also may receive a request from the client for a protected endpoint. When this occurs, and in a synchronous mode, the server issues a query to the detection service to obtain a threat score based in part on the collected sensor data that has been received and forwarded by the server. Based on the threat score returned, the server then determines whether the request for the endpoint should be forwarded onward for handling.
A method and apparatus for data collection to facilitate bot detection. According to this approach, and in lieu of conventional user agent-based fingerprinting, a client script is executed to attempt to identify one or more Javascript “landmark” features. In one embodiment, a landmark Javascript feature is a Javascript implementation that exists in a first browser type but not a second browser type distinct from the first browser type, and that also exists in one or more releases of the first browser type, but not in one or more other releases of the first browser type. By testing against landmark Javascript features as opposed to an unconstrained set of API calls and the like, the technique herein provides for much more computationally-efficient client-side operation.
A server interacts with a bot detection service to provide bot detection as a requesting client interacts with the server. In an asynchronous mode, the server injects into a page a data collection script configured to record interactions at the requesting client, to collect sensor data about the interactions, and to send the collected sensor data to the server. After the client receives the page, the sensor data is collected and forwarded to the server through a series of posts. The server forwards the posts to the detection service. During this data collection, the server also may receive a request from the client for a protected endpoint. When this occurs, and in a synchronous mode, the server issues a query to the detection service to obtain a threat score based in part on the collected sensor data that has been received and forwarded by the server. Based on the threat score returned, the server then determines whether the request for the endpoint should be forwarded onward for handling.
This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a CDN. Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing
G06F 12/0813 - Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
A proxy server is augmented with the capability of taking transient possession of a received entity for purposes of serving consuming devices. This capability supplements destination forwarding and/or origin server transactions performed by the proxy server. This capability enables several entity transfer modes, including a rendezvous service, in which the proxy server can (if invoked by a client) fulfill a client's request with an entity that the proxy server receives from a producing device contemporaneous with (or shortly after) the request for that entity. It also enables server-to-server transfers with synchronous or asynchronous destination forwarding behavior. It also enables a mode in which clients can request different representations of entities, e.g., from either the near-channel (e.g., the version stored at the proxy server) or a far-channel (e.g., at origin server). The teachings hereof are compatible with, although not limited to, conventional HTTP messaging protocols, including GET, POST and PUT methods.
G06F 15/167 - Interprocessor communication using a common memory, e.g. mailbox
H04L 67/563 - Data redirection of data network streams
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
96.
Detection and optimization of content in the payloads of API messages
A server in a content delivery network (CDN) can examine API traffic and extract therefrom content that can be optimized before it is served to a client. The server can apply content location instructions to a given API message to find such content therein. Upon finding an instance of such content, the server can verify the identity of the content by applying a set of content verification instructions. If verification succeeds, the server can retrieve an optimized version of the identified content and swap it into the API message for the original version. If an optimized version is not available, the server can initiate an optimization process so that next time the optimized version will be available. In some embodiments, an analysis service can assist by observing traffic from an API endpoint over time, detecting the format of API messages and producing the content location and verification instructions.
Origin offload is a key performance indicator of a content delivery network (CDN). This patent document presents unique methods and systems for measuring origin offload and applying those measurements to improve the offload. The techniques presented herein enable resource-efficient measurement of origin offload by individual servers and aggregation and analysis of such measurements to produce significant insights. The teachings hereof can be used to better identify root causes of suboptimal offload performance, to tune CDN settings and configurations, and to modify network operations, deployment and/or capacity planning. In addition, discussed herein are improved metrics showing offload in relation to the maximum achievable offload for the particular traffic being served.
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
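The two metrics discussed above reduce to simple ratios; in the sketch below the traffic-mix figures are made up, and treating "uncacheable bytes" as the only bound on achievable offload is a simplifying assumption.

    def origin_offload(client_bytes: int, origin_bytes: int) -> float:
        # Classic bytes-based offload: share of client traffic not fetched from origin.
        return 1.0 - origin_bytes / client_bytes

    def relative_offload(client_bytes: int, origin_bytes: int, uncacheable_bytes: int) -> float:
        # Offload in relation to the maximum achievable for this traffic: discount
        # bytes the cache could never have served (e.g., no-store responses).
        achieved = origin_offload(client_bytes, origin_bytes)
        max_achievable = 1.0 - uncacheable_bytes / client_bytes
        return achieved / max_achievable if max_achievable else 0.0

    # 1000 GB served to clients, 250 GB fetched from origin, of which 200 GB was
    # marked uncacheable: absolute offload looks mediocre, relative offload is high.
    print(round(origin_offload(1000, 250), 2))            # 0.75
    print(round(relative_offload(1000, 250, 200), 2))     # 0.94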
98.
Measuring and improving origin offload and resource utilization in caching systems
Origin offload is a key performance indicator of a content delivery network (CDN). This patent document presents unique methods and systems for measuring origin offload and applying those measurements to improve the offload. The techniques presented herein enable resource-efficient measurement of origin offload by individual servers and aggregation and analysis of such measurements to produce significant insights. The teachings hereof can be used to better identify root causes of suboptimal offload performance, to tune CDN settings and configurations, and to modify network operations, deployment and/or capacity planning. In addition, discussed herein are improved metrics showing offload in relation to the maximum achievable offload for the particular traffic being served.
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04N 21/2665 - Gathering content from different sources, e.g. Internet and satellite
Disclosed herein are systems and methods for coordinating the wireless sharing of content between vehicles in a secure and efficient manner. In one embodiment, vehicles recognize when there is an opportunity for them to participate in content sharing, such as when a vehicle is temporarily stopped at a traffic signal, or stuck in traffic, or the like. In response to this opportunity, the vehicle can notify a coordination component, sending a manifest of content it has available for sharing and content that it desires. The coordination component can match two vehicles in location and time, and can facilitate a secure wireless content share transaction. Such a transaction can involve use of ephemeral wireless network parameters, including temporary network names, passwords and/or security keys. Feedback about the success of the content transfer may be reported to system component(s) to improve identification of sharing opportunities in the future.
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
H04W 4/46 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
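The matching step in the entry above can be sketched as pairing two stopped vehicles whose dwell windows overlap, whose coarse locations coincide, and whose manifests complement each other, then minting ephemeral wireless parameters; all field names and thresholds below are assumptions.

    import secrets
    from dataclasses import dataclass

    @dataclass
    class Opportunity:
        vehicle_id: str
        location: tuple          # (lat, lon), coarse
        stopped_until_s: float   # epoch seconds
        has: set
        wants: set

    def try_match(a: Opportunity, b: Opportunity, now_s: float):
        close = (abs(a.location[0] - b.location[0]) < 0.001
                 and abs(a.location[1] - b.location[1]) < 0.001)
        overlap = min(a.stopped_until_s, b.stopped_until_s) - now_s > 20   # shared dwell, seconds
        useful = (a.has & b.wants) or (b.has & a.wants)
        if not (close and overlap and useful):
            return None
        return {  # ephemeral wireless parameters for this one transfer
            "ssid": f"share-{secrets.token_hex(3)}",
            "psk": secrets.token_urlsafe(12),
            "transfer": sorted((a.has & b.wants) | (b.has & a.wants)),
        }

    v1 = Opportunity("car-1", (52.5200, 13.4050), stopped_until_s=1060, has={"map-tiles-v9"}, wants=set())
    v2 = Opportunity("car-2", (52.5201, 13.4049), stopped_until_s=1090, has=set(), wants={"map-tiles-v9"})
    print(try_match(v1, v2, now_s=1000))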
100.
IDENTIFICATION AND COORDINATION OF OPPORTUNITIES FOR VEHICLE TO VEHICLE WIRELESS CONTENT SHARING
Disclosed herein are systems and methods for coordinating the wireless sharing of content between vehicles in a secure and efficient manner. In one embodiment, vehicles recognize when there is an opportunity for them to participate in content sharing, such as when a vehicle is temporarily stopped at a traffic signal, or stuck in traffic, or the like. In response to this opportunity, the vehicle can notify a coordination component, sending a manifest of content it has available for sharing and content that it desires. The coordination component can match two vehicles in location and time, and can facilitate a secure wireless content share transaction. Such a transaction can involve use of ephemeral wireless network parameters, including temporary network names, passwords and/or security keys. Feedback about the success of the content transfer may be reported to system component(s) to improve identification of sharing opportunities in the future.
H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]