In some disclosed embodiments, first input data corresponding to a first natural language input may be received and processed to determine at least a first natural language understanding (NLU) hypothesis for the first natural language input. First session data identifying a first skill corresponding to the first NLU hypothesis may be determined and used to obtain first visual content corresponding to the first skill. Second session data identifying a second skill may also be determined in response to the input data and be used to obtain second visual content corresponding to the second skill. The device may output a first graphical user interface (GUI) element including the first visual content and a second GUI element including the second visual content. Second input data corresponding to a second input may be received from the device and used to determine, using the second session data, that the second input corresponds to an intent to invoke the second skill.
A cross-account data management (CAM) service of a provider network may assign, to a primary account of an organization of a client, permission to manage resource management plans for other accounts of the organization. The CAM service may specify, using the primary account (e.g., by an administrator using the primary account), a resource management plan (e.g., a data backup plan). The CAM service may indicate, using the primary account, multiple accounts of the organization that the resource management plan is to be implemented for. The CAM service may cause, based on the permission assigned to the primary account, the resource management plan to be implemented for the different accounts of the organization (e.g., by causing execution of jobs to implement a backup plan).
Post-training quantization of weight values and activation values substantially reduces the memory and processing requirements of floating-point (FP) large language models (LLMs). A quantization parameter training process is performed on the FP LLM to determine quantization parameters. Weight-activation scaling may be applied to linear modules of the LLM, including down projection layers, enabling subsequent per-tensor quantization for activation values. The weight and activation values of the FP LLM are quantized from FP to integer values. Different layers may have different integer sizes. For example, weight values may be reduced to 4-bit integers and activation values to 8-bit integers. Layers within the model are modified to operate on the integer values. For example, an integer SiLU module may provide an integer approximation of a sigmoid-weighted linear unit activation function.
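A minimal sketch of the kind of symmetric per-tensor quantization the abstract above describes at a high level, mapping FP weights to 4-bit integers and activations to 8-bit integers. This is an illustration only, not the disclosed system; all names and the symmetric scheme are assumptions.

```python
# Illustrative only: symmetric per-tensor quantization of FP weights to int4 and
# activations to int8, followed by an integer matmul rescaled back to FP.
import numpy as np

def quantize_per_tensor(x: np.ndarray, num_bits: int):
    """Map FP values to signed integers of the given width plus a scale factor."""
    qmax = 2 ** (num_bits - 1) - 1              # 7 for int4, 127 for int8
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Example: a weight tensor goes to 4-bit integers, an activation tensor to 8-bit.
weights = np.random.randn(256, 256).astype(np.float32)
acts = np.random.randn(8, 256).astype(np.float32)

w_q, w_scale = quantize_per_tensor(weights, num_bits=4)
a_q, a_scale = quantize_per_tensor(acts, num_bits=8)

# The integer product is rescaled once per output tensor to approximate the FP result.
out = dequantize(a_q @ w_q.T, a_scale * w_scale)
print(out.shape, float(np.abs(out - acts @ weights.T).max()))
```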
Systems and methods are provided for processing and responding to utterances in different ways depending upon the intent and content of the utterances. Different back-end workflows and front-end user experiences may be triggered for utterances that have different intents. For example, if a user is researching high-consideration items for possible acquisition, such as complex technology or new fashion, providing different user experiences depending upon the particular type of utterance can make the research and acquisition process more efficient.
Systems and methods are provided for strongly isolating processes executing in a process virtual machine (PVM), to provide a security boundary similar to that provided by a system virtual machine (SVM). The PVM can include a hypervisor that supports execution of multiple processes within the PVM. The hypervisor can intermediate data resource requests from the processes and apply translation rules to such requests, which rules can isolate data resources accessible to each process from data resources available to other processes of the PVM.
A system for removably coupling a battery assembly to a robot or charging system is provided herein. The system can include a cradle body, one or more connectors, and one or more charging interfaces. The cradle body can removably couple to and support a battery of varying shapes and sizes. The cradle body can wrap around at least a portion of the battery. The one or more connectors can be attached to the cradle body to releasably engage corresponding connectors of a robot when the battery assembly is coupled to the robot. The one or more charging interfaces can be attached to the cradle body, and power is provided to the battery via the charging interfaces to charge the battery.
H01M 50/249 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders specially adapted for aircraft or vehicles, e.g. cars or trains
H01M 50/262 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders with fastening means, e.g. locks
H01M 50/267 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders having means for adapting to batteries or cells of different types or different sizes
H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
H02J 50/10 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
Systems and methods are provided for dynamic format conversion of shared files. Computing devices can use different formats to store the same or substantially similar information. This difference in format can cause incompatibilities between such devices when these devices attempt to share files. The present disclosure can address this problem by providing for dynamic format conversion of shared files. Different computing devices may share access to a file directory storing base content, with each device being provided with a different view of the file directory such that base content in the directory appears in a format supported by the respective device. Further, input/output operations to files in the directory can be converted between respective formats, such that each device manipulates base content in the directory in a supported format.
Systems and methods for implementing optimized host-side device-specific queues for a storage device are described. A host system may implement a host-side queue for a storage device that is optimized using device-specific parameters. When an access request to the storage device is received, the request may be enqueued in an order optimized according to parameters of the storage device. Requests are then sent to the device in an optimized order. Optimization parameters may be provided by the manufacturer and read by the host system from the device, the parameters including physical device geometry and runtime telemetry data. In some embodiments, queue ordering for the host-side queue may be supplied by the storage device.
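A toy sketch of the host-side queueing idea above: pending requests are ordered using a device-reported parameter before being dispatched. The zone-based ordering and the parameter name are hypothetical illustrations, not the described product's actual optimization.

```python
# Hypothetical host-side queue: order requests by a device-reported zone size,
# then by offset within the zone, and dispatch in that optimized order.
import heapq

class HostSideQueue:
    def __init__(self, device_params):
        # e.g. a preferred zone size read from the device (illustrative parameter).
        self.zone_size = device_params.get("zone_size", 1 << 20)
        self._heap = []
        self._seq = 0  # tie-breaker to keep the heap ordering stable

    def enqueue(self, lba, length):
        key = (lba // self.zone_size, lba % self.zone_size, self._seq)
        heapq.heappush(self._heap, (key, {"lba": lba, "length": length}))
        self._seq += 1

    def dispatch(self):
        # Requests are sent to the device in the optimized order.
        while self._heap:
            yield heapq.heappop(self._heap)[1]

q = HostSideQueue({"zone_size": 4096})
for lba in (9000, 100, 4100, 200):
    q.enqueue(lba, 512)
print([r["lba"] for r in q.dispatch()])   # [100, 200, 4100, 9000]
```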
A testing manager at a provider network configures a multi-network-function test sandbox for a first network function developed by a first vendor. To configure the sandbox, the testing manager causes the first network function to be run at a first server and verifies network connectivity between the first network function and another network function which is not developed by the first vendor. The testing manager causes a test to be run, which includes transmission of messages from the first network function to the second network function. A result of the test is provided via a programmatic interface.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
15.
Motion tracking and image capturing using stylus devices
Systems and methods for motion tracking and image capture using a stylus device are described. Example embodiments involve a stylus device outputting/emitting light on a surface and receiving backscattered light at an event sensor of the stylus device. The event sensor may generate event data based in part on intensity of the received backscattered light. The stylus device may perform a motion tracking and/or image capture operation based on the event data.
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Described are systems and methods for monitoring and managing power sharing across multiple loads connected to a shared direct current (DC) bus. Embodiments of the present disclosure may be implemented to monitor and manage power sharing between multiple propulsion mechanisms (e.g., motors) of an aerial vehicle, such as an unmanned aerial vehicle (UAV). One or more shape functions establishing current limits as a function of the DC bus voltage may be determined for each load (e.g., propulsion mechanism/motor) of the multiple loads (e.g., propulsion mechanisms/motors) connected to the DC bus, and the current consumed by each load can be limited based on the DC bus voltage and each shape function.
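A small illustration of the shape-function idea described above: a per-load mapping from DC bus voltage to a current limit, evaluated here with linear interpolation. The breakpoints and values are made up for the example, not taken from the disclosure.

```python
# Illustrative only: a per-load "shape function" that limits current as a
# function of the DC bus voltage, so loads back off as the bus sags.
import numpy as np

def make_shape_function(voltage_breakpoints, current_limits):
    """Return f(v_bus) -> maximum allowed current for one load (e.g. one motor)."""
    v = np.asarray(voltage_breakpoints, dtype=float)
    i = np.asarray(current_limits, dtype=float)
    return lambda v_bus: float(np.interp(v_bus, v, i))

motor_limit = make_shape_function(
    voltage_breakpoints=[42.0, 46.0, 50.0],   # volts (hypothetical)
    current_limits=[0.0, 15.0, 30.0],         # amps  (hypothetical)
)

for v_bus in (50.0, 47.0, 44.0, 41.0):
    print(f"bus={v_bus:4.1f} V -> limit={motor_limit(v_bus):5.2f} A")
```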
H02P 5/68 - Arrangements specially adapted for regulating or controlling the speed or torque of two or more electric motors controlling two or more DC dynamo-electric motors
Techniques for playing a first audio track for a video on a first audio device of a first audience member and simultaneously playing a second audio track that includes different content for the video than the first audio track on a second audio device of a second audience member by a single media player are described. According to some examples, a computer-implemented method includes receiving, by a media player, a manifest indicating a video, a first audio track for the video, and a second audio track comprising an audio-narrated description of the video; receiving an indication from a user of the media player that indicates a first audio device of a first audience member is to output the first audio track, and that indicates a second audio device of a second audience member is to output the second audio track; sending the video to a display coupled to the media player for displaying of the video to the first audience member and the second audience member; sending the first audio track, concurrently with the sending of the video to the display, to the first audio device of the first audience member by a first audio player of the media player; and sending the second audio track, concurrently with the sending of the video to the display and the sending of the first audio track to the first audio device, to the second audio device of the second audience member by a second audio player of the media player.
Techniques for a license manager service of a cloud system to provide users with the ability to activate and run licensed applications in the cloud system. Manually tracking the usage of software licenses for licensed applications can be cumbersome and error-prone. Further, the process of activating the licenses for third-party applications may be difficult or impossible depending on the activation protocols and procedures put in place by the application providers. The license manager may provide users with the ability to activate and use licensed applications, and may further provide users with a managed experience for activating the licenses for the applications. The license manager may launch licensed applications on virtual resources in a user's VPC, manage the process for activating the licensed applications with third-party providers, and provide the users with access to licensed applications that have been activated and configured by the license manager.
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]
19.
Managing fairness of resource scheduling for performance bursting in multi-tenant environments
Techniques for managing cloud computing resources hosting burstable performance instances are described. A host computer system of a provider network executes burstable performance compute instances. Compute capacity usage data is obtained from the host computer system, the compute capacity usage data including a first indication of a first compute capacity used by a first burstable performance compute instance. A first weight for the first burstable performance compute instance is calculated, the first weight being inversely related to the first compute capacity. A scheduler of the host computer system is updated with process prioritization weights, the process prioritization weights including a first process prioritization weight that is based at least in part on the first weight. The scheduler allocates compute capacity based on the process prioritization weights.
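A minimal sketch of the weighting step described above: each burstable instance gets a weight inversely related to its recent compute-capacity usage. The normalization into a fair-share distribution is an assumption added for illustration.

```python
# Sketch, not the provider's scheduler: weights inversely related to each
# burstable instance's recent compute-capacity usage, normalized for fair sharing.
def prioritization_weights(capacity_used_by_instance, epsilon=1e-6):
    """capacity_used_by_instance: {instance_id: fraction of capacity recently used}."""
    raw = {i: 1.0 / (used + epsilon) for i, used in capacity_used_by_instance.items()}
    total = sum(raw.values())
    return {i: w / total for i, w in raw.items()}

usage = {"instance-a": 0.80, "instance-b": 0.20, "instance-c": 0.05}
for inst, w in sorted(prioritization_weights(usage).items(), key=lambda kv: -kv[1]):
    print(f"{inst}: weight={w:.3f}")
# Heavier recent users receive lower weight, so lighter users are not starved.
```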
Techniques for an optimization service to gradually host workloads of users on more optimized virtual machine (VM) instance types to allow users to gain confidence in recommendations provided by the optimization service. The techniques include providing users with a recommended order of VM instance types that gradually move from a current VM instance type towards more optimal VM instance types. The recommended order may initially recommend that the workload be hosted on a VM instance type that is slightly more optimized than the current VM instance type, but is fairly similar to the current VM instance type. The optimization service may then provide the user with performance data that illustrates how well the new VM instance type performed when hosting the workload. The user may gain trust in the recommendations by observing the performance metrics, and continue to use more optimized VM instance types in the recommended order.
Techniques for determining when speech is directed at another individual of a dialog, and storing a representation of such user-directed speech for use as context when processing subsequently-received system-directed speech are described. A system receives audio data and/or video data and determines therefrom that speech in the audio data is user-directed. Based on this, the system determines whether the speech is able to be used to perform an action by the system. If the speech is able to be used to perform an action, the system stores a natural language representation of the speech. Thereafter, when the system receives system-directed speech, the system generates a rewrite of a natural language representation of the system-directed speech based on the previously-received user-directed speech. The system then determines output data responsive to the system-directed speech using the rewritten natural language representation.
Techniques to virtualize a physical hardware clock are described. The techniques may include utilizing a coefficients storage to store timestamp coefficient entries. When a timestamp request is received from a client, a selection circuit selects a timestamp coefficient entry from the coefficients storage. A compute circuit then computes a timestamp according to the client's virtualized clock based on the selected timestamp coefficient entry and a counter value of a counter that is driven by a physical clock signal.
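A hedged sketch of the general idea in the abstract above: a per-client coefficient entry maps a shared hardware counter to that client's virtualized clock. The field names and the linear form (offset plus scaled counter) are assumptions for illustration, not the disclosed circuit.

```python
# Assumed linear mapping from a shared physical counter to a per-client virtual clock.
from dataclasses import dataclass

@dataclass
class TimestampCoefficients:
    client_id: int
    offset_ns: int     # virtual-clock value when the counter read zero
    numerator: int     # scale applied to the raw counter value
    denominator: int

def virtual_timestamp(counter_value: int, coeff: TimestampCoefficients) -> int:
    """Compute a client's virtualized timestamp from the shared counter value."""
    return coeff.offset_ns + (counter_value * coeff.numerator) // coeff.denominator

# Two clients share one physical counter but observe different virtual clocks.
table = {
    1: TimestampCoefficients(client_id=1, offset_ns=0, numerator=10, denominator=3),
    2: TimestampCoefficients(client_id=2, offset_ns=5_000, numerator=1, denominator=1),
}
counter = 1_000_000
for client_id, coeff in table.items():
    print(client_id, virtual_timestamp(counter, coeff))
```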
Disclosed herein are techniques for implementing a large fully-connected layer in an artificial neural network. The large fully-connected layer is grouped into multiple fully-connected subnetworks. Each fully-connected subnetwork is configured to classify an object into an unknown class or a class in a subset of target classes. If the object is classified as the unknown class by a fully-connected subnetwork, a next fully-connected subnetwork may be used to further classify the object. In some embodiments, the fully-connected layer is grouped based on a ranking of target classes.
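A conceptual sketch of the cascade described above: each fully-connected subnetwork covers a subset of target classes plus an extra "unknown" output, and the next subnetwork is consulted only when the previous one answers "unknown". Random weights stand in for a trained network; this is not the disclosed hardware implementation.

```python
# Conceptual cascade of fully-connected subnetworks with an "unknown" class.
import numpy as np

rng = np.random.default_rng(0)
UNKNOWN = -1

class Subnetwork:
    def __init__(self, in_dim, class_ids):
        self.class_ids = list(class_ids)
        # One extra output column represents the "unknown" class.
        self.w = rng.standard_normal((in_dim, len(class_ids) + 1))

    def classify(self, x):
        scores = x @ self.w
        best = int(np.argmax(scores))
        return UNKNOWN if best == len(self.class_ids) else self.class_ids[best]

def cascaded_classify(x, subnetworks):
    """Query subnetworks in order (e.g. ranked by class frequency)."""
    for net in subnetworks:
        label = net.classify(x)
        if label != UNKNOWN:
            return label
    return UNKNOWN

subnets = [Subnetwork(16, range(0, 100)), Subnetwork(16, range(100, 1000))]
print(cascaded_classify(rng.standard_normal(16), subnets))
```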
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
Systems and methods are disclosed to automatically load packages into containers. In one embodiment, an example system may include a robotic platform configured to elevate and rotate a container to a predetermined orientation, and a robotic manipulator configured to retrieve a first package from a surface and to position the first package inside the container at a predetermined position.
B65G 47/90 - Devices for picking-up and depositing articles or materials
B66F 9/06 - Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
B66F 9/12 - Platforms; Forks; Other load-supporting or load-gripping members
Described are techniques for write protecting a non-volatile memory (NVM) after the contents of the NVM have been set. In some examples, a computing device or system having an NVM also includes a Root of Trust (RoT) configured to generate a write protect command as an input to the NVM. The RoT generates the write protect command in response to detecting a write protect signal from an electronic controller. The write protect command sets one or more areas in the NVM to be read-only. Further, the write protect command can make the one or more areas read-only on a power-on basis so that write protection is maintained until the next power cycle. The electronic controller can be configured to assert the write protect signal each time the computing device or system is powered on, for instance during a reboot, thereby causing the RoT to renew the write protection.
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
26.
Constraining the amount of simulation state to rewind based on input relevancy distance
A simulation environment (e.g., multi-player game) hosted by a provider network may reduce the amount of state data that needs to be rewound when performing simulation and verification of locally predicted entity states from a client device (backward reconciliation). When the simulation server receives an input packet, it determines the relevancy distance of the inputs specified by the input packet. Based on the relevancy distances and a previous state of the simulated entity corresponding to a timestamp, a volume of space is determined. The server identifies entities within the volume of space and only rewinds the state for those entities before performing backward reconciliation.
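A rough sketch of the selection step described above: derive a bounding volume from the input's relevancy distance and the entity's previous position, and rewind only the entities inside it before re-simulating. The spherical volume and the data layout are illustrative assumptions.

```python
# Illustrative only: select entities inside a sphere around the previous state,
# so only those entities are rewound for backward reconciliation.
import math

def entities_to_rewind(prev_position, relevancy_distance, world_entities):
    """Return ids of entities whose positions fall inside the rewind volume."""
    selected = []
    for entity_id, position in world_entities.items():
        if math.dist(prev_position, position) <= relevancy_distance:
            selected.append(entity_id)
    return selected

world = {"player-2": (1.0, 0.0, 2.0), "npc-7": (40.0, 0.0, 5.0), "door-3": (3.0, 1.0, 1.0)}
print(entities_to_rewind(prev_position=(0.0, 0.0, 0.0),
                         relevancy_distance=10.0,
                         world_entities=world))   # only nearby entities are rewound
```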
Techniques for distributing transaction processing based on processor capacity and configuration are described herein. For example, a computer system can determine a trigger event indicating that transactions are no longer to be processed by a first processor. The computer system can select a second processor to which the transactions are to be sent. The second processor can be selected based on a transaction processing capacity of the second processor and a configuration of the second processor that indicates that the second processor is configured to process the transactions. The computer system can receive transaction data indicating that a transaction is to be processed. The transaction can be associated with transaction attributes. The computing system can send the transaction data to the second processor in real-time relative to the transaction data being received based at least in part on the second processor being selected and the transaction attributes.
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
G06Q 20/10 - Payment architectures specially adapted for electronic funds transfer [EFT] systems; Payment architectures specially adapted for home banking systems
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
H04L 41/08 - Configuration management of networks or network elements
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
28.
Computer-implemented method and apparatus for video coding using an implicit video frame output process
The present disclosure relates to methods, apparatus, systems, and non-transitory computer-readable storage media for video coding using an implicit video frame output process. According to some examples, a computer-implemented method includes receiving a video at a content delivery service; encoding, by the content delivery service, the video into an encoded video; generating, by the content delivery service, at least one open bitstream unit from the encoded video according to a video coding format that does not utilize a show existing frame syntax element set to one to indicate a frame in a reference picture buffer of a decoder is to be displayed; and transmitting the at least one open bitstream unit from the content delivery service to the decoder.
H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Embodiments are described for implementing a tour generation feature for completing tasks within a facility of an inventory management system. Coordinates for stations, starting locations of inventory holders, and rest locations of the inventory holders of the facility may be obtained. A set of tasks may be determined for each station based on an inventory item requirement associated with each station. A priority order for a subset of inventory holders may be determined based on priority characteristics associated with each inventory holder. Tours for the subset of inventory holders may be determined based on the set of tasks, the coordinates, and travel attributes for each inventory holder. The tours may be modified by iteratively invoking a large neighborhood search algorithm that uses destroy heuristics. Previously determined tours for other subsets of inventory holders may be updated using the modified tours for the subset of inventory holders.
A method may include determining inventory item information associated with an inventory item to be stowed in an inventory system that comprises a plurality of moveable inventory holders. The method may also include determining inventory holder information that characterizes one or more properties of a moveable inventory holder of the plurality of moveable inventory holders. The moveable inventory holder may include a plurality of inventory bins. The method may also include determining, using a machine learning model, a set of candidate inventory bins of the plurality of inventory bins based on the inventory item information and the inventory holder information. The method may also include providing for presentation a set of cues corresponding to the set of candidate inventory bins.
m) may be performed for a given round of Pauli measurements. Additionally, a temporal encoding of lattice surgery technique is provided, which may additionally or alternatively be used to shorten run times. Also, a quantum computer layout is provided, wherein the layout includes a core computing region and a cache region. Also, protocols for swapping logical qubits between the core and cache are provided.
G06N 10/00 - Quantum computing, i.e. information processing based on quantum-mechanical phenomena
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
G06N 10/80 - Quantum programming, e.g. interfaces, languages or software-development kits for creating or handling programs capable of running on quantum computers; Platforms for simulating or accessing quantum computers, e.g. cloud-based quantum computing
A system may be configured to receive audio data input and determine whether the audio data corresponds to an acoustic event defined by an acoustic event profile. The system may be further configured to detect an ongoing acoustic event corresponding to multiple occurrences of an acoustic event type and, in response, execute a set of actions. The execution of the routine may be based on conditions corresponding to the ongoing acoustic event. The system may be configured to evaluate the occurrences based on the conditions of the ongoing acoustic event including whether the occurrences occur within a time period, the relative time between occurrences, and whether at least one occurrence is detected within different portions of the time period. The acoustic event profile may be associated with a user profile. The system may execute a routine that sends a notification or directs a device to perform an action.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
Some embodiments provide an IC for implementing a machine-trained network with multiple layers. The IC includes a set of circuits to compute a dot product of (i) a first number of input values computed by other circuits of the IC and (ii) a set of predefined weight values, several of which are zero, with a weight value for each of the input values. The set of circuits includes (i) a dot product computation circuit to compute the dot product based on a second number of inputs and (ii) for each input value, at least two sets of wires for providing the input value to at least two of the dot product computation circuit inputs. The second number is less than the first number. Each input value with a corresponding weight value that is not equal to zero is provided to a different one of the dot product computation circuit inputs.
A vehicle data streaming service provides a curated catalog of vehicle attributes and allows a vehicle data stream source to register to the vehicle data streaming system and associate its data stream to a vehicle attribute of the attribute catalog. The vehicle data streaming service also allows vehicle data stream destinations to subscribe to the vehicle attribute in the vehicle catalog, receives streamed vehicle data from the data stream source, and sends streamed vehicle data conforming to registration requirements to the data stream destinations. Additionally, the vehicle data streaming service may allow management of the vehicle attribute catalog and may further manage the registration of one or more sources and the subscriptions of one or more destinations.
G07C 5/00 - Registering or indicating the working of vehicles
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
(1) Light bulbs; lighting fixtures; lighting fixtures with motion detection; battery powered lighting fixtures; electric lighting fixtures; sconce lighting fixtures; lanterns for lighting; floodlights; spotlights; wall lights; LED light bulbs; lighting apparatus, namely, lighting installations; ceiling lights; ceiling light fittings; electric night lights; LED lighting fixtures for indoor and outdoor lighting applications; lights for illuminating stairs, doors and other portions of buildings; portable battery-operated lights that can be placed on surfaces where other light sources are unavailable; portable utility lights; solar light fixtures, namely, indoor and outdoor solar powered lighting units and fixtures; spot lights; fixtures for incandescent light bulbs; lighting fixtures for use in parking decks and garages; lighting fixtures for use in parking lots and walkways; lighting installations for cabinets, pantries, work spaces, sheds, shelving units, and cupboards; electric lighting fixtures, namely, power failure backup safety lighting; motion sensitive security lights.
Disclosed herein is a method for determining tumor margins relative to non-tumorous tissue using a deep-learning platform. Also disclosed herein are methods for removing a tumor from a tumor biopsy.
Provided are systems and methods for a storage adapter device for communicating with network storage. In some implementations, the storage adapter device comprises a host interface. In these implementations, the host interface may be configured to communicate with a host device using a local bus protocol. In some implementations, the storage adapter device also includes a network interface. In these implementations, the network interface may communicate with a network using a network protocol. In some implementations, the storage adapter device may be configured to communicate with a remote storage device. In some implementations, the storage adapter device may also be configured to translate a request from the host interface from the local bus protocol to the network protocol. The storage adapter device may further be configured to transmit the translated request to the remote storage device.
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/16 - Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
39.
Systems and methods for multimodal indexing of video using machine learning
Systems, methods, and computer-readable media are disclosed for systems and methods for multimodal indexing of video using machine learning. An example method may include receiving, by a video encoder of an audio-video transformer neural network comprising one or more computer processors coupled to memory, a first frame and a second frame associated with a first segment of a video. The example method may also include receiving, by an audio encoder of the audio-video transformer neural network, an audio spectrogram comprising first audio data associated with the first segment of the video. The example method may also include generating, by the video encoder, a first video embedding. The example method may also include generating, by the audio encoder, a first audio embedding. The example method may also include determining a fusion of the first video embedding and the first audio embedding using a multimodal bottleneck token. The example method may also include determining an output including the first video embedding and the first audio embedding. The example method may also include determining a classification of the first portion of the video based on the output.
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Techniques for dynamic database redaction using protected encryption secret material are described. A masking policy is defined that includes a reference to a secret material stored by a secrets manager service. The masking policy further identifies a pseudonymous redaction function that utilizes a cryptographic function requiring such a secret material. The secrets manager service is configured to grant access to the secret material by an entity of the database service that executes queries, such as a leader node of a cluster. For a particular query, the cluster obtains the secret material from the secrets manager service in a secure manner, uses the secret material for applying the cryptographic function to values for redaction purposes, and deletes any copies of secret material thereafter.
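A hedged sketch of the pattern described above: a pseudonymous redaction function applies a keyed cryptographic function (HMAC is used here as a stand-in) to column values, with the secret material fetched at query time and discarded afterwards. The fetch_secret call is a placeholder, not a real secrets-manager API.

```python
# Sketch only: deterministic pseudonymization of a masked column using a secret
# fetched at query time; the copy of the secret is dropped after use.
import hashlib
import hmac

def fetch_secret(secret_arn: str) -> bytes:
    # Placeholder for retrieving the secret material in a secure manner.
    return b"demo-secret-material"

def pseudonymize(value: str, secret: bytes) -> str:
    """Deterministic, non-reversible token: equal inputs map to equal tokens."""
    return hmac.new(secret, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def run_query_with_masking(rows, masked_column, secret_arn):
    secret = fetch_secret(secret_arn)
    try:
        return [{**row, masked_column: pseudonymize(row[masked_column], secret)}
                for row in rows]
    finally:
        del secret   # drop the local copy of the secret material after use

rows = [{"user": "alice@example.com", "amount": 10},
        {"user": "alice@example.com", "amount": 25}]
print(run_query_with_masking(rows, "user", "arn:example:secret/masking-key"))
```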
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
41.
Automatic failure diagnosis and correction in machine learning models
Automatic failure diagnosis and correction may be performed on trained machine learning models. Input data that causes a trained machine learning model to fail may be identified in order to determine different model failures. The model failures may be clustered in order to determine failure scenarios for the trained machine learning model. Examples of the failure scenarios may be generated and truth labels for the example scenarios obtained. The examples and truth labels may then be used to retrain the machine learning model to generate a corrected version of the machine learning model.
Described herein is a computer-implemented method for techniques relating to anomalous content marking and determination. A content marking request of anomalous content can be received by a computer system. A content marking count associated with the content can be determined. A content marking ratio can be determined based on the content marking count. A parameter indicative of the anomalous status of the content can be determined based on the content marking count and/or content marking ratio, and the parameter can be compared to a threshold parameter. Alerts of anomalous content can be delivered at the user device based on the content marking count, the content marking ratio, the parameter or the comparison result between the parameter and the threshold parameter.
A radio network with many endpoints, such as many user terminals (UTs) accessing a constellation of many satellites, may experience self-interference due to reduced apparent angular separation between endpoints. For example, many UTs that are covered by a satellite's antenna gain pattern of an uplink may interfere with one another if those UTs use the same frequencies at the same time. The UTs may be geographically separated, but due to relative position between them and the satellite, they appear within the same gain pattern. Resource mapping is performed to allocate link resources to mitigate self-interference. A conflict graph is determined and used to allocate link resources, such as particular combinations of timeslot and frequency, to reduce or eliminate self-interference. The conflict graph may be determined using one or more analytical or heuristic techniques. Resource allocation may be performed for links such as satellite uplinks, satellite downlinks, or both.
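An illustrative sketch of one heuristic consistent with the abstract above: build a conflict graph between user terminals (UTs) that would interfere if given the same timeslot/frequency, then greedily assign resources so conflicting UTs never share one. This is a generic greedy coloring, not the disclosed resource mapper.

```python
# Greedy conflict-graph coloring over (timeslot, frequency) resources.
from itertools import product

def allocate_resources(conflicts, timeslots, frequencies):
    """conflicts: {ut: set of uts it must not share a resource with}."""
    resources = list(product(timeslots, frequencies))
    assignment = {}
    # Allocate the most-constrained terminals first.
    for ut in sorted(conflicts, key=lambda u: len(conflicts[u]), reverse=True):
        used = {assignment[n] for n in conflicts[ut] if n in assignment}
        for res in resources:
            if res not in used:
                assignment[ut] = res
                break
        else:
            raise RuntimeError(f"no conflict-free resource left for {ut}")
    return assignment

conflicts = {
    "ut-1": {"ut-2", "ut-3"},
    "ut-2": {"ut-1"},
    "ut-3": {"ut-1"},
    "ut-4": set(),
}
print(allocate_resources(conflicts, timeslots=[0, 1], frequencies=["f0", "f1"]))
```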
A clock signal in a clock distribution network is transmitted using network packets with the clock signal embedded within a bit of the network packets. The clock signal is adjusted to account for propagation delay in transmitting the clock signal throughout a clock distribution network. The propagation delay is computed using a round-trip packet. When a previous clock signal is received, a timer is set to a local clock's estimate of when the next clock signal will occur minus the propagation delay to the downstream device. When this timer expires, the clock signal is sent to the downstream device, which will allow it to arrive at the downstream device at the same time as the next clock signal is received on the local device.
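A simplified worked example of the timing rule in the abstract above: when a clock signal is received, a timer is armed for the local estimate of the next clock edge minus the measured propagation delay, so the forwarded signal lands on the downstream device at the next edge. The concrete numbers are illustrative.

```python
# Timer = (estimated time of next clock edge) - now - (one-way propagation delay).
def timer_delay_for_downstream(local_estimate_of_next_edge_ns: int,
                               now_ns: int,
                               propagation_delay_ns: int) -> int:
    """Nanoseconds to wait before transmitting the clock packet downstream."""
    return max(0, (local_estimate_of_next_edge_ns - now_ns) - propagation_delay_ns)

# Example: next edge expected 1 ms from now; one-way delay of 120 us measured
# via a round-trip packet (one-way delay = round_trip / 2).
round_trip_ns = 240_000
delay = timer_delay_for_downstream(local_estimate_of_next_edge_ns=2_000_000,
                                   now_ns=1_000_000,
                                   propagation_delay_ns=round_trip_ns // 2)
print(delay)   # 880000 ns: sent early so the packet arrives on the next edge
```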
An aerial vehicle configured for operating within indoor or outdoor spaces is equipped with acoustic sensors for detecting reflections of sound, or echoes, from objects. Distances and bearings to such objects may be calculated based on such echoes. The echoes may be reflections of sound actively emitted by the aerial vehicle, such as by a speaker, or sound radiating from operating components aboard the aerial vehicle, such as rotating motors or propellers. The echoes may be captured by multiple sensors such as microphones provided around the aerial vehicle and used to calculate distances or bearings to the objects, such as by trilateration, triangulation, or in any other manner. Such distances or bearings may also be utilized along with distances or bearings determined from cameras, range sensors, or other systems, and used to generate a navigation map of the space, or compared to a navigation map generated for that space.
B64U 40/10 - On-board mechanical arrangements for adjusting control surfaces or rotors; On-board mechanical arrangements for in-flight adjustment of the base configuration for adjusting control surfaces or rotors
B64C 27/57 - Mechanisms for controlling blade adjustment or movement relative to rotor head, e.g. lag-lead movement characterised by the control initiating means, e.g. manually actuated; automatic or condition responsive, e.g. responsive to rotor speed, torque or thrust
B64U 20/83 - Electronic components structurally integrated with aircraft elements, e.g. circuit boards carrying loads
46.
Managed discovery of inventory information for user accounts
Techniques for managed services of cloud systems to perform inventory discovery of computing instances with particular configurations across user accounts in an organization, and across regions in the cloud systems. Cloud systems offer managed services that automate the management of configurations of computing instances on behalf of organizations. To perform cross-account discovery of inventory information for these computing instances, the managed services often harness other internal cloud services, such as internal data-integration services and query services. However, some of these internal cloud services are not available in all the geographic regions of the cloud system, and due to these dependencies, the managed services are in turn not available in all regions. Techniques and architectures are described herein for managed services to perform the cross-account inventory discovery such that these managed services can be made available across all regions, and for launches of new regions in the cloud system.
Techniques for reducing interference graph generation time may include obtaining a data flow graph representing a computational flow. For each memory object in the data flow graph, a memory object live interval can be added to a vector of intervals. The memory object live interval indicates a last-use of the memory object and a first-definition of the memory object. The vector of intervals can be converted into a binary tree of interval nodes. For each interval node in the binary tree, an earliest-first-definition value is determined for the sub-tree rooted at the interval node, and is associated with the interval node. The binary tree can be queried for interferences of a memory object, and memory allocation can be performed for the computational flow based on the interferences.
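A sketch under stated assumptions (not the actual compiler code) of the structure described above: live intervals (first-definition, last-use) are collected in a vector, built into a binary tree keyed by last-use, and each node is annotated with the earliest first-definition in its subtree so interference queries can prune whole subtrees.

```python
# Interval vector -> binary tree with per-subtree earliest-first-definition,
# queried for memory objects whose live ranges overlap a given interval.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntervalNode:
    name: str
    first_def: int
    last_use: int
    min_first_def: int = 0
    left: Optional["IntervalNode"] = None
    right: Optional["IntervalNode"] = None

def build_tree(intervals):
    """intervals: list of (name, first_def, last_use); returns a balanced tree."""
    items = sorted(intervals, key=lambda t: t[2])          # key by last_use
    def build(lo, hi):
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        name, fd, lu = items[mid]
        node = IntervalNode(name, fd, lu, left=build(lo, mid), right=build(mid + 1, hi))
        node.min_first_def = min(
            [fd] + [c.min_first_def for c in (node.left, node.right) if c])
        return node
    return build(0, len(items))

def interferences(root, first_def, last_use, name=None):
    """All memory objects whose live interval overlaps [first_def, last_use]."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if node is None or node.min_first_def > last_use:
            continue                                        # prune the whole subtree
        if node.first_def <= last_use and first_def <= node.last_use and node.name != name:
            out.append(node.name)
        stack.extend((node.left, node.right))
    return out

tree = build_tree([("a", 0, 4), ("b", 2, 6), ("c", 7, 9), ("d", 5, 8)])
print(sorted(interferences(tree, 3, 6)))   # objects live during [3, 6]: a, b, d
```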
Systems and techniques are disclosed for determining and provisioning data resources for use in migrations of data processing systems. Source data processing system metadata may be used to estimate data storage requirements for a migration and subtask parameters may be used to determine processing requirements. Migration resources may be determined based on these requirements and provisioned to perform migration operations. A migration may be monitored for resource utilization and resources allocated to the migration may be adjusted to increase migration efficiency and performance.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
49.
Weighted selection of inputs for training machine-trained network
Some embodiments provide a method for training a machine-trained network that includes multiple parameters. The method propagates a batch of input training items through the network to generate output values and compute values of a loss function for each of the input training items. The method computes a weight for each input training item based on the computed loss function values for each of the input training items. The method selects input training items with larger weights more often than input training items with smaller weights for subsequent batches of input training items.
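A minimal, framework-agnostic sketch of the sampling idea above: compute a per-item loss for a batch, derive per-item weights from the losses, and sample subsequent batches with probability proportional to the weights so higher-loss items are selected more often. The exact weighting formula is an assumption.

```python
# Loss-weighted selection of training items for subsequent batches.
import numpy as np

rng = np.random.default_rng(0)

def per_item_weights(losses: np.ndarray) -> np.ndarray:
    """Larger loss -> larger weight; normalized to a probability distribution."""
    w = np.asarray(losses, dtype=float)
    w = w - w.min() + 1e-8           # keep every item selectable
    return w / w.sum()

def sample_next_batch(item_ids, losses, batch_size):
    probs = per_item_weights(losses)
    return rng.choice(item_ids, size=batch_size, replace=False, p=probs)

item_ids = np.arange(10)
losses = np.array([0.1, 0.2, 0.1, 2.5, 0.3, 1.9, 0.2, 0.1, 0.4, 3.0])
print(sample_next_batch(item_ids, losses, batch_size=4))
# Items 3, 5 and 9 (highest loss) appear in subsequent batches more often.
```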
Multiple video files that are captured by calibrated imaging devices may be annotated based on a single annotation of an image frame of one of the video files. An operator may enter an annotation to an image frame via a user interface, and the annotation may be replicated from the image frame to other image frames that were captured at the same time and are included in other video files. Annotations may be updated by the operator and/or tracked in subsequent image frames. Predicted locations of the annotations in subsequent image frames within each of the video files may be determined, e.g., by a tracker, and a confidence level associated with any of the annotations may be calculated. Where the confidence level falls below a predetermined threshold, the operator may be prompted to delete or update the annotation, or the annotation may be deleted.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 30/194 - References adjustable by an adaptive method, e.g. learning
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/62 - Control of parameters via user interfaces
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet
51.
Artificial intelligence (AI) models to improve image processing related to pre and post item deliveries
Techniques for improving image processing related to item deliveries are described. In an example, a computer system receives an image showing a drop-off of an item, the item associated with a delivery to a delivery location. The computer system inputs the image to a first artificial intelligence (AI) model. The computer system receives first data comprising an indication of whether the drop-off is correct from the first AI model. The computer system causes a presentation of the indication at a device associated with the delivery of the item to the delivery location.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06F 18/2135 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Participants may use one or more devices for engaging in a meeting, such as phones, conferencing devices, and/or computers. The devices include microphones that capture speech for determining the presence of distinct participants. Speech signals originating from different participants, or microphones, may be determined and associated with the participants. For example, microphones may be directional and more sensitive to sound coming from one or more specific directions than sound coming from other directions. By associating an individual with a microphone, or set of microphones, overlapping voices may be disambiguated to provide clear voice streams that aid in producing a clear transcript indicating the speech of the participants, respectively. An identity of the participants may be determined using voiceprint and/or voice recognition techniques.
Computer-implemented techniques for verifying translated access controls for application modernization include an application modernization service of a provider network obtaining a source access control. The service translates the source access control to a target access control. The service compiles the source access control and the target access control into respective automated reasoning solver encodings. The service uses the automated reasoning solver encoding to query an automated reasoning solver such as a Satisfiability Modulo Theories (SMT) solver to determine whether the source access control is less or more permissive than the target access control representing a security issue or an availability issue with the target access control, respectively.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
54.
Enhanced streaming video advertisement integration
Devices, systems, and methods are provided for enhanced streaming video advertisement integration. A method may include receiving, by a first device, from a second device, a request for advertisement opportunities for a streaming video title; identifying, by the first device, a first advertisement bid for a first advertisement; identifying, by the first device, a second advertisement bid for a second advertisement; sending, by the first device, in response to the request for advertisement opportunities, the first advertisement bid and the second advertisement bid to the second device; sending, by the second device, a request for advertisements to an advertisement server, including the first advertisement bid and the second advertisement bid; receiving, by the second device, a first group of advertisements for a first advertisement opportunity and a second group of advertisements for a second advertisement opportunity of the advertisement opportunities.
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment in the nature of an ongoing television dramatic series; entertainment services, namely, an ongoing dramatic series provided through television, cable, the Internet and wireless communications networks
Devices and techniques are generally described for audio-based entity resolution. In various examples, first audio data representing speech comprising a mention of a first entity may be received. In some examples, first embedding data representing the first audio data may be received. Second embedding data representing the first entity may be determined. A first modified embedding may be generated using a first attention mechanism to compare the first embedding data to the second embedding data. In some examples, a determination may be made that the first audio data includes a mention of the first entity.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
Devices and techniques are generally described for LM-based content retrieval. First query data including a first request related to first content may be received. First action data associated with the first query data may be determined. First prompt data including a representation of the first query data and data representing the first action data may be generated. The first prompt data may instruct a first LM to recognize entities in the first query data relevant to the first action data. The first LM may determine a first recognized entity from the first request. The first recognized entity may be associated with the first content. A request to resolve the first recognized entity may be generated. A first resolved entity for the first recognized entity may be determined. The first LM may generate first instructions to perform the first action data using the first resolved entity.
An emulated hardware security device is configured for a compute instance. A state descriptor of the compute instance comprising software identification metadata prepared using the emulated hardware security device is provided to a resource verifier. The metadata identifies a program to be executed at the compute instance. Based on a response received from the resource verifier, a decision is made as to whether to execute the software program at the compute instance.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
A provider network implements a machine learning deployment service for generating and deploying packages to implement machine learning at connected devices. The service may receive from a client an indication of an inference application, a machine learning framework to be used by the inference application, a machine learning model to be used by the inference application, and an edge device to run the inference application. The service may then generate a package based on the inference application, the machine learning framework, the machine learning model, and a hardware platform of the edge device. To generate the package, the service may optimize the model based on the hardware platform of the edge device and/or the machine learning framework. The service may then deploy the package to the edge device. The edge device then installs the inference application and performs actions based on inference data generated by the machine learning model.
Devices, systems, and methods are provided for a front light for use with reflective displays. A display device (such as an e-reader, for example) may include a light source and a light guide able to receive first light from the light source. The light guide includes a plurality of extraction features (900) that control optimal movement of light emitted by the light source through a display stack of the display device. Each extraction feature is provided in a wedge shape at an angle (906) such that light refracting from the extraction feature is directed towards the reflective LCD display at an incidence angle close to the display normal, so that the reflected light exhibits properties similar to the reflected light from the EPD panel.
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Pre-recorded video recordings featuring dramatic entertainment programs and music; pre-recorded and downloadable audio and visual recordings featuring dramatic entertainment programs and music; motion picture films featuring dramatic entertainment programs and music; pre-recorded audio and visual recordings in optical discs, DVD and CD format featuring dramatic entertainment programs and music. Entertainment in the nature of an ongoing television dramatic series; entertainment services, namely, an ongoing dramatic series provided through television, cable, the Internet and wireless communications networks.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software using artificial intelligence (AI) for developing and running intelligent agents; Downloadable computer software using artificial intelligence (AI) for user interface (UI) automation, secure code execution and file management; Downloadable computer software using artificial intelligence (AI) for intelligent memory systems to enable agents to retain context across interactions and adjust behavior; Downloadable computer software using artificial intelligence (AI) for a secure, serverless runtime capability to deploy and scale intelligent agents and tools across various frameworks, protocols, and models; Downloadable computer software using artificial intelligence (AI) for building personalized intelligent agent experiences with fully-managed memory infrastructure and the ability to customize memory; Downloadable computer software using artificial intelligence (AI) to manage the digital identities of intelligent agents and control their access to resources; Downloadable computer software using artificial intelligence (AI) for developing and running intelligent agents using virtual machine (VM)-level isolation, identity controls, virtual private cloud (VPC) integration, and flexible network modes; Downloadable computer software using artificial intelligence (AI) for securely writing and executing code to perform complex calculations, validate reasoning, process data, and generate visualizations; Downloadable computer software using artificial intelligence (AI) to enable intelligent agents to navigate websites, complete multi-step forms, and perform complex web-based tasks within a fully managed, secure sandbox environment with low latency; Downloadable computer software using artificial intelligence (AI) to help developers trace, debug, and monitor intelligent agent performance in production environments. Providing on-line non-downloadable software using artificial intelligence (AI) for developing and running intelligent agents; Providing on-line non-downloadable software using artificial intelligence (AI) for user interface (UI) automation, secure code execution and file management; Providing on-line non-downloadable software using artificial intelligence (AI) for intelligent memory systems to enable agents to retain context across interactions and adjust behavior; Providing on-line non-downloadable software using artificial intelligence (AI) for a secure, serverless runtime capability to deploy and scale intelligent agents and tools across various frameworks, protocols, and models; Providing on-line non-downloadable software using artificial intelligence (AI) for building personalized intelligent agent experiences with fully-managed memory infrastructure and the ability to customize memory; Providing on-line non-downloadable software using artificial intelligence (AI) to manage the digital identities of intelligent agents and control their access to resources; Providing on-line non-downloadable software using artificial intelligence (AI) for developing and running intelligent agents using virtual machine (VM)-level isolation, identity controls, virtual private cloud (VPC) integration, and flexible network modes; Providing on-line non-downloadable software using artificial intelligence (AI) for securely writing and executing code to perform complex calculations, validate reasoning, process data, and generate visualizations; Providing on-line non-downloadable software using artificial intelligence (AI) to enable intelligent agents to navigate websites, complete multi-step forms, and perform complex web-based tasks within a fully managed, secure sandbox environment with low latency; Providing on-line non-downloadable software using artificial intelligence (AI) to help developers trace, debug, and monitor intelligent agent performance in production environments
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software using artificial intelligence (AI) for developing and running intelligent agents; Downloadable computer software using artificial intelligence (AI) for user interface (UI) automation, secure code execution and file management; Downloadable computer software using artificial intelligence (AI) for intelligent memory systems to enable agents to retain context across interactions and adjust behavior; Downloadable computer software using artificial intelligence (AI) for a secure, serverless runtime capability to deploy and scale intelligent agents and tools across various frameworks, protocols, and models; Downloadable computer software using artificial intelligence (AI) for building personalized intelligent agent experiences with fully-managed memory infrastructure and the ability to customize memory; Downloadable computer software using artificial intelligence (AI) to manage the digital identities of intelligent agents and control their access to resources; Downloadable computer software using artificial intelligence (AI) for developing and running intelligent agents using virtual machine (VM)-level isolation, identity controls, virtual private cloud (VPC) integration, and flexible network modes; Downloadable computer software using artificial intelligence (AI) for securely writing and executing code to perform complex calculations, validate reasoning, process data, and generate visualizations; Downloadable computer software using artificial intelligence (AI) to enable intelligent agents to navigate websites, complete multi-step forms, and perform complex web-based tasks within a fully managed, secure sandbox environment with low latency; Downloadable computer software using artificial intelligence (AI) to help developers trace, debug, and monitor intelligent agent performance in production environments. Providing on-line non-downloadable software using artificial intelligence (AI) for developing and running intelligent agents; Providing on-line non-downloadable software using artificial intelligence (AI) for user interface (UI) automation, secure code execution and file management; Providing on-line non-downloadable software using artificial intelligence (AI) for intelligent memory systems to enable agents to retain context across interactions and adjust behavior; Providing on-line non-downloadable software using artificial intelligence (AI) for a secure, serverless runtime capability to deploy and scale intelligent agents and tools across various frameworks, protocols, and models; Providing on-line non-downloadable software using artificial intelligence (AI) for building personalized intelligent agent experiences with fully-managed memory infrastructure and the ability to customize memory; Providing on-line non-downloadable software using artificial intelligence (AI) to manage the digital identities of intelligent agents and control their access to resources; Providing on-line non-downloadable software using artificial intelligence (AI) for developing and running intelligent agents using virtual machine (VM)-level isolation, identity controls, virtual private cloud (VPC) integration, and flexible network modes; Providing on-line non-downloadable software using artificial intelligence (AI) for securely writing and executing code to perform complex calculations, validate reasoning, process data, and generate visualizations; Providing on-line non-downloadable software using artificial intelligence (AI) to enable intelligent agents to navigate websites, complete multi-step forms, and perform complex web-based tasks within a fully managed, secure sandbox environment with low latency; Providing on-line non-downloadable software using artificial intelligence (AI) to help developers trace, debug, and monitor intelligent agent performance in production environments
Techniques for a multi-check in-line container inspection system are provided herein. In an example, a computer system determines, during movement of a container in a scanning tunnel, first sensor data generated by a first sensor attached to a frame that forms the scanning tunnel. The movement is caused by material handling equipment. The computer system determines, during the movement of the container in the scanning tunnel, second sensor data generated by a second sensor attached to the frame. The computer system performs a first container integrity check based on the first sensor data and a second container integrity check based on the second sensor data. The computer system causes a corrective action to be initiated based on at least one of the first container integrity check or the second container integrity check indicating a container defect.
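For illustration only (not part of the source filing), a minimal Python sketch of the multi-check flow described above; the sensor types, thresholds, and the "divert" corrective action are assumptions:

from dataclasses import dataclass

@dataclass
class ScanResult:
    defect: bool
    reason: str

def weight_check(load_cell_kg: float, expected_kg: float, tolerance_kg: float = 0.5) -> ScanResult:
    """First integrity check: compare the measured weight against the manifest weight."""
    if abs(load_cell_kg - expected_kg) > tolerance_kg:
        return ScanResult(True, "weight deviates from manifest")
    return ScanResult(False, "weight ok")

def image_check(damage_score: float, threshold: float = 0.8) -> ScanResult:
    """Second integrity check: a camera-based damage score from a separate sensor on the frame."""
    if damage_score >= threshold:
        return ScanResult(True, "visible damage detected")
    return ScanResult(False, "no visible damage")

def inspect(load_cell_kg, expected_kg, damage_score):
    """Run both checks during one pass through the tunnel; trigger a corrective action if either fails."""
    results = [weight_check(load_cell_kg, expected_kg), image_check(damage_score)]
    if any(r.defect for r in results):
        return "divert", [r.reason for r in results if r.defect]
    return "pass", []

print(inspect(load_cell_kg=11.7, expected_kg=10.0, damage_score=0.2))  # ('divert', ['weight deviates from manifest'])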
A telescopic boom system provides an extensible boom that may be used in spacecraft applications including supporting photovoltaic panels, communication antennas, instrumentation, and so forth. A stowed configuration is volumetrically compact, including the boom and actuators such as a motor. During deployment, threaded nuts for each nested section of the boom are self-aligning with respect to a leadscrew driven by the motor. Sections are staged for extension in sequence by a flexure arm engaging a ramp feature on a portion of each nested section. Extension failure mitigation is enhanced by allowing partial retraction of some sections during extension. Once fully extended, tension of the boom may later be adjusted, modifying the structural fundamental frequency. A ratchet may be engaged with extension of a final nested section to prevent retraction of the extended boom.
Systems and methods are described for reducing performance variance of code executions on a serverless code execution system. A serverless code execution system can operate to obtain requests to invoke code and handle such requests by generating an execution environment for the code on a host computing device and executing the code within the environment. In some cases, an execution environment is poorly placed, resulting in underperformance of code executions on that environment and variance in overall performance of the code executions. The present disclosure enables a serverless code execution system to identify underperforming execution environments and to replace such environments with new environments, reducing variation in performance across executions of the code. New environments may be placed on host computing devices asynchronously, using a placement algorithm that includes additional processing relative to an algorithm that operates synchronously with code invocation.
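A minimal sketch, assuming illustrative per-environment latency samples and a simple median-based threshold (neither taken from the source), of how underperforming execution environments might be flagged for asynchronous replacement:

import statistics

# Hypothetical per-environment latency samples (ms) for executions of the same function.
latencies = {
    "env-a": [12, 13, 11, 12],
    "env-b": [12, 14, 13, 12],
    "env-c": [40, 38, 45, 41],   # poorly placed environment
}

def underperformers(samples: dict[str, list[float]], factor: float = 2.0) -> list[str]:
    """Flag environments whose median latency exceeds the fleet-wide median by `factor`."""
    fleet_median = statistics.median(v for vals in samples.values() for v in vals)
    return [env for env, vals in samples.items()
            if statistics.median(vals) > factor * fleet_median]

replace_queue = underperformers(latencies)
print(replace_queue)  # ['env-c'] -> replaced asynchronously by a slower, more thorough placement pass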
A system generates a recommendation that includes at least one action to address at least one root cause anomaly that causes other anomalies to occur within a distributed system. The at least one root cause anomaly is determined at least in part by using a graph that represents the distributed system and metrics that are associated with the distributed system.
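For illustration only, a minimal Python sketch of one way the graph-based root-cause determination described above could work; the dependency graph, the set of anomalous nodes, and the "restart or roll back" recommendation text are assumptions:

# Hypothetical dependency graph: each service maps to the services it depends on.
graph = {
    "frontend": ["checkout", "search"],
    "checkout": ["payments"],
    "search": [],
    "payments": [],
}
anomalous = {"frontend", "checkout", "payments"}   # e.g. services whose metrics breach thresholds

def root_causes(graph, anomalous):
    """Treat an anomaly as a root cause if none of its own dependencies are also anomalous."""
    return [n for n in anomalous if not any(d in anomalous for d in graph.get(n, []))]

for cause in root_causes(graph, anomalous):
    print(f"recommendation: restart or roll back '{cause}'")   # 'payments' is the root cause here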
Systems and techniques are described for tracking and providing access reports for individual pieces of data managed by a data storage service. A service may generate and store, in a data store, a record of operations performed on a piece of data, such as data that may be classified as sensitive or important. The record may link representations of users and the operations performed by those users to instances of the piece of data, as it is found in one or more data objects within the data store. The data store may link other instances of the piece of data and other operations performed on the piece of data to the first instance of the piece of data. The service may access the data store to produce a history record of the various instances of the piece of data and operations performed on those instances of the piece of data.
A system and method for project-based uniform data analytics in a provider network. The system and method provide data projects. A data project is a secure container that brings people, data, and tools together to enable easy collaboration and access management for data analytic projects. A data project enables a group of users to collaborate on a particular business use case for producing and consuming data. A data project and its content are subject to their own access controls so that only authorized individuals, groups, and roles can access the project and the data the project has subscribed to, and can use only the tools permitted by project permissions.
Systems and methods are described for automatic action item detection and generation. In some aspects, textual data, such as may be generated based on an interaction between at least two entities, may be received. At least one issue may be identified in the text using a first machine learning model. At least one action item, corresponding to the issue, may similarly be identified using a second machine learning model, with the action item including an action to be performed to resolve the at least one issue. The action item may be assigned to a queue of a plurality of queues based on attributes of the action item, with the queue corresponding to an action that is specified in the action item. In some aspects, a notification of the action item may also be provided, such as in real-time or near-real-time with the occurrence of the interaction between the two entities.
Artificial intelligence (AI) models for verifying packaging-removed deliveries are described herein. In an example, a computer system receives image data corresponding to a portion of a delivery location. The computer system determines an indication of at least one delivery object in the portion. The computer system inputs the indication into a first AI model trained for detecting entity-associated packaging associated with the at least one delivery object. The computer system receives, from the first AI model, an output indicating whether the at least one delivery object includes the entity-associated packaging. The computer system causes a first presentation about the output to be provided at a device.
Technologies directed to a radio frequency (RF) boundary choke between modules in phased array antennas. An antenna module may include a circuit board having one or more conducting layers and one or more electrically insulating layers. The antenna module may include an antenna disposed on a first surface of the circuit board. The antenna module may further include radio frequency front end (RFFE) circuitry disposed on a second surface of the circuit board. The antenna module further includes a first set of vias extending between the antenna and the RFFE circuitry and a second set of vias disposed within the circuit board. Each of the second set of vias is positioned along a first axis parallel to and a first distance from a first edge of the antenna module.
Techniques are disclosed for digitally signing uniform resource locators (URLs) to prevent manipulation of search result rankings. A computer system of a service provider can receive a first request to navigate to a network page provided by the service provider and corresponding to items associated with the first request. The computer system can generate the network page by generating a URL for an additional network page linked from the network page. The computer system can use the URL to generate a signed URL that includes a digital signature. The computer system can include the signed URL in the network page and cause the network page to be presented at a user device.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
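For illustration of the URL-signing technique described in the preceding abstract only: a minimal Python sketch using an HMAC over the generated URL; the secret key, parameter name "sig", and the example URL are assumptions, not details from the source:

import hashlib, hmac
from urllib.parse import urlparse, parse_qsl

SECRET = b"server-side-secret"   # illustrative key; a real service would manage and rotate this

def sign_url(url: str) -> str:
    """Append a digital signature computed over the URL the service generated."""
    sig = hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()
    sep = "&" if urlparse(url).query else "?"
    return f"{url}{sep}sig={sig}"

def verify_url(signed: str) -> bool:
    """Recompute the signature over everything before the sig parameter and compare."""
    base, _, _ = signed.rpartition("&sig=") if "&sig=" in signed else signed.rpartition("?sig=")
    received = dict(parse_qsl(urlparse(signed).query)).get("sig", "")
    expected = hmac.new(SECRET, base.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(received, expected)

page = "https://example.com/search?q=shoes&page=2"
signed = sign_url(page)
print(verify_url(signed))                               # True
print(verify_url(signed.replace("page=2", "page=9")))   # False: ranking parameters were manipulated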
Systems and methods for device control by a natural language processing system are disclosed. A user may desire to utilize a voice-enabled device to associate an accessory device with a hub device without having to utilize third-party software associated with the accessory device and/or the hub device. The user may provide a user utterance to associate the accessory device with the hub device. Audio data corresponding to the user utterance may be analyzed and utilized to generate and send directive data to a third-party remote system to transition the hub device to a join mode. Upon association completion, audio may be output confirming that the association has been established successfully.
Provided are systems and methods for tracking device engagement. The system includes a viewing device (e.g., television, etc.) and an eye-tracking device (e.g., a camera, etc.). The eye-tracking device is configured to capture data about the gaze of a viewer to determine if the viewer is watching the television at any given point in time. This information may be used for a variety of purposes, such as tracking user engagement with advertisement content. As another example, the information may be used for device energy saving purposes. For example, a screen of the device can be dimmed or turned off if the gaze of the viewer has not been directed towards the television for a given period of time. A notification may also be presented to the viewer prompting the viewer to indicate if they are still viewing the content being presented on the device.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/41 - Structure of client; Structure of client peripherals
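A minimal sketch of the gaze-based dimming logic from the engagement-tracking abstract above; the 30-second idle threshold and the "dim_and_prompt" action name are illustrative assumptions:

import time

IDLE_DIM_SECONDS = 30   # illustrative threshold before dimming and prompting the viewer

class EngagementMonitor:
    """Dims the display when the eye tracker has not seen a gaze on the screen for a while."""
    def __init__(self):
        self.last_gaze_on_screen = time.monotonic()

    def on_gaze_sample(self, gaze_on_screen: bool) -> str:
        now = time.monotonic()
        if gaze_on_screen:
            self.last_gaze_on_screen = now
            return "screen_on"
        if now - self.last_gaze_on_screen > IDLE_DIM_SECONDS:
            return "dim_and_prompt"   # dim the screen and ask "Are you still watching?"
        return "screen_on"

monitor = EngagementMonitor()
print(monitor.on_gaze_sample(gaze_on_screen=False))  # 'screen_on' until the idle threshold elapses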
Techniques for performing a machine learning (ML) cinematic (e.g., movie) question answering are described. According to some examples, a computer-implemented method includes receiving a request from a viewer device at a content delivery service to play a video; sending the video from the content delivery service to the viewer device; receiving, by the content delivery service, a question from the viewer device during playing of the video; generating, by a script context retrieval machine learning model of the content delivery service, a proper subset of a script of the video based on an input of the question; generating, by a cinematic question answering machine learning model of the content delivery service, an answer based on an input of the proper subset of the script of the video; and sending, by the content delivery service, the answer to the viewer device.
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 40/40 - Processing or translation of natural language
80.
Management of SCTE and contents in ad breaks for compatibility
Methods and apparatus are described for delivering streams of media content in ways that maintain compatibility among different streaming protocols for inserting secondary content into the streams of media content. This is accomplished by encoding the media content in the same way for each streaming protocol but generating different output groups based on each streaming protocol.
A rack may include a frame having vertical uprights and transverse members coupled together so as to form boundaries of an internal volume of the rack. The boundaries may include a first lateral face, a second lateral face, a front face, and a back face. A cartridge can be installed forward or rearward of the front face and laterally inward from the first lateral face or the second lateral face. Cables may be connected between the cartridge and a plurality of appliances. The cables may include a first cable extending between the cartridge and a first appliance supported by the rack. The cables may further include a second cable extending between a second appliance and the cartridge so as to establish a signal path between the first appliance and the second appliance through the first cable, the cartridge, and the second cable.
A host server computer with an uncorrectable memory error can be repaired without a reboot operation. While initially booting a hypervisor, a special software Application Programming Interface (API) can be loaded between a BIOS System Management Mode (SMM) code and the hypervisor. Once the host server computer is booted and a number of virtual machines are executing, a memory error (e.g., an uncorrectable error correction code (UECC) error) can occur. In response, the hypervisor calls into the special software API identifying the defective memory rows that the BIOS needs to repair. The BIOS starts a soft Post Package Repair (PPR) process on those rows and gives back control to the hypervisor. When the repair is completed, the hypervisor loads a scrubbing virtual machine and validates that the memory is corrected. After the repair is validated, the hypervisor allows the available partition to take a new customer instance.
Techniques for a service provider network to communicatively couple services and/or applications in a serverless computing environment. A pipe component can configure a pipe to integrate two services by transmitting data between services and/or applications using the pipe. The pipe may also be configured to transform how a service processes an event, control timing of event transmissions using the pipe, define an event structure for an event, and/or batch events. Pipes enable an application or service to exchange data with a variety of services provided by the service provider network while controlling what type of data is generated, stored, or transmitted.
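For illustration only (the class name, event shapes, and callbacks below are assumptions, not an API from the source), a minimal Python sketch of a pipe that filters, transforms, and batches events between a source and a target:

from typing import Callable, Iterable

class Pipe:
    """Moves events from a source to a target with optional filtering, transformation, and batching."""
    def __init__(self, source: Iterable[dict], target: Callable[[list[dict]], None],
                 event_filter=None, transform=None, batch_size: int = 1):
        self.source, self.target = source, target
        self.event_filter = event_filter or (lambda e: True)
        self.transform = transform or (lambda e: e)
        self.batch_size = batch_size

    def run(self):
        batch = []
        for event in self.source:
            if not self.event_filter(event):
                continue                      # control what type of data is transmitted
            batch.append(self.transform(event))
            if len(batch) >= self.batch_size:
                self.target(batch)            # deliver a full batch to the target service
                batch = []
        if batch:
            self.target(batch)

events = [{"type": "order", "total": 42}, {"type": "ping"}, {"type": "order", "total": 7}]
Pipe(source=events,
     target=lambda batch: print("deliver", batch),
     event_filter=lambda e: e["type"] == "order",
     transform=lambda e: {"amount": e["total"]},
     batch_size=2).run()     # deliver [{'amount': 42}, {'amount': 7}]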
Edge functions at an edge location of a content delivery network (CDN) may use APIs of a datastore engine in order to read/write or create/delete local tables at the edge location. Data may be accumulated in the local tables and the new data may be used to enhance decision-making at the edge. Some of the local tables may be initially populated from a back-end database. This allows the functions to modify the data from the back-end database, without affecting the actual source data at the back-end database (modifications to local tables remain local to the edge location).
A certificate renewal service may receive an indication to request renewal of a certificate in a test mode that allows testing of certificate characteristic property change effects. The certificate renewal service may select, based on the renewal of the certificate being requested in the test mode, a renewal time for renewing of the certificate. The certificate renewal service may change, based on the renewal of the certificate being requested in the test mode, one or more properties of one or more certificate characteristics of the certificate in a certificate renewal request. The certificate renewal service may request renewal of the certificate based on the renewal time with one or more changes to the one or more properties of the one or more certificate characteristics.
G06F 21/33 - User authentication using certificates
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
86.
Extending cover properties in formal verification to generate failure traces that reach end-of-test
Cover properties are extended in formal verification to reach an effective end-of-test stage for a design under test. A formal verification task for a design under test may be received at a verification system. A cover property asserted in the formal verification task may be identified. An additional condition may be implemented for the identified cover property, extending it so that, in the event of a failure of the cover property, performance of the formal verification task generates a trace that reaches an effective end-of-test stage for the design under test.
Systems and methods are provided for classifying images associated with an item, and generating an image set for that item which includes image classifications determined to be helpful for the item type of the item. To classify images, an image classification model is generated and trained using two phases. The first phase uses an intermediate model with text and visual processing to teach the model to recognize patterns created by text without requiring OCR at inference. The second phase uses visual processing to refine the model for use at inference. To generate an image set, image classifications helpful to an item type are identified, items are associated with item types, images are obtained for an item, the images are classified using the image classification model, missing image classifications set out in the preferred image set are identified, and one or more requests are generated for the missing image classifications.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
Systems and methods for device state reversion are disclosed. For example, a requested and/or scheduled device state change may occur, and prior to the device state change, devices may be queried for their device states. This prior device state data may be saved. A user may provide an undo request and the prior device state data may be utilized along with current device state data to select a device to revert device state on, as well as the device state to revert to. In more complex situations and/or when prior state data is unavailable, machine learning techniques may be utilized to select the target device and device state.
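A minimal sketch of the simpler case described above, where prior state data is available; the device names and states are illustrative and the machine-learning fallback for the complex case is not shown:

# Hypothetical undo of a smart-home scene change using saved prior device states.
prior_states   = {"lamp": "on",  "thermostat": 68, "tv": "off"}   # queried before the change
current_states = {"lamp": "off", "thermostat": 72, "tv": "off"}   # observed after the change

def revert_plan(prior: dict, current: dict) -> dict:
    """Return only the devices whose state actually changed, mapped to the state to restore."""
    return {device: prior[device]
            for device in prior
            if device in current and current[device] != prior[device]}

for device, state in revert_plan(prior_states, current_states).items():
    print(f"set {device} -> {state}")   # set lamp -> on, set thermostat -> 68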
Devices and techniques are generally described for wake word suppression using a variable step size of an acoustic echo cancellation (AEC) unit. A reference signal representing an audio stream output by a loudspeaker may be sent to the AEC unit. A microphone may receive a first input audio signal and send it to the AEC unit. The AEC unit may determine a first set of variable step size (Vss) values over a first time period. Vss values may define a rate at which the AEC unit determines a transfer function between the reference signal and the first input audio signal. A wake word may be detected during the first time period. A determination may be made that the wake word is part of the audio output by the loudspeaker based at least in part on the first set of Vss values.
Techniques for more precise access control policy findings use a conditional injection of policy constraints into a findings analysis. The injection of a policy constraint of a policy being analyzed into the findings analysis is conditioned on the policy itself. In particular, the injection is conditioned on whether the policy constraint is trusted in the context of the policy (e.g., unlikely to be spoofed or manipulated in the policy context). As a result, where a policy constraint can be trusted in the context of a given policy, a more precise (e.g., more specific) findings analysis of the policy based on the policy constraint can be conducted than if the policy constraint were not included in the policy finding analysis (e.g., because the policy constraint is not trusted in other policy contexts).
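A minimal sketch of the conditional-injection idea above; which condition keys count as trusted is an assumption made for the example, not a statement from the source or about any particular product:

# Inject a policy's condition constraints into the findings analysis only when the
# corresponding context keys are trusted (hard to spoof) in the context of that policy.
TRUSTED_KEYS = {"aws:PrincipalOrgID", "aws:SourceVpc"}   # assumed unspoofable for this sketch

policy_conditions = {
    "aws:PrincipalOrgID": "o-example123",           # trusted -> sharpens the finding
    "aws:Referer": "https://internal.example.com",  # caller-controlled -> left out of the analysis
}

def constraints_for_findings(conditions: dict) -> dict:
    """Keep only constraints that make the finding more specific without false precision."""
    return {key: value for key, value in conditions.items() if key in TRUSTED_KEYS}

print(constraints_for_findings(policy_conditions))
# {'aws:PrincipalOrgID': 'o-example123'} -> the finding can safely be scoped to that organization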
A network service incorporating service groups to increase fault tolerance and data isolation for integrated network services and client data is provided. The network service provider can process network requests utilizing individual service groups that correspond to a set of integrated services and client data (e.g., cells). The service groups can be associated according to customer identifier. Computing resources within a service group are isolated from computing resources utilized in other service groups and resources that host/provide the service or the integrated data can be independently scaled by the service provider.
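For illustration of the customer-to-service-group association described above, a minimal sketch; the cell names and the hash-based mapping are assumptions:

import hashlib

CELLS = ["cell-1", "cell-2", "cell-3"]   # illustrative isolated service groups

def cell_for_customer(customer_id: str) -> str:
    """Deterministically pin a customer to one service group so faults and data stay contained."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return CELLS[int(digest, 16) % len(CELLS)]

print(cell_for_customer("customer-42"))   # always routes this customer to the same cell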
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for integrated development software; downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for use in an integrated development environment (IDE); downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for software development tools for use in connection with automating test-driven development (TDD), code reviews, and documentation generation; downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for software development productivity tools. Providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for integrated development software; providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for use in an integrated development environment (IDE); providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for software development tools for use in connection with automating test-driven development (TDD), code reviews, and documentation generation; providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for software development productivity tools
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for integrated development software; downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for use in an integrated development environment (IDE); downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for software development tools for use in connection with automating test-driven development (TDD), code reviews, and documentation generation; downloadable computer software using artificial intelligence (AI) and large language models (LLMs) for software development productivity tools. Providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for integrated development software; providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for use in an integrated development environment (IDE); providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for software development tools for use in connection with automating test-driven development (TDD), code reviews, and documentation generation; providing on-line non-downloadable software using artificial intelligence (AI) and large language models (LLMs) for software development productivity tools
42 - Scientific, technological and industrial services, research and design
Goods & Services
Business data analysis; business management services, namely, supply chain logistics, reverse logistics, and management of shipments and returned shipments; logistics management in the field of shipping, returning, and exchanging consumer and wholesale products; freight management services in the nature of shipment processing, facilitating the exchange of shipping documents and invoices, and tracking documents, packages and freight over computer networks, intranets and the internet for business purposes; computerized tracking of packages in transit; transportation logistics services, namely, planning and scheduling shipments for users of transportation services; monitoring package shipments and deliveries for business purposes; computerized tracking and tracing of packages in transit to ensure on-time delivery for business purposes; facilitating shipping disputes between senders and carriers. Providing temporary use of non-downloadable computer software for shipment processing; software as a service (SaaS) services featuring online non-downloadable computer software for shipment processing; providing temporary use of non-downloadable computer software for analyzing and reporting data relating to shipments; software as a service (SaaS) services featuring online non-downloadable computer software for analyzing and reporting data relating to shipments; providing temporary use of non-downloadable computer software for estimating and facilitating payment of shipping costs, coordinating shipment booking and pickup between senders and carriers, tracking shipments, facilitating communications and dispute resolution between senders and carriers, resolving shipping incidents, customs documentation management, and facilitating shipment payments and returns between senders and carriers; software as a service (SaaS) services featuring online non-downloadable computer software for estimating and facilitating payment of shipping costs, coordinating shipment booking and pickup between senders and carriers, tracking shipments, facilitating communications and dispute resolution between senders and carriers, resolving shipping incidents, customs documentation management, and facilitating shipment payments and returns between senders and carriers.
96.
AUTOMATED VERIFICATION OF DOCUMENTS RELATED TO ACCOUNTS WITHIN A SERVICE PROVIDER NETWORK
This disclosure describes a verification service within a service provider network for automatically verifying and validating documents. A user may upload a document image to the verification service. A pre-processing service may pre-process the document image. The pre-processed document image may then be forwarded to a first machine learning (ML) model for similarity evaluation. Once the first ML model has completed its evaluation of the document image, the first ML model may forward the document image to a second ML model for symbol recognition, which may then forward the document image to an optical character recognition (OCR) service for OCR validation. If the document image is validated, e.g., is an image of a purported document type, as will be discussed further herein, a publishing service may pre-populate, e.g., publish, information from the document image to an account template.
Techniques for filtering the output of supplemental content are described. When a supplemental output system (e.g., a supplemental content system or notification system) receives supplemental content for output, the supplemental output system sends a user identifier (of the recipient user) and the supplemental content to a separately implemented filtering component. The filtering component uses a machine learning (ML) model to determine a topic of the supplemental content. The filtering component determines whether the supplemental content should not be output based on the ML model-determined topic, one or more guardrail policies of the supplemental output system, and user frustration data regarding previously output supplemental content. Use of the ML model to determine the topic prevents a content publisher from surreptitiously associating supplemental content with a specific topic in an effort to bypass topic-based output guardrails.
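A minimal sketch of the filtering decision above; the keyword-based topic classifier stands in for the ML model, and the blocked topic, frustration scores, and threshold are all assumptions made for the example:

def classify_topic(content: str) -> str:
    """Stand-in for the ML topic model; keyword matching keeps the sketch self-contained."""
    return "shopping" if "deal" in content.lower() else "general"

BLOCKED_TOPICS = {"shopping"}        # guardrail policy of the supplemental output system
user_frustration = {"user-1": 0.9}   # prior negative-feedback signal per user, 0..1

def should_output(user_id: str, content: str, frustration_limit: float = 0.8) -> bool:
    topic = classify_topic(content)  # the publisher-declared topic is deliberately ignored
    if topic in BLOCKED_TOPICS:
        return False
    return user_frustration.get(user_id, 0.0) < frustration_limit

print(should_output("user-1", "Weather update for today"))   # False: user is already frustrated
print(should_output("user-2", "Weather update for today"))   # True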
Techniques are described for managing execution of programs. In some situations, program execution is managed for multiple users using excess program execution capacity of one or more computing systems. In some such situations, excess or otherwise unused program execution capacity may be made available to execute programs on a temporary basis, such that the programs executing using the excess program execution capacity may be terminated at any time if other preferred use for the excess program execution capacity arises. The excess program execution capacity may in some situations be provided in conjunction with other dedicated program execution capacity that is allocated to particular users, such as to use unused dedicated capacity of some users as excess capacity for other users. In some situations, the techniques are used in conjunction with a fee-based program execution service that executes multiple programs on behalf of multiple users of the service.
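For illustration only, a minimal sketch of the excess-capacity behavior described above; the capacity units, job names, and eviction order are assumptions:

class ExcessCapacityPool:
    """Interruptible programs run on spare capacity and are evicted when a preferred use appears."""
    def __init__(self, total_units: int):
        self.total_units = total_units
        self.dedicated_in_use = 0
        self.excess_jobs: list[str] = []      # programs running on excess capacity

    def free_units(self) -> int:
        return self.total_units - self.dedicated_in_use - len(self.excess_jobs)

    def run_on_excess(self, job: str) -> bool:
        if self.free_units() > 0:
            self.excess_jobs.append(job)
            return True
        return False

    def claim_dedicated(self, units: int) -> list[str]:
        """Preferred use arrives: terminate just enough excess jobs to satisfy it."""
        evicted = []
        while self.free_units() < units and self.excess_jobs:
            evicted.append(self.excess_jobs.pop())
        self.dedicated_in_use += units
        return evicted

pool = ExcessCapacityPool(total_units=3)
pool.run_on_excess("batch-render")
pool.run_on_excess("log-crunch")
print(pool.claim_dedicated(2))   # ['log-crunch'] is terminated to make room for the preferred use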
A system comprising one or more computers implements a virtual domain control unit/virtual electronic control unit service configured to deploy vehicle code packages to one or more of a plurality of supported virtual domain control unit/electronic control unit orchestration environments, which include both a local orchestration environment and one or more remote orchestration environments. In such orchestration environments, virtual domain control units and/or virtual electronic control units are implemented that execute code included in the vehicle code packages. In some embodiments, such virtual domain control units or virtual electronic control units allow computing capacity and/or data storage capacity of a vehicle to be augmented via remotely implemented virtual domain control units and/or remotely implemented virtual electronic control units.
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
Devices, systems, and methods are provided for a front light for use with reflective displays. A display device (such as an e-reader, for example) may include a light source and a light guide able to receive first light from the light source. The light guide includes a plurality of extraction features that control the movement of light emitted by the light source through a display stack of the display device. Each extraction feature is provided in a wedge shape at an angle such that when light refracts from the extraction feature, it is directed towards the reflective liquid crystal display (LCD) at an incidence angle close to the display normal, so that the reflected light exhibits properties similar to light reflected from an electrophoretic display (EPD) panel.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02F 1/167 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on translational movement of particles in a fluid under the influence of an applied field characterised by the electro-optical or magneto-optical effect by electrophoresis