Described herein is a controller that is communicatively coupled with a network fabric. The controller obtains performance metric data of one or more hardware components included in the network fabric. The controller collects flow information of one or more workloads that are executed on the network fabric. Further, the controller applies a configuration policy to the one or more hardware components of the network fabric based on the performance metric data and the flow information of the one or more workloads. The application of the configuration policy modifies at least one operational parameter of the one or more hardware components of the network fabric.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR DETECTING AND MITIGATING SECURITY ATTACKS ON PRODUCER NETWORK FUNCTIONS (NFs) USING MAPPINGS BETWEEN DYNAMICALLY ASSIGNED SERVICE-BASED INTERFACE (SBI) MESSAGE IDENTIFIERS AND PROXY NF IDENTIFIERS AT PROXY NF
A method for detecting and mitigating security attacks on producer NFs using mappings between dynamically assigned SBI message IDs and proxy NF IDs includes, at a proxy NF, automatically creating a database of mappings between proxy NF IDs and SBI message IDs comprising resource IDs dynamically assigned by producer NFs in response to request messages from consumer NFs. The method further includes using the mappings to validate received inter-PLMN SBI request messages and performing network security actions for the received inter-PLMN SBI request messages for which validation fails.
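The mapping-based validation described in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class, method, and field names (`MappingDb`, `screen_request`, `resource_id`, `proxy_nf_id`) are assumptions introduced here.

```python
# Hypothetical sketch of mapping-based validation at a proxy NF.
# Names are illustrative assumptions, not 3GPP-defined APIs.

class MappingDb:
    """Maps dynamically assigned resource IDs to the proxy NF ID
    recorded when the producer NF assigned the resource."""

    def __init__(self):
        self._mappings = {}

    def learn(self, resource_id, proxy_nf_id):
        # Called when a producer NF's response assigns a new resource ID.
        self._mappings[resource_id] = proxy_nf_id

    def validate(self, resource_id, proxy_nf_id):
        # A later inter-PLMN SBI request referencing the resource ID is
        # valid only if it arrives via the proxy NF recorded at creation.
        return self._mappings.get(resource_id) == proxy_nf_id


def screen_request(db, request):
    """Return 'forward' for valid requests, 'reject' as the security action."""
    if db.validate(request["resource_id"], request["proxy_nf_id"]):
        return "forward"
    return "reject"
```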
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR DETECTING AND MITIGATING SECURITY ATTACKS ON PRODUCER NETWORK FUNCTIONS (NFs) USING ACCESS TOKEN TO NON-ACCESS-TOKEN PARAMETER CORRELATION AT PROXY NF
A method for detecting and mitigating security attacks on producer NFs using access token to non-access-token parameter correlation at a proxy NF includes receiving an inter-PLMN SBI request message. The method further includes obtaining, from an access token transmitted with the inter-PLMN SBI request message, at least one network- or service-identifying parameter and obtaining, externally from the access token, at least one network- or service-identifying parameter. The method further includes comparing the at least one network- or service-identifying parameter obtained from the access token and the at least one network- or service-identifying parameter obtained externally from the access token and performing a network security action when the at least one network- or service-identifying parameter obtained from the access token does not match the at least one network- or service-identifying parameter obtained externally from the access token.
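The correlation step can be sketched as a parameter-by-parameter comparison. The parameter names (`plmn_id`, `service_name`) and the "reject on mismatch" action are illustrative assumptions.

```python
def correlate(token_params, external_params, keys=("plmn_id", "service_name")):
    """Compare network- and service-identifying parameters carried inside
    the access token with the same parameters obtained outside the token
    (e.g. from message headers or transport-layer identity)."""
    return all(token_params.get(k) == external_params.get(k) for k in keys)


def handle_request(token_params, external_params):
    # A mismatch between in-token and out-of-token parameters triggers the
    # network security action (represented here as a rejection).
    if correlate(token_params, external_params):
        return "forward"
    return "security_action"
```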
Systems and methods for page load timing with visible element detection are disclosed herein. In some embodiments, a method includes detecting an outgoing communication from a browser, detecting a change in one or more document object models (DOMs) visible to a user, automatically logging a start of a span based on the detection of both the outgoing communication and the change in the one or more DOMs, executing operations relating to the one or more DOMs, determining at least one of: attaining a calm state of the one or more DOMs; or a user interaction causing an additional change to one or more DOMs, and automatically logging an end of the span based upon the determining.
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
6.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR OVERRIDING A PREFERRED LOCALITY ATTRIBUTE VALUE USING PREFERRED LOCATION ATTRIBUTE VALUES AND TRAFFIC DISTRIBUTION ATTRIBUTE VALUES
A method for overriding a preferred locality attribute value using preferred location attribute values and traffic distribution attribute values to facilitate a desired traffic distribution among producer NFs includes receiving, by an NRF and from a query originator, an NF discovery request including a target NF type attribute value and a preferred locality attribute value. The method further includes selecting, by the NRF, NF profiles that have an NF type attribute value that matches the target NF type attribute value and a locality attribute value that matches one of a plurality of preferred location attribute values mapped to the preferred locality attribute value. The method further includes modifying at least one attribute value in the NF profiles according to traffic distribution attribute values to achieve a predetermined distribution of SBI request message traffic among producer NFs by the query originator. The method further includes generating and sending to the query originator, an NF discovery response including the NF profiles.
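The selection and override logic can be sketched as below. The table shapes and the use of the NF profile `priority` attribute to realize the traffic distribution are illustrative assumptions.

```python
def discover(profiles, target_type, preferred_locality, locality_map, traffic_weights):
    """Select NF profiles whose NF type matches the target type and whose
    locality is one of the preferred locations mapped to the preferred
    locality, then override each profile's priority per the configured
    traffic distribution values."""
    locations = locality_map.get(preferred_locality, [preferred_locality])
    selected = [dict(p) for p in profiles
                if p["nf_type"] == target_type and p["locality"] in locations]
    for p in selected:
        # In SBI discovery a lower priority value attracts more traffic,
        # so per-location weights steer the query originator's traffic.
        if p["locality"] in traffic_weights:
            p["priority"] = traffic_weights[p["locality"]]
    return selected
```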
H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/61 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
7.
TECHNIQUES FOR ROTATING RESOURCE IDENTIFIERS IN PREFAB REGIONS
Techniques are disclosed for rotating resource identifiers within a region network. An identities service can receive a first request for a first identifier of a software resource within the region network from a client node. The identities service can generate the first identifier based at least in part on first attributes and send the first identifier and a first caching instruction to the client node. The identities service can receive an identity rotation instruction that includes information usable by the identities service to provide a second caching instruction in response to requests for software resource identifiers. The identities service can receive a second request for a second identifier of the software resource. The identities service can generate the second identifier based at least in part on second attributes and send the second identifier and the second caching instruction to the client node.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
Various embodiments of the present technology generally relate to systems and methods for providing an interface to securely handle messages from 5G services for network monitoring purposes. In an example, a network traffic monitoring system is provided as an interface handler. The network traffic monitoring system may receive from a plurality of network functions (NFs) in a communication exchange on a 5G network, a first plurality of messages, determine microservices to apply to the first plurality of messages, and process, by respective microservice modules, the first plurality of messages using the microservices. The network traffic monitoring system may also generate a feed based on processing the first plurality of messages using the microservices and transmit, to a network monitoring system, the feed, where the first plurality of messages is in an input format and the feed is in an output format, the input format being different than the output format.
Techniques for inventory optimization using a simulation service model are provided. In one technique, a first optimization technique is used to generate, based on demand data, first output that comprises a first plurality of output values, each value corresponding to a node in a multi-echelon system. While using the first optimization technique, a plurality of variable values, each variable value corresponding to a node in the multi-echelon system, is generated. Then, a second optimization technique that is different than the first optimization technique is used to generate, based on the demand data and the plurality of variable values, second output that comprises a second plurality of output values, each value corresponding to a node in the multi-echelon system.
Techniques may include presenting a graphical user interface for creating a pipeline to transform data from a first to a second format. The interface may include an interactive workspace and a logical entity library, and the pipeline may include two or more logical entities. Each logical entity in the logical entity library may include at least one of a data processing node, a debugging node, or an administrative node. In addition, the techniques may include receiving information identifying a logical entity for each entity in the pipeline. The techniques may include receiving information identifying a location within the interactive workspace for the logical entity and information identifying a configuration corresponding to the logical entity. The techniques may include visually representing a graphical connection between each logical entity in the pipeline and at least one additional logical entity in the pipeline. Moreover, the techniques may include transforming data to produce transformed data.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
11.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR DETECTING AND MITIGATING SECURITY ATTACKS ON PRODUCER NETWORK FUNCTIONS (NFs) USING ERROR RESPONSE MESSAGES
A method for detecting and mitigating security attacks on producer NFs using error response messages includes tracking, by a proxy NF, rates of error response messages generated in response to inter-PLMN SBI request messages from consumer NFs. The method further includes receiving an inter-PLMN SBI request message, obtaining information for identifying a consumer NF and a producer NF from the inter-PLMN SBI request message, and determining that a rate of error response messages generated in response to inter-PLMN SBI request messages from the consumer NF to the producer NF exceeds a threshold rate. The method further includes, in response to determining that the rate of error response messages generated in response to SBI request messages from the consumer NF exceeds the threshold rate, performing a network security action.
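The per-pair tracking and threshold check can be sketched as follows. The abstract speaks of a rate of error responses; a simple error fraction stands in for it here, and all names are illustrative assumptions.

```python
from collections import defaultdict

class ErrorRateTracker:
    """Tracks, per (consumer NF, producer NF) pair, the fraction of
    inter-PLMN SBI requests answered with error responses."""

    def __init__(self, threshold):
        self.threshold = threshold
        self._counts = defaultdict(lambda: [0, 0])  # pair -> [errors, total]

    def record(self, consumer, producer, is_error):
        c = self._counts[(consumer, producer)]
        c[1] += 1
        c[0] += int(is_error)

    def screen(self, consumer, producer):
        # Security action when the tracked error rate exceeds the threshold.
        errors, total = self._counts[(consumer, producer)]
        if total and errors / total > self.threshold:
            return "security_action"  # e.g. block or rate-limit the consumer
        return "forward"
```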
Various embodiments of the present technology generally relate to systems and methods for providing an interface to securely handle messages from 5G services for network monitoring purposes. In an example, a packet encoder is provided as an interface handler. The packet encoder may receive from one or more network functions (NFs) in a communication exchange on a 5G network, an input including a plurality of messages. Upon receipt, the packet encoder may determine an input format from the input and determine an output format for a feed generated based on the plurality of messages. The packet encoder may also translate the plurality of messages from the input format to the output format, where the input format is different than the output format, and the feed includes the plurality of messages in the output format. The packet encoder may then transmit the feed to a network monitoring system.
Techniques for performing distributed rate limiting in networks in a cloud environment are described for determining an amount of network bandwidth available to be processed by flow control nodes within a cloud network for a first time period, determining a bandwidth allocation for traffic classes for the first time period, determining a portion of the bandwidth allocation for the flow control nodes, providing data to the flow control nodes, where the data indicates the portion of the bandwidth allocation for the traffic classes, and receiving second data that indicates an amount of network traffic routed during the first time period by individual ones of the flow control nodes.
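One way the per-node split could work is to divide each class's bandwidth share across nodes in proportion to the traffic they reported in the prior period. This allocation rule is an assumption for illustration, not the patent's stated algorithm.

```python
def allocate(total_bandwidth, class_shares, node_usage):
    """Split each traffic class's share of the available bandwidth across
    flow control nodes in proportion to the traffic each node reported
    routing in the previous period (uniform split if nothing was reported)."""
    allocations = {}
    for cls, share in class_shares.items():
        class_bw = total_bandwidth * share
        usage = node_usage.get(cls, {})
        total = sum(usage.values())
        if total == 0:
            n = len(usage) or 1
            allocations[cls] = {node: class_bw / n for node in usage}
        else:
            allocations[cls] = {node: class_bw * used / total
                                for node, used in usage.items()}
    return allocations
```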
Embodiments determine a final occupancy prediction for a check-in date for a plurality of hotel rooms. Embodiments receive historical reservation data including a plurality of booking curves for the hotel rooms corresponding to a plurality of reservation windows, the historical reservation data including a plurality of features. Based on the historical reservation data, embodiments generate a first occupancy prediction for the check-in date using a first model and generate a second occupancy prediction for the check-in date using a second model. Embodiments determine a best performing model from at least the first model and the second model and use the occupancy prediction corresponding to the best performing model as the final occupancy prediction for the check-in date.
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
A pretraining computer generates a neural encoder and multiple partition decoders (PDs) for respective partitions of training inputs (TIs) in a training corpus. A training batch is generated that contains a mix of TIs from multiple partitions. For each TI in the batch, the neural encoder infers an encoding and, based on the partition of the TI, exactly one PD is used to decode the encoding, for which an individual loss is measured. The individual loss is combined into a batch loss that is based on the entire batch, and combined into a partition loss that is based on TIs only in the partition of the exactly one PD. After measuring losses for the batch, the batch loss is backpropagated into the neural encoder without backpropagating the batch loss into any PD. Into each PD is backpropagated a respective partition loss that is based on TIs only in the decoder's partition.
Techniques for generating filtered description content based on seed statements are disclosed. A system filters a set of descriptive sentences based on a relevance of the sentences to a seed statement. The system creates a set of input segments from the seed statement. The system creates a set of output segments from the set of descriptive sentences. The system generates a set of relevance scores for each input segment/output segment pair. The system compares the relevance scores to a set of relevance criteria to generate a filtered set of descriptive sentences.
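The segment-pair scoring and filtering can be sketched as below. The patent does not specify the scoring model, so a Jaccard word-overlap score stands in, and the comma-based segmentation of the seed statement is likewise an assumption.

```python
def overlap_score(a, b):
    """Stand-in relevance score: Jaccard overlap of the word sets
    of two text segments."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def filter_sentences(seed_statement, sentences, min_score):
    """Keep each descriptive sentence whose best input-segment/output-segment
    relevance score against the seed statement meets the criterion."""
    seed_segments = seed_statement.split(", ")
    kept = []
    for sentence in sentences:
        best = max(overlap_score(seg, sentence) for seg in seed_segments)
        if best >= min_score:
            kept.append(sentence)
    return kept
```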
Techniques for populating the fields of a form are disclosed. A machine learning model may be trained to predict associations between fields. The trained machine learning model may be applied to a plurality of forms to predict an association between a first field in a first form type and a second field in a second form type. Upon receiving a value for a first field in a first form of a first form type and based on the predicted association, the system may populate a second field of a second form of the second form type based on the value.
Described herein is a mechanism of constructing a cluster placement group in a cloud environment. A request is received from a first customer of a cloud environment, where the request corresponds to creating a cluster placement group (CPG). The CPG identifies a first set of requested resources comprising a first type of resource and a second type of resource requested by the first customer, wherein the first type of resource is different than the second type of resource. An availability domain is identified in the cloud environment that includes a second set of available resources comprising all resources included in the first set of requested resources. From the second set of available resources in the availability domain, a set of resources corresponding to the first set of requested resources is allocated to the first customer. The set of resources allocated to the first customer is associated with the CPG.
The present disclosure relates to various approaches for fast and scalable TOP K SHORTEST and CHEAPEST graph queries supporting horizontal aggregations on the group variables of a path in a distributed graph query engine. A distributed graph query processing engine may execute a graph query in a plurality of computing devices. A plurality of subjobs may be generated based at least in part on the graph query. Execution of an asynchronous pattern matching subjob of the plurality of subjobs may be initiated, and, in response to the asynchronous pattern matching subjob identifying one or more source vertices of a plurality of vertices, the execution of the asynchronous pattern matching subjob may be paused. An output context set comprising the one or more source vertices may be generated. Execution of a synchronous path matching subjob of the plurality of subjobs may be initiated, and a reachability map may be generated based at least in part on one or more matched paths between the one or more source vertices and one or more destination vertices of the plurality of vertices. The execution of the asynchronous pattern matching subjob may be resumed based at least in part on the output context set and the reachability map.
Techniques and devices are described for communicating domain name system zone metadata to dynamic nameserver proxies. A method can include a signing service receiving a first request for a resource record (RR). The signing service can transmit, to a backend unit of a computing system, a domain name system (DNS) query for information associated with a subdomain. The signing service can receive, from the backend unit, a first DNS response comprising the information associated with the subdomain. The signing service can determine whether the information comprises a flagged nameserver record. The signing service can generate a second DNS response, with content of the second DNS response based at least in part on whether the information associated with the subdomain comprises the flagged nameserver record. The signing service can transmit the second DNS response to a DNS resolver.
H04L 61/4552 - Lookup mechanisms between a plurality of directories; Synchronisation of directories, e.g. metadirectories
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]
21.
TECHNIQUES FOR ROUTING PACKETS IN OVERLAY NETWORKS
Novel techniques are described for routing of overlay packets within overlay networks in a cloud environment. A network device, located in the data path between a compute instance in an overlay network that is the source of a packet and a compute instance in the overlay network that is the intended destination of the packet, is able to route the packet using only special encoded information included in the packet's header when the packet is received by the network device. The special encoded information is in the form of a special encoded address (e.g., an encoded IP address) that is included in a field of the packet's header. The special encoded address encodes various different pieces of information that are used by the network devices in the data path from the source compute instance to the destination compute instance to route the packet in the overlay network.
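The idea of routing from header-encoded information alone can be sketched by packing routing fields into an IPv4-shaped address. The specific field layout (16 bits of overlay network ID, 16 bits of host index) is an assumption for illustration, not the patent's encoding.

```python
import ipaddress

def encode_route(network_id, host_index):
    """Pack routing information into an IPv4-shaped encoded address: the
    top 16 bits carry an overlay network identifier and the bottom 16 bits
    a destination host index (hypothetical layout)."""
    assert 0 <= network_id < 2**16 and 0 <= host_index < 2**16
    return str(ipaddress.IPv4Address((network_id << 16) | host_index))

def decode_route(encoded):
    """Recover the routing fields directly from the header address,
    with no lookup in any mapping table."""
    value = int(ipaddress.IPv4Address(encoded))
    return value >> 16, value & 0xFFFF
```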
Systems and methods for a virtual layer-2 network are described herein. The method can include providing a virtual Layer 3 network in a virtualized cloud environment. The virtual Layer 3 network can be hosted by an underlying physical network. The method can include providing a virtual Layer 2 network in the virtualized cloud environment. The virtual Layer 2 network can be hosted by the underlying physical network.
During pretraining, a computer generates three untrained machine learning models that are a token sequence encoder, a token predictor, and a decoder that infers a frequency distribution of graph traversal paths. A sequence of lexical tokens is generated that represents a lexical text in a training corpus. A graph is generated that represents the lexical text. In the graph, multiple traversal paths are selected that collectively represent a sliding subsequence of the sequence of lexical tokens. From the subsequence, the token sequence encoder infers an encoded sequence that represents the subsequence of the sequence of lexical tokens. The decoder and token predictor accept the encoded sequence as input for respective inferencing for which respective training losses are measured. Both training losses are combined into a combined loss that is used to increase the accuracy of the three machine learning models by, for example, backpropagation of the combined loss.
Techniques are described for determining at least one performance metric of a machine learning model. The techniques include obtaining a dataset generated by at least using output of the machine learning model, partitioning the dataset into two or more partitions that include one or more elements from the dataset, and generating, for each respective partition, a respective first quantile sketch and a respective second quantile sketch based at least in part on each element in the respective partition. The techniques further include generating a first merged quantile sketch by merging each respective first quantile sketch, generating a second merged quantile sketch by merging each respective second quantile sketch, and determining the at least one performance metric of the machine learning model using the first merged quantile sketch and the second merged quantile sketch.
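The build-per-partition-then-merge pattern can be sketched as follows. A real deployment would use an approximate mergeable sketch such as KLL or t-digest; the exact-values stand-in below only illustrates the merge structure, and the median as the derived metric is an assumption.

```python
class NaiveQuantileSketch:
    """Stand-in for a mergeable quantile sketch (e.g. KLL or t-digest);
    this naive version stores all values exactly, only to illustrate
    the per-partition build-then-merge pattern."""

    def __init__(self, values=()):
        self.values = sorted(values)

    def merge(self, other):
        # Merging two sketches yields a sketch over the combined data.
        return NaiveQuantileSketch(self.values + other.values)

    def quantile(self, q):
        idx = min(int(q * len(self.values)), len(self.values) - 1)
        return self.values[idx]


def median_from_partitions(partitions):
    """Build one sketch per partition, merge them all, and read a
    metric (here the median) from the merged sketch."""
    merged = NaiveQuantileSketch()
    for partition in partitions:
        merged = merged.merge(NaiveQuantileSketch(partition))
    return merged.quantile(0.5)
```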
A system causes threads that are accessing shared memory segments of a memory region to terminate execution in preparation for deallocating the shared memory segments from the memory region and reclaiming the memory region. The system marks the memory region as closed and then instructs each thread to suspend execution. Responsive to determining that a thread was suspended during execution of a function that includes a memory access operation, the system causes the thread to execute, upon resuming execution, a thread-terminating instruction that terminates the thread. After the threads are terminated, the system deallocates the shared memory segments from the memory region and reclaims the memory region.
Described herein is a controller that is communicatively coupled with a network fabric. The controller obtains performance metric data of one or more hardware components included in the network fabric. The controller collects flow information of one or more workloads that are executed on the network fabric. Further, the controller applies a configuration policy to the one or more hardware components of the network fabric based on the performance metric data and the flow information of the one or more workloads. The application of the configuration policy modifies at least one operational parameter of the one or more hardware components of the network fabric.
Described herein is a data center provisioning service offered by a cloud services provider. A control plane of a service offered in a cloud environment that is provided by a first service provider receives a request to create a data center. The control plane provisions the data center in a tenancy of a customer in the cloud environment by: (i) deploying a plurality of host machines in the tenancy of the customer in the cloud environment, wherein each host machine included in the plurality of host machines includes a hypervisor that is provided by a second service provider, installed thereon, and (ii) instantiating a plurality of datastores, each datastore comprising a plurality of block volume storage units. The control plane maintains a mapping corresponding to an association of each datastore included in the plurality of datastores with at least one host machine included in the plurality of host machines.
In response to a request to replicate resources from a primary region data center to a secondary region data center, information is obtained about first resources at the primary region data center and second resources at the secondary region data center. An executable configuration file that references the resources utilizing generic resource identifiers instead of primary region identifiers used within the primary region data center is created. The executable configuration file is then executed at the secondary region data center to create replicated resources at the secondary region data center.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
Techniques for programmatically provisioning a baremetal system are disclosed. A system receives a request to reprovision a baremetal system comprising a plurality of computing components listed in a platform definition. The system retrieves component configurations for the computing components in the platform definition. A component configuration may identify a dependency on another computing component. The system generates a dependency graph from any dependencies in the computing components. The system retrieves component configuration instructions for configuring the computing components and generates system configuration instructions based on the component configuration instructions and the dependency graph. The system reprovisions the baremetal system by executing the system configuration instructions.
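The dependency-graph step maps naturally onto a topological sort: each component is configured only after the components it depends on. A minimal sketch using the standard library, with hypothetical component names:

```python
from graphlib import TopologicalSorter  # Python 3.9+

def provision_order(component_dependencies):
    """Given {component: [components it depends on]}, return an execution
    order in which every component is configured after its dependencies,
    mirroring how system configuration instructions could be sequenced."""
    return list(TopologicalSorter(component_dependencies).static_order())
```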
The technology disclosed herein automatically creates code for both simple and complex policies to provide a communication service in accordance with those policies. In a particular example, a method includes receiving design policies for providing the communication service. The design policies include standard policies and complex policies. The method includes automatically generating the standard policy code with one or more decoupled extension points for implementing the standard policies. The method further includes providing the complex policies to a developer and receiving, from the developer, the complex policy code for implementing the complex policies. The method also includes inserting the complex policy code into the standard policy code at the decoupled extension points to generate communication service code and executing the communication service code to provide the communication service.
Systems, methods, and other embodiments associated with ML-based detection of amplification of vibration due to resonance are described. In one embodiment, a method includes recording vibrations of a reference asset while the reference asset is operated based on a test pattern that sweeps over a range of workload for the reference asset. Cross power spectral densities between the recorded vibrations and the test pattern are determined at intervals to identify resonance frequencies of the reference asset. Vibrations of a target asset are monitored at the resonance frequencies with a machine learning model trained to generate estimated values at the resonance frequencies that are consistent with the reference asset. Resonant vibration amplification is detected based on a dissimilarity between vibration values for the target asset at the resonance frequencies and the estimated values. An electronic alert that the target asset is undergoing the resonant vibration amplification is then generated.
Techniques for selectively aggregating records based on a downstream function to be applied to the records are disclosed. A system obtains an instruction corresponding to a set of records and a function to be applied to the set of records. The system determines whether the function meets particular criteria for aggregating records prior to transmitting the records to an application for executing the function on the records. If the system determines that the function does meet the records-aggregation criteria, the system stores a set of records in a buffer prior to sending the set of records to the function-executing application. The system sends the set of records to the application together as a group with an instruction to generate a set of function results that includes a separate value for each record in the set of records.
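The selective aggregation can be sketched as a dispatch that either buffers and sends one grouped call or sends record by record. The criterion predicate and the list-in/list-out function shape are illustrative assumptions.

```python
def dispatch(records, func, meets_batch_criteria):
    """Selectively aggregate: when the downstream function meets the
    aggregation criteria, buffer the records and invoke it once on the
    whole group; otherwise invoke it per record. Either way the result
    carries a separate value for each record."""
    if meets_batch_criteria(func):
        buffer = list(records)      # aggregation buffer
        results = func(buffer)      # one grouped invocation
    else:
        results = [func([r])[0] for r in records]
    assert len(results) == len(records)  # one result per record
    return results
```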
Described herein is a controller that is communicatively coupled with a network fabric. The controller obtains performance metric data of one or more hardware components included in the network fabric. The controller collects flow information of one or more workloads that are executed on the network fabric. Further, the controller applies a configuration policy to the one or more hardware components of the network fabric based on the performance metric data and the flow information of the one or more workloads. The application of the configuration policy modifies at least one operational parameter of the one or more hardware components of the network fabric.
H04L 47/263 - Rate modification at the source after receiving feedback
H04L 47/12 - Avoiding congestion; Recovering from congestion
H04L 47/19 - Flow control; Congestion control at layers above the network layer
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Described herein is a data center provisioning service offered by a cloud services provider. A control plane of a service offered in a cloud environment that is provided by a first service provider receives a request to create a data center. The control plane provisions the data center in a tenancy of a customer in the cloud environment by: (i) deploying a plurality of host machines in the tenancy of the customer in the cloud environment, wherein each host machine included in the plurality of host machines includes a hypervisor that is provided by a second service provider, installed thereon, and (ii) instantiating a plurality of datastores, each datastore comprising a plurality of block volume storage units. The control plane maintains a mapping corresponding to an association of each datastore included in the plurality of datastores with at least one host machine included in the plurality of host machines.
Techniques are disclosed herein for provisioning cross-cloud services. The techniques include receiving, by a service of a first cloud environment, a request to configure a virtual resource and causing, by the service, a control plane of the first cloud environment to configure the virtual resource on a physical resource of the first cloud service provider. The first cloud environment can be implemented on a first cloud infrastructure of a first cloud service provider and the request can be received from a second cloud environment upon input of a customer of a second cloud service provider at a portal of the second cloud environment. The input can indicate at least one parameter of the request and the second cloud environment can be implemented on a second cloud infrastructure of the second cloud service provider.
Techniques are described for providing a multi-cloud gateway (MCG) in a first cloud infrastructure (included in a first cloud environment provided by a first cloud services provider). The MCG implemented in the first cloud environment, receives a first request requesting a first operation to be performed in a second cloud environment. Responsive to receiving the first request the MCG generates a first API call directed to the second cloud environment and causes the first API call to be communicated to the second cloud environment. The MCG receives a second request requesting a second operation to be performed in a third cloud environment. Responsive to receiving the second request, the MCG generates a second API call directed to the third cloud environment and causes the second API call to be communicated to the third cloud environment, wherein each of the cloud environments is provided by a unique cloud services provider.
A system selects a first garbage collection process from a group of garbage collection processes. When a first thread stores a first set of objects to a first private memory region that is exclusive of any shared objects accessible by one or more additional threads, the system executes a sweeping thread-local garbage collection process upon termination of the first thread, including reclaiming the first private memory region. When a second thread stores, to a second private memory region, at least one shared object accessible by one or more additional threads, the system executes a selective garbage collection process upon termination of the second thread. The selective garbage collection process includes selectively reclaiming a second subset of memory blocks from the second private memory region allocated for a subset of private objects that are inaccessible from any thread.
A data management system determines that an updated thesaurus entry does not exist for a value of a record. The data management system generates a prompt to discover a set of synonyms and/or acronyms for the value by substituting a placeholder in a thesaurus prompt template with the value. The thesaurus prompt template includes a request that specifies an output format. A large language model is prompted with the prompt to generate a set of resulting values. The data management system causes display of the value, resulting value(s) that have not been approved, and an option to mark any of the resulting value(s) as approved. Based at least in part on receiving a selection of the option that marks at least one resulting value as approved, the data management system modifies a thesaurus entry for the value to indicate approval. The thesaurus entry is used to locate the record in response to a query.
Systems and methods for federating datasets hosted on separate servers are provided herein. An example data federation process includes receiving a federation request that contains a user-defined data domain distributed across two or more datasets hosted on separate servers. The federation request includes a request for first federation data and second federation data. The data federation process includes sending the federation request to the first server, which determines that it hosts the first federation data and determines call information associated with the first federation data. The first server then determines that the second server hosts the second federation data. The first server generates a model query including a procedure call for the first federation data and the second federation data. Upon fetching the first and second federation data based on the model query, the first server combines the first and second federation data together to generate a federated dataset.
Techniques are disclosed herein for provisioning cross-cloud services. The techniques include receiving, by a service of a first cloud environment, a request to configure a virtual resource and causing, by the service, a control plane of the first cloud environment to configure the virtual resource on a physical resource of the first cloud service provider. The first cloud environment can be implemented on a first cloud infrastructure of a first cloud service provider and the request can be received from a second cloud environment upon input of a customer of a second cloud service provider at a portal of the second cloud environment. The input can indicate at least one parameter of the request and the second cloud environment can be implemented on a second cloud infrastructure of the second cloud service provider.
A Network Virtualization Device (NVD) executes a set of Virtual Network Interface Cards (VNICs). The set of VNICs includes a first VNIC that forwards packets for a set of one or more packet flows. The NVD stores first VNIC-related information that includes information identifying a first set of one or more packet flows and associated state information. The NVD, in response to determining that the state information for the first VNIC is to be synchronized with another NVD, identifies a first backup NVD for the first VNIC, wherein the first backup NVD is a backup for the first VNIC, and communicates, to the first backup NVD, a portion of the state information stored by the NVD for the first VNIC.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
43.
EXTRACTING KEY INFORMATION FROM DOCUMENT USING TRAINED MACHINE-LEARNING MODELS
Techniques for extracting key information from a document using machine-learning models in a chatbot system are disclosed herein. In one particular aspect, a method is provided that includes receiving a set of data, which includes key fields, within a document at a data processing system that includes a table detection module, a key information extraction module, and a table extraction module. Text information and corresponding location data are extracted via optical character recognition. The table detection module detects whether one or more tables are present in the document and, if applicable, a location of each of the tables. The key information extraction module extracts text from the key fields. The table extraction module extracts each of the tables based on input from the optical character recognition and the table detection module. Extraction results, including the text from the key fields and each of the tables, can be output.
Techniques for triggering a transfer of a chat conversation with a user from a chatbot to a human agent based on detection of transfer criteria are disclosed. The chatbot uses natural language processing and a generative model to collect and organize information from the chat conversation to present to the human agent in a report when the chat conversation is transferred to the human agent. The chat conversation is transferred to the human agent by presenting the report and a graphical chat interface to the human agent. The graphical chat interface displays messages from the chat conversation between the human agent and the user and displays messages from chat conversations between the human agent and multiple other users. Transferring the chat conversation from the chatbot to the human agent includes presenting interface elements to the human agent for receiving user input from the human agent for transmission to the user.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
Techniques are disclosed for dynamic time-based custom model generation as part of an infrastructure-as-a-service (IaaS) environment. A custom model generation service may receive a set of training data and time-based constraints for training a machine learning model. The custom model generation service may subsample the training data and generate a set of optimized tuned hyperparameters for a machine learning model to be trained using the subsampled training data. An experimental interval time of training is determined, and the machine learning model is trained on the subsampled training data according to the optimized tuned hyperparameters over a set of training intervals similar to the experimental time interval. A customized machine learning model trained within the time-based constraints is output. The hyperparameter tuning may be performed using a modified mutating genetic algorithm for a set of hyperparameters to determine the optimized tuned hyperparameters prior to the training.
In accordance with various embodiments, described herein is a system (Data Artificial Intelligence system, Data AI system), for use with a data integration or other computing environment, that leverages machine learning (ML, DataFlow Machine Learning, DFML), for use in managing a flow of data (dataflow, DF), and building complex dataflow software applications (dataflow applications, pipelines). In accordance with an embodiment, the system can provide support for auto-mapping of complex data structures, datasets or entities, between one or more sources or targets of data, referred to herein in some embodiments as HUBs. The auto-mapping can be driven by metadata, schema, and statistical profiling of a dataset; and used to map a source dataset or entity associated with an input HUB, to a target dataset or entity or vice versa, to produce output data prepared in a format or organization (projection) for use with one or more output HUBs.
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
47.
SPECULATIVE JSON PATH EVALUATION ON OSON-ENCODED JSON DOCUMENTS
The present disclosure relates to improving the performance of evaluating path expressions on hierarchical data objects represented by binary encoded documents. An abstract syntax tree (AST) representing a path expression may be generated, wherein the AST comprises one or more syntax nodes implementing one or more respective execution steps of an evaluation of the path expression, and the path expression is included in a query to a database management system (DBMS). The AST may be modified based at least in part on profiling information and compiled into machine code. Using the machine code, the path expression may be executed on a binary-encoded hierarchical document.
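As a rough illustration of the idea (interpreted Python rather than compiled machine code, and with hypothetical names), a path expression can be parsed into a chain of syntax nodes, each implementing one execution step of the evaluation over a hierarchical document:

```python
# Minimal sketch: a dotted path such as "$.store.name" is compiled into
# a chain of step nodes; evaluation walks the chain over a nested document.
class FieldStep:
    def __init__(self, key, next_step=None):
        self.key, self.next_step = key, next_step

    def evaluate(self, node):
        # One execution step: descend into the named field, then delegate.
        if isinstance(node, dict) and self.key in node:
            child = node[self.key]
            return self.next_step.evaluate(child) if self.next_step else child
        return None  # path does not match this document

def compile_path(expr):
    """Build the step chain for a dotted path like '$.store.name'."""
    root = None
    for key in reversed(expr.lstrip("$.").split(".")):
        root = FieldStep(key, root)
    return root

doc = {"store": {"name": "main", "items": [1, 2]}}
compiled = compile_path("$.store.name")
print(compiled.evaluate(doc))  # -> main
```

A production system, as the abstract notes, would additionally reorder or specialize these steps based on profiling information and compile the chain down to machine code rather than interpret it.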
In a computer-implemented embodiment, an interaction machine learning model is trained based on many interactions on many resources. A context lexical token is inferred that represents a current operational context of a user. The context lexical token is inserted into a sequence of other inferred lexical tokens. From the context lexical token within the sequence of tokens, the interaction machine learning model infers a predicted resource that will be accessed next. In an embodiment, accelerated matchmaking entails suitability measurement by a dot product of a) a dynamically inferred user embedding that is based on the context lexical token and b) a statically inferred item embedding.
Techniques are disclosed herein for improving model robustness on operators and triggering keywords in natural language to a meaning representation language system. The techniques include augmenting an original set of training data for a target robustness bucket by leveraging a combination of two training data generation techniques: (1) modification of existing training examples and (2) synthetic template-based example generation. The resulting set of augmented data examples from the two training data generation techniques are appended to the original set of training data to generate an augmented training data set and the augmented training data set is used to train a machine learning model to generate logical forms for utterances.
Techniques for generating a dashboard are disclosed. The system may obtain a set of one or more characteristics of a target user. A set of candidate data metrics that are relevant to the target user may be determined by applying a metric selection model to the set of characteristics. The set of candidate data metrics may be presented as a set of recommended data metrics. Input may be received from a user selecting a particular data metric from the set of recommended data metrics. A visualization selection model may be applied to the particular data metric and/or the set of user characteristics to select a visualization type for the particular data metric. A visualization of the particular data metric that accords with the selected visualization type may be generated based on a set of values associated with the particular data metric. The visualization may be presented in the user dashboard.
Systems and methods for providing an iconification functionality for generation of data visualizations are described herein. For example, a method of generating iconified visualizations includes receiving, by an iconification function, a visualization request from a client device, determining, by a prompt engine of the iconification function, data to iconize based on the visualization request, and generating, by the prompt engine, a first prompt based on the data to iconize. The method also includes determining, by the iconification function, descriptors based on the first prompt, generating, by the prompt engine, a second prompt based on the descriptors, and generating, by the iconification function, an image based on the second prompt. The method further includes generating, by the iconification function, an icon image based on the image, generating, by the iconification function, a visualization including the icon image, and transmitting, by the iconification function, the visualization to the client device.
Techniques for programmatically provisioning a baremetal system are disclosed. A system receives a request to reprovision a baremetal system comprising a plurality of computing components listed in a platform definition. The system retrieves component configurations for the computing components in the platform definition. A component configuration may identify a dependency on another computing component. The system generates a dependency graph from any dependencies in the computing components. The system retrieves component configuration instructions for configuring the computing components and generates system configuration instructions based on the component configuration instructions and the dependency graph. The system reprovisions the baremetal system by executing the system configuration instructions.
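The dependency-graph step described above can be sketched with Python's standard-library topological sorter (the component names and dependencies below are purely hypothetical examples of a platform definition):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical component configurations: each entry maps a component to
# the components it depends on, as declared in its component configuration.
configs = {
    "bios":    [],             # no dependencies
    "raid":    ["bios"],       # RAID controller configured after BIOS
    "nic":     ["bios"],
    "os_boot": ["raid", "nic"],
}

# The dependency graph yields an order in which the per-component
# configuration instructions can safely be executed.
order = list(TopologicalSorter(configs).static_order())
print(order)  # e.g. ['bios', 'raid', 'nic', 'os_boot']
```

Executing the generated system configuration instructions in this order guarantees that every component is configured only after the components it depends on.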
Techniques for performing NAT operations to send packets between networks are described. In an example, a network device receives a packet that comprises a header. The header indicates a source address of a first computing resource in a first network and a destination address of a second computing resource in a second network. The network device determines a pool of identifiers allocated for the first network and the second computing resource and identifies a packet flow based on the header. The network device also determines that no identifier from the pool of identifiers has been allocated for the packet flow and determines an identifier available to allocate for the packet flow from the pool of identifiers. The network device performs a NAT operation on the packet based on the identifier.
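The per-flow identifier allocation described above can be illustrated with a toy pool (a sketch only; the data structures and numeric ranges here are invented for illustration, not the device's actual implementation):

```python
# Toy sketch: a pool of identifiers allocated for a network pair,
# handed out lazily per packet flow for NAT rewriting.
class NatPool:
    def __init__(self, identifiers):
        self.free = list(identifiers)   # identifiers not yet allocated
        self.by_flow = {}               # flow tuple -> allocated identifier

    def identifier_for(self, flow):
        # flow is e.g. (src_addr, dst_addr, src_port, dst_port, proto)
        if flow not in self.by_flow:          # no identifier allocated yet
            if not self.free:
                raise RuntimeError("identifier pool exhausted")
            self.by_flow[flow] = self.free.pop(0)
        return self.by_flow[flow]

pool = NatPool(range(40000, 40003))
f1 = ("10.0.0.1", "192.168.1.9", 1234, 80, "tcp")
print(pool.identifier_for(f1))  # -> 40000
print(pool.identifier_for(f1))  # same flow, same identifier: 40000
```

Subsequent packets of the same flow reuse the allocated identifier, so the NAT rewrite stays consistent for the flow's lifetime.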
Techniques for generating an interactive visualization tool for building nested queries are disclosed. The interactive nested query visualization tool allows a user to observe, analyze, and modify query characteristics and attributes of a set of nested queries. A system displays an interactive visual depiction of a set of nested queries. Visual representations of the nested queries are positioned relative to each other based on the relationships between the nested queries. The system displays, simultaneously with the set of nested queries, editable fields for a selected query. The system modifies a functionality of a user interface based on which of the nested queries is selected.
In accordance with an embodiment, described herein is a system and method for supporting partitions in a multitenant application server environment. In accordance with an embodiment, an application server administrator (e.g., a WLS administrator) can create or delete partitions; while a partition administrator can administer various aspects of a partition, for example create resource groups, deploy applications to a specific partition, and reference specific realms for a partition. Resource groups can be globally defined at the domain, or can be specific to a partition. Applications can be deployed to a resource group template at the domain level, or to a resource group scoped to a partition or scoped to the domain. The system can optionally associate one or more partitions with a tenant, for use by the tenant.
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1025 - Dynamic adaptation of the criteria on which the server selection is based
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
56.
FEDERATION OF DISPARATE DATASETS HOSTED ACROSS SEPARATE SERVERS
Systems and methods for federating datasets hosted on separate servers are provided herein. An example data federation process includes receiving a federation request that contains a user-defined data domain distributed across two or more datasets hosted on separate servers. The federation request includes a request for first federation data and second federation data. The data federation process includes sending the federation request to the first server, which determines that it hosts the first federation data and determines call information associated with the first federation data. The first server then determines that the second server hosts the second federation data. The first server generates a model query including a procedure call for the first federation data and the second federation data. Upon fetching the first and second federation data based on the model query, the first server combines the first and second federation data together to generate a federated dataset.
A computer system causes structured data to be stored in data structures of a database according to a database schema. The structured data includes static nodes and dynamic nodes, which are defined based on values of the static nodes. The system receives new data that is not referenced by the dynamic node(s), but a particular dynamic node may be redefined to depend on the new data. The system receives a request to make a prediction based on a fixed value for the particular dynamic node, and, in response to the request, generates a copy of a subset of the hierarchical data to store simulated data that results from the prediction. The particular dynamic node is used to generate a reverse formula and update the copy of the subset of data by assigning new values determined by the reverse formula. Other formulas may also be used to propagate data for the prediction, and the prediction is used to generate a visualization.
Data structures and methods are described for converting a text format data-interchange file into size efficient binary representations. A method comprises receiving a request to convert a data-interchange file, comprising a hierarchy of nodes, into a binary file. The method further comprises generating a tree representation of the nodes that reference a plurality of leaf values. The method further comprises, in response to determining that the binary file is to be compressed, embedding relative node jump offsets when generating the tree representation. The method further comprises, in response to determining that the data-interchange file is immutable, deduplicating the plurality of leaf values in a space optimized manner. The method further comprises, in response to determining that the data-interchange file is mutable, deduplicating the plurality of leaf values in a stream optimized manner. The method further comprises storing the deduplicated plurality of leaf values in the binary file.
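The space-optimized deduplication step can be sketched as follows (a simplified illustration; the real binary layout and jump-offset encoding are not shown):

```python
# Each unique leaf value is stored once; tree nodes reference values
# by index instead of holding their own copies.
def deduplicate_leaves(leaves):
    values, index_of, refs = [], {}, []
    for leaf in leaves:
        if leaf not in index_of:       # first occurrence: store the value
            index_of[leaf] = len(values)
            values.append(leaf)
        refs.append(index_of[leaf])    # node stores a reference, not a copy
    return values, refs

values, refs = deduplicate_leaves(["NY", "CA", "NY", "NY", "TX", "CA"])
print(values)  # -> ['NY', 'CA', 'TX']
print(refs)    # -> [0, 1, 0, 0, 2, 1]
```

Resolving each reference back through the value table reproduces the original leaf sequence, so no information is lost by the deduplication.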
Machine learning techniques directed to span prediction for textual data are disclosed. As used herein, span prediction is the process of predicting the possible spans of text that can be assigned to a given entity type of a set of predefined entity types. To this end, a machine learning model can be trained to generate values that indicate the predicted probability that a given span of an identified set of spans within text of interest is appropriate for association with a given entity type of the set of predefined entity types. The predicted probability values may be used to determine whether a given span or spans is associated with a given entity type. The predicted spans can also be scored in some examples.
Various embodiments of the present technology generally relate to systems and methods for providing an analytics service governance system to provide data visualizations of system-level metadata. In certain embodiments, a method may comprise operating a data analytics system to implement an analytics service governance process to generate data visualizations (DVs) of system-level metadata using an analytics service configured to generate DVs of user-provided datasets. The analytics service governance process may include establishing a connection to an analytics system database containing the system-level metadata as a dataset, generating a governance dashboard providing access to pre-generated DV options based on selected metadata related to user-created objects in the analytics system database, receiving a user selection of object categories and filters from pre-generated DV options, and producing a governance DV based on the user selection depicting a relationship of objects at the system level from the object categories.
A system performs a set of cryptographic operations at least by utilizing an API to cause execution of a set of one or more secure element (SE) applications within the SE platform runtime environment of a first computing entity. The set of cryptographic operations includes generating a first shared secret, generating a ciphertext at least by encapsulating the first shared secret with a first public key associated with a second computing entity in accordance with an encapsulation algorithm, and transmitting the ciphertext from the first computing entity to the second computing entity. The second computing entity derives the first shared secret by decapsulating the ciphertext with a private key corresponding to the first public key. The first computing entity and the second computing entity then exchange at least one encrypted message, encrypted with an encryption key that includes, or is based at least in part on, the first shared secret.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
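The encapsulate/decapsulate flow in the abstract above can be illustrated with a toy key-encapsulation sketch. Diffie-Hellman over a modular group stands in here for the unspecified encapsulation algorithm; the parameters are chosen for brevity and are NOT cryptographically secure.

```python
import hashlib
import secrets

# Toy group parameters for illustration only (2**127 - 1 is prime).
P = 2**127 - 1
G = 3

def keygen():
    """Second computing entity's key pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)           # (private key, public key)

def encapsulate(pub):
    """First entity: derive a shared secret and a ciphertext to send."""
    r = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, r, P)              # transmitted to the other entity
    shared = hashlib.sha256(str(pow(pub, r, P)).encode()).digest()
    return ciphertext, shared

def decapsulate(priv, ciphertext):
    """Second entity: recover the same shared secret from the ciphertext."""
    return hashlib.sha256(str(pow(ciphertext, priv, P)).encode()).digest()

priv, pub = keygen()
ct, secret_a = encapsulate(pub)            # first computing entity
secret_b = decapsulate(priv, ct)           # second computing entity
print(secret_a == secret_b)                # -> True: both share the key
```

Both sides now hold the same 32-byte secret, which can serve as (or be used to derive) the encryption key for the subsequent message exchange.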
The present disclosure relates to improving the performance of evaluating path expressions on hierarchical data objects represented by binary encoded documents. An abstract syntax tree (AST) representing a path expression may be generated, wherein the AST comprises one or more syntax nodes implementing one or more respective execution steps of an evaluation of the path expression, and the path expression is included in a query to a database management system (DBMS). The AST may be modified based at least in part on profiling information and compiled into machine code. Using the machine code, the path expression may be executed on a binary-encoded hierarchical document.
Constraints may be generated in a target programming language from traces of operations specified in a source input. Operations specified in an input, such as a data structure like a Directed Acyclic Graph (DAG) or source programming language, that access data may be traced. Based on the traces, a code template may be generated in a target programming language or data structure to test the operations of the traces to determine which traces are valid in order to add a portion of code in the target programming language that allows external code to be run only on valid traces.
The present disclosure relates to a system and techniques for resolving dangling references resulting from a dependency relationship between computing resource objects uncovered during a harvesting process. The techniques include adding a computing resource object from a catalog of computing resource objects to a computing resource collection for a client and identifying one or more dependencies for the computing resource object. The techniques further include determining at least one unresolved dependency from the one or more dependencies, the at least one unresolved dependency including a second dependency on a second computing resource object outside of the computing resource collection. The techniques further include resolving the at least one unresolved dependency after the second computing resource object associated with the unresolved dependency has been added to the computing resource collection.
Techniques are described for failing over from a primary database to the replicated logical database of the primary database. Techniques are also described for preserving client-side object references when failing over to the logical replica database, for preserving AS OF and other queries, and for versioning of checksums, signatures, and structures across logical replicas.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A system performs a set of cryptographic operations at least by utilizing an API to cause execution of a set of one or more secure element (SE) applications within the SE platform runtime environment of a first computing entity. The set of cryptographic operations includes generating a first shared secret, generating a ciphertext at least by encapsulating the first shared secret with a first public key associated with a second computing entity in accordance with an encapsulation algorithm, and transmitting the ciphertext from the first computing entity to the second computing entity. The second computing entity derives the first shared secret by decapsulating the ciphertext with a private key corresponding to the first public key. The first computing entity and the second computing entity then exchange at least one encrypted message, encrypted with an encryption key that includes, or is based at least in part on, the first shared secret.
One or more embodiments perform a set of digital signature operations in a secure element (SE) platform runtime environment executing on a SE processor of a SE hardware device. A system initializes a signature generation object in an SE platform runtime environment. The system determines, via the signature generation object, a private key corresponding to a hash-based signature protocol. The system generates, via the signature generation object, a digital signature of a message digest by utilizing the private key to execute the hash-based signature protocol on the message digest. The system outputs the digital signature to a hardware device.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
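The abstract above does not name a specific hash-based signature protocol; as one concrete example of the kind of protocol such a signature generation object could execute, here is a minimal Lamport one-time signature sketch (illustrative only, and each key pair must sign at most one message):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def lamport_keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    priv = [[secrets.token_bytes(32), secrets.token_bytes(32)]
            for _ in range(256)]
    pub = [[H(a), H(b)] for a, b in priv]
    return priv, pub

def bits(digest):
    # The 256 bits of the message digest, one per preimage pair.
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(priv, message):
    digest = H(message)                      # message digest to be signed
    return [priv[i][bit] for i, bit in enumerate(bits(digest))]

def verify(pub, message, signature):
    digest = H(message)
    return all(H(sig) == pub[i][bit]
               for i, (bit, sig) in enumerate(zip(bits(digest), signature)))

priv, pub = lamport_keygen()
sig = sign(priv, b"message digest")
print(verify(pub, b"message digest", sig))   # -> True
print(verify(pub, b"tampered", sig))         # -> False
```

Practical hash-based schemes (e.g. LMS or XMSS) build many-time signatures on top of one-time constructions like this one.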
68.
AUTOMATED MIGRATION FROM A DOCUMENT DATABASE TO A RELATIONAL DATABASE
Techniques for automatically migrating documents from a document database to a relational database are provided. In one technique, it is determined whether a set of documents, from a document database system, can be stored in a relational database system. If so, one or more entities to be normalized are identified based on a hierarchical structure of the set of documents. One or more scripts are generated based on the identified one or more entities. In a related technique, a set of documents from a document database system is stored. It is validated that the set of documents can be converted to one or more duality views. Data of the set of documents is normalized for storing in a relational database system. A script is generated that, when executed, generates the one or more duality views.
One or more embodiments include operations associated with semantic classification of data columns. The operations may include receiving a set of data elements corresponding to a data column to be semantically classified, applying a machine learning model to the set of data elements to predict a set of candidate semantic types for the set of data elements, selecting a particular semantic type from the set of candidate semantic types based at least in part on a semantic fit score corresponding to the particular semantic type predicted by the machine learning model, and presenting the particular semantic type as a recommended semantic classification for the data column.
One or more embodiments perform a set of digital signature operations in a secure element (SE) platform runtime environment executing on a SE processor of a SE hardware device. A system initializes a signature generation object in an SE platform runtime environment. The system determines, via the signature generation object, a private key corresponding to a hash-based signature protocol. The system generates, via the signature generation object, a digital signature of a message digest by utilizing the private key to execute the hash-based signature protocol on the message digest. The system outputs the digital signature to a hardware device.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
71.
TECHNIQUES FOR COMPUTING PERFORMANCE METRICS FOR MULTIOUTPUT-MULTILABEL MACHINE LEARNING MODELS
The present disclosure relates to machine learning (ML) models, and more particularly to novel techniques for computing performance metrics for Multioutput-Multilabel ML models. Novel techniques are described for computing the performance metrics in a parallel and distributed manner, without having to store the entire dataset for which metrics are to be computed in the memory of a data processing system. Novel data structures are provided for performing the computations.
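One way to see why such metrics can be computed without holding the whole dataset in memory (a sketch of the general idea, not the disclosure's specific data structures): each shard of the data reduces to small per-label counts, and the counts merge associatively across workers.

```python
from collections import Counter

def partial_counts(y_true, y_pred):
    """Per-shard counts; y_true / y_pred are lists of label sets."""
    c = Counter()
    for truth, pred in zip(y_true, y_pred):
        for label in truth & pred:
            c[(label, "tp")] += 1          # true positive
        for label in pred - truth:
            c[(label, "fp")] += 1          # false positive
        for label in truth - pred:
            c[(label, "fn")] += 1          # false negative
    return c

def merge(a, b):
    # Associative merge: shards combine in any order, on any worker.
    return a + b

def precision(counts, label):
    tp, fp = counts[(label, "tp")], counts[(label, "fp")]
    return tp / (tp + fp) if tp + fp else 0.0

shard1 = partial_counts([{"cat"}, {"dog"}], [{"cat"}, {"cat"}])
shard2 = partial_counts([{"cat", "dog"}], [{"cat"}])
total = merge(shard1, shard2)
print(round(precision(total, "cat"), 3))  # -> 0.667
```

Only the counters (a few integers per label) ever cross worker boundaries, regardless of how large the dataset is.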
Read, write, and array tracking may be performed for a tree representing a program or a portion of a program. A tree may be obtained from a compiler or an interpreter. The tree may be traversed to generate a map data structure in a single pass, constructing a chain of dependencies between results observed at each node visited as part of traversing the tree.
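A single-pass dependency-chaining traversal of this kind can be sketched as follows. The `Node` shape and field names are hypothetical; a real compiler or interpreter tree would differ, and read/write sets would be derived from the program rather than supplied by hand.

```python
# Hypothetical tree node for illustration; not the compiler's actual format.
class Node:
    def __init__(self, name, reads=(), writes=(), children=()):
        self.name, self.reads = name, reads
        self.writes, self.children = writes, children

def build_dependency_map(root):
    """Single pass over the tree: map each node to the nodes whose writes
    its reads depend on, chaining dependencies as the traversal proceeds."""
    deps, last_writer = {}, {}
    def visit(node):
        # A read depends on the most recent writer of the same variable.
        deps[node.name] = [last_writer[r] for r in node.reads if r in last_writer]
        for w in node.writes:
            last_writer[w] = node.name
        for child in node.children:
            visit(child)
    visit(root)
    return deps

tree = Node("a", writes=("x",), children=(
    Node("b", reads=("x",), writes=("y",)),
    Node("c", reads=("x", "y")),
))
print(build_dependency_map(tree))  # {'a': [], 'b': ['a'], 'c': ['a', 'b']}
```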
Templates may be generated for exploring models with discrete random variable distributions. Models may be received at a compiler as a description of, or translated to, a Directed Acyclic Graph (DAG) that represents a probabilistic model including one or more random variables. Traces starting from the one or more random variables and leading to a node in the DAG may be generated and converted into a set of groups of traces. Individual traces may be sorted in the groups of traces according to dependencies between traces in the groups of traces, and code templates may be generated that explore a state space of possible sample values drawn from the one or more random variables.
A hardware-assisted Distributed Memory System may include software configurable shared memory regions in the local memory of each of multiple processor cores. Accesses to these shared memory regions may be made through a network of on-chip atomic transaction engine (ATE) instances, one per core, over a private interconnect matrix that connects them together. For example, each ATE instance may issue Remote Procedure Calls (RPCs), with or without responses, to an ATE instance associated with a remote processor core in order to perform operations that target memory locations controlled by the remote processor core. Each ATE instance may process RPCs (atomically) that are received from other ATE instances or that are generated locally. For some operation types, an ATE instance may execute the operations identified in the RPCs itself using dedicated hardware. For other operation types, the ATE instance may interrupt its local processor core to perform the operations.
Techniques are described herein for performing thread-local garbage collection. The techniques include automatic profiling and separation of private and shared objects, allowing for efficient reclamation of memory local to threads. In some embodiments, threads are assigned speculatively-private heaps within memory. Unless there is a prior indication that an allocation site yields shared objects, a garbage collection system may assume and operate as if such allocations are private until proven otherwise. Object allocations in a private heap may violate the speculative state of the heap when reachable outside of the thread. When violations to the speculative state are detected, an indication may be generated to notify the garbage collection system, which may prevent thread-local memory reclamation operations until the speculative state is restored. The garbage collection system may learn from the violations to reduce the allocation of invalidly private objects and increase the efficiency of the garbage collection system.
Techniques for automatically migrating documents from a document database to a relational database are provided. In one technique, it is determined whether a set of documents, from a document database system, can be stored in a relational database system. If so, one or more entities to be normalized are identified based on a hierarchical structure of the set of documents. One or more scripts are generated based on the identified one or more entities. In a related technique, a set of documents from a document database system is stored. It is validated that the set of documents can be converted to one or more duality views. Data of the set of documents is normalized for storing in a relational database system. A script is generated that, when executed, generates the one or more duality views.
One or more embodiments initialize a signature validation object in a secure element (SE) platform runtime environment and utilize the signature validation object to perform signature validation operations. A system accesses an authentication path that includes a set of hash values corresponding to a set of nodes of a tree structure associated with a hash-based signature protocol utilized to generate a digital signature. The system computes a root hash value corresponding to a root node of the tree structure based on the set of hash values of the authentication path. The system verifies the root hash value against a root public key associated with the digital signature. The system determines that the digital signature is valid responsive at least in part to successfully verifying the root hash value against the root public key.
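The root-hash computation over an authentication path described above follows the standard Merkle-tree pattern, sketched below. This is a generic illustration, not the patented SE implementation; SHA-256, the left/right ordering rule, and all names are assumptions.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def compute_root(leaf_hash: bytes, leaf_index: int, auth_path: list) -> bytes:
    """Recompute the Merkle root from a leaf hash and its authentication
    path (the sibling hash at each tree level). The low bit of the index
    at each level decides whether the current node is a left or right child."""
    node = leaf_hash
    for sibling in auth_path:
        if leaf_index & 1:               # current node is a right child
            node = sha256(sibling + node)
        else:                            # current node is a left child
            node = sha256(node + sibling)
        leaf_index >>= 1
    return node

def verify(leaf_hash, leaf_index, auth_path, root_public_key):
    """Signature-side check: recomputed root must match the root public key."""
    return compute_root(leaf_hash, leaf_index, auth_path) == root_public_key

# Tiny 4-leaf tree to exercise the check
leaves = [sha256(bytes([i])) for i in range(4)]
n01 = sha256(leaves[0] + leaves[1])
n23 = sha256(leaves[2] + leaves[3])
root = sha256(n01 + n23)
print(verify(leaves[2], 2, [leaves[3], n01], root))  # True
```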
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
78.
STREAMING CHANGE LOGS FROM STREAMING DATABASE TABLES
As with database tables in general, a streaming table and its columns are defined within a database. DML commands are issued against a streaming table that specify to insert rows and values into their defined columns. Change records in a change log (e.g., redo log) are generated to record the changes specified by DML commands. The change log may be used to stream data captured by the change log. However, unlike with database tables in general, the rows of a streaming table are not persistently stored in the database at commit time. Alternatively, rows are stored persistently but without values for columns defined as streaming columns. Columns defined as streaming columns may be set to values specified by DML commands but are not stored in the database; streaming column values are captured in change logs, however.
A histogram-augmented dynamic sampling approach is provided for determining cardinality of a two-table join. The approach has a pre-processing phase in which data structures are created that will be used during a compilation phase for cardinality estimation. These data structures include a row histogram and a key histogram, which are created for selected columns of a first table. A cardinality estimation phase uses the data structures to estimate the cardinality of various joins at the time of query compilation. In this phase, the system executes queries that join the histograms with a second table, to perform the cardinality estimation.
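The core of joining per-key histograms to estimate two-table join cardinality can be sketched as below. This is a simplification for illustration only: it uses exact per-key counts on both sides, whereas the patented approach builds row and key histograms in a pre-processing phase and joins them against the second table at compile time.

```python
# Illustrative sketch: join cardinality from per-key row counts.
# For an equi-join, each key contributes count_A(key) * count_B(key) rows.
from collections import Counter

def estimate_join_cardinality(keys_a, keys_b):
    hist_a, hist_b = Counter(keys_a), Counter(keys_b)
    return sum(cnt * hist_b[k] for k, cnt in hist_a.items())

# Key 1 matches 2*1 rows, key 2 matches 1*2, keys 3 and 4 match nothing.
print(estimate_join_cardinality([1, 1, 2, 3], [1, 2, 2, 4]))  # 4
```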
An estimator is provided that can be used to get an estimate of final graph size and peak memory usage of the graph during loading, based on sampling of the graph data and using machine learning (ML) techniques. A data sampler samples the data from files or databases and estimates some statistics about the final graph. The sampler also samples some information about property data. Given the sampled statistics gathered and estimated by the data sampler, a graph size estimator estimates how much memory is required by the graph processing engine to load the graph. The final graph size represents how much memory will be used to keep the final graph structures in memory once loading is completed. The peak memory usage represents the memory usage upper bound that is reached by the graph processing engine during loading.
Systems, methods, and other embodiments associated with tracking execution paths in dynamic systems are described. In one embodiment, a method includes, for a first set of input values, executing a first execution of dynamic functions to generate a first final result, wherein calculation attributes have a first set of values during run-time. In response to a change, the dynamic functions are re-executed in a second execution to generate a second final result. During the second execution, the dynamic functions that were executed are tracked and an execution path of the second execution is determined. The method determines, from the functions that were executed, how calculation expressions are calculated during the second execution. The execution path is visually displayed to show a sequence of executed dynamic functions and/or a summary of tracking results that visually identifies attribute value changes between the first execution and the second execution.
Systems, methods, and other embodiments associated with high-performance arctangent computation at arbitrarily high precision are described. In one embodiment, an example method brackets an angle to a working range of an arctangent approximation polynomial. A closest index of a lookup table to the bracketed angle is determined. An angle shift for the bracketed angle is generated that is configured to move the bracketed angle to a high-precision segment of the range segments. A shifted angle is generated based on the bracketed angle and the angle shift. The arctangent approximation polynomial is evaluated at the shifted angle to produce an estimated arctangent of the shifted angle. A pre-computed arctangent corresponding to the closest index in the lookup table is retrieved from the lookup table in proximate memory. An augmented-precision arctangent is then generated from the estimated arctangent and the pre-computed arctangent.
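The table-plus-polynomial structure described above can be sketched with the standard arctangent addition identity, atan(x) = atan(c) + atan((x - c) / (1 + x·c)): look up a precomputed atan(c) at the closest grid point, shift the argument into a tiny high-precision residual, and evaluate a short polynomial there. The step size, table layout, and names below are illustrative assumptions, not the patented design.

```python
import math

STEP = 1.0 / 64
# Precomputed arctangents at grid points covering the working range [0, 1]
TABLE = [math.atan(i * STEP) for i in range(65)]

def atan_poly(t):
    # Short Taylor polynomial, accurate for the small shifted residual
    return t - t**3 / 3 + t**5 / 5

def atan_augmented(x):
    """Arctangent on [0, 1]: table lookup plus small-angle polynomial."""
    idx = min(64, int(round(x / STEP)))      # closest index in the table
    c = idx * STEP
    residual = (x - c) / (1.0 + x * c)       # shifted (reduced) argument
    return TABLE[idx] + atan_poly(residual)  # augmented-precision result
```

With a half-step residual below 1/128, the omitted t^7/7 Taylor term is on the order of 1e-16, so the sketch already tracks `math.atan` to near machine precision on the working range.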
An approach is provided for a thorough and clean way of handling graph overflows in graph execution on single instruction, multiple threads (SIMT) hardware with resumable graph support. The solution does not assume that the input and output fit in the buffers allocated in the SIMT hardware. The approach maintains state of the execution for each kernel and uses multiple iterations of graph execution, making progress in each iteration until all data items are processed through the graph on SIMT hardware. This iterative processing of the graph is transparent to the end user. For resumability, the approach treats buffers as circular buffers instead of serial buffers. With the help of counters, the approach keeps track of the start and end indexes of input and output buffers, thus achieving seamless graph resumability when re-execution is required for only a subset of kernels.
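The circular-buffer bookkeeping with start/end counters can be sketched as follows. This is an illustrative host-side model only; the patented approach manages device buffers on SIMT hardware, and the class and method names here are assumptions.

```python
# Minimal sketch of a counter-tracked circular buffer: monotonically
# increasing start/end counters, indices taken modulo capacity.
class CircularBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.start = 0    # counter of next item to consume
        self.end = 0      # counter of next free slot
        self.count = 0

    def push(self, item):
        if self.count == self.capacity:
            return False  # full: caller must run another graph iteration
        self.buf[self.end % self.capacity] = item
        self.end += 1
        self.count += 1
        return True

    def pop(self):
        item = self.buf[self.start % self.capacity]
        self.start += 1
        self.count -= 1
        return item
```

A driver loop would push inputs until `push` reports the buffer full, execute the kernel to drain items via `pop`, and resume pushing; the persistent counters are what make re-execution of only a subset of kernels seamless.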
The illustrative embodiments provide a framework for executing data-centric workflows within the database session, thus achieving workflow execution with no additional process creation, virtualization, or network overhead. The user-provided serverless functions that comprise the application workflow are compiled into native-image binary executables and deployed as stored procedures. Using native-image stored procedures greatly reduces memory and execution time overhead compared to traditional serverless computing frameworks. An application workflow is configured using workflow metadata, which specifies the native-image stored procedures that make up the application workflow and transaction boundaries at runtime. The framework ensures low overhead fault tolerance using run-to-completion and exactly-once semantics.
Techniques are described for applying topological graph changes and traversing the modified graph. In an implementation, a set of compile processes schedules the graph changes caused by a DML (Data Manipulation Language) statement. Based on the requested graph operation in a received query for a graph, a set of graph operation processes generates extensions to the graph that capture the changes to the graph by the DML. The received graph operation(s) are then performed by traversing both the existing graph and the generated extensions.
Systems and methods for providing autonomous user-directed insights and recommendations are provided herein. For example, a system includes a non-transitory computer-readable medium and a processor communicatively coupled to the non-transitory computer-readable medium. The processor is configured to execute processor-executable instructions to determine, by an insight engine, first usage tracking information associated with a first client device and generate, by the insight engine, a user-directed insight based on the first usage tracking information associated with the first client device. The user-directed insight includes a natural language insight. The processor is also configured to execute processor-executable instructions to generate, by a recommendation engine, recommendations based on the user-directed insight and the first usage tracking information, where each of the recommendations includes a recommendation response and one of a recommendation for a dashboard profile corresponding to the user-directed insight or a recommendation for creating a dashboard corresponding to the user-directed insight.
The technology disclosed herein enables continuity of a call recording when a recording system restarts. In a particular example, a method includes establishing a recording session between a session recording client and a session recording system. The recording session is associated with a communication session being recorded. The method also includes periodically transmitting requests for counter values from the session recording system. The counter value indicates a number of times the session recording system has restarted. The method further includes receiving a first counter value of the counter values from the session recording system and receiving subsequent counter values of the counter values from the session recording system after receiving the first counter value. In response to determining that one of the subsequent counter values does not match the first counter value, the method includes ending the recording session.
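The restart-detection check at the heart of the method above is simple to sketch. The class and field names are hypothetical; a real session recording client would also re-establish the session after ending it.

```python
# Illustrative sketch: end the recording session when a polled counter
# value no longer matches the value observed when the session started.
class RecordingSession:
    def __init__(self, first_counter):
        self.first_counter = first_counter  # counter at session start
        self.active = True

    def on_counter_response(self, counter):
        # A changed counter means the session recording system restarted,
        # so this recording session must end (and be re-established).
        if counter != self.first_counter:
            self.active = False
        return self.active
```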
A computer obtains multipliers of a sensitive feature. From an input that contains a value of the feature, a probability of a class is inferred. Based on the value of the feature in the input, one of the multipliers of the feature is selected. The multiplier is specific to both of the feature and the value of the feature. The input is classified based on a multiplicative product of the probability of the class and the multiplier that is specific to both of the feature and the value of the feature. In an embodiment, a black-box tri-objective optimizer generates multipliers on a three-way Pareto frontier from which a user may interactively select a combination of multipliers that provides a best three-way tradeoff between fairness and accuracy. The optimizer has three objectives to respectively optimize three distinct validation metrics that may, for example, be accuracy, fairness, and favorable outcome rate decrease.
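The classification step, a class probability scaled by a value-specific multiplier, can be sketched as below. The multiplier values, threshold, and names are invented for illustration; in the described embodiment the multipliers would come from the tri-objective optimizer, not be hand-picked.

```python
# Illustrative sketch: post-process the model's probability of the
# favorable class with a multiplier keyed to the sensitive feature's value.
def classify(prob_favorable, feature_value, multipliers, threshold=0.5):
    """multipliers: dict mapping a sensitive-feature value -> multiplier."""
    adjusted = prob_favorable * multipliers[feature_value]
    return adjusted >= threshold

# Hypothetical multipliers for two values of one sensitive feature
multipliers = {"group_a": 1.2, "group_b": 0.9}
print(classify(0.45, "group_a", multipliers))  # 0.54 >= 0.5 -> True
print(classify(0.45, "group_b", multipliers))  # 0.405 < 0.5 -> False
```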
Techniques for implementing and enforcing a security policy in a secure element are disclosed. The secure element enforces the security policy to grant and/or deny access, such as from an application processor, to configuration of the device peripheral components and access to data of the device peripheral components across one or more bus architectures, such as an I3C bus. Implementing an access control policy in a secure element allows execution of code within the isolated secure element hardware processor, preventing software attacks that may emanate from code running in the application processor. This design also benefits from hardware protections against physical attacks.
A sampling approach is provided for time-window based multi-stage sampling. The sampling approach can determine whether received communications are of a stratum that is rare and determine a sampling mechanism for the communication based on whether the stratum is rare. The sampling system defines multiple time windows for sampling communications received by a computing system. The time windows are segmented into multiple time intervals. A portion of the multiple time intervals are randomly selected for sampling. A portion of the communications received during the selected time intervals are captured for security assurance purposes.
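The interval-selection and capture steps can be sketched as follows. This is an illustrative model only: events are plain timestamps, the window/interval parameters are made up, and the rare-stratum handling described above is omitted.

```python
import random

# Illustrative sketch: segment a time window into intervals, randomly
# select a subset, and capture only events that fall inside them.
def select_intervals(window_start, window_len, n_intervals, n_selected, rng):
    interval_len = window_len / n_intervals
    chosen = rng.sample(range(n_intervals), n_selected)
    return [(window_start + i * interval_len,
             window_start + (i + 1) * interval_len) for i in sorted(chosen)]

def capture(event_times, intervals):
    return [t for t in event_times if any(lo <= t < hi for lo, hi in intervals)]

rng = random.Random(42)            # seeded for reproducibility
intervals = select_intervals(0.0, 60.0, n_intervals=6, n_selected=2, rng=rng)
print(capture([1.0, 12.0, 25.0, 47.0, 59.0], intervals))
```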
A computer obtains multipliers of a sensitive feature. From an input that contains a value of the feature, a probability of a class is inferred. Based on the value of the feature in the input, one of the multipliers of the feature is selected. The multiplier is specific to both of the feature and the value of the feature. The input is classified based on a multiplicative product of the probability of the class and the multiplier that is specific to both of the feature and the value of the feature. In an embodiment, a black-box tri-objective optimizer generates multipliers on a three-way Pareto frontier from which a user may interactively select a combination of multipliers that provides a best three-way tradeoff between fairness and accuracy. The optimizer has three objectives to respectively optimize three distinct validation metrics that may, for example, be accuracy, fairness, and favorable outcome rate decrease.
A method may include generating a first cloud network associated with a first security level and including data associated with a service. The method may include generating a second cloud network associated with the first security level and deploying the service and the data associated with the service to the second cloud network and generating a first ingress channel to permit data to be transmitted to the second cloud network. Restricted data associated with a tenant may be deployed to the second cloud network. The method may include generating a third cloud network associated with the first security level and including the service and the data associated with the service and generating a second ingress channel to permit data to be transmitted to the third cloud network. A data sync may be implemented between the second and third cloud networks to deploy the restricted data to the third cloud network.
Systems and methods for providing autonomous user-directed insights and recommendations are provided herein. For example, a system includes a computer-readable medium and a processor communicatively coupled to the computer-readable medium. The processor is configured to execute processor-executable instructions to determine, by an insight engine, first usage tracking information associated with a first client device and generate, by the insight engine, a user-directed insight based on the first usage tracking information associated with the first client device. The user-directed insight includes a natural language insight. The processor is also configured to execute processor-executable instructions to generate, by a recommendation engine, recommendations based on the user-directed insight and the first usage tracking information, where each of the recommendations includes a recommendation response and one of a recommendation for a dashboard profile corresponding to the user-directed insight or a recommendation for creating a dashboard corresponding to the user-directed insight.
The illustrative embodiments provide techniques that utilize graph topology information to partition work according to ranges of vertices so that each unit of work can be computed independently by different worker processes (inter-process parallelism). The illustrative embodiments also provide an approach for decomposing the graph neighbor matching operations and the property projection operation into fine-grained configurable size tasks that can be processed independently by threads (intra-process parallelism) without the need for expensive synchronization primitives. For graph neighbor matching operations, a given set of source vertices is split into smaller tasks that are assigned to dedicated threads for processing. Each thread is responsible for computing a number of matching source vertices and propagating them to the next graph match operator for further processing. For property projection operations, the computed graph paths are organized into rows that contain the requested properties for each element of the path (vertices and/or edges).
Techniques are described for applying topological graph changes and traversing the modified graph. In an implementation, a set of compile processes schedules the graph changes caused by a DML (Data Manipulation Language) statement. Based on the requested graph operation in a received query for a graph, a set of graph operation processes generates extensions to the graph that capture the changes to the graph by the DML. The received graph operation(s) are then performed by traversing both the existing graph and the generated extensions.
Operations of a system may include executing a provisioning process that includes provisioning a network entity with a digital certificate for use in a stateless validation protocol. After provisioning the network entity with the digital certificate, the system may receive a credential request from the network entity that includes the digital certificate and a request for an access credential for accessing a cloud resource. In response to the credential request, the system may execute an access-authorization process with respect to the network entity, including authenticating the digital certificate in accordance with the stateless validation protocol. Upon determining that the network entity is authorized to receive an access credential, the system may provision the network entity with the access credential. The network entity may then use the access credential to access the cloud resource.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
97.
GENERATION OF SYNTHETIC DOCTOR-PATIENT CONVERSATIONS
Knowledge graph guided and entity controlled techniques for generating synthetic doctor-patient conversations. In one particular aspect, a method is provided that includes obtaining an original dataset containing textual dialogue associated with a plurality of individual doctor-patient conversations for training a machine learning model, constructing input data by using named entity recognition to capture and categorize named medical entities present in the dialogue, generating prepared input data by arranging the input data in an annotated turn-by-turn conversation format using an input data preparation algorithm having various control parameters, training the machine learning model using the prepared input data, utilizing a knowledge graph to identify a plurality of symptoms mapped to a randomly selected disease, and causing the trained machine learning model to generate a synthetic doctor-patient conversation by inputting the plurality of symptoms to the machine learning model as a first control parameter of a conversation generation control algorithm.
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
G06F 40/169 - Annotation, e.g. comment data or footnotes
Techniques for extensions of graphical user interfaces (GUIs) are disclosed. The system executes an application that displays a GUI. The system selects one or more interface elements for displaying within the GUI at runtime while executing the application. The system identifies a primary data type corresponding to content that is to be displayed or currently being displayed by the GUI. The system determines that the primary data type is mapped to a first target data type. Responsive to determining that the primary data type is mapped to the first target data type, the system identifies a first function associated with the first target data type. The system generates a first interface element for initiating execution of the first function associated with the first target data type. The system displays the first interface element concurrently with a display of the content within the GUI.
A build system is disclosed that identifies the inputs used by a build process for securely building and deploying a piece of software to production. The build system comprises a build container and a build proxy server. The build container receives a set of initial inputs for performing a build and generates a build output (e.g., a target artifact) as a consequence of performing the build. The build proxy server monitors both internal interactions as well as external interactions (e.g., input dependency fetches from external artifact repositories) of the build container within and outside a network boundary defined around the build container. Based on the monitored interactions, the build proxy server identifies all the additional input components and/or input component dependencies used by the build container for successfully performing the build. The build container uses the identified components to perform the build and generate a target artifact.
A system includes a host network entity associated with a computing network. The host network entity may establish a first connection with a client network entity via a provisioner account in response to a connection request from a client network entity. The host network entity may receive a digital certificate from the client network entity via the first connection. The digital certificate may include an instruction set with a first instruction to generate an operator account for the client network entity. The host network entity may perform a validation of the digital certificate and the instruction set based on a public key associated with a certificate authority that is trusted by the host network entity, and responsive to the validation, the host network entity may generate the operator account based on the first instruction and establish a second connection with the client network entity via the operator account.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system