Apparatus and method for managing cross-domain connectivity, interoperability, and authentication. For example, some implementations rely on an external credential manager which securely stores user credentials for each respective service domain for which integration is to be performed. An introspection service searches an existing codebase to locate connector code that can be used for the integration, which is presented as an option to the user within an integration development application. The connector code includes placeholders which indicate the credentials to request from the credential management service. The credentials are never provided directly to the integration platform, which instead generates a mapping between the placeholders and unique identifiers which identify the corresponding credentials. The mapping and the associated connector code are provided to the integration, which uses the mapping to inject the credentials into the connector code in accordance with the placeholders. The credentials are used for authenticating with the service domains.
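A minimal Python sketch of the placeholder-to-credential flow this abstract describes; the class and function names, the `{{PLACEHOLDER}}` syntax, and the UUID scheme are illustrative assumptions, not details from the patent:

```python
import re
import uuid

class CredentialManager:
    """External store that holds the real secrets, keyed by opaque IDs.
    The integration platform never sees the secret values themselves."""
    def __init__(self):
        self._secrets = {}

    def register(self, secret):
        cred_id = str(uuid.uuid4())
        self._secrets[cred_id] = secret
        return cred_id

    def fetch(self, cred_id):
        return self._secrets[cred_id]

def build_mapping(connector_code, placeholder_to_cred_id):
    """Platform side: map each placeholder found in the connector code
    to the unique identifier of the corresponding credential."""
    placeholders = set(re.findall(r"\{\{(\w+)\}\}", connector_code))
    return {p: placeholder_to_cred_id[p] for p in placeholders}

def inject(connector_code, mapping, manager):
    """Integration side: resolve IDs to secrets and substitute them
    into the connector code in accordance with the placeholders."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: manager.fetch(mapping[m.group(1)]),
                  connector_code)
```

For example, registering a secret yields an opaque ID, the platform builds a mapping from placeholders to IDs, and only at injection time are the secrets substituted in.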
Disclosed herein are mobile device, method, and computer program product embodiments for an improved integrated mobile AI assistant. The mobile device may launch a mobile application including an integration component, where the integration component is in communication, through the mobile application, with a data service and a user interface (UI) service. The integration component may receive a response including data and a data type from the data service generated by a large language model responsive to a natural language query. The integration component may customize an interface at the integration component using a rendering configuration received from the UI service to display the data, the rendering configuration generated by decomposing the data type into a predefined type.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
3.
SYSTEMS AND METHODS FOR TRAINING AND EVALUATING MULTIMODAL NEURAL NETWORK BASED LANGUAGE MODELS
Embodiments described herein provide a method of building an artificial intelligence (AI) agent to respond to a task request from a user. The method includes: receiving a set of single-modal data samples of a plurality of modalities; selecting a first single-modal data sample of a first modality and a second single-modal data sample of a second modality; generating a question associated with the first single-modal data sample and the second single-modal data sample; generating an answer, with reasoning, to the question based on a second input prompt; training a second neural network based language model using a dataset comprising the question and the answer to generate a candidate answer in response to a training query; building the AI conversation bot through an application programming interface to the trained second neural network language model; and generating, using the AI conversation bot, a response to the task request.
Fine-tuning AI models is described. According to some aspects, one of a number of pre-trained AI models is selected based on the explicit input and the implicit input. In addition, one of a number of fine-tuning methods is selected. Also, a set of one or more of a plurality of categories is selected, where a categorized data set associated with an organization was classified into the categories using a classifier, and where the selected set of categories identify a selected subset of the categorized data set. A version of the selected subset is used to fine-tune the selected AI model using the selected fine-tuning method.
Embodiments described herein provide a method of fine-tuning a neural network based model. In some embodiments, a system receives, via a data interface, a training dataset including a plurality of input samples. The system generates, via a pre-trained neural network based model, a first response based on a first input sample of the plurality of input samples, and a second response based on the first input sample. The system generates, via a trained reward model, a first reward score based on the first input sample and the first response, and a second reward score based on the first input sample and the second response. The system computes a loss function based on the first input sample, the first response, the second response, the first reward score, and the second reward score. The system updates parameters of the neural network based model based on the loss function.
Disclosed are some implementations of systems, apparatus, methods and computer program products for synchronizing data. A source device processes an update to data in a database. The source device transmits, via a message bus, a first event message pertaining to the update, the first event message having an associated indicator. A target device accessing the message bus detects the indicator. Responsive to detecting the indicator, the target device skips the first event message on the message bus and identifies a snapshot link in a second event message subsequent to the first event message. The target device accesses a snapshot event identified by the snapshot link, stores data of the snapshot event, and processes one or more event messages subsequent to the snapshot event.
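An illustrative sketch of the target-device synchronization flow described in this abstract; the message fields (`skip`, `snapshot_link`, `data`) and the dict-based bus are assumptions chosen for clarity, not from the patent:

```python
def sync_from_bus(messages, snapshots):
    """Replay event messages from the bus. When a message carries the
    skip indicator, skip it, follow the snapshot link in the next
    message, store the snapshot's data, and resume with subsequent
    event messages."""
    state = []
    i = 0
    while i < len(messages):
        msg = messages[i]
        if msg.get("skip"):
            # Skip this event; the following message names a snapshot.
            link = messages[i + 1]["snapshot_link"]
            state = list(snapshots[link])  # store the snapshot's data
            i += 2                         # resume after the snapshot event
            continue
        state.append(msg["data"])
        i += 1
    return state
```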
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Methods, systems, apparatuses, devices, and computer program products are described. A processing device may support a large language model (LLM) for automatically improving pull requests to a codebase. To use the LLM, the processing device may create and maintain a vector space tracking information relating to historical pull requests to the codebase. The processing device may receive a new pull request indicating a change to code in the codebase and may determine, from the vector space, a vector corresponding to a code chunk affected by the pull request. The processing device may send, as an input to the LLM, a prompt including the code chunk affected by the pull request and one or more comments from a set of historical comments relating to the code chunk and indicated by the determined vector. The processing device may modify the pull request based on the one or more comments.
Techniques are disclosed relating to database query optimizers. In some embodiments, a system receives, from a query optimizer, a plurality of query plans for a database maintained by the database system. The system retrieves a set of database statistics for the database and generates, via a data synthesizer, a plurality of synthetic datasets, where generating a given synthetic dataset is performed based on a given query plan of the plurality of query plans and the set of database statistics, and includes generating a plurality of synthetic data tuples. The system executes the plurality of query plans on the plurality of synthetic datasets and updates the query optimizer based on results of executing the plurality of query plans on the plurality of synthetic datasets. The disclosed data synthesis may advantageously improve query performance due to more efficient query plans being selected for execution of requested queries.
Techniques for generating and displaying a summary of multiple virtual spaces are discussed herein. A communication platform may determine whether to generate a summary of the content posted across a set of virtual spaces. For instance, the communication platform can identify a set of virtual spaces that the user is a member of. For a pre-determined period, the communication platform can determine a first number of content items posted to the set of virtual spaces. The communication platform may also determine, over the same period, a second number indicating the number of the content items the user has yet to view. Based on the second number meeting or exceeding a threshold, the communication platform may generate a summary for the user. Accordingly, the communication platform may generate a summary of the content posted to the set of virtual spaces and display the summary via a user interface of the user.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
11.
Applied Artificial Intelligence Technology for Narrative Generation Based on Explanation Communication Goals
Artificial intelligence (AI) technology can be used in combination with composable communication goal statements to facilitate a user's ability to quickly structure story outlines using “explanation” communication goals in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired explanation communication goal such that the narratives will express various ideas that are deemed relevant to a given explanation communication goal.
A policy-based approach to execution of commands in a distributed environment involves applying policies to determine permissions for executing commands. In some implementations, a user inputs a command at a web portal, causing a request to be sent to a computer system. The web portal also sends an indication of one or more machine components of a remote system to which the command is to be applied. After identifying a policy associated with the user, the computer system evaluates a rule in the policy to determine whether the user is permitted to execute the command with respect to the one or more machine components. The computer system routes the command to the remote system for execution based on determining that the rule is satisfied. This enables the command to be executed without providing the user with direct or unrestricted access to the remote system.
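A minimal sketch of the policy evaluation and routing step described in this abstract; the rule schema (allowed commands and components per user) and function names are assumptions for illustration:

```python
def is_permitted(policy, user, command, components):
    """Evaluate the user's policy rules against the requested command
    and the target machine components of the remote system."""
    for rule in policy.get(user, []):
        if command in rule["commands"] and set(components) <= set(rule["components"]):
            return True
    return False

def route_command(policy, user, command, components, executor):
    """Route the command for execution only when a rule is satisfied,
    so the user never needs direct access to the remote system."""
    if not is_permitted(policy, user, command, components):
        raise PermissionError(f"{user} may not run {command!r}")
    return executor(command, components)
```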
Embodiments described herein provide a video generation framework built on a decoupled multimodal cross-attention module to simultaneously condition the generation on both an input image and a text input. The video generation may thus be conditioned on the visual appearance of a target object reflected in the input image. In this way, zero-shot video generation may be achieved with little fine-tuning effort.
A method for avoiding exposure of sensitive data in a web application running in a browser due to rendering content generated by a generative artificial intelligence platform (“GenAI content”). A Web Application Firewall or Gateway (“WAF”) receives, from a web application, a first response to forward to the browser. The WAF modifies the first response by inserting detection code that causes the browser to: obscure a rendered version of the GenAI content in a first browser window, render the GenAI content in a second browser window that is not visible, convert the rendered GenAI content in the second browser window to an image, obtain an assessment from a sensitive data scanning engine on whether the image contains sensitive data, and based on the assessment, determine whether to unobscure the rendered version of the GenAI content in the first browser window. The WAF sends the modified response to the browser.
A method of generating a code output in response to a natural language problem description. The method includes: receiving the natural language problem description; generating, by a neural network based language model, a first candidate code snippet based on a first input prompt combining the natural language problem description and a first instruction; executing, at a code execution environment, the first candidate code snippet based on a unit test thereby producing a first feedback reflecting a correctness of the first candidate code snippet; generating, by the neural network based language model, a second candidate code snippet based on a second input prompt combining the natural language problem description, the first candidate code snippet, and the first feedback; and executing, at the code execution environment, the second candidate code snippet based on a runtime test thereby producing a second feedback reflecting a runtime efficiency of the second candidate code snippet.
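A hedged sketch of the two-round generate/execute/refine loop this abstract describes; `model` is a stand-in callable rather than a real language model, and the prompt strings are illustrative:

```python
def generate_code(model, problem, unit_test, runtime_test):
    """Round 1: draft a candidate from the problem description, then
    execute the unit test to get correctness feedback. Round 2: refine
    using the first candidate and its feedback, then execute the
    runtime test to get efficiency feedback."""
    first = model(prompt=f"{problem}\nWrite a solution.")
    correctness = unit_test(first)

    second = model(prompt=f"{problem}\nPrevious: {first}\nFeedback: {correctness}")
    efficiency = runtime_test(second)
    return second, correctness, efficiency
```

In a real system the two tests would run the candidate snippets in a sandboxed code execution environment; here they are plain callables.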
Database trigger firing techniques for reducing unnecessary trigger firings are disclosed. In one embodiment, a computer system stores trigger information relating to initiating execution of at least one trigger instruction for a database in connection with a database operation statement. The trigger information includes a first set of one or more database field identifiers for a first set of one or more fields in the database and a second set of one or more database field identifiers for a second set of one or more fields in the database. The computer system receives a database operation statement and makes determinations that at least one field within the first set of fields and at least one field within the second set of fields are specified by the database operation statement. Based at least in part on the determinations, the computer system initiates execution of the at least one trigger instruction.
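The firing condition above reduces to a simple set-intersection check, sketched here under the assumption that trigger information is a pair of field sets and a statement is represented by the fields it touches:

```python
def should_fire(trigger_info, statement_fields):
    """Fire the trigger only when the statement specifies at least one
    field from the first configured set AND at least one from the
    second; otherwise the firing is unnecessary and skipped."""
    first_set, second_set = trigger_info
    touched = set(statement_fields)
    return bool(touched & set(first_set)) and bool(touched & set(second_set))
```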
The disclosed techniques for generating a migration plan include identifying one or more entities that are eligible for data migration to a destination database from a source database. The techniques include generating, using planning procedures that include a workload balancing procedure, a data migration plan for the eligible entities and executing the migration plan. The workload balancing procedure includes mapping, based on data metric values of the eligible entities, different ones of the eligible entities to instances in the destination database, where the mapping is performed based on utilization metric values of the instances, and where the instances are of a storage service that collectively implements the destination database. The workload balancing procedure further includes altering the mappings of entities to instances in the destination database, where the remapping is performed when a standard deviation of data for entities mapped to instances in the destination database does not meet a threshold standard deviation.
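One plausible reading of the standard-deviation-driven rebalancing step, sketched below; the greedy hottest-to-coldest move strategy and the data shapes are assumptions, not the patented procedure:

```python
import statistics

def balance(mapping, entity_size, threshold):
    """mapping: entity -> instance. While the population standard
    deviation of per-instance load exceeds the threshold, move the
    largest entity on the most-loaded instance to the least-loaded
    one, stopping if a move would overshoot."""
    instances = sorted(set(mapping.values()))

    def load(inst):
        return sum(entity_size[e] for e, i in mapping.items() if i == inst)

    while statistics.pstdev(load(i) for i in instances) > threshold:
        hot = max(instances, key=load)
        cold = min(instances, key=load)
        movable = [e for e, i in mapping.items() if i == hot]
        mover = max(movable, key=entity_size.__getitem__)
        if load(hot) - entity_size[mover] <= load(cold):
            break  # moving would overshoot; stop rebalancing
        mapping[mover] = cold
    return mapping
```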
System, method and interface for generating data visualizations are provided. The system receives a user input to specify a natural language command directed to a data source. The system also generates a prompt for generating a data visualization based on relevant data fields and data values, rules that characterize the data visualization, and a context free grammar. The system also prompts a trained large language model using the prompt to generate a structured document following a domain-specific schema based on a shorthand notation. The system also uses a parser that uses the context free grammar to map the structured document to a visual specification. The visual specification specifies the data source, visual variables, and data fields from the data source. The system also generates and displays a data visualization based on the visual specification, including displaying visual marks representing data, retrieved from the data source, for the data fields.
Embodiments described herein provide a method of jointly generating a code output. A first language model (LM) generates a code output in response to a task description. Second and third LMs generate critiques based on the task description and the generated code. The second LM may critique the accuracy of the generated code, and the third LM may critique the safety of the generated code (e.g., susceptibility to hacks). The first LM may revise the generated code based on the critiques. The revised code may be executed, and based on the results of the execution, the first LM may revise the code again. The process of critiques, revisions, and execution may be repeated. The final generated code is output to a user (e.g., in a programming environment).
Embodiments described herein provide a reinforcement learning framework for neural network models to generate outputs that align with desired human preference. In at least one embodiment, cross-prompts are generated from an original prompt to elicit a response from the neural network model.
Embodiments described herein provide a method of training a neural network based model for predicting time series data. The method may include receiving, via a data interface, multi-variate time-series data; generating a plurality of tokens based on flattening the multi-variate time-series data; generating a first intermediate representation via a first cross-attention layer of the neural network based model with a plurality of dispatcher tokens as the query, and the plurality of tokens as the key and value; generating a second intermediate representation via a second cross-attention layer of the neural network based model with the plurality of tokens as the query, and the first intermediate representation as the key and value; generating a predicted time-series value based on the second intermediate representation; computing a loss based on a comparison of the predicted time-series value and a ground-truth value; and training the neural network based model based on the loss.
Methods, systems, apparatuses, devices, and computer program products are described. A processing device may receive a natural language query asking a question about a data metric. The processing device may use a large language model (LLM) to generate a summary of the natural language query for vector embedding. The processing device may determine one or more query response portions indicating possible answers to the query based on the summary and a vector database including vector representations of data summaries. To expand the scope of the answers, the processing device may recursively expand a set of data metrics for analysis. For example, the processing device may determine additional data metrics adjacent to the data metric of the query and may search the vector database for additional query response portions based on the additional data metrics. The processing device may use the query response portions to answer the natural language query.
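A simplified sketch of the recursive metric-expansion search described above; a plain adjacency map stands in for the relationships between data metrics, and a dict stands in for the vector database, so the names and shapes here are illustrative only:

```python
def expand_metrics(start, adjacent, depth):
    """Collect the query's metric plus all metrics reachable within
    `depth` hops in the adjacency map (breadth-first expansion)."""
    seen = {start}
    frontier = {start}
    for _ in range(depth):
        frontier = {n for m in frontier for n in adjacent.get(m, [])} - seen
        seen |= frontier
    return seen

def search(metrics, index):
    """Gather candidate query-response portions for every metric in
    the expanded scope."""
    return [portion for m in sorted(metrics) for portion in index.get(m, [])]
```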
Methods, apparatuses, and computer-program products are disclosed. The method may include transmitting, to a first generative artificial intelligence (AI) model, a request to convert a natural language description of a generative AI behavioral policy into a pseudo-code expression of the generative AI behavioral policy, where the generative AI behavioral policy describes conditions and actions to be performed based on the conditions; transmitting, to a second generative AI model, a prompt generated based on the pseudo-code expression of the generative AI behavioral policy, a user request, and an instruction that the second generative AI model is to generate a response to the user request; and receiving, from the second generative AI model and based on the prompt, an output of the second generative AI model, where the output conforms with the user request and the pseudo-code expression of the generative AI behavioral policy.
Techniques are disclosed that pertain to profiling database statement execution. A computer system receives a request to profile the execution of a database statement by a database process. The request specifies a process identifier (ID) associated with the database process. The computer system initializes a profiling process to establish a profiling session in which the profiling process profiles the execution of the database statement to generate profiling results that identify a set of performance metrics associated with the execution of the database statement. The process ID is provided to the profiling process to establish the profiling session. The computer system detects an occurrence of a trigger event indicating that the profiling process should be terminated. The computer system terminates the profiling process in response to the occurrence of the trigger event. The profiling results may be stored in a storage repository accessible to the computer system.
Systems, devices, and techniques are disclosed for transaction coordination across services. A service including virtual threads receives a transaction that is part of a multi-part transaction. The service determines the transaction does not already exist in transaction state data. The service updates the status of the transaction in the transaction state data to pending. The service processes the transaction. The service updates the status of the transaction in the transaction state data to completed. The service receives, at an endpoint, a request for the status of the transaction. The service responds to the request for the status of the transaction. The service sends a request for status to an endpoint of another service that is processing another transaction of the multi-part transaction. The service receives, from the other service, a response with a status of completed.
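A minimal sketch of the per-service transaction state handling this abstract walks through; the status endpoint is modeled as a plain method, and the class shape is an illustrative assumption:

```python
class Service:
    """One service in the multi-part transaction; peers poll each
    other's status endpoints to coordinate."""
    def __init__(self):
        self.state = {}  # transaction id -> "pending" / "completed"

    def handle(self, txn_id, work):
        if txn_id in self.state:
            return self.state[txn_id]      # already seen; idempotent
        self.state[txn_id] = "pending"     # mark pending before processing
        work()                             # process the transaction
        self.state[txn_id] = "completed"   # mark completed afterwards
        return self.state[txn_id]

    def status(self, txn_id):
        # Endpoint other services query for this transaction's status.
        return self.state.get(txn_id, "unknown")
```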
Methods, apparatuses, and computer-program products are disclosed. The method may include generating a first system message indicative of a role for the generative artificial intelligence (AI) model; generating a query-response message pair that includes a query message that may include an action invocation and a response message that includes information responsive to the action invocation; obtaining one or more interaction messages; generating a second system message that includes an instruction for the generative AI model to generate an utterance and an indication of one or more actions available to the generative AI model; transmitting, to the generative AI model, a prompt that may include the first system message, the query-response message pair, the one or more interaction messages, and the second system message; and receiving, from the generative AI model and based on the prompt, an output of the generative AI model.
Techniques for generating tasks from message data are discussed herein. A user profile may view content displayed within a communication-based virtual space. The user profile may request that the communication platform generate a task based on a content item displayed therein. That is, the user profile may be viewing a communication-based virtual space and select a message to use as the basis for creating a task. The communication platform may display one or more lists with which the user profile can associate the task. The user profile may select a list from the one or more of the lists and based on the selection, the communication platform may generate a task associated with the list. The communication platform may cause the task to be displayed via a user interface of a mobile device used by the user profile.
Techniques for modifying the display of task data are discussed herein. A user profile may request to modify a user interface object from a compressed state to an expanded state. Based on the request, the communication platform may convert the user interface object into an expanded state such that one or more columns of the task are presented in a single user interface. The communication platform may determine an updated organization of the columns and use such data to determine a modified size of the user interface object. The communication platform can generate a modified user interface object in an expanded state that includes an increased number of the columns. The communication platform may cause the modified user interface object to be displayed via a user interface of a mobile device.
H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
A method may include receiving, at a database that may include extension-based functionality, a database query request to perform a machine learning inference operation on data stored in the database, the machine learning inference operation to be performed at the database in accordance with the extension-based functionality. The method may include instantiating, in accordance with the extension-based functionality, a user-defined function (UDF) for performing machine learning inference operations. The method may include calling, with the UDF, the machine learning inference operation to process, at the database, the data retrieved from a table of the database. The method may include transmitting a response to the database query request, the response indicating an output of the machine learning inference operation, the output including a processed version of the data.
Output metric values may be determined by applying a machine learning model to corresponding input metric values characterizing one or more operating conditions of a database system. The machine learning model may be pre-trained to project the input metric values into a latent space having a level of dimensionality lower than that of the input metric values and to project the latent space into the output metric values. The output metric values may be compared to the corresponding input metric values to identify corresponding discrepancy values indicating one or more discrepancies between the output metric values and the corresponding input metric values. A determination may be made that a database incident implicating operating conditions corresponding with a portion of the database system has occurred based on the corresponding discrepancy values, and an instruction may be transmitted to the database system to implement a policy to address the database incident.
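A minimal sketch of the discrepancy check described above: compare the model's output metric values to the input metric values and declare an incident when the error is large. The model here is a stand-in callable, not a trained dimensionality-reducing network, and the per-metric threshold rule is an assumption:

```python
def discrepancies(model, input_metrics):
    """Apply the model to the input metric values and compute the
    absolute discrepancy for each metric."""
    output_metrics = model(input_metrics)
    return [abs(o - i) for o, i in zip(output_metrics, input_metrics)]

def detect_incident(model, input_metrics, threshold):
    """Declare a database incident when any per-metric discrepancy
    exceeds the threshold; a policy could then be triggered."""
    return any(d > threshold for d in discrepancies(model, input_metrics))
```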
Embodiments described herein provide a unified framework to control LLM agent behavior using a state graph. The agent's behavior is articulated through the state graph where each node represents a distinct state correlating with predefined agent executions, viewed as deterministic actions.
Embodiments described herein provide an optimization framework to control LLM agent behavior using dynamically optimized principles as part of the generation context. Specifically, a principle may take a form of a set of logic, parameters or text that describe the conditions for using that action. An LLM agent may generate a next step action conditioned on a set of principles corresponding to a set of available actions, and an execution trajectory. A reflector model (such as an LLM) may then generate a reward score based on the generated trajectory and the set of principles. Based on the reward scores, an optimizer (such as an LLM) may revise the set of principles to better align with observed conditions.
Embodiments described herein provide a method of generating a response to a user prompt by a function-calling artificial intelligence (AI) agent. The method comprises generating, via an LLM based on a prompt template, a training pair including a generated prompt and a first executable function call; including or excluding the training pair in a training dataset depending on a validation decision of the training pair; training the function-calling AI agent based on the training dataset; generating, by the function-calling AI agent, a second executable function call based on the user prompt; and executing the second executable function call via local execution on the one or more processors or via API call to a system remote from the one or more processors, wherein the response to the user prompt is based on a result of the executing the second executable function call.
Techniques are disclosed that pertain to a database system having a log owner and log tailers. The log owner maintains a transaction log and the log tailers replay the transaction log. A log tailer may receive a set of requests to perform a database transaction that involves a write operation to write a record and a subsequent read operation to read that record. As a part of performing the transaction, the log tailer may issue a request to the log owner to log the write operation in the transaction log and the log tailer may insert the record into a local memory structure of the log tailer. After receiving a response from the log owner that the write operation has been logged, the log tailer may permit the subsequent read operation to access the record from the local memory structure without requesting the record from the log owner.
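An illustrative sketch of the read-your-writes path this abstract describes; `LogOwner` and `LogTailer` are stand-ins for the patented components, and the dict-based local memory structure is an assumption:

```python
class LogOwner:
    """Maintains the transaction log and acknowledges logged writes."""
    def __init__(self):
        self.log = []

    def append(self, op):
        self.log.append(op)
        return True  # acknowledge that the write operation was logged

class LogTailer:
    """Replays the transaction log and serves reads locally."""
    def __init__(self, owner):
        self.owner = owner
        self.local = {}  # local in-memory structure holding records

    def write(self, key, value):
        acked = self.owner.append(("write", key, value))
        self.local[key] = value
        return acked

    def read(self, key):
        # Once the write is acknowledged as logged, the subsequent read
        # is served from the local structure without a round trip to
        # the log owner.
        return self.local[key]
```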
Techniques are disclosed relating to upgrade groups. A node of a computer system may access metadata assigned to the node during deployment of the node. The node may be one of a plurality of nodes associated with a service that is implemented by the computer system. The node may perform an operation on the metadata to derive a group identifier for the node and the group identifier may indicate the node's membership in one of a set of groups of nodes managed by the service. The node may then store the group identifier in a location accessible to the service.
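One common way to realize "an operation on the metadata" that derives a stable group membership is a hash-modulo scheme, sketched here as an assumption rather than the patented operation:

```python
import hashlib

def group_id(metadata, num_groups):
    """Derive a deterministic upgrade-group identifier from the
    metadata assigned to the node at deployment: hash the metadata
    and take the residue modulo the number of groups."""
    digest = hashlib.sha256(metadata.encode()).hexdigest()
    return int(digest, 16) % num_groups
```

Because the derivation is deterministic, a node can recompute its group identifier at any time and store it where the service can read it.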
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
41.
Query Semantics for Multi-Fact Data Model Analysis Using Shared Dimensions
A computing device receives user input specifying a first dimension data field and a second dimension data field that are associated with different objects in an object model, for generating a first data visualization. The device constructs a dimension subquery. The device executes the dimension subquery to retrieve first tuples. The device constructs one or more measure subqueries. Each of the measure subqueries references one or more measure data fields in the object model and the one or more measure data fields include at least a shared measure data field. The device executes the measure subqueries to retrieve second tuples. The second tuples include data values corresponding to the shared measure data field. The device forms extended tuples by combining the retrieved first tuples and the retrieved second tuples. The device also generates and causes display of the first data visualization according to the extended tuples.
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
Computing systems, for example, multi-tenant systems, deploy software artifacts in datacenters created in a cloud platform. The system receives multiple version maps. Each version map provides version information for a particular context associated with the datacenter. The context may specify a target environment, a target datacenter entity, or a target action to be performed on the cloud platform. The system generates an aggregate pipeline comprising a hierarchy of pipelines. The system generates an aggregate version map associating datacenter entities of the datacenter with versions of software artifacts targeted for deployment on the datacenter entities and versions of pipelines. The system executes the aggregate pipeline in conjunction with the aggregate version map to perform requested operations on the datacenter configured on the cloud platform, for example, provisioning resources or deploying services.
A computing device displays first data describing a dataset. At least a portion of the first data is encoded with metadata that links the first data to data values and/or data fields of the dataset. The computing device receives a user interaction with a first affordance. The user interaction specifies a first portion of the first data, which includes at least a first data field of the dataset. In response to receiving the user interaction, the computing device retrieves metadata corresponding to the first portion of the first data, and generates second data describing the dataset according to (i) the at least the first data field and (ii) data values of the at least the first data field specified in the metadata, corresponding to the first portion of the first data. The computing device concurrently displays the first data and the second data describing the dataset.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Implementations for multi-factor network segmentation are described. A plurality of packets at a higher layer of a network stack is processed, where at least one packet of the plurality of packets was previously determined, as part of processing the at least one packet at lower layers of the network stack, to be authorized to be processed by the higher layer. Specifically, responsive to successful authentication of a cryptographic certificate received from a source address during a handshake process with a first service, a second service is identified from the cryptographic certificate. It is determined, based on a security policy, that the second service is authorized to access the first service. Responsive to the determination, a configuration is caused such that packets sent using the source address are authorized to be processed by the higher layer.
Media, methods, and systems are disclosed for ad hoc, ambient, synchronous multimedia collaboration in a group-based communication system. Embodiments of the invention provide a way for users to quickly discover and initiate real-time collaboration sessions among groups of other users without the burden and overhead of a conventional call or video meeting. Users can quickly and easily discover and switch into and out of these synchronous multimedia collaboration sessions at any time, without disrupting the sessions for other participating users. This enables a diverse set of users to experience a rich multimedia collaboration session as a convenient ad hoc forum rather than a burdensome scheduled event.
Described herein are techniques and mechanisms for geographic routing. A database system may store data records corresponding with user accounts and including historical routing information characterizing geographic routes determined in association with the user accounts. A communication interface may receive a request from a remote computing device authenticated to a user account and identifying an initial geographic location and a terminal geographic location. A geographic routing engine may determine a route including turn-by-turn instructions for moving from the initial geographic location to the terminal geographic location. A generative language model interface may complete a navigation prompt to include novel text identifying supplemental route information. A user interface generation interface may transmit an instruction to the remote computing device to present a user interface that includes the turn-by-turn instructions and some or all of the supplemental route information.
Disclosed are methods, apparatus, systems, and computer readable storage media for interacting with components across different domains in a single user interface in an online social network. The user interface includes a first component and a second component, where the first component exposes content from a first database system at a first network domain and the second component exposes content from a second database system at a second network domain. A first interaction with the first component is received at a computing device, followed by a reference being provided in the second component, where the reference includes information related to the first interaction. A second interaction with the second component regarding the reference can be received at the computing device. Interactions between the components hosted on different database systems can occur through an application programming interface (API).
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
Techniques are disclosed relating to implementing a cache layer for a distributed database system. In some embodiments, a distributed computing system that includes a plurality of physical nodes implementing a hosting service deploys, to a first of the plurality of physical nodes, a container that implements a cache for a distributed database system hosted by the hosting service. The container is executable to store the cache in a memory internal to the first physical node. The container receives, from the database system, a data request for data maintained in a persistent storage external to the first physical node. In response to determining that the requested data resides in the cache, the container services the data request from the internal memory of the first physical node.
Embodiments described herein provide a method of sanitizing a user input. A system receives the user input, and may retrieve one or more documents from a database based on the user input. The system then generates, via a first neural network based language model, a sanitized version of the user input in response to a determination to sanitize based on at least one of the user input or the one or more documents. The system then generates, via a second neural network based language model, an output text based on a prompt, the one or more documents, and the sanitized version of the user input.
In some systems, a set of sentences of a relatively large document may be vectorized into a set of vectors via an embedding model for summarization. Further, a subset of vectors of the set of vectors may be selected via a farthest point sampling (FPS) procedure based on a vector-space distance between respective vectors of the subset of vectors. Moreover, the subset of vectors that are associated with a subset of sentences may be ordered based on the order of the subset of sentences within the set of sentences of the document. Further, to generate a summary of the document, a query may be transmitted to a large language model (LLM) that includes a summarization prompt and the subset of sentences that correspond with the selected subset of vectors. A summary of the document may then be received from the LLM based on transmitting the query.
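The selection step in the abstract above can be sketched as greedy farthest point sampling over sentence embeddings, with the chosen indices re-sorted to preserve document order. The seeding choice (first vector), the Euclidean metric, and the toy vectors below are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def farthest_point_sampling(vectors, k):
    """Greedily select k mutually distant vectors (FPS).

    Starts from the first vector, then repeatedly picks the vector whose
    minimum distance to the already-selected set is largest.  Returns the
    selected indices in ascending order, so the corresponding sentences
    keep their original order within the document.
    """
    selected = [0]  # seed with the first sentence (an assumed choice)
    # distance from every vector to its nearest selected vector
    min_dist = np.linalg.norm(vectors - vectors[0], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))  # farthest from the current selection
        selected.append(nxt)
        d = np.linalg.norm(vectors - vectors[nxt], axis=1)
        min_dist = np.minimum(min_dist, d)
    return sorted(selected)  # restore document order

sentences = ["a", "b", "c", "d"]
vecs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [0.0, 5.0]])
idx = farthest_point_sampling(vecs, 3)
subset = [sentences[i] for i in idx]
```

The sentences in `subset` would then be concatenated after a summarization prompt to form the LLM query.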
Techniques are disclosed relating to orchestrating locking between database nodes of a database system. A database node can determine that an execution of a database transaction at the database node involves acquiring a lock. The database node acquires, from a separate lease manager node, a lease object that permits the database node to create the lock for the database transaction. As a part of provisioning that lease object to the database node, the lease manager node ensures that a lease object for creating locks that conflict with the lock is not held by another database node. The database node creates the lock for the database transaction based on the acquired lease object. As a part of creating that lock, the database node ensures that the lock does not conflict with a lock held by another database transaction executing at the database node.
A method and system are provided for delivery of a message to a recipient who is currently offline. First, a sender generates a message that is scheduled for delivery to the recipient at a time later than generation of the message. The message is transmitted to a messaging device of the recipient immediately upon generation and is stored in local memory storage of the messaging device. The message is then delivered to the recipient from the local memory storage of the messaging device at the scheduled delivery time.
A flow template user interface (UI) is generated for development of a flow of a campaign. The flow template UI is associated with a communication channel and includes an audience field, a content field, a scheduling field, and an activation field including an activation button to enable activation and deactivation of the flow. A target audience is received via the audience field, content is received via the content field, and a schedule is received via the scheduling field. A command is received to activate the flow via the activation button. The content is transmitted to the target audience via the communication channel in accordance with the schedule in response to activation of the flow.
A system may receive a natural language description of a desired output to be generated by one or more artificial intelligence (AI) models. The system may generate, using at least one of the AI models and based on the natural language description, multiple generation plans, each containing a first set of instructions for generating the desired output. The system may rank these generation plans using at least one of the AI models to select a potential generation plan. The system may generate, using at least one of the AI models and following the instructions in the selected plan, multiple outputs. The system may rank these outputs using at least one of the AI models to select a potential output. The system may validate the selected output based on validation parameters associated with the desired output.
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
Techniques are described herein for a method of obtaining a token based on a conversation in real time. The method further includes predicting, using a large language model (LLM) and the token, a next token. The method further includes predicting, using a classifier and the next token, a completion of a user turn. The method further includes triggering a next turn of the conversation in real time using the completion of the user turn.
Display screen or portion thereof with animated graphical user interface or a mirror or portion thereof having a display screen with animated graphical user interface
Disclosed herein are system, method, and computer program product embodiments for providing an architecture to support a semantic validation technique. The system includes a governance console that carries out data management functionalities to support the validation, including generating, storing, and publishing validation profiles that are used by a validation service for validating an asset. The system further includes a validation reporter that receives and stores validation reports and performs notification functions to notify relevant individuals of the validation results, as well as a profile runner and associations manager that directly support the validation service.
In some embodiments, a method receives a file. The file is packed using a packing method. An entropy profile is generated for the file; the entropy profile describes the entropy of data over positions in the file. The method generates a rule to detect the entropy profile of the file by analyzing entropy values from the entropy profile in slices of the file. The rule is output and is usable to detect other files that use the packing method based on analyzing entropy in slices of the other files.
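The entropy profile described above can be sketched as Shannon entropy computed per fixed-size slice of the file; the 256-byte slice size and the toy byte strings are assumptions for illustration.

```python
import math
from collections import Counter

def slice_entropy(data: bytes) -> float:
    """Shannon entropy of one slice in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def entropy_profile(data: bytes, slice_size: int = 256):
    """Entropy of the file measured over consecutive positional slices."""
    return [slice_entropy(data[i:i + slice_size])
            for i in range(0, len(data), slice_size)]

low = bytes(256)           # all zeros: minimal entropy
high = bytes(range(256))   # uniform byte values: maximal entropy
profile = entropy_profile(low + high, slice_size=256)
```

A detection rule could then flag other files whose per-slice entropy values fall within tolerance of this profile, which is the signature a packer tends to leave.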
Techniques for displaying workflow responses based on determining topics associated with user requests are discussed herein. In some examples, a user may post a request (e.g., question) to a virtual space (e.g., a channel, thread, board, etc.) of a communication platform. The communication platform may input the request into a machine learning model trained to identify topics associated with the request and confidence levels associated with the topics. In such examples, the communication platform may associate a topic with the user request based on the confidence level of the topic. In some examples, the communication platform may determine that the topic is associated with a graphical identifier (e.g., emoji). The communication platform may cause the graphical identifier to be displayed within the virtual space in which the user request was posted. In response to displaying the graphical identifier, the communication platform may display a workflow response in the virtual space.
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
69.
Systems, Methods, And Devices For Customizable Computing Platforms
Systems, methods, and devices disclosed herein provide integration of on-demand applications with generative artificial intelligence platforms. For example, a computing platform may be implemented using a server system, where the computing platform is configurable to cause receiving application data from an on-demand application hosted by the computing platform, generating a data model based, at least in part, on the application data, the data model being a calendar data structure associated with a calendaring application, and generating, using an application model, additional application data, the application model being a machine learning model. The computing platform may be further configurable to cause updating the calendar data structure of the data model based, at least in part, on the additional application data, wherein the updating is performed, at least in part, via a plurality of custom data fields of a plurality of custom data objects.
G06N 3/0895 - Weakly supervised learning, e.g. semi-supervised or self-supervised learning
G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
70.
Display screen or portion thereof with animated graphical user interface
A plurality of permissions associated with the on-demand computing services environment may be identified. Each of the permissions may identify a respective one or more actions permitted to be performed within the on-demand computing services environment. Each of the permissions may be granted to a respective one or more user accounts within the on-demand computing services environment. A degree of overlap between a first group of the user accounts granted a first one of the permissions and a second group of the user accounts granted a second one of the permissions may be determined. When the degree of overlap exceeds a designated threshold, a designated permission set that includes the first permission and the second permission may be created.
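The overlap measurement described above can be sketched as a Jaccard-style comparison of the user groups holding each permission; the threshold value, account names, and permission names below are illustrative assumptions.

```python
def overlap(group_a: set, group_b: set) -> float:
    """Jaccard overlap between the user groups granted two permissions."""
    if not group_a or not group_b:
        return 0.0
    return len(group_a & group_b) / len(group_a | group_b)

def suggest_permission_sets(grants: dict, threshold: float = 0.8):
    """grants maps a permission name to the set of user accounts holding it.

    Pairs of permissions whose holder groups overlap beyond the threshold
    are bundled into a suggested permission set.
    """
    perms = sorted(grants)
    suggestions = []
    for i, p in enumerate(perms):
        for q in perms[i + 1:]:
            if overlap(grants[p], grants[q]) > threshold:
                suggestions.append({p, q})
    return suggestions

grants = {
    "read_reports": {"ann", "bob", "eve"},
    "run_reports":  {"ann", "bob", "eve", "dan"},
    "delete_users": {"root"},
}
sets_ = suggest_permission_sets(grants, threshold=0.7)
```

Here `read_reports` and `run_reports` share three of four distinct holders (overlap 0.75), so they are bundled, while `delete_users` stays separate.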
Embodiments described herein provide a document summarization framework that employs an ensemble of summarization models, each of which is a modified version of a base summarization model to control hallucination. For example, a base summarization model may first be trained on a full training data set. The trained base summarization model is then fine-tuned using a first filtered subset of the training data which contains noisy data, resulting in an “anti-expert” model. The parameters of the anti-expert model are subtracted from the parameters of the trained base model to produce a final summarization model which yields robust factual performance.
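The parameter subtraction described above can be sketched per tensor in the style of task arithmetic: the base weights are shifted away from the anti-expert's direction. The `weight` scaling knob is an assumption not specified in the abstract, which describes only a subtraction.

```python
import numpy as np

def subtract_anti_expert(base: dict, anti: dict, weight: float = 1.0) -> dict:
    """Shift base parameters away from the anti-expert's parameters.

    For each named tensor: final = base - weight * (anti - base).
    With weight=1.0 this removes the full update the anti-expert learned
    from the noisy subset of the training data.
    """
    return {name: base[name] - weight * (anti[name] - base[name])
            for name in base}

base = {"w": np.array([1.0, 2.0])}
anti = {"w": np.array([1.5, 2.5])}  # fine-tuned on the noisy subset
final = subtract_anti_expert(base, anti, weight=1.0)
```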
Database systems and methods are provided for securing actions associated with graphical user interface (GUI) elements within an instance of a web application using a web application firewall. One method of securing an action associated with a GUI element within a GUI display of an instance of a web application involves monitoring a location associated with the GUI element associated with the action within the GUI display of the instance of the web application, detecting an event associated with the GUI element within the location of the GUI display, capturing event metadata associated with the event within a context of the instance of the web application, authenticating the event when the event metadata corresponds to authentication configuration metadata associated with the GUI element, and providing event data corresponding to the event to the GUI element to initiate the action in response to authenticating the event.
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
A plurality of permissions associated with the on-demand computing services environment may be identified. Each of the permissions may identify a respective one or more actions permitted to be performed within the on-demand computing services environment. Each of the permissions may be granted to a respective one or more user accounts within the on-demand computing services environment. A degree of overlap between a first group of the user accounts granted a first one of the permissions and a second group of the user accounts granted a second one of the permissions may be determined. When the degree of overlap exceeds a designated threshold, a designated permission set that includes the first permission and the second permission may be created.
Techniques are described herein for a method of determining a similarity of each neuron in a layer of neurons of a neural network model to each other neuron in the layer of neurons. The method further includes determining a redundant set of neurons and a non-redundant set of neurons based on the similarity of each neuron in the layer. The method further includes fine-tuning the non-redundant set of neurons using a first set of training data. The method further includes training the redundant set of neurons using a second set of training data.
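The similarity-based partition described above can be sketched with cosine similarity between each neuron's incoming weight vector and those of neurons already kept; the 0.95 threshold and the toy weight matrix are illustrative assumptions.

```python
import numpy as np

def partition_neurons(weights: np.ndarray, threshold: float = 0.95):
    """Split a layer's neurons into non-redundant and redundant sets.

    `weights` has one row per neuron (its incoming weight vector).  A
    neuron is marked redundant if its cosine similarity to any
    already-kept neuron exceeds `threshold`.
    """
    unit = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    keep, redundant = [], []
    for i in range(len(weights)):
        sims = unit[keep] @ unit[i] if keep else np.array([])
        (redundant if sims.size and sims.max() > threshold else keep).append(i)
    return keep, redundant

W = np.array([
    [1.0, 0.0],
    [0.99, 0.01],  # nearly parallel to neuron 0, so redundant
    [0.0, 1.0],
])
keep, redundant = partition_neurons(W, threshold=0.95)
```

The `keep` set would then be fine-tuned on the first training set while the `redundant` set is retrained on the second.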
An utterance modification system may receive a first utterance from a first user during an interactive conversation session between the first user and a second user. The utterance modification system may further receive a second utterance from the second user that is in a speech-based format. The utterance modification system may then transmit a prompt that includes the second utterance in a text-based format and a set of prompt parameters to a large language model (LLM). In response, the utterance modification system may receive a third utterance from the LLM that may be based on the second utterance and associated with a target user tone. Further, the utterance modification system may transmit the third utterance to the first user in a speech-based format.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Techniques are disclosed that pertain to linked database systems. A computer system implements a first database system that manages a table storing data for a tenant. The computer system may receive an indication to provision, at a second database system having a database management engine of a different type than a database management engine of the first database system, data of the tenant stored in the table. The computer system provisions the data in data structures at the second database system and permits the tenant to perform, on the data, a first set of operations at the first database system and a second set of operations at the second database system. The second set of operations includes functionality not included in the first set of operations. The computer system may receive a result of processing by the tenant using the second database system and store the result in the table.
An application server may receive, from a client and at an interface for accessing a large language model, a prompt for a response from the large language model. The application server may receive, via a model interface, a streaming output of the large language model, where the streaming output includes a first portion of the response and a threshold number of tokens. The application server may then provide the first portion of the response to a scoring model that determines a first incremental score indicating a first probability that the first portion of the response includes content from one or more content categories. The application server may transmit, to the client and based on the first probability, the first portion of the response, an indication of the first incremental score, or both.
A computing device of a data processing system may receive an indication from a user to create a communication process flow that includes a set of actions that control electronic communications between an entity and a set of users. The computing device may further receive one or more user inputs that indicate at least two first action variations of a first action of the set of actions and at least two second action variations of a second action of the set of actions. The system may then generate the communication process flow to include a set of paths for a set of combinations of the at least two first action variations and the at least two second action variations based on receiving the one or more user inputs. The system may then execute the communication process flow that includes the set of paths for the set of users.
A computing device of a data processing system may receive an indication of a creation of a communication process flow object that includes a set of paths for a set of actions that control electronic communications between an entity and a set of users. The computing device may receive a first user input indicating a goal for the set of paths where the goal is based on data stored within an external data platform. The computing device may then route at least a subset of the set of users via one or more paths of the set of paths based on a result of the one or more paths satisfying the goal for the set of paths of the communication process flow. Further, the computing device may distribute a subset of the electronic communications to the subset of the set of users in accordance with the one or more paths.
Some embodiments integrate information from a social network into a multi-tenant database system. A plurality of information from the social network is retrieved, using a processor and a network interface of a server computer in the multi-tenant database system, wherein the plurality of information is associated with a message transmitted using the social network. Metadata related to the transmitted message is generated, using the processor. A conversation object is generated, using the processor, based on the plurality of information associated with the transmitted message and the metadata related to the transmitted message. The conversation object is then stored in an entity in the multi-tenant database system, using the processor of the server computer.
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
To recover from a first database application that is in an unstable condition, a computer system deploys a first instance of a second database application with an override condition that causes the first instance to use a first database catalog used by the first database application. Content of the first database catalog is different than content expected by the second database application. The computer system performs a process to create a second database catalog that includes the content expected by the second database application. The process may include communicating with the first instance to access catalog objects from the first database catalog and insert them into the second database catalog. The computer system then deploys a second instance of the second database application without the override condition to cause the second instance to use the second database catalog that includes the content expected by the second database application.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
95.
Display screen or portion thereof with a graphical user interface
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for backing up environments. One of the methods includes maintaining, for a cloud computing environment, first data that indicates one or more previously active sandbox environments; determining second data that indicates one or more most recently active sandbox environments; determining, using the second data, a newly added sandbox environment; determining, using a first identifier for the newly added sandbox environment and a second identifier for a prior sandbox environment from the one or more previously active sandbox environments, whether the newly added sandbox environment is likely a refresh of the prior sandbox environment; and performing one or more actions for the newly added sandbox environment using a result of the determination whether the newly added sandbox environment is likely a refresh of the prior sandbox environment.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
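The refresh determination in the sandbox backup abstract above can be sketched with set differencing plus an identifier comparison. The `name-serial` identifier scheme is an assumed illustration; the abstract does not specify how the first and second identifiers are compared.

```python
def is_likely_refresh(new_id: str, prior_id: str) -> bool:
    """Heuristic: a refreshed sandbox keeps its name but gets a new serial.

    The 'name-serial' identifier format is an assumption for illustration.
    """
    new_name, _, new_serial = new_id.rpartition("-")
    old_name, _, old_serial = prior_id.rpartition("-")
    return bool(new_name) and new_name == old_name and new_serial != old_serial

def newly_added(previous: set, current: set) -> set:
    """Sandboxes active now that were absent from the previous snapshot."""
    return current - previous

previous = {"dev-001", "qa-007"}   # first data: previously active sandboxes
current = {"dev-001", "qa-008"}    # second data: most recently active
added = newly_added(previous, current)
refresh = any(is_likely_refresh(a, p) for a in added for p in previous)
```

A backup action for `qa-008` could then reuse state from `qa-007` when `refresh` holds, rather than starting a full backup from scratch.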
Techniques are disclosed relating to query planning and execution. A computer system can receive a database statement that comprises a LIKE predicate that defines a set of pattern parameters. The computer system may generate first and second query paths for a query plan associated with the database statement. The first query path utilizes an index associated with a database table specified by the database statement while the second query path does not utilize the index. The computer system executes the database statement in accordance with the query plan and values that are provided for the set of pattern parameters. As a part of executing the database statement, the computer system may evaluate those values to determine whether they are prefix constants and execute the first query path instead of the second query path if all the values are prefix constants.
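The runtime check described above can be sketched as a prefix-constant test on each bound pattern value: a constant prefix followed at most by a trailing `%` can be answered from an index as a range scan, while leading or embedded wildcards cannot. The function names below are assumptions for illustration.

```python
def is_prefix_constant(pattern: str) -> bool:
    """True if a LIKE pattern is a constant prefix, optionally ending in '%'.

    'abc%' and 'abc' qualify; '%abc', 'a%b', and 'a_c%' do not, because a
    leading or embedded wildcard defeats an index range scan.
    """
    if not pattern or pattern[0] in ("%", "_"):
        return False
    head, _, tail = pattern.partition("%")
    # no '_' wildcards in the prefix, and nothing after the first '%'
    return "_" not in head and tail == ""

def choose_query_path(values):
    """Take the index path only when every bound value is a prefix constant."""
    if all(is_prefix_constant(v) for v in values):
        return "index_path"
    return "scan_path"
```

Deferring this test to execution time lets one query plan serve both cases, since the pattern parameters are only known once values are bound.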
Methods, systems, apparatuses, devices, and computer program products are described. A system may obtain a set of documents associated with a knowledge base for retrieval-augmented generation (RAG). The system may generate multiple representations of the information included in the documents using multiple knowledge extraction pipelines. For example, the system may generate a set of metadata-based vector embeddings based on the documents, a set of knowledge graphs based on the documents, and a set of hierarchical tree representations based on the documents. The system may receive a user query and may retrieve contextual information from the set of vector embeddings, the set of knowledge graphs, and the set of hierarchical tree representations to augment the user query for a large language model (LLM) prompt. The system may input the prompt to the LLM, and the LLM may output a response based on the user query and the contextual information.
Systems and methods are provided herein for generating an event occurrence feedback report after receipt of an event occurrence completion indicator, the event occurrence completion indicator being associated with an event occurrence identifier and received from a third-party event scheduling resource, and for presenting the event occurrence feedback report to a client device associated with an event occurrence creator identifier.
A system, method, and computer-readable media for creating a collaboration container in a group-based communication system are provided. A request to create the collaboration container may be received. The collaboration container may comprise a collection of multimedia files. Multiple users may add multimedia files to the collaboration container. The multimedia files may be stored in a storage order. The multimedia files in the collaboration container may be sorted based on a sort label, such as by multimedia file topic. Upon playback, the multimedia files may be played back in a sort order distinct from the storage order. During playback, a user may comment on a multimedia file of the collaboration container. When subsequent users play back the collaboration container, the comment may be displayed with the associated multimedia file.
H04L 65/402 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel non-real time sessions, e.g. downloading a file in a parallel FTP session, initiating an email or combinational services
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences