Methods, systems, apparatuses, devices, and computer program products are described. A system may obtain a set of documents associated with a knowledge base for retrieval-augmented generation (RAG). The system may generate multiple representations of the information included in the documents using multiple knowledge extraction pipelines. For example, the system may generate a set of metadata-based vector embeddings based on the documents, a set of knowledge graphs based on the documents, and a set of hierarchical tree representations based on the documents. The system may receive a user query and may retrieve contextual information from the set of vector embeddings, the set of knowledge graphs, and the set of hierarchical tree representations to augment the user query for a large language model (LLM) prompt. The system may input the prompt to the LLM, and the LLM may output a response based on the user query and the contextual information.
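As a rough illustration of the retrieval flow this abstract describes, the sketch below combines the three representation types into a single augmented prompt. The Context type and the search, match, and descend interfaces are hypothetical placeholders, not the patented implementation.

```python
# Illustrative sketch of combining three knowledge extraction pipelines
# into one augmented LLM prompt. All interfaces here are assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    source: str   # which pipeline produced this snippet
    text: str     # the contextual passage itself

def augment_query(query: str, vector_index, knowledge_graphs, tree_index) -> str:
    contexts: list[Context] = []
    # 1. Metadata-based vector embeddings: nearest-neighbor text chunks.
    for chunk in vector_index.search(query, k=3):
        contexts.append(Context("vector", chunk))
    # 2. Knowledge graphs: triplets whose subject/object match the query.
    for s, p, o in knowledge_graphs.match(query, k=3):
        contexts.append(Context("graph", f"{s} {p} {o}"))
    # 3. Hierarchical trees: summaries from the most relevant subtree.
    for node in tree_index.descend(query, depth=2):
        contexts.append(Context("tree", node.summary))
    # Assemble a single prompt grounding the LLM in all three sources.
    block = "\n".join(f"[{c.source}] {c.text}" for c in contexts)
    return f"Context:\n{block}\n\nQuestion: {query}\nAnswer:"
```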
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for backing up environments. One of the methods includes maintaining, for a cloud computing environment, first data that indicates one or more previously active sandbox environments; determining second data that indicates one or more most recently active sandbox environments; determining, using the second data, a newly added sandbox environment; determining, using a first identifier for the newly added sandbox environment and a second identifier for a prior sandbox environment from the one or more previously active sandbox environments, whether the newly added sandbox environment is likely a refresh of the prior sandbox environment; and performing one or more actions for the newly added sandbox environment using a result of the determination whether the newly added sandbox environment is likely a refresh of the prior sandbox environment.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
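The refresh-detection comparison in the entry above can be pictured with a short sketch. Treating a sandbox's name as the stable handle and its environment identifier as the per-refresh value is an assumption for illustration; the patented identifiers may encode different signals.

```python
# Hypothetical sketch: previous/current map sandbox name -> environment id.
def detect_refreshes(previous: dict[str, str], current: dict[str, str]):
    # Newly added sandboxes are those whose identifiers were not seen before.
    newly_added = {name: env_id for name, env_id in current.items()
                   if env_id not in previous.values()}
    refreshes = []
    for name, env_id in newly_added.items():
        prior_id = previous.get(name)
        # The same sandbox name reappearing under a fresh identifier is
        # likely a refresh of the prior environment.
        if prior_id is not None and prior_id != env_id:
            refreshes.append((name, prior_id, env_id))
    return refreshes

prev = {"dev-sb": "env-001", "qa-sb": "env-002"}
curr = {"dev-sb": "env-003", "qa-sb": "env-002"}
print(detect_refreshes(prev, curr))   # [('dev-sb', 'env-001', 'env-003')]
```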
Techniques are disclosed relating to query planning and execution. A computer system can receive a database statement that comprises a LIKE predicate that defines a set of pattern parameters. The computer system may generate first and second query paths for a query plan associated with the database statement. The first query path utilizes an index associated with a database table specified by the database statement while the second query path does not utilize the index. The computer system executes the database statement in accordance with the query plan and values that are provided for the set of pattern parameters. As a part of executing the database statement, the computer system may evaluate those values to determine whether they are prefix constants and execute the first query path instead of the second query path if all the values are prefix constants.
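A minimal sketch of the runtime check this abstract turns on: a bound value qualifies as a prefix constant when it is a literal prefix followed only by a trailing % wildcard, which an index range scan can answer. The exact qualification rule in the claims may differ.

```python
# Assumed definition of a "prefix constant" LIKE pattern value.
def is_prefix_constant(pattern: str) -> bool:
    if not pattern.endswith("%"):
        return False
    body = pattern[:-1]
    # No embedded wildcards: the prefix must be fully literal.
    return all(ch not in ("%", "_") for ch in body)

def choose_path(values: list[str]) -> str:
    # Execute the indexed query path only if every bound value qualifies.
    return "indexed" if all(is_prefix_constant(v) for v in values) else "scan"

assert choose_path(["abc%", "x%"]) == "indexed"
assert choose_path(["abc%", "%x"]) == "scan"
```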
Systems and methods are provided herein for generating an event occurrence feedback report after receipt of an event occurrence completion indicator, the event occurrence completion indicator associated with an event occurrence identifier and received from a third-party event scheduling resource, and for presenting the event occurrence feedback report to a client device associated with an event occurrence creator identifier.
A system, method, and computer-readable media for creating a collaboration container in a group-based communication system are provided. A request to create the collaboration container may be received. The collaboration container may comprise a collection of multimedia files. Multiple users may add multimedia files to the collaboration container. The multimedia files may be stored in a storage order. The multimedia files in the collaboration container may be sorted based on a sort label, such as by multimedia file topic. Upon playback, the multimedia files may be played back in a sort order distinct from the storage order. During playback, a user may comment on a multimedia file of the collaboration container. When subsequent users play back the collaboration container, the comment may be displayed with the associated multimedia file.
H04L 65/402 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel non-real time sessions, e.g. downloading a file in a parallel FTP session, initiating an email or combinational services
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
6.
Display screen or portion thereof with graphical user interface
Disclosed are some implementations of systems, apparatus, methods and computer program products for optimizing database backup transactions. A backup poller in a private subnet tracks a database backup operation and detects a failure of the database backup operation. More particularly, the backup poller detects that a failure occurred at a specific point in the database backup operation. The backup poller instructs a backup decentralizer in a public subnet to continue the database backup operation from the specific point. The backup decentralizer monitors a health of a plurality of service providers and selects a service provider of the plurality of service providers based on the health of the service provider. The backup decentralizer continues the backup operation from the specific point using the selected service provider.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
8.
INTEGRATING AND MANAGING SOCIAL NETWORKING INFORMATION IN AN ON-DEMAND DATABASE SYSTEM
Some embodiments comprise integrating information from a social network into a multi-tenant database system. A plurality of information from the social network is retrieved, using a processor and a network interface of a server computer in the multi-tenant database system, wherein the plurality of information is associated with a message transmitted using the social network. Metadata related to the transmitted message is generated, using the processor. A conversation object is generated, using the processor, based on the plurality of information associated with the transmitted message and the metadata related to the transmitted message. The conversation object is then stored in an entity in the multi-tenant database system, using the processor of the server computer.
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
9.
METHOD AND SYSTEM FOR APPLICATION PROGRAMMING INTERFACE BASED CONTAINER SERVICE FOR SUPPORTING MULTIPLE MACHINE LEARNING APPLICATIONS
A method and system for an application programming interface (API) based container service for supporting multiple machine learning (ML) applications is described. In particular, a scoring service container includes a base scorer to interface with a ML serving infrastructure using the API. The scoring service container also includes an application specific scorer, which itself includes a model loader and a scoring function. A model identifier is provided to the model loader, and the model loader returns a model object. At least some parameters in a request from a client application are passed to the scoring function, which produces a scoring. The base scorer returns the scoring according to the API to the ML serving infrastructure for delivery to the client application.
Disclosed herein are system, method, and computer program product embodiments for using generative AI for prompt comparison. The system may receive a prompt. The prompt may include a request for information. The system may identify a stored prompt based on a similarity value between a received prompt for a large language model (LLM) and the stored prompt, the stored prompt including a response generated by the large language model (LLM). The system may generate a response using the LLM if the similarity value between the received and stored prompts is below a predefined threshold. The system may then modify the response by applying a first rule associated with a first designated phrase, and a second rule associated with a second designated phrase, where the first designated phrase comprises a banned phrase and where the second designated phrase comprises a selected phrase.
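The compare-then-modify flow above might look like the following sketch, where embed() and llm() are caller-supplied functions and the two designated phrases are hypothetical rule inputs; cosine similarity and the 0.9 threshold are assumptions.

```python
import math

BANNED_PHRASE = "as an AI model"     # hypothetical first designated phrase
SELECTED_PHRASE = "See the docs."    # hypothetical second designated phrase

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def answer(prompt: str, cache: dict[str, str], llm, embed,
           threshold: float = 0.9) -> str:
    # Find the stored prompt most similar to the received prompt.
    q = embed(prompt)
    best_response, best_sim = None, -1.0
    for stored_prompt, stored_response in cache.items():
        sim = cosine(q, embed(stored_prompt))
        if sim > best_sim:
            best_response, best_sim = stored_response, sim
    # Below the threshold: no usable match, so generate fresh via the LLM.
    response = llm(prompt) if best_sim < threshold else best_response
    # First rule: remove the banned phrase wherever it appears.
    response = response.replace(BANNED_PHRASE, "").strip()
    # Second rule: ensure the selected phrase is present.
    if SELECTED_PHRASE not in response:
        response = f"{response} {SELECTED_PHRASE}"
    cache[prompt] = response
    return response
```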
A retrieval augmented generation (RAG) based query reformulation pipeline for a Query and Answer (QA) system is described. This pipeline leverages a Directed Acyclic Graph (DAG) and involves several operations, including retrieval of documents and knowledge graph triplets based on the initial query, reranking of retrieved elements based on relevance, refinement and summarization of relevant document chunks and knowledge triplets, reformulation of the initial query, and generation of a natural language response. The response is generated using a large language model (LLM) and is grounded in the knowledge base, which supports factual accuracy and consistency.
Methods, apparatuses, and computer program products are disclosed. The method may include receiving a first request to ingest a document. The method may include generating, using a large language model (LLM), a knowledge graph including a plurality of graph triples, each graph triple including a first node, a second node, and an edge connecting the first node and the second node, where each first node corresponds to a first element type of the document, where each second node corresponds to a second element type of the document, and where each edge corresponds to a third element type of the document. The method may include receiving a second request to generate a generative response with the LLM. The method may include presenting a response to the second request, the response generated by the LLM based at least in part on the knowledge graph.
Methods, systems, apparatuses, devices, and computer program products are described. A system may obtain a set of documents for input into a query response system, generate a set of vector embeddings based on the set of documents and a semantic vector augmentation pipeline, and generate a set of knowledge graphs based on the set of documents and a knowledge graph augmentation pipeline, where each knowledge graph includes a set of multiple knowledge graph triplets. The system may receive a user query and augment the user query to generate an augmented prompt using at least one or more vector embeddings from the set of vector embeddings and one or more knowledge graph triplets from the set of knowledge graphs. The system may provide, to a large language model (LLM), the augmented prompt as an input and may receive, as an output of the LLM, a response to the augmented prompt.
A method of training a neural network model for improved embedding performance is provided. A first plurality of data samples are received via a data interface. A plurality of batches are generated, including a first batch that includes data samples associated with a single first task, and a second batch that includes data samples associated with a single second task. A training process is performed on the neural network model using the plurality of batches. The training includes computing a first loss based on a first loss objective function customized for the first task and a second loss based on a second loss objective function customized for the second task, and updating parameters of the neural network model based on the first loss and the second loss via backpropagation.
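A compact sketch of task-homogeneous batching with per-task loss objectives, assuming PyTorch; the model, tasks, and loss choices are illustrative rather than the patented configuration.

```python
# Sketch: each batch holds samples from a single task, so exactly one
# customized loss objective applies to the whole batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

loss_fns = {                         # one objective per task (assumed)
    "retrieval": nn.CosineEmbeddingLoss(),
    "classification": nn.CrossEntropyLoss(),
}

def train_step(batch):
    x, target, task = batch          # batch is homogeneous in `task`
    out = model(x)
    if task == "retrieval":
        # Pair the two halves of the batch as anchor/positive embeddings.
        anchor, positive = out.chunk(2)
        loss = loss_fns[task](anchor, positive, torch.ones(anchor.size(0)))
    else:
        loss = loss_fns[task](out, target)
    opt.zero_grad()
    loss.backward()                  # update parameters via backpropagation
    opt.step()
    return loss.item()
```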
A computing services environment may include a database system that stores database records for client organizations accessing computing services including a conversational chat assistant. The computing services environment may also include an application server receiving natural language user input for the conversational chat assistant and a generative language model interface providing access to one or more generative language models. The computing services environment may also include an orchestration and planning service configured to analyze the natural language user input via a generative language model of the one or more generative language models to identify a plurality of actions to execute via the computing services environment to fulfill an intent expressed in the natural language user input. The computing services environment may be configured to execute the plurality of actions to determine a natural language response message.
A computing services environment may include a database system storing a plurality of database records for a plurality of client organizations accessing computing services including a conversational chat assistant, an application server receiving user input for the conversational chat assistant, a generative language model interface providing access to one or more generative language models, an orchestration and planning service configured to identify a plurality of actions based on the user input and to execute the plurality of actions to determine a natural language response message, and/or a metadata framework. The metadata framework may specify information related to the conversational chat assistant. The metadata framework may include a definition associated with an action of the plurality of actions. The definition may include one or more inputs, one or more outputs, and one or more operations performed via the computing services environment.
A computing services environment may include a database system storing a plurality of database records for a plurality of client organizations accessing computing services including a conversational chat assistant. The computing services environment may also include an application server that receives user input for the conversational chat assistant, a generative language model interface, and an orchestration and planning service configured to identify one or more actions based on the user input, to execute the one or more actions to determine a natural language response message, and to determine a recommended action for selection via a conversational chat interface. The computing services environment may also include a communication interface configured to transmit the natural language response message and a user interface generation instruction executable by the client machine to provide a selection affordance for selecting the recommended action.
A computing services environment may include a database system storing database records for client organizations accessing computing services including a conversational chat interface, an application server providing access to the conversational chat interface, a metadata repository storing metadata entries describing and defining interaction data for interacting with agents, and an orchestration service configured to execute an orchestration process based on a natural language request message received via the conversational chat interface. An input prompt including the natural language request message and agent descriptions selected from the plurality of metadata entries may be determined and transmitted to a generative language model. A prompt completion including a selection of the designated agent based on the plurality of agent descriptions may be received from the generative language model. Novel text responsive to the natural language request message may be generated by transmitting a request to the designated agent.
A computing services environment may include a database system storing database records for client organizations accessing computing services including a conversational chat interface, an application server providing access to the conversational chat interface, a metadata repository storing metadata entries characterizing actions capable of being performed via the computing services environment, and an orchestration service configured to execute an orchestration process based on a natural language request message received via the conversational chat interface. An input prompt including the natural language request message and descriptions of actions selected from the metadata entries may be determined and transmitted to a generative language model. A prompt completion including a plan that includes a subset of the actions and a natural language description of the plan may be received from the generative language model and sent to a client machine via the conversational chat interface.
In some embodiments, a method receives a schema registry in a file. The schema registry aggregates schema files for a data model that is associated with a database system used by servers and consumer devices. The schema files are in a first software language that describes objects in the data model, and the schema registry is in a second software language used by the consumer devices. A first function in the schema registry is executed for an object to retrieve an original schema for the object in the schema files to create first software code for the object with the original schema for an application that uses the data model. A second function in the schema registry is executed for the object to generate a new object from the original schema for the object to create second software code for a new object with the original schema for the application.
An entity-level visibility statistic may be determined for a database entity in a database system based on one or more visibility rules providing access to instances of the database entity to one or more user accounts. A user-level visibility statistic quantifying a set of instances of the database entity accessible to a user account via the one or more visibility rules may be determined based at least in part on the entity-level visibility statistic. A request may be received by the user account to execute an input database query retrieving one or more of the instances of the database entity. A database object retrieval query including two or more data security subqueries evaluating accessibility of the one or more instances of the database entity and positioned based at least in part on the user-level visibility statistic may be determined based on the input database query.
A computing services environment may include a database system storing a plurality of database records for client organizations accessing computing services including a conversational chat assistant accessible via various communication channels. The computing services environment may also include a communication interface configured to receive an input message from a client machine via a communication channel, a generative language model interface providing access to one or more generative language models, and an orchestration and planning service. The orchestration and planning service may be configured to analyze the input message to determine a novel text passage via a generative language model, to determine novel text formatting information based on designated text formatting configuration information specifying one or more parameters for formatting text generated for transmission via the communication channel, and to transmit the novel text passage and the novel text formatting information to the client machine via the communication interface.
Techniques for providing application contextual information. One or more sets of database context identifiers corresponding to events that occur within the database are generated by the database. The one or more sets of database context identifiers have at least one application context field. A session identifier corresponding to a session to be monitored is sent from the application to the database. Information to be stored in the database with the session identifier is sent to the database. Database logs and application logs are correlated using at least the session identifier.
Disclosed herein are system, method, and computer program product embodiments for the design, architecture, and implementation of various aspects of an API gateway. A computer implemented method may access, by an API portal, a catalog comprising a plurality of APIs. The catalog may be configured to return a subset of the plurality of APIs based on a search. Each API at the catalog may include at least one feature comprising an API type. The method may then download one or more APIs from the plurality of APIs to the API portal. The method may further manage access to the API portal, where the access is associated with one or more users. The method may customize a layout of the API portal, where the layout includes at least one customizable feature comprising a color scheme. The method may then generate logs and metrics corresponding to each API at the API portal.
A computing services environment may include a database system storing database records for client organizations accessing computing services including a conversational chat interface, an application server providing access to the conversational chat interface, and an orchestration service configured to execute an orchestration process based on a natural language request message received via the conversational chat interface. An information enrichment and disambiguation process may be executed to determine candidate values corresponding to a text portion of the natural language request message. Novel clarification text requesting clarification of the candidate information may be determined and transmitted via the conversational chat interface. Updated information may be determined based on the candidate information and clarification input received via the conversational chat interface. Novel response text responsive to the natural language request message may be determined based on the updated information.
Embodiments described herein provide a method for building a hierarchical structure of a plurality of neural network models for performing a task. The method includes the following operations. A task instruction is received via a data interface. A first neural network model generates a first sub-task from the task instruction. A second neural network model is selected from the plurality of the neural network models based on the first sub-task. A first connection is built via a first API, between the first neural network model and the second neural network model. The first neural network model generates a first sub-task package in a format compliant with the second neural network model. A first output is received via the first connection from the second neural network model that executes the first sub-task package. The first neural network model generates a second sub-task based on the task instruction and the first output.
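Reduced to a sketch, the planner/worker hand-off above might proceed as follows; the API is modeled as a plain function call, and the sub-task package format is an assumed JSON shape.

```python
# Hypothetical sketch of hierarchical model orchestration. The planner
# object, worker registry, and package layout are all assumptions.
import json

def orchestrate(task_instruction: str, planner, registry: dict) -> str:
    output = None
    for _ in range(2):  # first and second sub-task, per the abstract
        # The first model derives the next sub-task from the instruction
        # (and, on later rounds, from the previous worker's output).
        sub_task = planner.next_sub_task(task_instruction, output)
        # Select the second model whose capabilities match the sub-task.
        worker = registry[sub_task["skill"]]
        # Package the sub-task in a format compliant with that model.
        package = json.dumps({"input": sub_task["input"],
                              "format": worker.expected_format})
        # "First connection" via an API, here a direct method call.
        output = worker.execute(package)
    return output
```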
Techniques are disclosed relating to automating authentication decisions for a multi-factor authentication scheme based on computer learning. In disclosed embodiments, a mobile device receives a first request corresponding to a factor in a first multi-factor authentication procedure. Based on user input approving or denying the first request, the mobile device sends a response to the first request and stores values of multiple parameters associated with the first request. The mobile device receives a second request corresponding to a factor in a second multi-factor authentication procedure where the second request is for authentication for a different account than the first request. The mobile device automatically generates an approval response to the second request based on performing a computer learning process on inputs that include values of multiple parameters for the second request and the stored values of the multiple parameters associated with the first request. The approval response is automatically generated and sent without receiving user input to automate the second request.
Systems, methods, and devices facilitate generation of application programming interface (API) objects. Methods may discover, using one or more components of a cloud computing platform, ingress and egress API traffic associated with one or more hosted applications executing on the cloud computing platform. Methods may collect API traffic data for a service used by the one or more hosted applications, where the API traffic data is associated with calls to the service made by a first client application executing on a device external to the cloud computing platform. Methods may form one or more API objects based on the API traffic data, the one or more API objects being formed based, at least in part, on one or more API specifications. Methods may provide, based on a request from a second client application, the one or more API objects.
In some embodiments, a method determines a first representation for an entity that received a request. Second representations are searched in a prompt store to retrieve a second representation that is determined to match the first representation. A prompt template for a model is associated with the second representation. The method searches for relevant documents for the request in a knowledge base store and retrieves information from a document that is considered relevant to the request. The information provides context for the request. The method inserts at least a portion of the information into the prompt template to generate a prompt that is based on the context and submits the prompt to the model to receive a response. The method responds to the request using the response.
Database systems and methods are provided for managing usage of large language models (LLMs). One method involves dividing text data into primary chunks using input criteria associated with an LLM service, generating secondary chunks by merging respective pairs of adjacent primary chunks, and inputting a respective secondary chunk to the LLM service when a semantic similarity between a conversational input to a user interface and the respective secondary chunk of the one or more secondary chunks is greater than a threshold. The LLM service generates response data responsive to the conversational input based at least in part on a subset of the text data associated with the respective secondary chunk, and a response is provided to the conversational input at the user interface based at least in part on the response data generated by the LLM service.
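The chunking scheme above can be sketched in a few lines; the 500-character input criterion, the sliding pairing of adjacent chunks, the caller-supplied similarity scorer, and the 0.8 threshold are all illustrative assumptions.

```python
def primary_chunks(text: str, max_len: int = 500) -> list[str]:
    # Input criteria for the LLM service modeled as a size cap.
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

def secondary_chunks(primary: list[str]) -> list[str]:
    # Each secondary chunk merges a pair of adjacent primary chunks,
    # so content near a cut point appears intact in some chunk.
    return [primary[i] + primary[i + 1] for i in range(len(primary) - 1)]

def select_chunks(conversational_input: str, chunks: list[str],
                  similarity, threshold: float = 0.8) -> list[str]:
    # similarity(a, b) -> float is a hypothetical semantic scorer; only
    # sufficiently similar chunks are input to the LLM service.
    return [c for c in chunks
            if similarity(conversational_input, c) > threshold]
```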
Embodiments described herein provide a unified LLM training pipeline that handles the diversity of data structures and formats involved in LLM agent trajectories. The pipeline is specifically designed to transform incoming data into a standardized representation, ensuring compatibility across varied formats. Furthermore, the collected data undergoes a filtering process to ensure high-quality trajectories, adding an additional layer of refinement to the dataset. In this way, the training pipeline not only unifies trajectories across environments but also enhances the overall quality and reliability of the collected data for LLM training.
A database system in a computing system may store data records including communication contact information for accounts. A communication package repository may store a communication package definition configured by an external entity and defining access information for a communication channel outside of the computing system via an external computing system managed by the external entity. A tenant space may store packages installed for a tenant. A communication interface may expose a communication access service receiving from the external computing system a request to establish communication with the tenant from a remote computing device. An agent client machine interface may create a communication session between the remote computing device and an agent client machine authenticated to an agent account.
A request to establish a communication session from an end point to a tenant of a computing services environment may be received from an external computing system managed by an external entity in accordance with a designated communication package definition configured by the external entity and defining access information for a designated communication channel outside of the computing services environment. An agent account of a plurality of agent accounts associated with the tenant may be determined based on routing configuration information specified in the designated communication package definition. The communication session may be established via the designated communication channel from the end point through the external computing system and the computing services environment to a first client machine authenticated to the agent account. One or more messages may be transmitted from the end point to the first client machine via the communication session.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
A communication session may be established via a designated communication channel from an end point through an external computing system and the computing services environment to a client machine authenticated to a database system account for an agent of a tenant of the computing services environment. The communication session may be established in accordance with a designated communication package definition configured by an external entity and defining protocol information for the designated communication channel. Messages may be sent from the client machine to the end point through the computing services environment via the communication session in accordance with the designated communication package definition. Transmitting the messages may include receiving an indication of an event detected at an event handler in a user interface component included in a user interface presented at the client machine.
Systems and methods for receiving, at a server that includes a reschedule handler, a request for a service resource from a requestor. The server determines one or more available resources of the service resource to handle the request. At least one match between one or more criteria of the request and the availability of the one or more resources of the service resource is determined. A time slot for the requestor with the service resource is scheduled based on the determined at least one match and based on an acceptance of the request by the service resource that is received by the server. The server transmits the scheduled time slot with the service resource to the requestor.
Embodiments described herein provide a Transformer architecture for time series data forecasting. Specifically, the Transformer based time series model may be built on a transformer architecture having one or more multi patch size projection layers in the encoder and the decoder, and an any-variate attention module. The Transformer based time series model may receive multivariate time series and consider all variates as a single sequence. Patches of the input are subsequently projected into vector representations via a multi patch size input projection layer. The output tokens of forecasted time series data are then decoded via the multi patch size output projection layers into the parameters of a mixture distribution.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
44.
UPGRADING MULTI-INSTANCE SOFTWARE USING ENFORCED COMPUTING ZONE ORDER
Techniques for preventing deadlock when upgrading a plurality of instances of a software service that is distributed across multiple different computing zones. Upgrade software executing on a cloud computer system receives an upgrade request to upgrade the plurality of instances. Respective upgrade processes are initiated in parallel. Node acquisition portions of the respective upgrade processes have a constraint on parallelization, as they are performed using a common upgrade procedure in which a given instance is upgraded by acquiring nodes in different ones of the computing zones according to a specified order. After acquiring the nodes according to the specified order, an updated instance is deployed to the acquired nodes to update the given instance. The acquiring of the nodes may be performed by node-securing pods in some embodiments, with the specified order enforced with affinity and anti-affinity rules.
Artificial intelligence (AI) technology can be used in combination with composable communication goal statements to facilitate a user's ability to quickly structure story outlines in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. Narrative analytics that are linked to communication goal statements can employ a conditional outcome framework that allows the content and structure of resulting narratives to intelligently adapt as a function of the nature of the data under consideration. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired communication goal.
Techniques are disclosed relating to database configuration settings overrides. In some embodiments, a database system stores a set of default configuration settings that control operation of the database system. The database system receives a query requesting data from the database system, and metadata about the query. The database system determines, based on the query and the metadata, that a configuration settings override has been specified for the query, where the configuration settings override indicates that one or more of the default configuration settings are to be replaced with one or more configuration settings specific to the query. In response to the determining that a configuration settings override has been specified, the database system executes the query using the one or more specific configuration settings.
Techniques are disclosed that relate to skip lists. A computer system maintains a skip list having towers of varying depths and entries storing pointers to other towers. A first tower includes an entry at a particular depth storing a pointer to access an entry of a second tower. The pointer includes first similarity information indicating an amount of similarity between a key of the first tower and a key of the second tower. The computer system performs a traversal of the skip list for a search key. The computer system generates second similarity information indicating an amount of similarity between the first tower's key and the search key. Based on a comparison involving the first and second similarity information and without accessing the second tower to obtain information about its key, the computer system determines whether to traverse to the second tower using the pointer or descend the first tower.
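The pointer-embedded similarity test can be sketched by modeling similarity information as the length of the shared key prefix (the actual encoding in the claims may differ). With towers in ascending key order, comparing the stored prefix length against the one computed for the search key resolves most traversal decisions without touching the second tower; only the equal case forces a key fetch.

```python
# Sketch assuming first_key < second_key and similarity = shared prefix length.
from enum import Enum

class Decision(Enum):
    TRAVERSE = 1   # follow the pointer to the second tower
    DESCEND = 2    # stay on the first tower, drop a level
    FETCH_KEY = 3  # similarity alone is inconclusive; read the key

def lcp(a: bytes, b: bytes) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def decide(first_key: bytes, search_key: bytes, ptr_sim: int) -> Decision:
    s = lcp(first_key, search_key)          # "second similarity information"
    if s > ptr_sim:
        # search_key matches first_key past the point where the second
        # tower's key diverges upward, so search_key < second_key.
        return Decision.DESCEND
    if s < ptr_sim:
        # second_key still agrees with first_key at position s, so the
        # byte at s orders search_key against both towers at once.
        return (Decision.TRAVERSE
                if search_key[s:s + 1] > first_key[s:s + 1]
                else Decision.DESCEND)
    # Equal divergence points: cannot order without the actual key.
    return Decision.FETCH_KEY
```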
Techniques for generating a summary using a machine-learning model native to the operating system running on a user device are discussed herein. The communication platform may receive an instruction to generate a summary to be displayed to a user profile. In such cases, the communication platform may determine whether to generate the summary using on-device systems or using systems in a server of the communication platform (e.g., a device separate from the user device). Based on determining to generate the summary using the on-device systems, the communication platform may identify data to summarize. The communication platform may input the data into a machine-learning model (or large language model (LLM)) residing within the operating system of the user device and receive, as output, a summary. In such cases, the communication platform may cause the summary to be displayed via the user interface of the user device associated with the user profile.
Systems, devices, and techniques are disclosed for network security policy generation and distribution. A security policy written using a Domain Specific Language (DSL) for network security may be received. The security policy may be associated with a service owner and a control plane. A representation of the security policy may be generated from the security policy. A configuration bundle of the service owner may be updated with the representation of the security policy. The security policy may be determined to be approved. A rule set may be generated from the representation of the security policy. A differential between the rule set and a current rule set may be determined. A security component associated with the control plane may be configured based on the differential.
A computer-implemented method is disclosed for predicting, based on a previous usage of a cloud-based computing resource by a number of users of the cloud-based computing resource, a future usage of the cloud-based computing resource. The method includes predicting, based on the predicted future usage of the cloud-based computing resource, an anomaly event at the cloud-based computing resource. The method also includes implementing a first anomaly mitigation action, based on the prediction of the anomaly event at the cloud-based computing resource and re-evaluating a status of the anomaly event at the cloud-based computing resource after the implementation of the first anomaly mitigation action. The method further includes implementing a second anomaly mitigation action at the cloud-based computing resource, based on the re-evaluation of the status of the anomaly event.
H04L 41/069 - Management of faults, events, alarms or notifications using logs of notificationsPost-processing of notifications
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
Methods, systems, apparatuses, devices, and computer program products are described. An application server or a data processing system may generate a set of candidate natural language queries that correspond to a data object (e.g., document, report, asset) based on inputting a set of metadata associated with the data object into a large language model (LLM). The system may embed the candidate natural language queries into a first set of vectors, where a query space may include a collection of the first set of vectors related to the data object. In addition, the system may embed a natural language query received from a user into a second vector. The system may perform a vector-space comparison of the second vector to the first set of vectors or the query space, and retrieve a data object associated with the natural language query based on the comparison.
A method to approximate a segment count for a normalized dataset. The method includes sampling items in the primary database object to generate a sample, executing a segmentation count query on the sample to determine how many items in the sample satisfy a set of segment criteria, determining an error value based on an estimated sample size of the sample, a number of items in the sample that satisfy the set of segment criteria, and a confidence level value, determining a range of counts for the segment count based on the number of items in the sample that satisfy the set of segment criteria, the error value, and a total number of items in the primary database object, and providing the range of counts representing an approximated segment count for the normalized dataset.
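Under the common normal-approximation reading of such sampling estimates, the range of counts might be derived as below; the patent's exact error formula may differ, and z = 1.96 models a 95% confidence level.

```python
import math

def approximate_segment_count(sample_size: int, matches: int,
                              total_items: int, z: float = 1.96):
    """Return (low, high) bounds on the segment count; z = 1.96
    corresponds to a 95% confidence level."""
    p = matches / sample_size                  # sample match rate
    # Normal-approximation standard error of a sampled proportion.
    err = z * math.sqrt(p * (1 - p) / sample_size)
    low = max(0, int((p - err) * total_items))
    high = min(total_items, math.ceil((p + err) * total_items))
    return low, high

# e.g. 120 of 1,000 sampled items satisfy the segment criteria and the
# primary database object holds 2,000,000 items:
print(approximate_segment_count(1_000, 120, 2_000_000))  # (199717, 280283)
```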
Methods, systems, apparatuses, devices, and computer program products are described. An application server or a data processing system may convert a set of metadata associated with a data object (e.g., document, record, asset) from a first structured format into a second serialized format. The set of metadata in the second serialized (e.g., unstructured) format may be input in a large language model (LLM). The LLM may generate a first natural language summary associated with the data object based on the set of metadata. After receiving a natural language query from a user, the LLM may generate a second natural language summary associated with the data object based on the natural language query. The natural language summaries may be vectorized, and the vectorized versions may be compared. Based on the comparison, an indication of the data object corresponding to the natural language query may be displayed.
Techniques are disclosed relating to implementing an end-to-end orchestration for a datacenter on a cloud platform. A datacenter may be orchestrated on a cloud platform according to a declarative specification that describes dependencies between datacenter entities (e.g., services) in the datacenter. Some datacenter entities may have execution dependencies, which include activities, steps, or events that need to be completed for the datacenter entities to be ready for orchestration. Accordingly, in order for an orchestration workflow to be able to execute from beginning to end without interruption, all the execution dependencies for all the datacenter entities in the orchestration workflow need to be completed. The techniques disclosed include automatically initiating execution of the execution dependencies and waiting for indications that the execution activities are completed before executing the orchestration workflow.
Techniques are disclosed relating to implementing an incremental update to an existing datacenter on a cloud platform. The datacenter may have been built on a cloud platform according to a declarative specification that describes dependencies between datacenter entities in the datacenter. When an update is requested for the datacenter (e.g., by a customer or other entity), the system determines datacenter entities that are being changed in association with the update and execution dependencies associated with the update request. The system then initiates execution of the execution dependencies and waits for the execution dependencies to be completed. Once the execution dependencies are completed, the system initiates orchestration of the datacenter in order to update the datacenter on the cloud platform with the addition or removal of datacenter entities on the datacenter.
Techniques are disclosed relating to implementing a statement-level INSTEAD OF trigger. In one embodiment a computer system stores trigger information associated with a statement-level database trigger executable to initiate execution of at least one trigger instruction for a database instead of performing a particular database operation, on a database view, specified by a database operation statement. The computer system receives a first database operation statement specifying performance of the particular database operation on the database view and identifies a set of target rows, within the database view, targeted by the first database operation statement. In addition, the computer system generates a reference table associated with the database view, where the reference table includes rows corresponding to the target rows. The computer system executes the statement-level database trigger instead of executing the database operation statement, where executing the statement-level database trigger includes accessing the reference table.
Techniques are described for securing secrets in software build workflows. In some implementations, build instructions call for execution of a first program module and a second program module, where the first program module has been approved to make a privileged request, but the second program module has not. The first program module can be stored in a trusted repository, separately from the second program module. When the first program module is loaded for execution, a cryptographic signature can be validated to determine that the first program module is authentic and as a condition for passing a privileged credential to the first program module. The second program module has no access to the privileged credential. Instead, when the second program module is loaded for execution, a determination can be made whether the second program module makes any privileged requests. Any privileged requests from the second program module will not be fulfilled.
H04L 9/32 - Arrangements for secret or secure communicationsNetwork security protocols including means for verifying the identity or authority of a user of the system
Embodiments described herein provide bootstrapping language-image pre-training for unified vision-language understanding and generation (BLIP), a unified VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP enables a wider range of downstream tasks, improving on both shortcomings of existing models.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
A computer-implemented method is disclosed for predicting, based on a previous usage of a cloud-based computing resource by a number of users, a future usage of the cloud-based computing resource and then predicting, based on the predicted future usage, an anomaly event at the computing resource. The method also includes identifying a top contributing user that is responsible for the anomaly event and throttling an access of the top contributing user to the computing resource. The method further includes evaluating a speed of data requests received at the computing resource from the top contributing user after the throttling, and a utilization level of the computing resource. The method also includes dynamically adjusting the speed of data requests received at the computing resource, based on the evaluation of the utilization level of the computing resource, to maintain the utilization level of the computing resource within a predetermined target range.
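One way to picture the dynamic adjustment described above is a simple proportional controller that nudges the permitted request rate until utilization re-enters the target band; the band, gain, and floor below are illustrative assumptions.

```python
# Hedged sketch of throttling the top contributing user's request rate
# to keep resource utilization within a predetermined target range.
def adjust_rate(current_rate: float, utilization: float,
                target_low: float = 0.6, target_high: float = 0.8,
                gain: float = 0.5, min_rate: float = 1.0) -> float:
    if utilization > target_high:
        # Over budget: throttle proportionally to the overshoot.
        current_rate *= 1.0 - gain * (utilization - target_high)
    elif utilization < target_low:
        # Headroom available: relax the throttle gradually.
        current_rate *= 1.0 + gain * (target_low - utilization)
    return max(min_rate, current_rate)

rate = 100.0
for u in (0.95, 0.85, 0.72, 0.55):   # observed utilization samples
    rate = adjust_rate(rate, u)
    print(f"utilization={u:.2f} -> allowed requests/s={rate:.1f}")
```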
A computer-implemented method is disclosed for predicting a future usage of a cloud-based computing resource based on a previous usage of the resource by users, and predicting an anomaly event at the resource. The method also includes identifying a top contributing user responsible for the anomaly event, throttling an access of the top contributing user, evaluating a speed of data requests received from the top contributing user, and maintaining a utilization level of the resource within a predetermined target range. The method further includes dynamically controlling the speed of data requests based on the evaluation of the speed of data requests and a controlling speed of data requests recommended by a first artificial intelligence model. The recommendations of the first artificial intelligence model may be validated by a human-reasoning-based model configured to monitor and mitigate a risk associated with a counter-intuitive recommendation of the first artificial intelligence model.
A computer system may determine to perform an upgrade operation to deploy a second database application that is a different version than a first database application associated with a first database catalog that stores catalog objects. The computer system performs the upgrade operation, including preparing a second database catalog and deploying the second database application to manage a database based on the second database catalog. To prepare the second database catalog, the computer system may create the second database catalog and store, in the second database catalog, system catalog objects that are associated with the second database application. The computer system may further identify, from the catalog objects stored in the first database catalog, user catalog objects that were created by users of the database and then copy the identified user catalog objects from the first database catalog to the second database catalog.
Techniques for generating graphical elements via a communication platform are discussed herein. For example, one or more machine-learning models associated with a communication platform may be configured to receive, as input and from a user of the communication platform, a sentiment and/or a graphical element. The machine-learning model may be trained, using prior natural language statements and prior confidence levels associated with previous graphical elements, to output one or more graphical elements associated with the input. The one or more graphical elements may be shared via the communication platform and used to accurately and effectively convey thoughts, emotions, reactions, and ideas, for example.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Disclosed herein are system, method, and computer program product embodiments for implementing a declarative authentication engine. The system generates a schema that includes a field and has a format defined by an authentication protocol associated with a service. The system then validates a connection request based on comparing the field of the generated schema to a field of the connection request for the service, wherein the connection request is formatted according to the schema and received from a client device. The system then provides the client device access to the service according to the connection request based on a result of the validating.
A method and system for classifying a triage-related message related to a software application security technical problem is provided. A triage-related classification is generated for the triage-related message by applying a processor-implemented machine learning model that has been trained to analyze the text of the triage-related message. The generated triage-related classification is sent to a user for remediating the software application security technical problem.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Techniques are disclosed relating to database query optimizers. In some embodiments, a system receives, from a query optimizer, a plurality of query plans for a database maintained by the database system. The system retrieves a set of database statistics for the database and generates, via a data synthesizer, a plurality of synthetic datasets, where generating a given synthetic dataset is performed based on a given query plan of the plurality of query plans and the set of database statistics, and includes generating a plurality of synthetic data tuples. The system executes the plurality of query plans on the plurality of synthetic datasets and updates the query optimizer based on results of executing the plurality of query plans on the plurality of synthetic datasets. The disclosed data synthesis may advantageously improve query performance due to more efficient query plans being selected for execution of requested queries.
Techniques are disclosed relating to determining query plans for execution by database systems. In various embodiments, a query optimizer determines a first query plan to implement a query requesting data from a database system. The determining includes selecting one of a plurality of query plans evaluated based on a cost analysis and caching plan fragments of the unselected query plans. The database system can then determine a second query plan for the query by replacing a plan fragment in the first query plan with one of the cached plan fragments of the unselected query plans.
A runtime agent that is executable on a virtual machine may obtain one or more identifiers that correspond to one or more software classes from a first configuration file that is configured for the runtime agent. The runtime agent may monitor for loading of the one or more software classes by a first computer program that is being executed on the virtual machine. Further, the runtime agent may execute one or more actions based on detecting the loading of the one or more software classes by the first computer program where the one or more actions may impact the execution of the first computer program on the virtual machine.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
Techniques for determining video transcoding setting(s) for a video content based on information associated with a video content request and encoding the video contents into one or more encoded video contents based on the video transcoding settings are discussed herein. For example, a communication platform may receive a request associated with a video content. The communication platform may determine, based at least in part on the request, device information associated with one or more receiver devices. The communication platform may determine, based at least in part on the device information associated with the receiver devices provided by the communication platform, one or more video transcoding settings associated with the video content. The communication platform may further send one or more encoded video contents encoded based on the one or more video transcoding settings to the receiver devices.
Systems, devices, and techniques are disclosed for verification of backup data across a data pipeline. Records from a first storage may be received at a first end of a data pipeline. The records may be hashed to generate first hashes. A first hash tree may be generated from the first hashes. The records may be received at a second end of the data pipeline. Bits of bitmaps that correspond to the records may be set. The records may be hashed to generate second hashes. The records may be stored in a second storage. A second hash tree may be generated from the second hashes. Using the bitmaps, whether all of the records or any duplicate records were received may be determined. The first hash tree and the second hash tree may be compared to determine if any of the records stored in the second storage are corrupt.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
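A condensed sketch of the verification scheme in the entry above: both pipeline ends hash every record and fold the hashes into a Merkle root, while a bitmap keyed by record index catches drops and duplicates independently of corruption. SHA-256 and the index-keyed bitmap are assumed details.

```python
import hashlib

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    level = list(leaf_hashes) or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        if len(level) % 2:              # odd count: promote the last hash
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_pipeline(sent: list[bytes],
                    received: list[tuple[int, bytes]]) -> bool:
    """sent: records in order at the first end; received: (record
    index, payload) pairs observed at the second end."""
    bitmap = [False] * len(sent)
    for idx, _ in received:
        if bitmap[idx]:
            return False                # duplicate record detected
        bitmap[idx] = True
    if not all(bitmap):
        return False                    # a record never arrived
    first = merkle_root([hashlib.sha256(r).digest() for r in sent])
    ordered = [p for _, p in sorted(received)]
    second = merkle_root([hashlib.sha256(r).digest() for r in ordered])
    return first == second              # hash tree mismatch => corruption
```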
70.
Scalable Mapping for Database Extent Storage on Physical Nodes
Techniques are disclosed relating to storing database extents in physical storage nodes. To store the extents, a physical storage node of a computer system first accesses assignment metadata, which includes determining 1) virtual groupings of database extents assigned to the physical storage node and 2) database extents associated with the determined one or more virtual groupings. For a given database extent, a corresponding virtual grouping is determinable by performing a first hashing operation that uses an identifier for the given database extent. The physical storage node then accesses and stores the determined database extents. The physical storage node can then service requests for data of the database system that is stored at the physical storage node.
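The two-level mapping above can be sketched as a hash of the extent identifier into a virtual grouping plus a metadata lookup from grouping to node; the hash choice and grouping count are assumptions.

```python
import hashlib

NUM_VIRTUAL_GROUPS = 1024   # assumed grouping count

def virtual_group(extent_id: str) -> int:
    # First hashing operation: extent identifier -> virtual grouping.
    digest = hashlib.sha256(extent_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_VIRTUAL_GROUPS

def node_for_extent(extent_id: str, assignment: dict[int, str]) -> str:
    # Assignment metadata maps virtual grouping -> physical node.
    return assignment[virtual_group(extent_id)]

def extents_for_node(node: str, extent_ids: list[str],
                     assignment: dict[int, str]) -> list[str]:
    # A node enumerates the extents it must store by filtering on the
    # virtual groupings assigned to it in the metadata.
    mine = {g for g, n in assignment.items() if n == node}
    return [e for e in extent_ids if virtual_group(e) in mine]
```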
Techniques are disclosed relating to migrating database extents between physical storage nodes. A database system stores current and new assignment metadata mapping virtual groupings of database extents to physical storage nodes, the new assignment metadata reflecting a data migration of extents between the plurality of physical storage nodes. During the migration, the database system receives 1) read requests, which it responds to by reading data from a first physical storage node identified using the current assignment metadata and 2) write requests, which it responds to by writing data to a second physical storage node identified using the new assignment metadata. Upon the migration being complete, the system identifies physical storage nodes accessed by subsequent read and write requests using the new assignment metadata.
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
72.
SYSTEM AND METHOD FOR GENERATING CRYPTOGRAPHIC SIGNATURE FOR ARTIFICIAL INTELLIGENT GENERATED CONTENT
Embodiments described herein provide a method for content transmission using a cryptographic signature. The method includes: generating, by a neural network model employing a plurality of state parameters and implemented on one or more processors, an output content; generating a string of hash values based on the output content; creating a cryptographic signature by encrypting the string of hash values and one or more state parameters of the neural network model using a private key; embedding the cryptographic signature in the output content; and transmitting, via a communication interface, the output content embedded with the cryptographic signature to a destination server.
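A minimal Python sketch of this signing flow appears below, using Ed25519 from the `cryptography` package as a stand-in; the abstract does not specify the signature scheme, and the serialization of the state parameters is an assumption.

```python
# A minimal sketch of signing generated content plus model state parameters;
# Ed25519 and the JSON serialization are illustrative choices.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_generated_content(output: bytes, state_params: dict,
                           key: Ed25519PrivateKey) -> bytes:
    content_hash = hashlib.sha256(output).hexdigest()      # string of hash values
    payload = json.dumps({"hash": content_hash,
                          "state": state_params}, sort_keys=True).encode()
    return key.sign(payload)                               # cryptographic signature

key = Ed25519PrivateKey.generate()
sig = sign_generated_content(b"model output", {"temperature": 0.7}, key)
# The signature would then be embedded in the output content before transmission.
```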
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
73.
Automated Retries for Orchestration of a Datacenter on a Cloud Platform
Techniques are disclosed relating to implementing automated retries during orchestration of a datacenter on a cloud platform. Generating an orchestration workflow for the datacenter may include generating an aggregate pipeline for the orchestration. The aggregate pipeline includes instances of datacenter entity pipelines that include stages for provisioning and deployment of datacenter entities. The disclosed techniques include adding retry stages to the datacenter entity pipelines that are automatically invoked in the event of failure of a datacenter entity pipeline. The retry stages are placed at the end of individual datacenter entity pipelines, and conditional expressions are included that invoke retry strategies defined by owners of the datacenter entities.
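The placement of conditional retry stages can be illustrated with a short Python sketch; the stage names, condition values, and retry-strategy label are assumptions.

```python
# A minimal sketch of appending a conditional retry stage to a datacenter
# entity pipeline; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    condition: str = "always"   # when the stage runs: "always" or "on_failure"

@dataclass
class EntityPipeline:
    entity: str
    stages: list[Stage] = field(default_factory=list)
    retry_strategy: str = "rerun_failed_stage"   # defined by the entity's owner

def add_retry_stage(pipeline: EntityPipeline) -> None:
    """Place a retry stage at the end, invoked only if an earlier stage failed."""
    pipeline.stages.append(Stage(name=f"retry:{pipeline.retry_strategy}",
                                 condition="on_failure"))

p = EntityPipeline("service-cell-1", [Stage("provision"), Stage("deploy")])
add_retry_stage(p)
print([s.name for s in p.stages])   # ['provision', 'deploy', 'retry:rerun_failed_stage']
```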
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
74.
DETECTING MISCONFIGURATION OF GUEST ACCOUNT SECURITY PERMISSIONS
Systems, devices, and techniques are disclosed for detecting misconfiguration of guest account security permissions. User personas may be generated from user activity data generated by access using guest accounts to controllers of a cloud computing server system. Clusters of user personas may be generated from the user personas. Anomalous user personas may be identified based on the clusters of user personas. A database query that was made to a database of the cloud computing server system, that is associated with an identified anomalous user persona, and that requested sensitive data may be identified from database query logs. A size of a response from the database to the identified database query may be determined from the database query logs. The size of the response may indicate that the response included the sensitive data. The cloud computing server system may prevent use of the guest account associated with the anomalous user persona.
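As an illustration of the clustering step, the following Python sketch clusters toy persona vectors with k-means and flags personas that land in very small clusters; the feature set, cluster count, and size threshold are assumptions, not the disclosed model.

```python
# A minimal sketch of persona clustering and outlier flagging; toy data.
import numpy as np
from sklearn.cluster import KMeans

# One row per guest-account persona, e.g. [logins/day, queries/day, MB read/day].
personas = np.array([[2, 10, 12], [3, 12, 15], [2, 9, 11], [3, 11, 14],
                     [40, 900, 52000]])        # last persona reads far too much

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(personas)
labels = kmeans.labels_
sizes = np.bincount(labels)

# Personas landing in very small clusters are treated as anomalous.
anomalous = np.where(sizes[labels] < 0.5 * len(personas))[0]
print("anomalous persona indices:", anomalous)  # [4]
```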
A system may include a communication interface receiving information characterizing a customer of a first database tenant of a plurality of database tenants accessing customer relationship management services. The system may also include a database system storing one or more database records that include the information characterizing the customer and are stored in a profile corresponding to the customer. The database system may receive a request to determine content to provide to the customer in association with an interaction between the customer and a second database tenant. A recommended content item may be determined based at least in part on the one or more database records. A message including an instruction for presenting the recommended content item in a user interface may be transmitted from the database system to a client machine associated with the customer.
Techniques are disclosed that pertain to upgrading a database application. A computer system may determine to upgrade a database application from a current version to a different version. The current version is associated with a first instance of a database catalog that defines the structure of a database that is managed by that database application, and the first instance is associated with a first catalog signature that is indicative of the first instance of the database catalog. The computer system generates a second catalog signature that is indicative of a second instance of the database catalog that is associated with the different version. The computer system compares the first catalog signature and the second catalog signature to determine whether the database catalog changed between the current and different versions. Based on the comparing, the computer system then selects one of multiple upgrade processes performable to upgrade the database application to the different version and performs the selected upgrade process.
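One plausible way to realize the catalog signatures is to hash a canonicalized schema dump, as in the Python sketch below; the catalog shape and the two upgrade-process labels are illustrative assumptions.

```python
# A minimal sketch of catalog-signature comparison for upgrade-path selection.
import hashlib, json

def catalog_signature(catalog: dict) -> str:
    """Canonicalize the catalog (tables -> columns) and hash it."""
    canonical = json.dumps(catalog, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

current = {"accounts": ["id", "name"], "orders": ["id", "total"]}
new = {"accounts": ["id", "name"], "orders": ["id", "total", "currency"]}

if catalog_signature(current) == catalog_signature(new):
    upgrade = "fast_path"       # catalog unchanged: skip catalog rebuild
else:
    upgrade = "full_upgrade"    # catalog changed: run schema migration steps
print(upgrade)                  # full_upgrade
```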
Disclosed herein are system, method, and computer program product embodiments for secure user interface (UI) customization in an embedded application. An embodiment operates by generating an embedding code and an application configuration corresponding to an updated version of an embedded code of an embedded web application in response to a determination that the embedded web application was published successfully. The embodiment then stores the embedding code, the application configuration, and a particular version of a web component at an application server. The particular version of the web component is designated for use by the embedded web application during runtime of the embedded web application. The embodiment then configures an application endpoint to prevent the embedded web application from accessing, during runtime of the embedded web application, another version of the web component that is different from the particular version of the web component stored at the application server.
A method for testing connectivity comprises receiving, by one or more computing devices, a request for a connectivity test, and determining, by the one or more computing devices, whether a point-to-point connectivity test or a service-to-service connectivity test is to be performed. The method further comprises initiating, by the one or more computing devices, the connectivity test in response to the request and based on the determining, where initiating the connectivity test comprises invoking a connectivity testing mechanism. The method further comprises displaying, by the one or more computing devices, a location of a connectivity issue based on the connectivity test, and displaying, by the one or more computing devices, a next step to solve the connectivity issue based on the connectivity test.
A method to manage domain-based security profiles in a content delivery network (CDN) is disclosed. The method includes receiving security events detected by one or more security solutions implemented by one or more CDN instances of the CDN and determining, for each of a plurality of domains, a risk score for the domain based on the security events. The method further includes determining possible next-level domains for a CDN instance of the CDN, determining an updated order of an auto-adjusting list maintained by the CDN instance based on the risk scores for the domains included in the auto-adjusting list and the possible next-level domains for the CDN instance, and sending an update to the CDN instance to cause the CDN instance to reorder the auto-adjusting list to reflect the updated order. The order of the auto-adjusting list indicates an eviction priority for the domains included in the auto-adjusting list.
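The auto-adjusting list can be illustrated with a short Python sketch; the capacity, the scores, and the convention that the lowest-risk domain is evicted first are assumptions for illustration.

```python
# A minimal sketch of a risk-ordered, auto-adjusting eviction list.
risk_scores = {"a.example": 0.9, "b.example": 0.2, "c.example": 0.6}

class AutoAdjustingList:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.domains: list[str] = []

    def reorder(self, scores: dict[str, float]) -> None:
        # Lowest-risk domains sort to the front, so they are evicted first.
        self.domains.sort(key=lambda d: scores.get(d, 0.0))

    def add(self, domain: str, scores: dict[str, float]) -> None:
        if len(self.domains) >= self.capacity:
            self.domains.pop(0)        # evict the current lowest-risk domain
        self.domains.append(domain)
        self.reorder(scores)

lst = AutoAdjustingList(capacity=2)
for d in ["a.example", "b.example", "c.example"]:
    lst.add(d, risk_scores)
print(lst.domains)   # ['c.example', 'a.example'] -- low-risk b.example evicted
```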
A method and apparatus for autonomous container management configuration changes to container clusters during runtime and autonomous configuration-based release orchestration. A release manager manages a staggered feature release that includes staggers, stagger order, and container clusters included in each stagger. A logging service manages logs generated by the container clusters and/or app containers. An update service determines container management configuration changes based on analysis of data provided by the logging service. A shared engine attempts to implement instructions provided by the release manager and the update service at different times. The release manager receives an indication of success or failure of the attempted deployment of the feature release to the current stagger. The release manager, responsive to the indication of success or failure, determines to perform one of a plurality of actions, including attempting to deploy the feature release to the next stagger, and rolling back.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/0859 - Retrieval of network configuration; Tracking network configuration history by keeping history of different configuration generations or by rolling back to previous configuration versions
H04L 43/062 - Generation of reports related to network traffic
A method and apparatus for autonomous configuration-based release orchestration. A first engine obtains stagger configuration data that includes an indication of the container clusters in each stagger and a stagger order, selects a current stagger based on the order, attempts to deploy a feature release to the current stagger by causing an app config update to be sent to a second engine within each container cluster of the current stagger, and receives an indication of success or failure of the attempted deployment of the feature release to the current stagger. Responsive to the indication of success or failure, the first engine performs one of a plurality of actions that include attempting to deploy the feature release to a next one of the staggers according to the order responsive to the indication indicating success, and causing a rollback of the current stagger responsive to the indication indicating failure.
A method and apparatus for autonomous release orchestration that supports staggered releases across a plurality of container clusters. A representation of a risk level for a current release is obtained. Based on the risk level, a set of one or more attributes of a stagger configuration is determined. An attempt to deploy the current release to the plurality of container clusters in accordance with the stagger configuration is caused.
A method and apparatus for autonomous configuration-based release orchestration that supports staggered feature releases across a plurality of container clusters. A release seeking goal is obtained. An unprocessed stagger is selected as a current stagger based on a stagger order. The current stagger is processed by attempting to cause a deployment of the feature release to the container clusters in the current stagger, receiving an indication of success or failure of the attempted deployment, and determining whether to roll back the current stagger based on the indication. A determination is made whether the release seeking goal can still be met. If the release seeking goal can no longer be met, a release-level rollback is caused; otherwise, the selecting, processing, and determining are repeated for the next unprocessed stagger based on the stagger order.
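The stagger loop common to the release-orchestration embodiments above can be sketched in Python as follows; the goal (a minimum fraction of clusters successfully deployed), the `deploy` stand-in, and the simulated failure are illustrative assumptions.

```python
# A minimal sketch of staggered deployment with per-stagger rollback and a
# release-level goal check.
staggers = [["c1", "c2"], ["c3", "c4", "c5"], ["c6"]]   # clusters per stagger, in order
GOAL_FRACTION = 0.8
total = sum(len(s) for s in staggers)

def deploy(cluster: str) -> bool:
    """Stand-in for sending the app-config update and awaiting the result."""
    return cluster != "c4"           # simulate one failing cluster

deployed, failed = [], []
for stagger in staggers:
    results = {c: deploy(c) for c in stagger}
    if all(results.values()):
        deployed += stagger          # success: move on to the next stagger
    else:
        failed += stagger            # failure: roll back just this stagger
    remaining = total - len(deployed) - len(failed)
    if (len(deployed) + remaining) / total < GOAL_FRACTION:
        print("goal unreachable: release-level rollback")
        break
```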
Techniques are disclosed relating to a monitoring service executing in a public cloud computer system. A method may include receiving metrics for a database system implemented on a single instance of a virtual machine in the public cloud computer system. The metrics may include a set of metrics indicative of status of the database system, a set of metrics indicative of status of the virtual machine, and a set of metrics indicative of status of the public cloud computer system. The method may also include continuously determining a primary database candidate from a set of standby databases, and detecting that the metrics correspond to one of a plurality of disruption scenarios. The method may further include issuing, based on the detecting, a command to trigger a failover to the primary database candidate.
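A compact Python sketch of the monitoring loop follows; the metric names, the candidate scoring, and the disruption thresholds are assumptions for illustration.

```python
# A minimal sketch: score standby candidates from the three metric sets and
# fail over when a disruption scenario matches.
def score(standby: dict) -> float:
    # Prefer standbys that are caught up and on healthy infrastructure.
    return -standby["replication_lag_s"] + 10 * standby["vm_healthy"]

def choose_primary_candidate(standbys: list[dict]) -> dict:
    return max(standbys, key=score)

def disruption(metrics: dict) -> bool:
    return (metrics["db_heartbeat_missed"] >= 3          # database-level signal
            or not metrics["vm_reachable"]               # VM-level signal
            or metrics["cloud_zone_impaired"])           # cloud-level signal

standbys = [{"name": "s1", "replication_lag_s": 2, "vm_healthy": 1},
            {"name": "s2", "replication_lag_s": 30, "vm_healthy": 1}]
metrics = {"db_heartbeat_missed": 3, "vm_reachable": True, "cloud_zone_impaired": False}

if disruption(metrics):
    candidate = choose_primary_candidate(standbys)
    print(f"failover to {candidate['name']}")            # issue failover command
```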
Methods, apparatuses, and computer-program products are disclosed. The method may include training a generative artificial intelligence (AI) model on a plurality of data sources and generating, based on the training, training log metadata indicating individual data sources of the plurality of data sources. The method may include receiving, from a user device, a generative AI query and generating, using the trained generative AI model and based on one or more data sources of the plurality of data sources, a response to the generative AI query. The method may include mapping one or more portions of the response to the one or more data sources of the plurality of data sources based on the training log metadata and transmitting, to the user device, the response and one or more indications of the one or more data sources based on the mapping and the training log metadata.
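As a rough illustration of the mapping step, the following Python sketch attributes a response sentence to training sources by token overlap; the overlap heuristic and threshold are deliberately simple stand-ins for whatever attribution the trained model and its log metadata actually support.

```python
# A minimal sketch of mapping response text back to training sources.
training_log = {
    "src-1": "quarterly revenue grew nine percent",
    "src-2": "the new data center opened in april",
}

def attribute(response_sentence: str) -> list[str]:
    words = set(response_sentence.lower().split())
    hits = []
    for source_id, text in training_log.items():
        overlap = len(words & set(text.split())) / max(len(words), 1)
        if overlap > 0.4:                 # crude relevance threshold
            hits.append(source_id)
    return hits

response = "Revenue grew nine percent last quarter"
print(attribute(response))               # ['src-1'] -> cite alongside the response
```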
Systems, methods, and devices provide on-demand environment simulation. A computing platform may be implemented using a server system, where the computing platform is configurable to cause receiving a message from a graphics engine, the message identifying at least one object included in a graphics rendering environment, and further identifying status information associated with the at least one object, and identifying, based on the received message, an instance of an on-demand application associated with the graphics rendering environment. The computing platform may be further configurable to cause mapping the status information to an operation associated with the instance of the on-demand application based on a designated mapping of graphics engine assets to the instance of the on-demand application.
Methods and systems are provided for managing environmental conditions and energy usage associated with a site. One exemplary method of regulating an environmental condition at a site involves a server receiving environmental measurement data from a monitoring system at the site via a network, determining an action for an electrical appliance at the site based at least in part on the environmental measurement data and one or more monitoring rules associated with the site, and providing an indication of the action to an actuator for the electrical appliance.
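A minimal Python sketch of a monitoring rule follows; the rule shape, metric name, and action label are illustrative assumptions.

```python
# A minimal sketch of a server-side monitoring rule: compare a measurement to
# a site rule and emit an actuator action.
site_rules = {"temperature_c": {"max": 24.0, "appliance": "hvac",
                                "action": "increase_cooling"}}

def decide(measurements: dict[str, float]) -> list[tuple[str, str]]:
    actions = []
    for metric, rule in site_rules.items():
        reading = measurements.get(metric)
        if reading is not None and reading > rule["max"]:
            actions.append((rule["appliance"], rule["action"]))
    return actions

print(decide({"temperature_c": 26.5}))   # [('hvac', 'increase_cooling')] -> send to actuator
```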
G05B 15/02 - Systems controlled by a computer electric
G05F 5/00 - Systems for regulating electric variables by detecting deviations in the electric input to the system and thereby controlling a device within the system to obtain a regulated output
H02J 13/00 - Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuit-breaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for backing up a data object in blocks. One of the methods includes determining, for a data object of a backup process, whether a size of the data object or an estimated backup time of the data object satisfies a criterion that, when satisfied, indicates that at least two blocks of the data object should be separately fetched from the source system by different workers; determining one or more markers for end points of the at least two blocks using data from a prior backup of the data object; and causing, at least partially concurrently for two or more blocks from the at least two blocks, a respective backup worker to fetch the respective block from a source system using at least one marker from the one or more markers that defines an end of the respective block.
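Marker-based block splitting with concurrent workers can be sketched in Python as follows; the in-memory source object, the offset markers, and the thread pool are illustrative assumptions standing in for the source system and the backup workers.

```python
# A minimal sketch of fetching blocks of a data object concurrently, with
# block boundaries taken from markers determined during a prior backup.
from concurrent.futures import ThreadPoolExecutor

data_object = bytes(range(256)) * 4096            # stand-in for the source object
markers = [0, len(data_object) // 2, len(data_object)]   # from the prior backup

def fetch_block(start: int, end: int) -> bytes:
    """Each worker fetches only its [start, end) slice from the source system."""
    return data_object[start:end]

spans = list(zip(markers, markers[1:]))
with ThreadPoolExecutor(max_workers=len(spans)) as pool:
    blocks = list(pool.map(lambda s: fetch_block(*s), spans))

assert b"".join(blocks) == data_object            # blocks reassemble the object
```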
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
94.
Applied Artificial Intelligence Technology For Natural Language Generation Using A Graph Data Structure And Configurable Chooser Code
Natural language generation technology is disclosed that applies artificial intelligence to structured data to determine content for expression in natural language narratives that describe the structured data. A graph data structure is employed, where the graph data structure comprises a plurality of nodes. Each of a plurality of the nodes (1) represents a corresponding intent so that a plurality of different nodes represent different corresponding intents and (2) is associated with one or more links to one or more of the nodes to define relationships among the intents. A processor executes chooser code based on a plurality of operating rules and/or parameters that control how the chooser code traverses the graph data structure to determine which of the nodes to use for content to be expressed in the natural language narratives, wherein the operating rules and/or parameters are configurable to change strategies for choosing which nodes are used for the content to be expressed in the natural language narratives.
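As a rough illustration, the following Python sketch traverses a toy intent graph under configurable operating rules; the salience weights and the threshold rule are assumptions, not the disclosed chooser.

```python
# A minimal sketch of configurable chooser code traversing an intent graph.
graph = {
    "season_summary": {"links": ["best_game", "top_player"], "salience": 0.9},
    "best_game":      {"links": [],                           "salience": 0.7},
    "top_player":     {"links": [],                           "salience": 0.8},
}

def chooser(start: str, rules: dict) -> list[str]:
    """Walk the graph, keeping intents whose salience clears the rule threshold."""
    chosen, frontier = [], [start]
    while frontier and len(chosen) < rules["max_intents"]:
        node = frontier.pop(0)
        if graph[node]["salience"] >= rules["min_salience"]:
            chosen.append(node)
        frontier.extend(graph[node]["links"])
    return chosen

# Reconfiguring the rules changes the content-selection strategy without
# touching the graph itself.
print(chooser("season_summary", {"min_salience": 0.75, "max_intents": 2}))
```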
Techniques are disclosed relating to managing database queries. In some embodiments, a server system receives a query from a computer system and determines a set of aspects for the query, including at least a number of columns specified in the query and a computational cost of executing the query. The system generates a query vector based on the set of aspects determined for the query. The system then compares the query vector with a plurality of clusters, ones of the plurality of clusters comprising two or more previously generated query vectors that were generated based on aspects of queries previously received by the server system. Based on the comparing, specifically on a distance between the query vector and the plurality of clusters of previously generated query vectors, the system classifies the query. Based on a classification of the query determined during the classifying, the system manages the query.
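The vector comparison can be illustrated with a nearest-centroid sketch in Python; the two aspects used, the centroid values, and the class labels are assumptions.

```python
# A minimal sketch of nearest-centroid query classification.
import numpy as np

centroids = {                      # centroids of previously generated query vectors
    "cheap_oltp": np.array([3.0, 10.0]),
    "heavy_olap": np.array([40.0, 5000.0]),
}

def classify(num_columns: int, est_cost: float) -> str:
    q = np.array([num_columns, est_cost], dtype=float)
    return min(centroids, key=lambda k: np.linalg.norm(q - centroids[k]))

label = classify(num_columns=35, est_cost=4200.0)
print(label)    # 'heavy_olap' -> e.g. route to a throttled execution queue
```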
Database systems and related customization methods are provided. One exemplary method of modifying a database to support a new functionality involves receiving user input indicative of the new functionality from a client device coupled to a network, identifying existing customizations associated with a user of the client device in the database, determining a plurality of different solutions for implementing the new functionality based at least in part on the existing customizations associated with the user, providing a graphical user interface display at the client device including graphical indicia of the plurality of different solutions for implementing the new functionality, and in response to receiving indication of a selected solution of the plurality of different solutions from the client device, automatically instantiating a new customization corresponding to the selected solution in the database.
A computing services environment may include a database system, a vector store, a generative language model interface, and/or an incident response system. The database system may be configured to detect a database system incident affecting database system availability or performance and to generate a database incident report characterizing the database system incident. The generative language model interface may be configured to determine a textual description of the database system incident and to identify one or more records stored in the vector store by completing an incident evaluation prompt via a generative language model. An incident response engine may be configured to determine an instruction to resolve the database incident based on the textual description and the one or more records, and the database system may be configured to execute the instruction to update one or more configuration parameters.
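A minimal Python sketch of completing an incident evaluation prompt with retrieved records follows; `retrieve` and `generate` are hypothetical helpers standing in for the vector store lookup and the generative language model call.

```python
# A minimal sketch of prompt completion against retrieved incident records.
def retrieve(incident: str, records: list[str], k: int = 2) -> list[str]:
    """Hypothetical stand-in for a vector-store similarity lookup."""
    words = set(incident.lower().split())
    ranked = sorted(records, key=lambda r: -len(words & set(r.lower().split())))
    return ranked[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the generative language model call."""
    return "Restart replication worker; raise max_wal_senders."

incident = "replication lag spiking on primary database"
records = ["replication lag resolved by restarting replication worker",
           "disk full on analytics node"]

prompt = (f"Incident: {incident}\n"
          f"Similar past incidents:\n- " + "\n- ".join(retrieve(incident, records)) +
          "\nPropose a configuration instruction to resolve the incident.")
instruction = generate(prompt)
print(instruction)        # the database system applies the configuration update
```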
Techniques for generating a prebuilt workflow using one or more machine learning models are discussed herein. In some examples, a user may request to generate a workflow configured to automatically perform a series of steps to facilitate the completion of one or more tasks. In response, the communication platform may present a workflow builder associated with one or more machine learning models. In some examples, the machine learning model(s) may receive, as input, a prompt defining a task(s) to be completed and generate, as output, a prebuilt workflow including a suggested series of steps to complete the task(s). The communication platform may receive user input to publish the prebuilt workflow.