Disclosed herein are system, method, and computer program product embodiments for providing data aggregation based on hash map data structures. An embodiment operates by receiving a query specifying an input table and an aggregate function. The embodiment then generates a first thread-local hash map and a second thread-local hash map for the input table and performs a probing function associated with the first thread-local hash map. The embodiment then determines, based on the first thread-local hash map, an index cardinality associated with the input table. The embodiment then, in response to determining that the index cardinality exceeds a threshold, performs a duplicate function associated with the second thread-local hash map and generates a second thread-local copy map. The embodiment then merges the first thread-local hash map and the second thread-local copy map, thereby generating a merged hash map.
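As a concrete illustration of the thread-local aggregation and merge described above, here is a minimal Python sketch; the probing, cardinality check, and duplicate/copy-map steps of the embodiment are elided, and the grouping keys and SUM aggregate are illustrative assumptions.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5), ("a", 6)]
chunks = [rows[0::2], rows[1::2]]           # one chunk per worker thread

def local_aggregate(chunk):
    local = Counter()                        # thread-local hash map
    for key, value in chunk:
        local[key] += value                  # probe and update per row
    return local

with ThreadPoolExecutor(max_workers=2) as ex:
    partials = list(ex.map(local_aggregate, chunks))

merged = Counter()                           # merged hash map
for part in partials:
    merged.update(part)                      # merge thread-local results
print(dict(merged))                          # {'a': 10, 'b': 7, 'c': 4}
```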
When a query targeting a database object is detected, a database management system determines whether a row level security policy is defined for the database object. If a row level security policy is defined for the database object, the database management system dynamically generates a filter predicate string based on the row level security policy. Then, the filter predicate string is converted into a query optimizer predicate. Next, the query optimizer predicate is injected into a query plan corresponding to the query. Then, a first query result set is generated during execution of the query plan and the query optimizer predicate is applied to the first query result set. In an example, applying the query optimizer predicate to the first query result set results in the creation of a second query result set which is a truncated version of the first query result set.
A computer-implemented method for improved backorder processing (BOP) in an enterprise resource planning system is disclosed. The method can receive one or more user prompts from a user interface and create a BOP segment using a large language model. The BOP segment selects a subset of a plurality of order requirements using one or more filters determined based on the one or more user prompts. A filter is defined by an attribute, an operator, and one or more attribute values. The method can create a BOP variant using the large language model. The BOP variant defines a confirmation scheme for the BOP segment based on the one or more user prompts. The method can further execute the BOP variant using the large language model, including batch processing the subset of the plurality of order requirements using the confirmation scheme.
Methods, systems, and computer-readable storage media for a video generation platform that automatically generates videos, also referred to herein as stories, based on story templates, story data, and story metadata. The video generation platform provides interfaces for third-party systems to render videos and publish the videos as a story for defined channels and recipients.
A computer-implemented method may comprise receiving, from a client application running within a client network, a request for a server application running within a server network to perform an action, and then generating, by the client network, a modified version of the request for the server application to perform the action, where the modified version of the request for the server application to perform the action comprises an access token configured to be used by the server network to allow an update of an access control list for the server application. The client network may then send, to the server network, the modified version of the request for the server application to perform the action.
A data aggregation service is engineered to generate aggregated data values based on encrypted data received from a variety of providers, without being exposed to the underlying plaintext data. A homomorphic encryption scheme is used in a threshold cryptography scenario that allows aggregation of the encrypted data without requiring decryption. An independent decryption service can partially decrypt the aggregated result, which can ultimately be decrypted to plaintext for use by the provider. Bitwise operations can be defined to support aggregation with error tolerance, and the operations can be constrained to a smaller bit size to reduce circuit complexity.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
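As a toy illustration of the additively homomorphic aggregation described in the abstract above, the following sketch uses a Paillier-style scheme: multiplying ciphertexts yields an encryption of the sum, so encrypted values can be aggregated without decryption. The tiny primes are for illustration only; the threshold decryption and bitwise error-tolerant operations of the disclosed service are not modeled.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# tiny primes for illustration only; real deployments use large keys
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                      # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n        # L(x) = (x - 1) / n

# additive homomorphism: multiplying ciphertexts adds plaintexts,
# so encrypted provider values aggregate without decryption
values = [12, 7, 30]
agg = 1
for v in values:
    agg = (agg * encrypt(v)) % n2
assert decrypt(agg) == sum(values)        # 49
print(decrypt(agg))
```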
7.
DYNAMIC CONVERSATION INSIGHTS USING LARGE LANGUAGE MODELS
In an example embodiment, several different fine-tuned LLMs are utilized to provide a system where an administrator can request conversational log insights using natural language and be presented with insights generated by an LLM, without passing any personal or sensitive data to a third party. Furthermore, the use of several fine-tuned LLMs reduces the number of input tokens that need to be submitted to any one particular LLM, overcoming a key technical limitation of LLMs.
Systems and methods include input of a code generation prompt to a text generation model, reception of code from the text generation model in response to the input code generation prompt, execution of the received code, determination of execution information associated with the execution of the received code, input of a repair prompt, the code generation prompt and the execution information to the text generation model, and reception of an updated code generation prompt from the text generation model in response to the input repair prompt, code generation prompt and execution information.
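The generate-execute-repair loop described above can be sketched as follows. Here call_model() is a hypothetical stand-in for the text generation model and the prompts are illustrative; the sketch only shows how execution information feeds back into an updated code generation prompt.

```python
import subprocess, sys, tempfile

def call_model(prompt: str) -> str:
    """Stand-in for the text generation model (assumption)."""
    if "Return an improved prompt" in prompt:
        return "Write ONLY runnable Python code, with no prose."
    return "print('generated code runs')"

def execute(code: str):
    """Run received code and collect execution information."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    proc = subprocess.run([sys.executable, f.name],
                          capture_output=True, text=True, timeout=30)
    return proc.returncode, proc.stderr

generation_prompt = "Write a Python script that prints a greeting."
for attempt in range(3):
    status, stderr = execute(call_model(generation_prompt))
    if status == 0:
        break                                  # received code ran cleanly
    # feed repair prompt + code generation prompt + execution info back
    repair = ("The prompt below produced failing code.\n"
              f"Prompt: {generation_prompt}\nError: {stderr}\n"
              "Return an improved prompt.")
    generation_prompt = call_model(repair)     # updated code generation prompt
```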
Disclosed herein are a system, method, and computer program product embodiments for retrieving and ranking knowledge resources relevant to a query from knowledge base(s). For example, a query for resources from knowledge base(s) may be received. Based on the query, a first set of candidate resources are obtained from the knowledge base(s) having a lexical similarity to the query search terms, and a second set of candidate resources are obtained from the knowledge base(s) having a semantical similarity to the search terms. For each of the first and second sets of candidate resources, a confidence level indicating the relevance of the candidate resource to the query is determined. The sets of candidate resources are ranked based on at least the confidence levels to generate a ranked list of candidate resources. A query response comprising at least a subset of the ranked list candidate resources is provided to a GUI.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually; using metadata automatically derived from the content
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
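A minimal sketch of the two-track candidate scoring in the abstract above might look like this: a lexical confidence from term overlap and a semantic confidence from embedding cosine similarity, combined into a ranked list. The toy embed() function and the documents are assumptions, not the disclosed implementation.

```python
import math

docs = {"kb-1": "reset your password", "kb-2": "configure SSO login",
        "kb-3": "password policy settings"}

def lexical_conf(query, text):
    q, t = set(query.split()), set(text.split())
    return len(q & t) / len(q | t)                 # Jaccard term overlap

def embed(text):
    """Toy bag-of-letters embedding; a real system uses a trained model."""
    v = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - 97] += 1
    return v

def semantic_conf(query, text):
    a, b = embed(query), embed(text)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

query = "password reset"
ranked = sorted(docs, key=lambda d: lexical_conf(query, docs[d])
                + semantic_conf(query, docs[d]), reverse=True)
print(ranked)  # ranked list of candidate resources
```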
10.
OPTIMIZATION OF DATABASE QUERIES HAVING MULTIPLE VIEWS
Arrangements for optimization of database queries having multiple views are provided. A query defining multiple views may be received. A parse tree may be generated based on the query. The query may be preprocessed using the parse tree. The parse tree may be traversed to identify the multiple views. View unfolding may be executed to generate a view subtree. The view subtree may be attached to the parse tree and a view query compile tree may be generated. It may be determined whether there is another view defined by the query. In response to determining that there is not another view, the parse tree may be traversed to calculate tree depth. In response to determining that there is another view defined by the query, calculation of the tree depth may be skipped.
Methods, systems, and computer-readable storage media for receiving a current story representative of a function that is to be added to an application, generating a current story embedding, determining a set of historical stories at least partially by comparing the current story embedding to embeddings in a set of historical story embeddings, identifying a sub-set of historical stories from the set of historical stories, the sub-set of historical stories including one or more historical stories, retrieving a historical code snippet and a test case set associated with each historical story in the sub-set of historical stories, generating a code snippet for the current story using a large language model (LLM) system, and releasing the code snippet to a code repository for integration in the application.
As described herein, a machine learning model is used to accurately predict the future resource usage (e.g., memory usage, processor usage, network usage, and the like) of one or more applications and/or databases. By analyzing historical data and patterns, the machine learning model provides insights into the expected resource usage for a predetermined period of time (e.g., three days or seven days). The machine learning model may be optimized for time series forecasting. For example, the AutoARIMA or Theta algorithms may be used for training. A user interface may be provided that enables easy visualization of the forecasted resource usage. A predetermined threshold may be defined for one or more of the sources being forecast. For example, a threshold for memory usage may be set. If the predicted memory usage for any application or database exceeds the predetermined threshold, a notification is sent to one or more users.
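The threshold-and-notify step described above can be sketched in a few lines. The forecast values here are hard-coded stand-ins for the output of a model such as AutoARIMA or Theta, and notify() is a hypothetical hook.

```python
MEMORY_THRESHOLD_GB = 48.0                     # predetermined threshold

def notify(recipients, message):
    print(f"to {recipients}: {message}")       # stand-in for a real channel

# stand-in for model output: predicted memory usage in GB over three days
forecast = {"app-crm": [31.2, 35.8, 49.5],
            "db-main": [40.1, 41.0, 42.3]}

for system, values in forecast.items():
    breach_days = [day + 1 for day, v in enumerate(values)
                   if v > MEMORY_THRESHOLD_GB]
    if breach_days:
        notify(["ops-team"],
               f"{system}: predicted memory exceeds "
               f"{MEMORY_THRESHOLD_GB} GB on day(s) {breach_days}")
```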
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives, from a client device, a request for information associated with a category. In response to the request, the program further accesses a storage to retrieve a first value associated with the category. The program also determines a set of values associated with the category based on a plurality of transactions. The program further determines an optimization level value associated with the category. The program also determines a second value associated with the category based on the first value, the set of values, and the optimization level value. The program further provides, via an application operating on the client device, a graphical user interface (GUI) to the client device, the GUI comprising the second value.
Some embodiments may be associated with a cloud computing environment. A computer processor of a traffic prediction server may retrieve performance stack trace logs from a traffic performance stack trace log repository that stores traffic information of the cloud computing environment. The traffic prediction server parses the performance stack trace logs as an objects list including parent/child object relationships and stores the parsed objects list in a graph database. The traffic prediction server may then transform the graph database into training data including spatial and temporal information and use the transformed training data to train a transformer model. According to some embodiments, the traffic prediction server also provides previous traffic input data to the transformer model, which generates predicted traffic output data based on the previous traffic input data (e.g., to facilitate cloud load management).
A computer implemented method can receive a first data table having a first column and a second data table having a second column, and obtain a dictionary shared by the first and second columns. The dictionary maps a plurality of unique data values to corresponding unique value identifiers. The method can generate a first data vector for the first column and a second data vector for the second column. The first data vector includes first value identifiers corresponding to data values stored in the first column, and the second data vector includes second value identifiers corresponding to data values stored in the second column. The method can join the first and second data tables based on the first and second data vectors. The joining generates one or more matching records between the first and second data tables. Related systems and software for implementing the method are also disclosed.
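A minimal sketch of such a dictionary-encoded join, assuming one shared dictionary that maps values to integer value identifiers: the join probes on the integer vectors rather than on the raw values, which is cheaper to hash and compare.

```python
from collections import defaultdict

dictionary = {"red": 0, "green": 1, "blue": 2}   # shared dictionary

col_a = ["red", "blue", "green", "red"]          # first table's join column
col_b = ["green", "red", "red"]                  # second table's join column

vec_a = [dictionary[v] for v in col_a]           # first data vector
vec_b = [dictionary[v] for v in col_b]           # second data vector

# hash join on integer value IDs instead of raw values
index = defaultdict(list)
for i, vid in enumerate(vec_a):
    index[vid].append(i)

matches = [(i, j) for j, vid in enumerate(vec_b) for i in index.get(vid, [])]
print(matches)   # row-index pairs of matching records between the tables
```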
Methods, systems, and computer-readable storage media directed to a machine learning (ML) model training system for training ML models by leveraging a large language model (LLM) for knowledge distillation to provide training data and using multi-task learning to train ML models using the training data.
Various examples are directed to systems and methods for copying a source database client to a target database client. An exit runtime system may execute a first plurality of exits in a database access trace mode. This may generate first exit trace data describing accesses by the first plurality of exits to at least one of the source database client or the target database client. The system may determine, using the first exit trace data, that a first table of the source database client is accessed by a first exit of the first plurality of exits and a second exit of the first plurality of exits. Based on the determining, the system may modify at least one of the first exit, the second exit, or first execution order data describing an order for executing the first plurality of exits.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
A virtual solution architect (VSA) for facilitating the design of technology solution architectures via a natural language interface is provided. In one set of embodiments, the VSA can collect various types of knowledge relevant to technology solution design, such as a knowledge graph of business processes and their relationships, information pertaining to the application programming interfaces (APIs) of packaged business capabilities (PBCs), and so on. The VSA can further receive a natural language query pertaining to a technology solution architecture, retrieve at least a portion of the collected knowledge based on the query, and build a prompt for a large language model (LLM) using the query and the retrieved knowledge. The VSA can then provide the prompt as input to the LLM, thereby causing the LLM to output a natural language answer to the natural language query, and can provide the answer to the query originator.
In a computer-implemented method, a task executor queries for which tasks of a list of tasks are relevant for a software application installation. The task executor queries for an up-to-date sequence of relevant tasks. The task executor executes the relevant tasks as a completion sequence of tasks used to compute an individual order valuation matrix organized with task identifications in columns and rows. The task executor sends the individual order valuation matrix to a task ordering service. The task executor provides which tasks of the list of tasks are relevant for the software application installation. The task executor receives, from the task ordering service, a newly up-to-date sequence of tasks based on a holistic order valuation matrix.
Various examples described herein are directed to systems and methods for debugging a software application. A computing system may access a call stack. The call stack may describe a first plurality of function calls made by a software application prior to a first crash of the software application and an order of the first plurality of function calls. The computing system may filter the call stack to generate a first filtered call stack and determine a similarity score for the first crash and a second crash of the software application. The determining of the similarity score may be based on comparing the first filtered call stack to a second filtered call stack. The second filtered call stack may describe a second plurality of function calls made by the software application prior to the second crash of the software application and an order of the second plurality of function calls.
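One hedged way to realize the filtering and scoring described above is shown below: noise frames are dropped from each call stack, and the two filtered stacks are compared with a sequence matcher that respects call order. The frame names and the noise list are illustrative assumptions.

```python
from difflib import SequenceMatcher

NOISE = {"malloc", "memcpy", "std::terminate"}   # frames dropped by the filter

def filter_stack(stack):
    """Remove frames that carry no signal about the root cause."""
    return [frame for frame in stack if frame not in NOISE]

# ordered function calls made before each crash (illustrative)
crash1 = ["main", "parse", "malloc", "tokenize"]
crash2 = ["main", "parse", "tokenize"]

score = SequenceMatcher(None, filter_stack(crash1),
                        filter_stack(crash2)).ratio()
print(f"similarity: {score:.2f}")   # 1.00 -> likely the same root cause
```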
A computer implemented method can detect performance regression of executing a query using a current query plan. Responsive to detecting the performance regression, the method can automatically search for one or more candidate solutions for resolving the performance regression, and select, from the one or more candidate solutions, an effective solution that resolves the performance regression. The selecting includes evaluating performance of executing the query using one or more alternative query plans generated by the one or more candidate solutions. The method can store the effective solution for future execution of the query. The effective solution is configured to generate an updated query plan selected from the one or more alternative query plans. The updated query plan has better performance than the current query plan for executing the query. Related systems and software for implementing the method are also disclosed.
A computer implemented method can detect, in a first tenant, performance regression of executing a query using a current query plan. Responsive to detecting the performance regression, the method can evaluate one or more candidate solutions for resolving the performance regression, and identify, from the one or more candidate solutions, an effective solution that resolves the performance regression. The effective solution is configured to generate an updated query plan, which has better performance than the current query plan for executing the query. The method can construct a knowledge object based on the detected performance regression and the identified effective solution and distribute the knowledge object to a second tenant. Related systems and software for implementing the method are also disclosed.
Various examples are directed to systems and methods for testing a database management system. A testing system may execute a graph neural network using first performance data describing a plurality of operations executed by a database management system to implement a first query and, based at least in part on the graph neural network output, generate first query execution signature data describing the execution of the first query at the database management system. The testing system may compare the first query execution signature data to second query execution signature data describing execution of a second query at the database management system.
Various examples are directed to systems and methods for providing a user interface to a user. A computing system may receive, from a user, a request for a first page of the user interface. The computing system may access context data for the user indicating that the user is associated with a first role and also associated with a second role. The computing system may select either a first adaptation configuration file associated with the first role or a second adaptation configuration file associated with the second role. The computing system may render the first page by applying at least one modification associated with the selected adaptation configuration file and serve the first page to the user.
Systems and methods include determination of a first feature and a second feature, generation of first prompts to prompt determination of a relationship analysis algorithm based on first feature metadata and second feature metadata and to prompt determination of a function to generate a description of a relationship analysis result, reception of the function from a text generation model in response to the first prompts, execution of the function to generate the description of the relationship analysis result, generation of second prompts to prompt determination of a relationship visualization based on the description and to prompt determination of a second function to generate the relationship visualization incorporating the description, reception of the second function from the text generation model in response to the second prompts, execution of the second function, and presentation of the relationship visualization.
Disclosed herein are system, method, and computer program product embodiments for creating a tailored access profile for improving the security of an access control system. An embodiment operates by extracting application access requirements for a role from a role description using a first large language model. The embodiment then generates an embedding corresponding to the application access requirements using a second large language model. The embodiment then searches for a first access profile in a data store based on the embedding. The embodiment then generates a second access profile based on the application access requirements using the first large language model. The embodiment then selects the first access profile or the second access profile based on the application access requirements. The embodiment finally tailors the selected access profile based on feedback, thereby creating the tailored access profile.
Systems and methods described herein relate to the use of generative artificial intelligence to facilitate rendering of user interface elements in a user interface associated with a digital assistant. A backend response is automatically generated in response to user input provided via the user interface associated with the digital assistant. Prompt data is generated. The prompt data includes an instruction to generate an intermediate representation of an output data structure supported by the digital assistant. The prompt data is provided to a generative machine learning model to obtain the intermediate representation. The intermediate representation is processed to obtain the output data structure. One or more user interface elements are rendered based on the output data structure. The one or more user interface elements present the response data via the user interface associated with the digital assistant.
Automated performance profiling can be performed for software modules imported during a backend session of a software platform. An import request for an entity (e.g., a requested function or requested class) received from a client application during the backend session can be intercepted by an automatic performance profiling process that wraps a software module implementing the entity with profiling. The request can include an indication to enhance the entity with profiling. Responsive to the request, the server identifies and imports the software module implementing the entity with a software engine and determines whether the software module already includes profiling. If the software module does not yet include profiling, the server transforms the requested entity into a profiling-enhanced entity by wrapping the requested entity with profiling. The profiling-enhanced entity is then output to fulfill the request.
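A rough Python sketch of wrapping an imported entity with profiling follows, assuming cProfile; the module/entity names and the __profiled__ marker are illustrative, not the disclosed mechanism.

```python
import cProfile, functools, importlib, io, pstats

def profile_wrap(fn):
    """Wrap a function so each call is profiled and stats are reported."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        prof = cProfile.Profile()
        result = prof.runcall(fn, *args, **kwargs)
        out = io.StringIO()
        pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(5)
        print(out.getvalue())
        return result
    wrapper.__profiled__ = True        # mark as profiling-enhanced
    return wrapper

def import_with_profiling(module_name, entity_name):
    """Import the module implementing the entity; wrap it if needed."""
    module = importlib.import_module(module_name)
    entity = getattr(module, entity_name)
    if getattr(entity, "__profiled__", False):
        return entity                  # already includes profiling
    return profile_wrap(entity)        # profiling-enhanced entity

sqrt = import_with_profiling("math", "sqrt")
print(sqrt(2.0))
```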
Using a data analysis activity (DAA) definition, a DAA associated with a software application is triggered. An instance selector query is executed to generate a set of instance values as input for a data query. A data query to generate a data set is executed using instance values of the set of instance values. Using the data set, an instruction for an artificial intelligence (AI) engine is computed. A result based on the instruction is received from the AI engine. The result is stored into an AI Result History Store. Prior results from earlier DAA executions are read from the AI Result History Store. A notification to a defined target audience is sent using the software application.
The present disclosure involves systems, software, and computer implemented methods for dynamic data chunking and intelligent lazy loading. An example method includes determining a scrolling velocity of a user interacting with a graphical user interface that is displaying a first data portion of a data set. A size is determined, of a second portion to retrieve, based at least on the scrolling velocity. A first request is sent to retrieve the second portion of the data set. The second portion is received and at least a portion of the second portion is included in the interface. A threshold portion of the second portion is determined at which to send a second request. A determination is made that the user has scrolled the interface such that at least the threshold portion of the second portion is displayed. The second request is sent to retrieve a third portion of the data set.
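The velocity-to-chunk-size and threshold logic described above might be sketched as follows; the constants and the linear scaling rule are assumptions chosen for illustration.

```python
BASE_CHUNK = 50    # rows fetched when scrolling slowly (assumption)
MAX_CHUNK = 500    # upper bound on any single request (assumption)

def next_chunk_size(velocity_rows_per_s: float) -> int:
    """Faster scrolling -> larger prefetch, capped at MAX_CHUNK."""
    return min(MAX_CHUNK, int(BASE_CHUNK * (1 + velocity_rows_per_s / 10)))

def should_send_next_request(rows_displayed: int, chunk_size: int,
                             threshold: float = 0.8) -> bool:
    """Trigger the next fetch once the threshold portion is on screen."""
    return rows_displayed >= threshold * chunk_size

size = next_chunk_size(25.0)                    # user is scrolling quickly
print(size, should_send_next_request(280, size))
```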
This disclosure describes systems, software, and computer implemented methods for synchronizing between a primary and a secondary computing system, including receiving instructions to perform a change to a primary computing system database; performing the change to the database; logging the change in a database change log; receiving instructions to change a portion of the database stored in cache memory; performing a change to a registry based on the change of the portion of the database; generating a sync message by: encoding the change to the registry in a message with an identifier of the changed registry; obtaining a log indicating previous database changes logged in the database change log since a prior sync message was sent; appending the previous database changes since the prior sync message was sent to the message; and encoding the sync message as an object.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
32.
NATURAL LANGUAGE GENERATOR SUPPORT FOR SOFTWARE MAINTENANCE
Techniques and solutions are provided for facilitating the documentation, resolution, and review of software support incidents. In one aspect, a natural language generator is provided with information about a software support incident and creates an incident report. In another aspect, a natural language generator receives information about a software support incident and information about prior software support incidents. The natural language generator proposes solutions to resolve the software support incident. In another aspect, the natural language generator analyzes software support incidents, including resolutions, and provides a summary of software support incidents, and can provide suggested actions to reduce the occurrence of future incidents. The present disclosure also provides techniques for extracting and standardizing incident data, which can improve the quality of results generated by the natural language generator.
Systems and methods provide reception of a request to an application, determination of values of request characteristics based on the request, and determination that the request is a heavyweight request based on the values of the request characteristics. In response to determining that the request is a heavyweight request, execution environments capable of executing the application are determined, operational metric values of one of the execution environments are determined, and it is predicted that the request will not time out at the one execution environment based on the values of the request characteristics and the operational metric values. In response to predicting that the request will not time out at the one execution environment, the request is sent to the one execution environment.
A computer implemented method can generate a current query execution plan for a query and serialize the current query execution plan into a current query plan object. The current query plan object specifies a query tree which defines a plurality of query operators of the current query execution plan. The method can compare the current query plan object with one or more stored query plan objects contained in a plan repository. The one or more stored query plan objects were serialized from previous query execution plans generated for the query. Responsive to finding that no stored query plan object matches the current query plan object, the method can store the current query plan object in the plan repository. Related systems and software for implementing the method are also disclosed.
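A compact sketch of serializing a plan into a canonical object and checking a plan repository for a match follows; the tree representation and the fingerprinting are assumptions standing in for the disclosed serialization.

```python
import hashlib, json

def serialize_plan(node):
    """Canonical, order-preserving serialization of a query tree."""
    return {"op": node["op"],
            "children": [serialize_plan(c) for c in node.get("children", [])]}

def plan_fingerprint(plan_obj):
    return hashlib.sha256(
        json.dumps(plan_obj, sort_keys=True).encode()).hexdigest()

repository = {}   # query text -> set of stored plan fingerprints

def record_plan(query, plan_tree):
    fp = plan_fingerprint(serialize_plan(plan_tree))
    seen = repository.setdefault(query, set())
    if fp not in seen:
        seen.add(fp)        # no stored plan object matches: store it
        return "stored"
    return "known"

plan = {"op": "HashJoin", "children": [{"op": "Scan"}, {"op": "Scan"}]}
print(record_plan("SELECT ...", plan))   # "stored" on first sight
```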
Described herein are techniques for intelligently pruning a machine learning or generative AI model. The model may first be split up into subunits. Each subunit may be analyzed to calculate a suitable measure such as a stochastic independence score or mutual information score. The subunits may in turn be ranked by their associated score and the lowest ranked subunit or subunits may be pruned from the model. The pruned model is then retrained, and accuracy of the pruned model is evaluated. A determination is then made whether to prune more or to return the pruned model.
Methods, systems, and computer-readable storage media for reading a page stored in a database system, each page storing rows of data, for a first index row of a first page, determining that the first index row is absent from being recorded in a hash table, and in response, storing a first record of the first index row in the hash table, the first record including a first hash value representative of the first index row, for a first data row of a second page, providing a second index row based on one or more values of one or more fields of the first data row, and determining that the second index row is recorded in the hash table as the first index row in the first record, and in response, removing the first record from the hash table, and outputting index consistency results based on the hash table.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
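The two-pass hash-table check in the index-consistency abstract above can be sketched directly: index rows are recorded in a hash table, matching data rows remove their records, and whatever remains on either side is reported as an inconsistency. The row contents are illustrative.

```python
import hashlib

def h(row):
    """Hash value representative of an index row."""
    return hashlib.sha256(repr(row).encode()).hexdigest()

index_rows = [("alice", 1), ("bob", 2), ("carol", 3)]
data_rows  = [("alice", 1), ("bob", 2)]          # carol's data row is missing

table = {}
for row in index_rows:                # first pass: pages of index rows
    table[h(row)] = row               # record each index row

orphans = []
for row in data_rows:                 # second pass: pages of data rows
    key = h(row)                      # derive the expected index row
    if key in table:
        del table[key]                # matched: remove the record
    else:
        orphans.append(row)           # data row without an index row

print("index rows without data:", list(table.values()))
print("data rows without index:", orphans)
```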
Techniques and solutions are provided for improving the performance and capabilities of natural language generators. A natural language generator is progressively presented with proper subsets of a set of capabilities. Some of the capabilities correspond to discrete agents, whose execution can be triggered by a selection of a discrete agent by the natural language generator. Other capabilities correspond to categories that are used to organize capabilities corresponding to discrete agents or other categories. The natural language generator can progressively select capabilities until a capability corresponding to a discrete agent is selected. The discrete agent can then be executed, and execution results can be provided to the natural language generator. The present disclosure also provides for computer-implemented categorization of capabilities.
Described herein are techniques for stacking machine learning models to better capture deterministic relations in a dataset. In some instances, a first machine learning model may not be capable of capturing all of the deterministic relations in a dataset due to the limitations of the model. Supplemental models may be trained so that the corrections generated by the supplemental models, when combined with the first machine learning model, perform better at capturing the deterministic relations in the dataset. Techniques are described for training supplemental models to capture deterministic relations associated with ordinal, nominal, and continuous data.
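A small sketch of the stacking idea under assumed data: a supplemental model is trained on the first model's residuals so that the combined prediction captures a relation the first model misses. It assumes scikit-learn is available; the models and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # relation a line misses

first = LinearRegression().fit(X, y)
residuals = y - first.predict(X)

# supplemental model learns the corrections the first model cannot capture
supplemental = DecisionTreeRegressor(max_depth=4).fit(X, residuals)

stacked_pred = first.predict(X) + supplemental.predict(X)
print("first-model MSE:", float(np.mean((y - first.predict(X)) ** 2)))
print("stacked MSE:   ", float(np.mean((y - stacked_pred) ** 2)))
```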
A database execution engine generates a first query execution plan in response to receiving a first query, where a thread limit is specified for worker threads launched by the database execution engine. A first main executor thread is launched to process the first query and a first plurality of tasks are created to be performed in response to the first query. Then, a first plurality of worker threads are launched to perform the first plurality of tasks, where the first plurality of worker threads is less than or equal to the thread limit. In response to parallelizing processing of the first query execution plan, the first main executor thread is restricted to a first period of execution time before entering a waiting phase. The first main executor thread is woken up after the first plurality of worker threads have completed the first plurality of tasks.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
Database instances can be clustered using various attributes, such as attributes reflecting resources assigned to a database instance, attributes reflecting configuration or software version information, or attributes that reflect users or usage of a database instance. As it can be resource intensive to capture database workloads, one or more database instances can be selected from one or more of the clusters and workloads can be captured from such instances. In a similar manner, various attributes can be used to cluster database instances, and one or more instances can be selected from one or more of the clusters, and a captured workload can be replayed on such instances.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
G06F 16/21 - Design, administration or maintenance of databases
System, method, and various embodiments for a data storage management system are described herein. An embodiment operates by receiving a request to create an index based on a portion of a database, the portion comprising one or more entries that correspond to one or more ongoing transactions. The index is generated based on the portion of the database, and a subset of entries that correspond to the one or more ongoing transactions is auto-committed prior to a completion of the one or more ongoing transactions. A command to rollback the request to create the index is detected. The index is scheduled for asynchronous garbage collection, which removes information associated with the generated index from both memory and disk based upon a completion of one or more parallel transactions. The information associated with the generated index is removed from both the memory and disk in accordance with the asynchronous garbage collection.
In an example embodiment, an early filter is applied within a query plan using intra-pipeline predicate back-propagation. Specifically, the query plan may be thought of as a pipeline of operations. A runtime variable var may be introduced, and a specialized filter using var may be pushed down below the join operation. The variable var is dynamic and is updated to track a value from the sort or similar operation (such as max(heap), reflecting the maximum value of a max-heap used by the sort or similar operation). The runtime variable is initialized once the heap reaches a minimum number of elements (such as K in the case of a top-K sort). Thus, before the heap reaches that minimum number of elements, the filter does not apply. Once the heap does reach that minimum number of elements, the filter applies and acts to filter elements. Since the filter has been pushed down below the join operation, this saves processing cycles.
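A toy sketch of the mechanism, for a top-K (smallest) sort: the runtime variable tracks the max-heap's maximum once the heap holds K elements, and the early filter (which in the embodiment sits below the join) discards rows using it. The rows and K are illustrative assumptions.

```python
import heapq

K = 3
heap = []     # max-heap via negated keys
var = None    # runtime variable: current heap maximum, once K is reached

def early_filter(key):
    # the filter does not apply until the heap holds K elements
    return var is None or key < var

def push_topk(key, row):
    global var
    if len(heap) < K:
        heapq.heappush(heap, (-key, row))
    elif key < -heap[0][0]:
        heapq.heapreplace(heap, (-key, row))
    if len(heap) == K:
        var = -heap[0][0]    # back-propagate the new threshold

rows = [(9, "a"), (2, "b"), (7, "c"), (1, "d"), (8, "e"), (3, "f")]
for key, payload in rows:
    if early_filter(key):         # applied before the (elided) join
        push_topk(key, payload)   # sort operator updates var

print(sorted((-k, r) for k, r in heap))   # the K smallest keys
```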
Disclosed herein are system, method, and computer program product embodiments for identifying and removing orphan data in a Database-as-a-Service (DBaaS) cloud computing platform. In embodiments, a workflow is executed that accesses an event service database of the platform to obtain a list of deleted customer entities, queries a control plane of the platform to obtain a list of existing DBaaS instances, determines that a particular existing DBaaS instance is associated with a particular deleted customer entity, identifies the particular existing DBaaS instance as a potential orphan DBaaS instance, validates that the particular existing DBaaS instance is an orphan DBaaS instance, and in response to validating that the particular existing DBaaS instance is an orphan DBaaS instance, performs one or more of generating an alert that identifies the particular existing DBaaS instance as an orphan DBaaS instance or removing the particular existing DBaaS instance from the DBaaS cloud computing platform.
System, method, and various embodiments for a federated execution system are described herein. An embodiment operates by determining that a managed flow for a transfer of data from a source to a target is managed by an orchestrator system that communicates with each of a plurality of data engines during the transfer. Flow metadata is generated for each of the plurality of data engines. Each of the plurality of data engines is configured with a flow component configured to process the flow metadata and provide output from the respective data engine to the component data engine in accordance with the flow metadata. A component flow for the transfer of data from the source to the target is initiated.
Methods, systems, and computer-readable storage media for receiving an input, generating a bias-detection prompt based on the input, the bias-detection prompt including context representative of bias relevant to the input and to be applied in processing of the bias-detection prompt, prompting an LLM using the bias-detection prompt to receive a first response, the first response representative of bias responsive to the input and being in a JavaScript Object Notation (JSON) format defined in a JSON schema of the bias-detection prompt, modifying the input based on the first response to provide modified input, generating a prompt at least partially based on the modified input, prompting the LLM using the prompt to receive a second response, the second response representative of at least a portion of a task related to an operation of an enterprise, and executing the task using the second response.
In an example embodiment, a framework is provided to enable more robust and reliable code suggestions for developers, to better encourage them in using AI tools. This framework may be integrated into an AI tool as an additional evaluation layer (before the code suggestion is made to the developer), thus providing them with more reliable suggestions.
A machine learning model is trained with a training dataset, where the machine learning model comprises a plurality of layers. During training, values of a plurality of coefficients of one or more layers are monitored. In response to detecting a change of a given coefficient by more than a threshold during a given training run, a given reference to a given input dataset of the given training run is stored. In response to detecting an output error of a trained version of the machine learning model, the given reference to the given input dataset is retrieved if the given coefficient is located on a backward path providing more than a threshold contribution to the output error. Next, the given reference is provided to an application analyzing the trained version of the machine learning model in order to determine a cause of the output error.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
The present disclosure provides techniques and solutions for linking elements of different knowledge graphs and for using such links during knowledge graph processing. When an element of a knowledge graph is created, such as a class, a property, or a class instance, it can be determined whether a corresponding element exists in another knowledge graph. If so, the elements can be operatively linked. When a query is executed against a knowledge graph, if an element is linked to an element of another knowledge graph, the other knowledge graph can be accessed for query processing. When statements are made about a knowledge graph element that is defined in a first knowledge graph and where the element is defined with respect to an element of a second knowledge graph, the scope of the statement can be limited to the second knowledge graph.
A processor or other cache manager may be configured to perform eviction of data from a cache memory. The processor detects a request to store a first data record into the cache memory. The request includes the first data record and a first weight, which may be specified by an application. The processor writes the first data record and its first weight into the cache memory. The processor calculates a first score of the first data record, which may be based on a first ratio of a first idle time of the first data record to the first weight. The processor compares the first score to a second score of a second data record stored in the cache memory. The processor then deletes the first data record from the cache memory, based on the comparing of the first score to the second score.
G06F 12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
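A minimal sketch of the scoring and eviction in the cache abstract above, assuming the score is the idle-time-to-weight ratio it describes; keys and weights are illustrative.

```python
import time

cache = {}   # key -> (value, weight, last_access_time)

def put(key, value, weight):
    """Store a record together with its application-specified weight."""
    cache[key] = (value, weight, time.monotonic())

def score(key):
    value, weight, last = cache[key]
    idle = time.monotonic() - last
    return idle / weight              # heavier records tolerate longer idling

def evict_one():
    victim = max(cache, key=score)    # highest idle/weight ratio is evicted
    del cache[victim]
    return victim

put("session:1", "...", weight=1.0)   # lightweight, evicted sooner
put("config", "...", weight=10.0)     # heavyweight, retained longer
time.sleep(0.01)
print(evict_one())                    # session:1
```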
50.
INTELLIGENT CONTENT GENERATION FOR PROCESS AUTOMATION
Arrangements for intelligent content generation for process automation are provided. A domain model, being structured into tasks according to a defined schema, may be exported for processing by a large language model. A prompt and a context window associated with a task of the domain model may be received. A task template associated with the task may be modified. The modified task template may be enriched with data from a backend system. Content validation may be performed on content of the modified task template enriched with the data from the backend system. Schema validation may be performed for validating the modified task template enriched with the data from the backend system against the defined schema. Correction of invalid tasks may be performed in an iterative loop until the modified task template enriched with the data from the backend system is validated. Then, changes to the domain model may be applied.
In an example embodiment, a knowledge graph is used to provide human-readable names and further contextual and descriptive information for data in database views and tables. This makes the information findable, accessible, identifiable, and reusable, and enables its re-use across use cases. Further, an LLM is used to generate descriptive information that can then be used to generate embeddings to compare natural language questions provided by developers with objects in an ERP.
Methods, systems, and computer-readable storage media for receiving tabular data, serializing the tabular data to provide serialized data, generating a prompt comprising a persona, a set of chain-of-thought (CoT) steps, and a thinking style, the persona being specific to an operation of the enterprise and including a natural language description of a role for executing the operation, the CoT steps defining a sequence of actions that an LLM is to perform in processing the prompt, the thinking style including a natural language description of how the LLM is to process the prompt, transmitting the prompt and serialized data to the LLM, receiving output of the LLM responsive to the prompt, and executing at least one operation using the output.
Methods, systems, and computer-readable storage media for determining a set of queries corresponding to a set of configuration settings of an application, for each query in the set of queries, querying a database to return a set of chunks, each chunk in each set of chunks including a portion of a requirements document, providing a set of prompts, each prompt corresponding to a query in the set of queries and including a respective set of chunks as context, receiving, from a large language model (LLM), a set of responses, each response corresponding to a prompt in the set of prompts, querying a knowledge graph based on the set of responses to provide a set of knowledge graph results, providing a configuration file using the set of responses and the set of knowledge graph results, and configuring the application using the configuration file.
Disclosed herein are a system, method, and computer program product embodiments for recommending software patch notification(s) to computing system(s). For example, a representation of a notification indicating that a software patch configured to update a particular software component is available for installation is provided as an input to a machine learning model. The machine learning model is configured to predict a particular computing system, of a plurality of computing systems on which the particular software component is installed, that is to receive the software patch. A prediction is received from the machine learning model. The prediction indicates that at least one computing system of the plurality of computing systems is to receive the software patch. A recommendation to apply the software patch on the at least one computing system is provided.
A paging mechanism is provided for objects that are to be archived. An archiving component keeps track of the number of objects for which it has determined it is not allowed to archive (here called the paging amount). When a request to archive data is received by the archiving component, rather than necessarily attempt to archive the oldest objects, the archiving component instead skips a number of the oldest objects equal to the paging amount. The archiving component is invoked (e.g., each day), and then the paging amount can change based on the paging amount from the prior period (e.g., yesterday) and the newly requested objects to be archived that it determines are not allowed to be archived.
The present disclosure relates to computer-implemented methods, software, and systems for managing access to protected resources and aims at mitigating the risk of denial of service for the resources. A first request is received by an access policy manager from an automation tool to obtain access policy metadata of a first resource provided at a first data storage. A second request is sent to access an interface at the first data storage to obtain the access policy metadata. The second request is generated according to a type of the first data storage. The access policy metadata relevant for the first resource is obtained to provide the access policy metadata to the automation tool.
Embodiments describe techniques for validating the integrity and compliance of software artifacts through the use of attestations. An attestation manager is capable of retrieving an attestation file from storage, validating the software artifact and the attestation chain within the attestation, and optionally generating new attestations to add to the attestation chain once the software artifact and the attestation chain have been validated. A public-key encryption scheme may be applied to validate attestations, while a fingerprint comparison scheme may be applied to validate the software artifact.
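The two validations can be sketched as below: a fingerprint comparison for the artifact and a public-key signature check over an attestation statement. It assumes the Python 'cryptography' package; the statement format is an illustrative assumption, not the disclosed file layout.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

artifact = b"binary contents of the software artifact"
expected_fp = hashlib.sha256(artifact).hexdigest()   # recorded at build time

# attestation: a statement over the artifact fingerprint, signed by a builder
signer = ed25519.Ed25519PrivateKey.generate()
statement = f"sha256:{expected_fp}".encode()
signature = signer.sign(statement)

def validate(artifact, statement, signature, pubkey):
    # fingerprint comparison scheme validates the artifact itself
    if f"sha256:{hashlib.sha256(artifact).hexdigest()}".encode() != statement:
        return False
    try:
        pubkey.verify(signature, statement)   # raises if signature invalid
        return True
    except Exception:
        return False

print(validate(artifact, statement, signature, signer.public_key()))
```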
Examples described herein relate to a digital assistant that utilizes generative artificial intelligence. Prompt data provided to a generative machine learning model includes user input and function data. The function data can include dependency data that identifies at least one function dependency. A first function is invoked based on a first response from the generative machine learning model to obtain first output data. After updating the prompt data to include the first output data and receiving a second response from the generative machine learning model, a second function is invoked to obtain second output data. The digital assistant can maintain model-accessible data and non-model-accessible data for a digital conversation. Automated validation can be performed on parameter values of the first function or the second function. Parameter values may be explicitly confirmed by the digital assistant via a user-confirmation operation before invoking the first function or the second function.
A system, a method, and a computer program product for solution recommendations. For example, a computer-implemented method may include receiving a query indicating a request for a change that provides a solution to an existing system; triggering a machine learning model to provide a list of one or more solutions that are responsive to the query; validating the one or more solutions provided by the machine learning model based on a comparison using solutions included in a product master database; preparing a recommended list of solutions, the recommended list prepared based on customer data that is clustered based on a region, a country, a company size, an industry type, and/or a sentiment value; and responding to the query with the recommended list of the one or more solutions. Related systems, methods, and articles of manufacture are also disclosed.
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
G06Q 10/067 - Enterprise or organisation modelling
Described herein are techniques for providing persistent access to versioned design-time artifacts without the need of the design-time environment. A new data structure is introduced for runtime artifacts that allows a compressed version of the design-time artifacts to be stored as part of the runtime artifact. A build-time environment may include a bundler for generating the compressed version and to bundle the compressed version along with the runtime modules as part of the runtime artifact. A runtime environment may include a debundler for decompressing the design-time artifacts so that they may be accessible in the runtime environment.
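A toy bundler/debundler pair under assumed field names: design-time sources are compressed into the runtime artifact at build time and recovered later without the design-time environment. The artifact structure here is an illustration, not the disclosed data structure.

```python
import base64, json, zlib

design_time = {"model.cds": "entity Order { key id: UUID; }"}

def bundle(runtime_modules: dict, artifacts: dict) -> dict:
    """Compress design-time artifacts into the runtime artifact."""
    blob = zlib.compress(json.dumps(artifacts).encode())
    return {
        "modules": runtime_modules,
        "design_time_bundle": base64.b64encode(blob).decode(),
    }

def debundle(runtime_artifact: dict) -> dict:
    """Recover design-time artifacts in the runtime environment."""
    blob = base64.b64decode(runtime_artifact["design_time_bundle"])
    return json.loads(zlib.decompress(blob))

artifact = bundle({"service.js": "..."}, design_time)
print(debundle(artifact))   # design-time sources, no design-time env needed
```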
A method for dynamic determination of inference-time parameters to control the stochastic generation process of a generative neural network. The method may include dynamically determining, for an inference request and at least from operational context information, at least one of the inference-time parameters.
A computer-implemented method includes translating tenant-specific preferences for primary and secondary datacenter locations into a routing configuration. A service mesh is set up for communication between services within and across the primary and secondary datacenter locations. Service persistencies with endpoints in datacenter locations are used to configure replication agents between the service persistencies. Virtual Services that implement the service mesh are configured using service endpoints. An Ingress Gateway is configured to route end user requests into the service mesh to a first service instance in the tenant-selected primary datacenter. According to the tenant-specific preferences, data replication is configured to copy data to redundant storage. Persistent storage replication agents are configured for each service persistence in the tenant-selected primary datacenter, using the endpoints of the persistent storage replication agents for each service persistence.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
LLMs use probabilistic methods to create coherent responses, sometimes going beyond the training data. This can result in “LLM hallucination.” Asset metadata is used in constructing prompts for the LLM, enhancing the LLM's understanding of available assets. Including a limited set of candidate assets in an LLM prompt template as part of a chain-of-thought prompt can solve the hallucination issue. Multiple rounds of interaction with the LLM may be used, allowing for more dynamic and responsive user engagement. The LLM-based recommender system for data catalogs may comprise asset metadata, user data, and a recommender model. The output of the LLM-based recommender system is a recommended asset list. The prompts may explicitly instruct the LLM to limit elements of the list to the candidate set of assets. The LLM's advanced context understanding and reasoning abilities enable it to deliver accurate and interpretable personalized recommendations.
The present disclosure provides techniques and solutions for facilitating the creation of application extensions, including in an extension environment that provides improved extension execution. An interface or model for an application extension can be provided. The extension can represent a particular point in an application's processing where extensions, if present, can be called. The interface or model specifies general features for the extension, such as arguments that are provided when an extension is called or return values that may be expected in response to extension execution. The interface or model can also specify functionality of the base application that an extension implementation can call during its execution. Thus, guidance is provided to developers in writing extension implementations, facilitating their development. The extension implementation, and optionally other code, such as the model, can be compiled for use in an extension runtime.
In an example embodiment, rather than send the state of the UI screen as a whole to an LLM to generate or modify one or more values on the UI screen, the identifiers and data types of data on the UI screen are gathered and sent to the LLM. The LLM is instructed to return a list of expressions that define what needs to be done to satisfy a user prompt. The calling process then evaluates the list of expressions to actually manipulate the data content.
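A toy sketch of this approach: only field identifiers and types are shared with the LLM, and the returned expressions are evaluated locally against the screen data. The expression format and the LLM response are assumptions; a production system would use a safe expression parser rather than eval.

```python
fields = {"net_amount": 100.0, "tax_rate": 0.19, "gross_amount": 0.0}
schema = {k: type(v).__name__ for k, v in fields.items()}   # sent to the LLM

# hypothetical LLM response for the prompt "fill in the gross amount":
expressions = [("gross_amount", "net_amount * (1 + tax_rate)")]

for target, expr in expressions:
    # evaluate with the screen fields as the only names in scope
    fields[target] = eval(expr, {"__builtins__": {}}, dict(fields))

print(fields["gross_amount"])   # 119.0 -- data manipulated by the caller
```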
Briefly, embodiments of a system, method, and article are described for scanning a database for a containerized environment to acquire information for a set of containers for applications. The information may include at least namespace associations and metadata for individual containers of the set of containers. One or more orphan containers may be detected within the set of containers for applications, where the one or more orphan containers may be associated with unknown owner information. Defined rules may be applied to determine owner information for at least one of the one or more orphan containers based at least in part on the namespace associations and metadata for the at least one of the one or more orphan containers. The determined owner information for the at least one of the one or more orphan containers may be stored in a storage device. A timer may be initialized in response to detecting at least one remaining orphan container of the one or more orphan containers for which the owner information remains unknown after the applying of the defined rules to determine the owner information. The at least one remaining orphan container for which owner information remains unknown may be deleted in response to expiration of the timer.
The present disclosure involves systems, software, and computer implemented methods for custom processing in data privacy integration protocols. One example method includes a customer of a data privacy integration (DPI) service providing custom logic for customizing the DPI service. The DPI service generates and sends a DPI work package in response to receiving a DPI protocol request from the customer. Responder applications that receive the DPI work package evaluate the DPI work package and send work package responses. At least one responder application includes in at least one work package response at least one responder feedback flag. The DPI service evaluates the DPI work package responses including the evaluation of at least one responder feedback flag using the custom logic provided by the customer. The DPI service sends a response to the DPI protocol request that includes an overall status of DPI processing of the DPI protocol request.
Methods, systems, and computer-readable storage media for remediation management. Remediation rule definitions are received. The definitions include a replacement remediation action defining placeholder parameters and conditions for replacing erroneous entries of a plurality of data tables to remedy the erroneous entries of the plurality of data tables. The remediation rule definitions are mapped to the data tables. An identification of data tables to be verified is received. Data of the data tables is verified to identify erroneous entries. Remediation plans including applicable remediation rule definitions mapped to the one or more data tables are selected using a prediction model. A remediation plan includes remediation rule definitions to correct the erroneous entries in each of the data tables. The remediation plan is applied to replace the erroneous entries in the one or more data tables with corrected entries.
Example methods and systems are directed to categorizing support tickets for more efficient handling by support staff. Support staff may be divided into multiple support groups. An incoming support ticket is converted to a machine representation and provided as input to one or more trained machine learning models. Based on the output from the one or more trained machine learning models, the support ticket is routed to one of the support groups. As a result, some tickets will be directly routed to higher-level support groups instead of having all tickets first be evaluated by L1 support personnel. Accordingly, support staff resources are conserved. A model stacking technique may be used in which models of varying complexities, ranging from very simple to highly complex, are stacked in sequence one after another.
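The model-stacking idea can be sketched as follows: models of increasing complexity are tried in sequence, and a ticket is only passed to the next model when the simpler one is not confident. The models, thresholds, and group names below are toy assumptions.

```python
# Sketch of stacking models of varying complexity for ticket routing.
def keyword_model(ticket: str):
    if "password" in ticket.lower():
        return ("L1-access", 0.95)  # simple rule, high confidence
    return (None, 0.0)

def heavyweight_model(ticket: str):
    # stand-in for e.g. a fine-tuned transformer classifier
    return ("L3-database", 0.80)

# (model, minimum confidence) pairs, simplest first
STACK = [(keyword_model, 0.90), (heavyweight_model, 0.50)]

def route(ticket: str) -> str:
    for model, threshold in STACK:
        group, confidence = model(ticket)
        if group is not None and confidence >= threshold:
            return group
    return "L1-triage"  # fallback: human triage

print(route("Cannot reset my password"))      # L1-access
print(route("Deadlocks in HANA since noon"))  # L3-database
```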
The present disclosure involves systems, software, and computer implemented methods for dynamically balancing different interests in data privacy integration protocols. One example method includes receiving, by different applications in a multiple-application landscape, consent information from data subjects that indicates consent or objection to processing performed by a data controller for a specified purpose. Consent information received by different applications is distributed to the different applications in the landscape. Each application synchronizes consent information obtained by the application and consent information obtained by other applications and distributed to the application to generate synchronized consent information. A first processing action to be performed by a first application for a first purpose for a data subject is identified. Synchronized consent information for the data subject is retrieved and evaluated to determine whether the first processing action can be performed for the first purpose by the first application for the data subject.
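A minimal sketch of the synchronization and evaluation steps, assuming consent records carry a timestamp so that the latest decision per subject and purpose wins (the record shape is an assumption):

```python
# Sketch of merging local and distributed consent records, then checking
# whether a processing action is permitted.
def synchronize(local: list[dict], distributed: list[dict]) -> dict:
    """Keep the latest consent decision per (subject, purpose)."""
    merged: dict[tuple, dict] = {}
    for record in sorted(local + distributed, key=lambda r: r["timestamp"]):
        merged[(record["subject"], record["purpose"])] = record
    return merged

def can_process(merged: dict, subject: str, purpose: str) -> bool:
    record = merged.get((subject, purpose))
    return bool(record) and record["decision"] == "consent"

local = [{"subject": "s1", "purpose": "marketing",
          "decision": "consent", "timestamp": 1}]
distributed = [{"subject": "s1", "purpose": "marketing",
                "decision": "objection", "timestamp": 2}]
merged = synchronize(local, distributed)
print(can_process(merged, "s1", "marketing"))  # False -- later objection wins
```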
The present disclosure involves systems, software, and computer implemented methods for custom processing in data privacy integration protocols. One example method includes receiving custom logic from a customer of a DPI (data privacy integration) service for customizing applications to evaluate requesting ground values received from the DPI service. The DPI service receives a protocol request that includes a requesting ground value. The DPI service generates and sends a work package that includes the requesting ground value. Each responder application that receives the work package evaluates the work package using the custom logic. The responder applications that received the work package send work package responses to the DPI service that include status information of the responder applications processing the work package. The DPI service evaluates the work package responses and sends a response to the protocol request that includes an overall status of DPI processing of the protocol request.
According to some embodiments, systems and methods are provided including a memory storing processor-executable code; and a processing unit to execute the processor-executable program code to cause the system to: receive a request for a sequence, the sequence including two or more components; identify one or more possible sequences based on the request; build a sequence based on the identified one or more possible sequences, wherein building the sequence includes adding an extension to the sequence; validate the built sequence; and store the validated sequence. Numerous other aspects are provided.
Systems and methods include reception, at a first microservice of a microservice-based application, of an indicator of a workload of an entry microservice of the microservice-based application, determination, based on the indicator of the workload, of an estimated future workload of the first microservice, and re-allocation of computing resources to the first microservice based on the estimated future workload.
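One way such a forecast-driven re-allocation could look, assuming an exponentially smoothed workload estimate and a fixed per-replica capacity (both assumptions, not the disclosed method):

```python
# Sketch of scaling a dependent microservice from an entry-service
# workload indicator, ahead of the load actually arriving.
def estimate_future_workload(entry_rps: float, fanout: float = 1.5,
                             smoothed: float = 0.0, alpha: float = 0.3) -> float:
    """Exponentially smoothed estimate of this service's future requests/s."""
    return alpha * (entry_rps * fanout) + (1 - alpha) * smoothed

def target_replicas(workload_rps: float, capacity_per_replica: float = 50.0) -> int:
    return max(1, -(-int(workload_rps) // int(capacity_per_replica)))  # ceil div

smoothed = 0.0
for entry_rps in [100, 180, 240]:  # indicators received from the entry service
    smoothed = estimate_future_workload(entry_rps, smoothed=smoothed)
    print(f"entry={entry_rps} rps -> estimate={smoothed:.0f} rps, "
          f"replicas={target_replicas(smoothed)}")
```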
Methods and systems are disclosed for database management with controlling documents. Tasks addressed include: identification of requirements in the documents, tracking changes as documents evolve, mapping documents or requirements to database entries, identifying gaps between documents and the database, and proposing database updates. Disclosed embodiments address these tasks using a combination of sequential program logic, machine-learning tools, and client interaction. Workflows address one or more tasks. Examples pertaining to regulatory documents are presented. Variations are disclosed.
In an example embodiment, a software application is introduced that is able to automatically detect whether a conversation in a chat interface is with a human or an artificial intelligence. More specifically, the software application is able to identify how the chat interface is interacted with and replicate that mechanism to allow the software application to directly contact the other party (whether human or AI) on the other side of a chat conversation.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 51/06 - Message adaptation to terminal or network requirements
A computer-implemented method receives a request to explain a configuration key which represents a set of rules for controlling a process flow of an entity of an enterprise resource planning (ERP) system, the set of rules defining a data operation scheme based on a plurality of tables stored in a database of the ERP system. The method generates a data object from the plurality of tables, the data object including a group of key-value pairs which collectively define the set of rules, preserving the hierarchy of the involved tables. The method generates a prompt based on the data object generated from the plurality of tables, prompts a large language model using the prompt, receives a response from the large language model, and based on the response, outputs an explanation of the configuration key summarizing the set of rules in natural language. Related computing system and software are also disclosed.
Systems and methods include reception of a first request, a first token and a first request identifier of the first request at a first microservice and, while processing the first request, determination to access first data of a first user of a first tenant, querying of a data source for an identifier of a second tenant and an identifier of a second user associated with the first request identifier, determination that the first tenant and the second tenant are identical and the first user and the second user are identical, and, in response to determining that the first tenant and the second tenant are identical and the first user and the second user are identical, accessing of the first data.
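A compact sketch of this consistency check, with the data source modelled as an in-memory registry (an assumption for illustration):

```python
# Sketch of the token/tenant/user consistency check before data access.
class ForbiddenError(Exception):
    pass

# maps a request identifier to the (tenant, user) recorded when the
# request entered the system
request_registry = {"req-1": ("tenant-A", "user-7")}

def handle_request(request_id: str, token_tenant: str, token_user: str,
                   load_data):
    """Only access user data when token and registry identities match."""
    registered = request_registry.get(request_id)
    if registered != (token_tenant, token_user):
        raise ForbiddenError("tenant/user mismatch for request " + request_id)
    return load_data(token_tenant, token_user)

data = handle_request("req-1", "tenant-A", "user-7",
                      load_data=lambda t, u: {"tenant": t, "user": u, "rows": []})
print(data)
```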
Systems and methods include execution of a script in an execution environment, the script implementing a portion of a flow to receive a message from a sender and transmit the message to a receiver, determination of resource consumption data indicating resource consumption in the execution environment during execution of the script, transmission of a prompt to a text generation model, the prompt including the resource consumption data and the script, reception, from the text generation model and in response to the prompt, of a response indicating one or more modifications to the script, and presentation of the one or more modifications.
In an example embodiment, a framework is provided to allow an LLM to regenerate an intermediate representation based on natural language feedback that is provided on the compilable computer code that was generated, using a programmatic component, from an earlier intermediate representation. The user is able to provide this feedback without having any knowledge of the intermediate representation, and the framework is designed to allow the LLM to incorporate this feedback into future generation requests without the LLM being aware of the compilable computer code that was created from its prior iterations.
Systems and processes for aligning weakly-annotated tabular data to recognized characters in a document are provided. Data table annotations are grouped by column and value to produce a plurality of annotation groups. Character recognition tokens are received, and a search algorithm is performed to align the annotation groups to the tokens in a stepwise manner. A base column is designated, after which the search algorithm is performed on the base column annotation groups and the results are used to determine a vertical range of the data table. The search algorithm is then run for the other columns of the data table with the results being limited based on the vertical range. At each step, the annotations in an annotation group are each aligned to one or more of the tokens. A bounding box is generated for each annotation in the data table and output to a target application.
In an example embodiment, a singular test is used to validate a user entity data model for any type of instance in an efficient manner. This results in an Entity-Agnostic test. This approach provides substantial savings from a test implementation, support, data storage, and test triaging perspective, while still allowing the functional correctness of the data model to be validated for any customer or user.
In some implementations, there is a method including searching a plurality of word embeddings representative of a plurality of materials each mapped to a corresponding emission factor by comparing at least one word embedding representative of at least one material to at least a portion of the plurality of word embeddings. Related systems, methods, and articles of manufacture are also disclosed.
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; planning actions based on goals; analysis or evaluation of effectiveness of goals
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
Various examples are directed to systems and methods for executing a computer-automated process using trained machine learning (ML) models. A computing system may access first event data describing a first event. The computing system may execute a first ML model to determine an ML characterization of the first event using the first event data. The computing system may also apply a first rule set to the first event data to generate a rule characterization of the first event. The computing system may determine an output characterization of the first event based at least in part on the rule characterization of the first event and determine to deactivate the first rule set based at least in part on the ML characterization of the first event.
One or more applications access a first set of database tables via a first projection view mapping from a first name to a second name. At a given point in time, the one or more applications may detect an indication of a modification operation targeting the first set of database tables. In response to detecting the indication of the modification operation, the one or more applications may create a second set of database tables for storing results of the modification operation. In response to detecting a completion of the modification operation, the first projection view may be remapped from the first name to a third name, where the third name is associated with the second set of database tables. Then, the one or more applications may access the second set of database tables via the first projection view mapping from the first name to the third name.
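Expressed as SQL emitted from Python, the remapping flow might look as follows; the table and view names and the CREATE OR REPLACE VIEW mechanics are assumptions about the dialect, not the disclosed interface:

```python
# Sketch of the projection-view remapping: readers keep using the view
# name while the underlying table set is swapped out.
def remap_statements(view: str, old_table: str, new_table: str) -> list[str]:
    return [
        # 1. build the new table set while readers still use the old one
        f"CREATE TABLE {new_table} AS SELECT * FROM {old_table}",
        # ... the modification operation writes its results into the new table ...
        # 2. on completion, atomically remap the projection view
        f"CREATE OR REPLACE VIEW {view} AS SELECT * FROM {new_table}",
    ]

for stmt in remap_statements("SALES_V", "SALES_2024_A", "SALES_2024_B"):
    print(stmt + ";")
```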
Complex data models integrate information from diverse sources in a data modeling server. A parser and integrator service in the data modeling server accesses metadata describing a data model and uses a template for creating a document. Based on the template and the metadata, the parser and integrator service automatically generates a document that shows element properties of data models. This document serves as an abstract representation, visually illustrating the properties of elements within the data models. The system facilitates interactive user input, enabling users to input prompts directed to a generative AI component. The AI component processes the prompts, generating results that are seamlessly integrated into the automatically generated document. In this way, automated processes guided by metadata and templates generate documents representing the properties of complex data models, and user interaction with generative AI adds a dynamic layer, enhancing the document with tailored insights.
Described herein are techniques for determining whether a trained machine learning model has captured all of the deterministic relations in a dataset. In some examples, the techniques may be applied to the training dataset along with the validation or test dataset. First, the input variables from the dataset are fed into the trained machine learning model to generate predicted outputs. Second, the correctness of the predicted outputs is compared against the output variables from the dataset, also known as the ground truth. The correctness is represented by residuals. Third, the residuals and the input variables are correlated. If correlation exists, then the trained machine learning model has not captured all of the deterministic relations in the dataset.
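A self-contained sketch of the third step, using a pure-Python Pearson correlation and a deliberately underfit linear model so that the residuals correlate with the input (the data and model are toy assumptions):

```python
# Sketch of the residual check: if residuals correlate with an input
# variable, the model has not captured all deterministic relations.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y_true = [2.0 * v + 5.0 for v in x]  # ground truth: linear in x
y_pred = [1.5 * v + 5.0 for v in x]  # underfit model with the wrong slope
residuals = [t - p for t, p in zip(y_true, y_pred)]

r = pearson(x, residuals)
print(f"corr(x, residuals) = {r:.2f}")  # 1.00 -> relation not fully captured
```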
A system associated with a user experience survey framework for an enterprise may include an enterprise product hierarchy data store that contains information about a hierarchy of product nodes. Each product node may be, for example, associated with a user application. A computer processor of a user experience survey tool may receive from the enterprise an adjustment to the hierarchy of product nodes and store an adjusted hierarchy of product nodes into the enterprise product hierarchy data store. The user experience survey tool may then retrieve user experience survey results for a plurality of user applications. The retrieved user experience survey results are automatically aggregated in accordance with the adjusted enterprise product hierarchy and an aggregation rule selected by the enterprise. An indication of the aggregated user experience survey results may then be output to the enterprise.
System, method, and various embodiments for a medicine marketplace system are described herein. An embodiment operates by detecting that a requester has submitted a request for medicine. It is determined that a first threshold has been crossed prior to an expiration date of the medicine corresponding to a provider. A first notification is provided to the provider indicating that the first threshold has been crossed. An acknowledgement to make the medicine available for transfer to the requester is received. A transaction to transfer the medicine from the provider to the requester prior to the expiration date is consummated.
A computer implemented method can execute a first query plan for a query, obtain statistics for internal nodes of a first query tree representing the first query plan, receive a second query tree representing a second query plan for the query, search for a matching internal node of the first query tree for a selected internal node of the second query tree, and responsive to finding the matching internal node of the first query tree, apply the statistics for the matching internal node of the first query tree to the selected internal node of the second query tree for estimating cost of the second query plan during query optimization of the query. Related systems and software for implementing the method are also disclosed.
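The matching step can be sketched by giving each internal node a structural signature and reusing measured row counts across plans; the node shape and signature function are assumptions for illustration:

```python
# Sketch of transferring runtime statistics between query trees.
from dataclasses import dataclass, field

@dataclass
class PlanNode:
    op: str                            # e.g. "JOIN", "SCAN(t1)"
    children: list = field(default_factory=list)
    actual_rows: int | None = None     # filled in after execution
    est_rows: int | None = None        # filled in during optimization

def signature(node: PlanNode) -> tuple:
    """Order-insensitive structural signature of a subtree."""
    return (node.op, tuple(sorted(signature(c) for c in node.children)))

def collect(node: PlanNode, stats: dict):
    if node.actual_rows is not None:
        stats[signature(node)] = node.actual_rows
    for c in node.children:
        collect(c, stats)

def apply_stats(node: PlanNode, stats: dict):
    match = stats.get(signature(node))
    if match is not None:
        node.est_rows = match          # reuse the measured cardinality
    for c in node.children:
        apply_stats(c, stats)

executed = PlanNode("JOIN", [PlanNode("SCAN(t1)", actual_rows=1000),
                             PlanNode("SCAN(t2)", actual_rows=50)],
                    actual_rows=120)
candidate = PlanNode("JOIN", [PlanNode("SCAN(t2)"), PlanNode("SCAN(t1)")])

stats: dict = {}
collect(executed, stats)
apply_stats(candidate, stats)
print(candidate.est_rows)  # 120 -- reused despite the reordered children
```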
Systems and methods described herein relate to the automated generation of object reference structures for data purging processes. A primary query is executed in a database to identify first data objects with reference relationships to a root data object. Secondary queries are executed in the database to identify additional data objects with reference relationships to the first data objects or to other additional data objects identified in a previous one of the secondary queries. An object reference structure is generated based on the reference relationships among the root data object, the first data objects, and the additional data objects. A graphical representation of the object reference structure is presented via a user interface. A purge instruction can be received with respect to at least part of the object reference structure. Execution of a data purge is triggered to purge data from the database according to the purge instruction.
The present disclosure relates to techniques for automatically generating new data objects from user input. The system receives user input comprising a plurality of words and executes a first query on a vector store to identify schema elements similar to keywords in the user input. The vector store provides a response with similarity scores for identified elements. A second query is executed on a knowledge graph to identify association paths between data objects that include the identified elements. The knowledge graph response includes association information linking source and target data objects through selected elements. Full association paths are constructed from this information, and a command is generated to instantiate a new data object with elements corresponding to the user input. This approach leverages the strengths of large language models, vector stores, and knowledge graphs to efficiently and accurately create new data objects, ensuring data integrity and relevance.
Systems and methods described herein relate to contextual navigation for embedded analytics integrations. An embedded analytics user interface is presented within a first application. The embedded analytics user interface is provided by a second application and includes a plurality of user-selectable data items of the second application. A user selection of a first data item from among the plurality of user-selectable data items is detected via the embedded analytics user interface. Context data related to the user selection is identified. The context data is identified based at least partially on a stored mapping between the first data item of the second application and a second data item of the first application. One or more navigation options associated with the context data are obtained from the first application. An interface element that includes the one or more navigation options is presented within the embedded analytics user interface.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
FRAMEWORK FOR EMBEDDING GENERATIVE AI INTO ERP SYSTEMS
A computer-implemented method can run an application associated with an intelligent scenario deployed on an enterprise resource planning (ERP) system. The application receives input values for one or more parameters from a tenant user through a user interface of the ERP system. The method can select a prompt template defined in the intelligent scenario, generate a prompt using the prompt template by replacing the one or more parameters included in the prompt template with respective input values, prompt a large language model (LLM) specified by the intelligent scenario using the prompt, receive a response generated by the LLM, and present the response on the user interface of the ERP system.
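A minimal sketch of the template-substitution step, assuming ${...} placeholder syntax and a stubbed model call (both assumptions rather than the disclosed design):

```python
# Sketch of generating a prompt from an intelligent scenario's template.
import string

scenario = {
    "template": "Summarize the overdue items for customer ${customer_id} "
                "in ${language}.",
    "model": "assumed-llm-endpoint",
}

def build_prompt(scenario: dict, inputs: dict) -> str:
    # substitute user-supplied parameter values into the template
    return string.Template(scenario["template"]).substitute(inputs)

def call_llm(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt}"  # stand-in for the real call

prompt = build_prompt(scenario, {"customer_id": "C-1001", "language": "English"})
print(call_llm(scenario["model"], prompt))
```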
Techniques and solutions are provided for improved use of knowledge graphs in document processing. The relevance of properties to a knowledge graph may change over time. A property may appear in processed documents, but it may take some time before it is apparent that the property should be used in a knowledge graph. Similarly, while a property may be relevant for a period of time, it can lose its relevance. The present disclosure provides techniques for tracking the use of properties over time and making or proposing property status changes. These changes can result in making the properties visible or non-visible in a knowledge graph, which in turn can affect how future documents are processed. Further, in some cases a property can be made active, and documents processed when the property was not present or not active can be reprocessed to obtain information for the property.
A plurality of candidate phrases are extracted from a first field of a received textual input. Also, first context data is extracted from a second field of the received textual input. Additionally, second context data is retrieved from one or more data sources related to the received textual input. The first context data and the second context data are combined to form combined context data. Then, the plurality of candidate phrases and the combined context data are vectorized. Next, for each candidate phrase of the plurality of candidate phrases, a similarity score is calculated between a vectorized version of the candidate phrase and a vectorized version of the combined context data. Then, a subset of candidate phrases having the highest calculated similarity scores is selected. Next, the subset of candidate phrases is provided to one or more applications which are generating responses to the textual input.
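A toy end-to-end sketch follows, with a bag-of-words vectorizer standing in for whatever embedding the implementation would actually use:

```python
# Sketch of scoring candidate phrases against combined context data.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())  # toy bag-of-words embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

candidates = ["invoice overdue", "team lunch", "payment reminder invoice"]
combined_context = "customer invoice payment overdue reminder"

ctx_vec = vectorize(combined_context)
scored = sorted(candidates,
                key=lambda c: cosine(vectorize(c), ctx_vec), reverse=True)
print(scored[:2])  # the two best-matching phrases are passed downstream
```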
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
According to some embodiments, systems and methods are provided including a test data repository storing test data; a memory storing processor-executable program code; and a processing unit to execute the processor-executable program code to cause the system to: change a state of the stored test data from a first state to a second state in response to execution of test executable code, the execution using the first state test data; store the second state test data; detect a difference between the first state test data and the second state test data by comparing the stored second state test data to the first state test data; and restore the stored second state test data to the first state test data. Numerous other aspects are provided.
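A minimal sketch of the snapshot/compare/restore cycle, with the test data repository modelled as a plain dict (an assumption for illustration):

```python
# Sketch of detecting and reverting test-induced changes to stored test data.
import copy

repository = {"orders": [{"id": 1, "status": "open"}]}

first_state = copy.deepcopy(repository)        # snapshot before the test

def run_test(repo):
    repo["orders"][0]["status"] = "confirmed"  # the test mutates the data

run_test(repository)
second_state = copy.deepcopy(repository)       # snapshot after the test

# detect which entries the test changed
changed = {k for k in first_state if first_state[k] != second_state[k]}
print("changed:", changed)                     # {'orders'}

repository = copy.deepcopy(first_state)        # restore to the first state
print(repository["orders"][0]["status"])       # open
```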
Techniques and solutions are provided for providing alerts to users when information changes, particularly information associated with a knowledge graph. A user can define an intent, where the intent describes the type of information for which a user desires to receive alerts. The intent can be specified directly with respect to knowledge graph elements, or the intent can be specified in another manner and mapped to such elements. A listener is implemented for the intent. A knowledge graph is periodically reviewed for updates. Updates that are relevant to a particular user intent cause the associated listener to be triggered, and information regarding the update is then provided to the user.
A computer-implemented method can receive a natural language query input from a user interface, extract a target entity from the natural language query input, identify a target application programming interface (API) corresponding to the target entity, formulate an API query using the target API, and execute the API query to generate a query output on the user interface. Identifying the target API includes generating a vector representation of the target entity, searching an entity vector database containing vector representations of a plurality of APIs to return one or more candidate APIs whose vector representations match the vector representation of the target entity, and prompting a generative artificial intelligence model to select the target API from the one or more candidate APIs.
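A sketch of the candidate lookup and selection, with a toy embedding, an assumed API catalogue, and a stub standing in for the generative-model choice:

```python
# Sketch of nearest-neighbour API lookup followed by model-based selection.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # toy embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

api_catalogue = {
    "GET /salesOrders": "sales order list status customer",
    "GET /purchaseOrders": "purchase order supplier list",
    "GET /products": "product master data list",
}

def candidate_apis(target_entity: str, k: int = 2) -> list[str]:
    query = embed(target_entity)
    ranked = sorted(api_catalogue,
                    key=lambda api: cosine(query, embed(api_catalogue[api])),
                    reverse=True)
    return ranked[:k]

def pick_api(entity: str, candidates: list[str]) -> str:
    return candidates[0]  # stand-in for prompting a generative model

entity = "sales order status"
print(pick_api(entity, candidate_apis(entity)))  # GET /salesOrders
```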
A base application data store may contain a base application source metadata file with at least one extension point that defines a permission associated with the extension point in connection with a base application. An extension application server, coupled to the base application data store, retrieves the base application source metadata file and caches the base application source metadata file as a read only file. The extension application server then generates a merge map file with an extension change to the base application source metadata file. The base application source metadata file and the merge map file may be, for example, merged in accordance with the defined permission to create a result metadata file for the base application. In some embodiments, the extension application server subscribes to application updates for the base application, and the extension change is reported and validated in accordance with the permission associated with the extension point.
A database management system generates a cache entry system view of a database cache. Also, a new column is generated for the cache entry system view, where the new column is an ENTRY_HASH column to identify each entry of the database cache. In an example, the database management system detects a request to remove a given entry of the database cache, where the request includes a given ENTRY_HASH value to locate the given entry of the database cache. In response to receiving the request, the database management system identifies the given entry of the database cache based on the given ENTRY_HASH value. Then, the database management system removes the given entry of the database cache. Also, the database management system notifies a cache manager that the given entry has been removed.
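Sketched as SQL issued from Python; the view name M_CACHE_ENTRIES, the ENTRY_HASH derivation, and the removal statement are assumptions rather than an actual DBMS interface:

```python
# Sketch of hash-addressed cache eviction via a cache entry system view.
import hashlib

def entry_hash(cache_key: str) -> str:
    return hashlib.sha256(cache_key.encode()).hexdigest()[:16]

def eviction_statements(target_hash: str) -> list[str]:
    return [
        # locate the entry in the cache entry system view by its hash
        f"SELECT * FROM M_CACHE_ENTRIES WHERE ENTRY_HASH = '{target_hash}'",
        # remove exactly that entry; the cache manager is then notified
        f"ALTER SYSTEM REMOVE CACHE ENTRY '{target_hash}'",
    ]

h = entry_hash("SELECT * FROM sales WHERE year = 2024")
for stmt in eviction_statements(h):
    print(stmt + ";")
```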