Various examples described herein are directed to systems and methods for operating a database management system. A first host may be executed at a first container of a cloud environment. The first host may be configured according to a first set of operating parameters and may perform a source role. A second host may be executed at a second container of the cloud environment. The second host may be configured according to the first set of operating parameters and may perform a replica role or the source role. A request to modify at least one operating parameter of the first host may be received. A third host may be started at a third container and configured to perform at least one of the source role or the replica role corresponding to the source role. At least one of the first host or the second host may be shut down.
G06F 16/27 - Replication, distribution or synchronising of data between databases or within a distributed database system; Distributed database system architectures therefor
2.
DATABASE REPLICATION WITH HOST REPLICATION AT ASYNCHRONOUSLY REPLICATED SYSTEM
Various examples are directed to systems and methods for operating a primary database management system and a secondary database management system. The secondary database management system may receive a takeover request indicating that the secondary database management system is to assume the role of the primary database management system. The secondary database management system may determine that a last valid commit of a first host of the secondary database management system is an oldest last valid commit. The secondary database management system may revert to a first state of the primary database management system corresponding to the last valid commit of the first host. The secondary database management system may be configured to assume the role of the primary database management system.
G06F 16/27 - Replication, distribution or synchronising of data between databases or within a distributed database system; Distributed database system architectures therefor
Various embodiments for a disk-based merge for hash maps are described herein. An embodiment operates by identifying a plurality of hash maps with a plurality of disjunctions. The hash values of each of the entries may be moved to memory and compared for a particular disjunction. A data value with a lower hash value as determined based on the comparison is selected and stored in a merged hash map. The process is repeated until all the data values have been compared. A query is received, and processed based on the merged hash map.
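As a rough illustration of the merge step described above, the following Python sketch merges two hash-value-ordered runs of a disjunction into a single merged hash map by repeatedly selecting the entry with the lower hash value; the run layout and all names are hypothetical simplifications rather than the claimed on-disk format.

```python
# Minimal sketch of merging hash-map "runs" by comparing hash values.
# Assumes each run is already ordered by hash value (as the abstract's
# ordering step implies); run contents and names are hypothetical.

def merge_runs(run_a, run_b):
    """Merge two iterables of (hash_value, data_value) pairs, each sorted
    by hash_value, into a single merged hash map (dict keyed by hash)."""
    merged = {}
    it_a, it_b = iter(run_a), iter(run_b)
    a, b = next(it_a, None), next(it_b, None)
    while a is not None and b is not None:
        # Select the entry with the lower hash value and store it.
        if a[0] <= b[0]:
            merged.setdefault(a[0], a[1])
            a = next(it_a, None)
        else:
            merged.setdefault(b[0], b[1])
            b = next(it_b, None)
    # Drain whichever run still has entries.
    for rest, nxt in ((a, it_a), (b, it_b)):
        while rest is not None:
            merged.setdefault(rest[0], rest[1])
            rest = next(nxt, None)
    return merged

if __name__ == "__main__":
    run_a = [(3, "apple"), (7, "pear")]
    run_b = [(1, "plum"), (7, "pear"), (9, "fig")]
    print(merge_runs(run_a, run_b))  # {1: 'plum', 3: 'apple', 7: 'pear', 9: 'fig'}
```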
Various embodiments for deduplication of APIs using similarity measures and artificial intelligence are described herein. An embodiment operates by receiving a request to compare a first computing program to a second computing program, wherein each computing program includes an address, one or more tables accessed by a respective computing program, one or more input parameters, and one or more output parameters. Similarity measures are calculated between the addresses, tables, input parameters, and output parameters of the two computing programs. The similarity measures are provided to a trained artificial intelligence (AI) model, which generates a similarity determination. An action is performed based on the similarity determination.
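The similarity-measure step can be pictured with the sketch below, which computes Jaccard similarities over the address, tables, input parameters, and output parameters of two hypothetical API descriptions; the trained AI model is replaced by a simple weighted threshold purely for illustration, and all field names are assumptions.

```python
# Illustrative sketch of per-aspect similarity measures between two APIs;
# the "AI model" is a weighted-average threshold stand-in.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_features(prog_a, prog_b):
    """Compute one similarity measure per aspect of the two API descriptions."""
    return {
        "address": jaccard(prog_a["address"].split("/"), prog_b["address"].split("/")),
        "tables": jaccard(prog_a["tables"], prog_b["tables"]),
        "inputs": jaccard(prog_a["inputs"], prog_b["inputs"]),
        "outputs": jaccard(prog_a["outputs"], prog_b["outputs"]),
    }

def is_duplicate(features, weights=None, threshold=0.8):
    """Stand-in for the trained model: a weighted average of the measures."""
    weights = weights or {k: 0.25 for k in features}
    score = sum(weights[k] * v for k, v in features.items())
    return score >= threshold, score

api_1 = {"address": "/v1/orders/list", "tables": {"ORDERS"},
         "inputs": {"customer_id"}, "outputs": {"order_id", "total"}}
api_2 = {"address": "/v2/orders/list", "tables": {"ORDERS"},
         "inputs": {"customer_id"}, "outputs": {"order_id", "total"}}
print(is_duplicate(similarity_features(api_1, api_2)))  # (True, 0.9)
```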
Contribution requests to a code repository are analyzed with a machine learning model before publishing. The machine learning model can be trained with past metadata of the contributor. Metadata can be extracted from the requests to determine whether the request is atypical for the contributor via a risk score. Requests determined to be atypical can be flagged for action by a security manager. Realtime assessment of code contributions can increase overall software security in a software development context.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/56 - Detection or handling of malicious programs, e.g. anti-virus arrangements
6.
WEB-BASED AUTOMATED HTML ELEMENT LOCATION PROVIDER
Briefly, embodiments of a system, method, and article are described for receiving a user selection of a HyperText Markup Language (HTML) element on a web page. A source representation of objects which comprise a structure and content of the web page may be automatically acquired. The source representation may be automatically processed to determine an ordered list of candidate locations for the HTML element. An output locator may be generated and displayed. The output locator may present the ordered list of candidate locations for the HTML element.
Techniques for validation against aggregation across different versions of data obtain a selection of a data set, a dimension configuration for a data visualization, and a filter configuration for the data visualization. It is then determined whether the data visualization is valid based on a determination that zero or more filter dimensions include a version dimension, or a determination that zero or more selected dimensions include the version dimension, or a determination that the zero or more selected dimensions include at least one measurable dimension and that all measurable dimensions among the selected dimensions are restricted to a single version of data from among multiple versions of the same data. The data visualization is generated in response to a determination that the data visualization is valid.
G06F 16/28 - Databases characterised by their models, e.g. relational or object models
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
G06F 16/26 - Visual data mining; Browsing structured data
This disclosure describes systems, software, and computer-implemented methods including receiving a request from a second entity to transfer a digital asset to a first entity. A request for an address and a public key is sent to an asset custody application executing on a device that is managed by the first entity. A public key and a wallet address are received, and the wallet address is sent to the second entity. This disclosure further describes receiving a request to transfer a digital asset from a first entity to a second entity. A transaction package is generated pursuant to one or more parameters, the transaction package including a transaction and a public key of a digital wallet associated with the first entity. The transaction is executed and the signed transaction package is sent to a distributed ledger.
G06Q 20/38 - Payment protocols; Details of payment architectures, schemes or protocols
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices, using electronic wallets or electronic money safes
9.
SCHEDULING SERVICES IN CLOUD-BASED SYSTEMS TO AVOID TIMEOUT
Methods, systems, and computer-readable storage media for receiving, by a scheduled transaction manager, a first request for a first global transaction for an application executed within a cloud-based system, the application including a set of services, where execution of the first global transaction requires a set of participant services, in response to receiving the first request, transmitting, by a scheduled transaction coordinator, a first set of requests for a first set of local transactions to the set of participant services, receiving, by the scheduled transaction coordinator, indications of reserved resources from participant services, determining that indications of reserved resources have been received from all participant services in the set of participant services and, in response, inhibiting cancelation of resource reservations for each of the participant services in the set of participant services, and receiving one or more results of local transactions in the set of local transactions.
Techniques and solutions are described for facilitating data entry using machine learning techniques. A machine learning model can be trained using values for one or more data members of at least one type of data object, such as a logical data object. One or more input recommendation functions can be defined for the data object, where an input recommendation function is configured to use the machine learning model to obtain one or more recommended values for a data member of the data object. A user interface control of a graphical user interface can be programmed to access a recommendation function to provide a recommended value for the user interface control, where the value can be optionally set for a data member of an instance of the data object. Explanatory information can be provided that describes criteria used in determining the recommended value.
A scale-out computing cluster may include a large number of computing servers and storage devices. In order to provide high reliability, the computing cluster must be able to handle failures of individual devices. Reliability of the computing cluster may be improved by providing a standby server for each active server in the computing cluster. If any active server fails, the corresponding standby server is activated. The failed server may be brought back online or replaced, at which time the restored server becomes the standby server for the now-active original standby server. During the restoration period, if any other active server fails, the standby server for that active server is immediately activated. As a result, the recovery ability of the computing cluster is only challenged if both servers of an active/standby pair fail during the restoration period, substantially improving reliability.
G06F 11/20 - Error detection or correction of data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/16 - Error detection or correction of data by redundancy in hardware
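A minimal sketch of the active/standby pairing described in the preceding abstract, assuming a simplified pair manager in which a failed active server is replaced by its standby and the restored server then becomes the new standby; the server objects and the restore step are stand-ins invented for illustration.

```python
# Minimal sketch of an active/standby pair; servers are plain strings and
# "restoration" is simulated.

class ServerPair:
    def __init__(self, active, standby):
        self.active, self.standby = active, standby

    def handle_failure(self, failed):
        """Activate the standby when the active server fails, then treat the
        restored server as the new standby for the now-active server."""
        if failed != self.active:
            return
        self.active, self.standby = self.standby, None
        restored = self._restore(failed)   # bring back online or replace
        self.standby = restored            # restored server becomes standby

    @staticmethod
    def _restore(server):
        return server + "-restored"

pair = ServerPair(active="node-a", standby="node-b")
pair.handle_failure("node-a")
print(pair.active, pair.standby)           # node-b node-a-restored
```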
Various examples are directed to systems and methods for operating a database management system. A first host of a first version may be executed at a first container of a cloud environment. A second host of a second version may be executed at a second container of the cloud environment. The first host may be configured to perform a source role and the second host may be configured to perform a replica role corresponding to the source role. A network layer executing at the cloud environment may receive a request directed to the second host. The network layer may determine that the request is consistent with allowed request data describing allowed requests and may send the request to the second host.
G06F 16/27 - Replication, distribution or synchronising of data between databases or within a distributed database system; Distributed database system architectures therefor
Embodiments implement efficient localized handling of metadata in connection with the retrieval of data from a remote source. A request including content is received by a localization engine. In response to the request and based upon the content, only a portion of the metadata relevant to the request is initially retrieved from the remote source. The remaining metadata relevant to the query is only retrieved later according to execution of background jobs. One example relates to language translation in connection with querying of a remote source. Based upon a locale of the user posing the query, only metadata relevant to that particular locale (e.g., Germany) is returned immediately. Metadata relevant to languages of users residing in locales other than that of the current user (e.g., USA, France) is only returned later according to background jobs. Thus, contact with the remote source does not serve as a bottleneck to efficient performance of local activities.
A method for allocating worker threads may include receiving a first fetch call for a query accessing a dataset stored at a database. The first fetch call may require a first portion of a result for the query. A first quantity of worker threads may be allocated to generate the first portion of the result for the query in response to the first fetch call. In response to a second fetch call for the query, a threshold corresponding to the first quantity of worker threads, a second quantity of data required for the second fetch call, and a third quantity of data buffered from the first fetch call may be determined. A second quantity of worker threads to generate a second portion of the result for the query may be allocated based on the threshold. Related systems and computer program products are also provided.
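The second-fetch allocation might look roughly like the sketch below; the specific relationship between the threshold, the data required, and the data already buffered is an assumption made for illustration, not the claimed formula.

```python
# Sketch of a second-fetch worker allocation decision. The threshold is taken
# to be the first fetch's thread count, and the second allocation only covers
# rows not already buffered; both choices are illustrative assumptions.
import math

def threads_for_second_fetch(first_fetch_threads, rows_per_thread,
                             rows_required, rows_buffered):
    """Return the number of worker threads to allocate for a second fetch."""
    threshold = first_fetch_threads
    outstanding = max(0, rows_required - rows_buffered)
    if outstanding == 0:
        return 0                      # buffered data already satisfies the call
    needed = math.ceil(outstanding / rows_per_thread)
    return min(needed, threshold)     # never exceed the threshold

print(threads_for_second_fetch(first_fetch_threads=8, rows_per_thread=1000,
                               rows_required=5000, rows_buffered=2600))  # 3
```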
Emission footprints can be calculated and provided for consumption by software applications. To do this, product data including a quantity of a product, components of the product, and activities for the product are obtained. Emission factor data for the components indicating a carbon dioxide impact of the components per unit are obtained. One or more emissions footprints are calculated per product, per material, and per activity based on a time period, the quantity, the components, the activities, and the emission factor data. Each of the emissions footprints indicates an amount of carbon dioxide per unit for a corresponding product, material, or activity. The calculated emissions footprints for the time period are provided via an application programming interface or as a published event.
A database of text associated with different domains is maintained. Large language models (LLMs) are prepared for use in the different domains by providing the associated text to an instance of an LLM. Thus, using multiple instances of the same pre-trained LLM, domain-specific LLMs are generated. The text provided to the LLM instance may be selected based on an account identifier of the user accessing the LLM, the tenant accessing the LLM, a user selection of a domain, or any suitable combination thereof. A pool of prepared LLM instances may be generated before the access request is received. If a response provided by an LLM instance in a domain to a prompt was rejected by a user and additional information was received during the session to improve the response of the LLM instance, the additional information may be added to the text used to prepare future LLM instances for the domain.
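A sketch of the prepared-instance pool described above, assuming a stubbed LLM client in which "preparing" an instance simply means storing its domain text; the class and method names are hypothetical.

```python
# Sketch of a pool of domain-prepared LLM instances; the LLM client is a stub.

class LLMInstance:
    def __init__(self, domain, domain_text):
        self.domain, self.context = domain, domain_text

    def complete(self, prompt):
        # A real instance would call the underlying model with the domain
        # text already provided; here we just echo the combined input.
        return f"[{self.domain}] {self.context[:40]}... -> answer to: {prompt}"

class DomainLLMPool:
    def __init__(self, domain_texts, pool_size=2):
        # Pre-generate instances per domain before any access request arrives.
        self.pool = {d: [LLMInstance(d, t) for _ in range(pool_size)]
                     for d, t in domain_texts.items()}

    def acquire(self, domain):
        instances = self.pool[domain]
        return instances.pop() if instances else LLMInstance(domain, "")

texts = {"finance": "Ledger, journal entry, accrual ...",
         "logistics": "Shipment, carrier, bill of lading ..."}
pool = DomainLLMPool(texts)
print(pool.acquire("finance").complete("What is an accrual?"))
```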
Various embodiments for a disk-based merge for hash maps are described herein. An embodiment operates by identifying a plurality of hash maps with a plurality of disjunctions, ordering the one or more entries in each disjunction based on the hash value, and assigning an index value to each data value based on the ordering. The hash values of each of the entries may be moved to memory and compared for a particular disjunction. A data value with a lower hash value as determined based on the comparison is selected and stored in a merged hash map. The process is repeated until all the data values have been compared. A query is received, and processed based on the merged hash map.
Various embodiments for a disk-based merge for combining merged hash maps are described herein. An embodiment operates by identifying a first hash map and a second hash map, and comparing a first hash value from the first hash map with a second hash value from the second hash map, with the lowest index values. A lowest hash value is identified based on the comparison, and an entry corresponding to the lowest hash value is stored in a combined hash map. This process is repeated until all of the hash values from both the first set of hash values and the second set of hash values are stored in the combined hash map. A query is received, and processed based on the combined hash map.
G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
G06F 12/0864 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using pseudo-associative means, e.g. set-associative or hashing
G06F 12/0873 - Mapping of cache memory to specific storage devices or parts of storage devices
19.
INTERPRETABILITY FRAMEWORK FOR DIFFERENTIALLY PRIVATE DEEP LEARNING
Data is received that specifies a bound for an adversarial posterior belief pc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Privacy parameters ε, δ are then calculated based on the received data; these parameters govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. The calculation is based on a ratio of probability distributions of different observations, which is bounded by the posterior belief pc as applied to a dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset. Related apparatus, systems, techniques and articles are also described.
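One hedged way to picture the calculation is the sketch below, which assumes a uniform prior over two neighbouring datasets and pure (δ = 0) differential privacy, so that a posterior-belief bound pc maps to ε = ln(pc / (1 - pc)); the framework's actual mapping may differ, and the Laplace mechanism here is only a stand-in DP algorithm.

```python
# Illustrative conversion from a posterior-belief bound to a privacy
# parameter, plus a stand-in DP algorithm using that parameter.
import math
import random

def epsilon_from_belief_bound(rho_c):
    """Map a bound rho_c on the adversary's posterior belief to epsilon,
    assuming a uniform prior over two neighbouring datasets (delta = 0)."""
    if not 0.5 < rho_c < 1.0:
        raise ValueError("belief bound must lie in (0.5, 1.0)")
    return math.log(rho_c / (1.0 - rho_c))

def laplace_mechanism(value, sensitivity, epsilon):
    """Stand-in DP algorithm: add Laplace(sensitivity / epsilon) noise,
    built from the difference of two exponential draws."""
    scale = sensitivity / epsilon
    return value + random.expovariate(1 / scale) - random.expovariate(1 / scale)

eps = epsilon_from_belief_bound(0.75)          # about 1.0986
print(eps, laplace_mechanism(42.0, sensitivity=1.0, epsilon=eps))
```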
In some implementations, there is provided a method that includes monitoring, by an application uptime system, whether execution of an application is successful without causing an incident; in response to the execution of the application being unsuccessful and causing the incident, the method further comprising: collecting one or more end user incident reports including an incident identifier, a start time of the incident, and a stop time of the incident, collecting one or more development system incident reports linked to the one or more end user incident reports, determining at least one end user metric for the application, generating one or more user interface views based on the at least one end user metric for the application, and causing to be presented the one or more user interface views. Related systems, methods, and articles of manufacture are also disclosed.
Systems and methods include determination of a dataset comprising a plurality of instances, each instance comprising a value of each of a plurality of input variables and of a target variable, where the values of the target variable comprise a plurality of categories, determination of two or more infrequent categories of the plurality of categories from the dataset, determination of two or more non-separable categories from the two or more infrequent categories based on the dataset, changing of occurrences of the two or more non-separable categories within the dataset to a single category to generate a modified dataset, and training of a classifier to output a value of the target variable based on the modified dataset.
G06F 16/28 - Databases characterised by their models, e.g. relational or object models
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on parametric or probabilistic models, e.g. based on likelihood ratio or a false acceptance rate versus a false rejection rate
22.
OBJECT-BASED TEXT SEARCHING USING GROUP SCORE EXPRESSIONS
Methods and systems for object-based text searching using group score expressions are provided. A method may include receiving a query including a request to search specified columns of a table for a set of search terms, and a group score filter for use in filtering the table based at least on a group score associated with a plurality of groups of rows of the table, determining the group score for each of the plurality of groups of rows of the table, filtering the table based at least on the group score filter included in the query and the group score determined for each of the plurality of groups of rows of the table, and providing at least one group of rows of the plurality of groups of rows that includes at least a portion of the set of search terms.
Methods, systems, and computer-readable storage media for receiving a set of descriptions provided as unstructured data, each description associated with one or more entities in a set of entities that can be queried using an information retrieval (IR) system, providing, from the set of descriptions, a first set of training data including at least a first set of entities including at least a portion of the set of entities, training a named-entity recognition (NER) model using at least a portion of the first set of training data, receiving, by the IR system, a portion of a search query, and providing a set of auto-complete suggestions based on the portion of the search query and the NER model.
Disclosed herein are system, method, and computer program product embodiments for hybrid-type authorization. An embodiment operates by parsing a policy expression into an abstract syntax tree (AST) and receiving, from a local server, user attribute data of a user. The embodiment further operates by traversing the AST to evaluate the user attribute data and determining whether a result of the traversing is indeterminate. In addition the embodiment operates by sending, to a remote server, a request for additional user attribute data in response to determining that the result is indeterminate, and receiving, from the remote server, the additional user attribute data. Then the embodiment operates by re-traversing the AST to evaluate the user attribute data and the additional user attribute data, authorizing the user based on at least one of the traversing or re-traversing, and outputting an authorization result.
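The indeterminate-then-fetch flow can be sketched as three-valued evaluation over a small abstract syntax tree, with the remote attribute service stubbed out; the node shapes, attribute names, and policy are illustrative only.

```python
# Sketch of hybrid authorization: evaluate locally, fetch remote attributes
# only when the result is indeterminate, then re-traverse the tree.

INDETERMINATE = object()

def evaluate(node, attrs):
    kind = node[0]
    if kind == "attr":                       # leaf: compare one attribute
        _, name, expected = node
        if name not in attrs:
            return INDETERMINATE             # attribute unknown locally
        return attrs[name] == expected
    if kind == "and":
        results = [evaluate(child, attrs) for child in node[1:]]
        if False in results:
            return False
        return INDETERMINATE if INDETERMINATE in results else True
    raise ValueError(f"unsupported node kind: {kind}")

def authorize(ast, local_attrs, fetch_remote_attrs):
    result = evaluate(ast, local_attrs)      # first traversal: local data only
    if result is INDETERMINATE:
        merged = {**local_attrs, **fetch_remote_attrs()}
        result = evaluate(ast, merged)       # re-traversal with remote data
    return result is True

policy = ("and", ("attr", "department", "finance"), ("attr", "clearance", "high"))
print(authorize(policy, {"department": "finance"},
                lambda: {"clearance": "high"}))   # True
```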
Disclosed techniques and solutions can provide improved snapshot replication. Typically, an initial replica obtained using snapshot replication is periodically updated. However, the update process can unnecessarily consume computing resources if data in a source data object has not changed with respect to data in a replica data object. Disclosed techniques check to determine whether a snapshot replica is out of date before obtaining a new snapshot. The checks can be performed on manual request or on the occurrence of triggers, such as receiving a query that accesses the replica data object or according to a schedule. Information for current and prior versions of the remote data object can be compared to determine whether a replica is out of date, such as digest values of contents of the remote data object or timestamps associated with the remote data object.
G06F 16/27 - Replication, distribution or synchronising of data between databases or within a distributed database system; Distributed database system architectures therefor
26.
ASYNCHRONOUS PROCESSING OF PRODUCT LIFECYCLE MANAGEMENT (PLM) INTEGRATION MESSAGES
In an example embodiment, asynchronous message processing is performed in a PLM system integration (PLMSI), at least for large message payloads. A processing decision is made as to whether to process a payload synchronously or asynchronously. In the case of asynchronous processing, this processing can be started in a separate thread from the synchronous communication connection used to transmit the message and payload. The synchronous communication connection (which may be implemented in, for example, Hypertext Transfer Protocol (HTTP)) can be closed after the message has been successfully received, to prevent connection timeouts.
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
H04L 69/22 - Parsing or evaluating headers
27.
SCALABLE CONVERSATIONAL EXPERIENCE FOR CITIZEN DEVELOPERS FOR BUILD AUTOMATION GENERATION
Methods and systems can provide a scalable conversational experience for citizen developers for build automation generation. A knowledge graph is identified that represents entities, entity attributes, and entity relationships of entities used in application programming interfaces of an enterprise software system. A natural language model is generated based on information in the knowledge graph and a chatbot is configured with the natural language model to generate automations for automating enterprise processes in the enterprise software system. Chatbot logic is generated based on information in the knowledge graph. The chatbot participates in a conversation with a user of the enterprise software system and the chatbot determines, from the conversation, using the natural language model, parameters of a requested automation for automating a process of the enterprise software system. The chatbot provides the determined parameters to an automation tool for creation of the automation.
Systems and methods are provided for text searching using partial score expressions. A method may include receiving a query to search for a search term in at least a first column of a first table and a second column of a second table, scanning the first column and the second column for at least a portion of the search term, generating a first partial score table, generating a second partial score table, determining a combined score for each row in the first column and/or the second column containing at least the portion of the search term based at least on a join of the first partial score table and the second partial score table, and providing, in response to the query and based at least on the combined score, a row of the first column and/or the second column including at least the portion of the search term.
Methods, systems, and computer-readable storage media for providing, for a set of machine learning (ML) models, a set of training metrics determined using test data during a training phase, providing, for a production-use ML model, a set of inference metrics based on predictions generated by the production-use ML model, generating, by a prompt generator, a set of few-shot examples using the set of training metrics and the set of inference metrics, inputting, by the prompt generator, the set of few-shot examples to a large language model (LLM) as prompts, transmitting, to the LLM, a query, displaying, to a user, a recommendation that is received from the LLM and responsive to the query, receiving input from the user indicating a user-selected ML model responsive to the recommendation, and deploying the user-selected ML model to an inference runtime for production use.
In some implementations, there is provided executing a query execution plan for a query; setting a first flag to indicate to a plurality of worker threads to stop executing tasks in a first queue of a memory stack; pushing into the memory stack, a second queue containing one or more exclusive tasks associated with the query; setting a second flag to indicate to the plurality of worker threads to resume working; and in response to the second queue being empty of the one or more exclusive tasks, setting a third flag to indicate to the plurality of worker threads to stop executing tasks in the second queue, and setting a fourth flag to indicate to the plurality of worker threads to resume working on the tasks in the first queue.
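A single-threaded simulation of the flag protocol described above, assuming the flags simply gate which queue on the memory stack the workers drain; the flag names, scheduler shape, and task strings are illustrative.

```python
# Simplified simulation of flag-gated task queues on a memory stack.
from collections import deque

class Scheduler:
    """Flags gate which queue on the stack the (simulated) workers drain."""

    def __init__(self, regular_tasks):
        self.stack = [deque(regular_tasks)]   # bottom queue: regular tasks
        self.stop_current = False             # "stop executing tasks" flag
        self.resume = True                    # "resume working" flag

    def drain(self, worker):
        # Workers pop tasks from the top queue while allowed to run.
        while self.resume and not self.stop_current and self.stack[-1]:
            worker(self.stack[-1].popleft())

    def run_exclusive(self, exclusive_tasks, worker):
        self.stop_current, self.resume = True, False   # flag 1: stop regular queue
        self.stack.append(deque(exclusive_tasks))      # push exclusive queue
        self.stop_current, self.resume = False, True   # flag 2: resume on it
        self.drain(worker)
        self.stop_current = True                       # flag 3: exclusive queue empty
        self.stack.pop()
        self.stop_current, self.resume = False, True   # flag 4: back to regular queue
        self.drain(worker)

sched = Scheduler(["regular-1", "regular-2"])
sched.run_exclusive(["exclusive-1", "exclusive-2"], worker=print)
# prints exclusive-1, exclusive-2, regular-1, regular-2
```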
A computer implemented method can obtain, in a data transfer system, a plurality of data records from data sources and monitor operating status of a target application running on a target machine. Responsive to finding that the target application stops operating, the method can send one or more first data records from the data transfer system to the target machine and store the first data records in a target buffer on the target machine. Responsive to finding that the target application resumes operating, the method can send one or more second data records from the data transfer system to the target machine and directly store the second data records in a data repository including one or more target databases. While sending the one or more second data records, the method can transfer the one or more first data records from the target buffer to the one or more target databases.
In an implementation, a computer-implemented method includes receiving a plurality of inputs corresponding to a plurality of instances, where each input of the plurality of inputs represents multiple attributes of a respective instance. The computer-implemented method further includes training, using the plurality of inputs and as a trained supervised and multivariate machine learning model, a supervised and multivariate machine learning model. The computer-implemented method further includes determining multiple split point lists based on the trained supervised and multivariate machine learning model, where the multiple split point lists correspond to the multiple attributes, and where each of the multiple split point lists includes a number of split points and a number of split gains associated with a respective attribute.
Disclosed herein are system, method, and computer program product embodiments for scheduling an unplannable workload via a static runtime. An ingestion service operating on a computing device establishes an inbound channel based on a setup order and associates the inbound channel with an Ingestion-Transformation-Load (ITL) task. The ingestion service stores incoming data received via the inbound channel in a staging area and organizes the incoming data into a plurality of batches. The ingestion service monitors the staging area to determine a number of unprocessed batches. Furthermore, in response to determining that the number of unprocessed batches meets or exceeds a first predetermined threshold, the ingestion service triggers a scheduler to generate a work order to be executed on a runtime instance for each of the plurality of batches in the staging area.
The present disclosure provides techniques and solutions for improved query optimization. A query plan is received, and at least a portion of the query plan is identified to be analyzed for logically equivalent query plans. A signature is generated for the at least a portion of the query plan. One or more query plans are identified that have a signature matching the signature of the at least a portion of the query plan, where such query plans are logically equivalent, but not identical, to the at least a portion of the query plan. A query plan of the one or more query plans is substituted in the query plan for the at least a portion of the query plan.
Rows of first and second tables that share common values for one or more designated key fields can be considered partner rows to facilitate computer-based comparison of the tables. Responsive to a user request to compare the first and second tables, which designates field(s) common to both data tables as key fields and field(s) common to both data tables as comparison fields, a matches table is generated which includes the key field(s), comparison field(s), and a source field whose value indicates the originating table of the data in the row. For each set of partner rows, the matches table is populated with data from the partner rows. The data in the matches table is handled, and a results table is populated with the results of the handling of the data in the matches table and with data from any unpartnered rows in the first and second tables.
G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor, of unstructured textual data
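The partner-row comparison in the abstract above can be illustrated with the following sketch, which builds a matches table with a source field from rows that share key-field values and routes unpartnered rows straight to the results table; the "handling" of matched data is reduced to an equal/different check, and all table and field names are invented for illustration.

```python
# Sketch of partner-row matching between two tables with a matches table
# (including a source marker) and a results table.

def compare_tables(table1, table2, key_fields, comparison_fields):
    def key(row):
        return tuple(row[k] for k in key_fields)

    index2 = {key(r): r for r in table2}
    matches, results = [], []
    for row1 in table1:
        row2 = index2.pop(key(row1), None)
        if row2 is None:
            results.append({**row1, "status": "only_in_table1"})
            continue
        for src, row in (("table1", row1), ("table2", row2)):
            matches.append({**{k: row[k] for k in key_fields},
                            **{c: row[c] for c in comparison_fields},
                            "source": src})
        status = "equal" if all(row1[c] == row2[c] for c in comparison_fields) else "different"
        results.append({**{k: row1[k] for k in key_fields}, "status": status})
    # Remaining rows in table2 had no partner in table1.
    results += [{**r, "status": "only_in_table2"} for r in index2.values()]
    return matches, results

t1 = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
t2 = [{"id": 1, "amount": 10}, {"id": 3, "amount": 30}]
print(compare_tables(t1, t2, key_fields=["id"], comparison_fields=["amount"])[1])
```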
A computer system may send an async version of a file to a client, with the async version of the file comprising a uniform resource identifier for the file, a secret key for accessing the file, one or more filter parameters, and at least one of metadata of the file or a sample of data of the file. A portion of the data of the file may be excluded from the async version. The computer system may receive a request for additional data of the file from the client, with the request comprising a parameter value for the filter parameter(s), and then generate a requested version of the file based on the request, including at least a portion of the data of the file in the requested version based on the parameter value. The computer system may then send the requested version of the file to the client component.
Disclosed herein are system, method, and computer program product embodiments for machine-assisted process modeling and validation. An embodiment operates by receiving, by at least one processor, a process document describing a process in a user locale. The embodiment then generates a model notation in accordance with a model notation format by processing the process document with a deep learning technique based on a prompt for modeling the process document. The embodiment then outputs the model notation.
In some implementations, there is provided a method including receiving a query request including a join, wherein the join includes a range between a first predicate of the join and a second predicate of the join; generating a query plan including an index join operator; and executing the query plan including the index join operator, including getting, from a sorted dictionary, a first value identifier corresponding to the first predicate, a second value identifier corresponding to the second predicate, and one or more intervening value identifiers between the first value identifier and the second value identifier, and executing the index join operator using the first value identifier, the second value identifier, and the one or more intervening value identifiers to obtain a result set.
In some implementations, there is provided pipeline bypassing of certain operators. In some implementations, a method includes generating a query plan including at least one pipeline of operators; determining whether the at least one pipeline of operators includes a first operator that requires a complete result set as an input and further includes a second operator that supports providing the complete result set using a state identified by a state reference; and in response to determining the at least one pipeline of operators includes the first operator that requires the complete result set as the input and further includes the second operator that supports providing the complete result set using the state identified by the state reference, bypassing in the query plan pipelining between the first operator and the second operator.
GENERATING NOTIFICATIONS THROUGH CHART PATTERN DETECTION
Embodiments utilize pattern recognition to generate notifications in connection with analytical applications. A dashboard of the analytical application is scanned to intake charts of data therefrom. Images of the charts are created and then matched with repository patterns of a trained deep transfer model (such as a Convolutional Neural Network model). Upon matching of a pattern by the model, an alert is generated and communicated to a user to indicate a trend in the analytical data. In this manner, embodiments automatically detect data trends based upon their visual appearance when plotted in a chart, rather than through resource-intensive analysis of individual data point values. In specific embodiments, the pattern recognition may be implemented by an in-memory database engine of an in-memory database responsible for storing charts and/or chart images and/or the repository. In some embodiments, recognition of patterns in chart images may be implemented by a service.
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on parametric or probabilistic models, e.g. based on likelihood ratio or a false acceptance rate versus a false rejection rate
A computer-implemented method can specify a source version and a target version of an application programming interface (API) and a target programming language, and retrieve a difference graph connecting a source knowledge graph characterizing the source version of the API to a target knowledge graph characterizing the target version of the API. The difference graph includes one or more revision edges representing changes of the API between the source version and the target version. The method can install one or more function packages written in the target programming language and associated with the one or more revision edges, and run the one or more function packages to update the API from the source version to the target version.
In some implementations, there is provided a method including generating a query plan including, in a first pipeline, a first join operator and, in a second pipeline, a second join operator; executing at least a portion of the query plan including the first pipeline and the first join operator; detecting, based on at least one operator usage state and at least one operator pruning condition, an empty state object shared between the first join operator and the second join operator in the second pipeline; and processing, by the at least one operator pruning condition, an indication of the empty state object, wherein the at least one operator pruning condition is associated with the second join operator and includes at least a first rule to mark the second join operator for pruning.
A method for training a machine learning model using self-contrastive decorrelation is provided. The method comprises training a machine learning model by receiving a sentence including text, performing a first encoding operation on the sentence to generate a first vector representation, performing a second encoding operation on the sentence to generate a second vector representation, mapping the first vector representation, on which a first augmentation operation is performed, to a first high-dimensional vector representation and the second vector representation, on which a second augmentation operation is performed, to a second high-dimensional vector representation, generating a correlation matrix using the first high-dimensional vector representation and the second high-dimensional vector representation, and performing a decorrelation operation on the correlation matrix. The method includes receiving, by the trained machine learning model, a query that includes a target sentence, and outputting, using the trained machine learning model, a result sentence that satisfies a similarity metric relative to the target sentence.
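A numpy sketch of the decorrelation step, assuming a Barlow-Twins-style objective on the cross-correlation matrix of two augmented projections; the encoder and augmentations are random stand-ins, so this only illustrates the correlation-matrix and decorrelation computation, not the described training pipeline.

```python
# Sketch of a decorrelation objective over two augmented projections.
import numpy as np

def decorrelation_loss(z1, z2, off_diag_weight=5e-3):
    """Cross-correlation between two batch-normalised projections; push the
    diagonal towards 1 and the off-diagonal towards 0."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / z1.shape[0]                 # correlation matrix (dim x dim)
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + off_diag_weight * off_diag

rng = np.random.default_rng(0)
embedding = rng.normal(size=(32, 16))             # "sentence" encodings, batch of 32
view1 = embedding + 0.1 * rng.normal(size=embedding.shape)   # augmentation 1
view2 = embedding + 0.1 * rng.normal(size=embedding.shape)   # augmentation 2
print(round(decorrelation_loss(view1, view2), 4))
```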
A computer-implemented method may comprise receiving, from a client application running within a client network, a request for a server application running within a server network to perform an action, and then generating, by the client network, a modified version of the request for the server application to perform the action, where the modified version of the request for the server application to perform the action comprises an access token configured to be used by the server network to allow an update of an access control list for the server application. The client network may then send, to the server network, the modified version of the request for the server application to perform the action.
Systems, methods, and articles of manufacture, including computer program products, provide a system including at least one data processor and at least one memory storing instructions which, when executed by the at least one data processor, cause operations comprising: generating, by a database execution engine, a query plan including a plurality of operators; inserting, by the database execution engine, an enforce compilation operator into the query plan that includes the plurality of operators, the plurality of operators comprising a first operator, the enforce compilation operator, and a second operator; executing at least the first operator of the query plan; in response to executing the first operator, evaluating, by the database execution engine, an output of the first operator to determine whether a condition is satisfied; and in response to the condition being satisfied, triggering, by the database execution engine, a just-in-time compilation of the second operator.
In some implementations, there is provided a method that includes receiving a query request including a top k query operator for query plan generation, optimization, and execution, wherein k defines a threshold limit of query results for the top k query operator; inserting into a query plan a check operator associated with the top k query operator; in response to executing the query plan, checking, by the check operator, whether an early exit occurs due to the top k query operator reaching the threshold limit; in response to the early exit occurring due to the top k query operator reaching the threshold limit, stopping processing, by the check operator, including opening of another fragment of a database table; and in response to the early exit not occurring, allowing, by the check operator, the opening of the other fragment of the database table.
G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor, of structured data, e.g. relational data
A system and method are provided for implementing a table scan predicate with an integrated semi-join filter. The method includes receiving a query including a request to join first data from a first dimension table and second data from a second dimension table with fact data from a fact table. The method includes applying a first dynamic predicate to the first data by collecting the first data based on a first expression of the query and filtering a first column. The method also includes applying a second dynamic predicate to the second data by collecting the second data based on a second expression of the query and filtering a second column. The method also includes executing the query by at least scanning the fact table based on the query, the first filtered column, and the second filtered column.
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
Systems and methods include reception of a request to recover data of a first tenant to a point in time, determination of backups of first and second database table shards corresponding to the point in time, and generation of metadata associating a second tenant with the first and second shards and the backups. In response to a request to access the first and second shards, it is determined based on the metadata that the first and second shards are not stored in a storage layer and, in response, the first shard is recovered to a first storage node from the backup of the first shard, the second shard is recovered to a second storage node from the backup of the second shard, and identifiers of the first storage node and the second storage node are returned.
G06F 11/14 - Error detection or correction of data by redundancy in operation, e.g. by using different operation sequences leading to the same result
49.
DISASTER RECOVERY USING INCREMENTAL DATABASE RECOVERY
Systems and methods include storage of shards of first database tables of a first tenant in a first plurality of storage nodes located in a first region, each shard associated with a first database table and a key range of the first database table, storage of shards of second database tables of a second tenant in a second plurality of storage nodes located in a second region, each shard associated with a second database table and a key range of the second database table, storage of backups of the shards of the first database tables of the first tenant in a plurality of backup locations located in a region different from the first region, and recovery of the backups of the shards of the first database tables of the first tenant from the backup locations to the second plurality of storage nodes.
G06F 11/20 - Error detection or correction of data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/14 - Error detection or correction of data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/27 - Replication, distribution or synchronising of data between databases or within a distributed database system; Distributed database system architectures therefor
Embodiments are described for a database management system comprising a memory and at least one processor coupled to the memory. The at least one processor is configured to receive a query that corresponds to a data slice and determine a bloom filter based on the query. The at least one processor is further configured to determine that the data slice includes data requested by the query based on the bloom filter and in response to determining that the data slice includes the data requested by the query, load the data slice to the memory.
Embodiments are described for a database management system comprising a memory and at least one processor coupled to the memory. The at least one processor is configured to receive a plurality of queries and determine a first identifier based on the plurality of queries. The at least one processor is further configured to create a first bloom filter based on the first identifier and receive an additional query corresponding to the first identifier. The at least one processor is further configured to execute the first bloom filter.
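The bloom-filter pruning described in the two abstracts above can be sketched as follows, with a simple salted-hash filter consulted before a data slice is loaded; the filter construction, hash family, and slice layout are illustrative assumptions rather than the described implementation.

```python
# Sketch of bloom-filter pruning of data slices before loading them.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _positions(self, value):
        # Derive several bit positions from salted MD5 digests of the value.
        for salt in range(self.hashes):
            digest = hashlib.md5(f"{salt}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, value):
        for pos in self._positions(value):
            self.bits[pos] = 1

    def might_contain(self, value):
        return all(self.bits[pos] for pos in self._positions(value))

def load_slice_if_needed(slice_values, bloom, query_value):
    """Only load (return) the slice when the filter cannot rule it out."""
    return slice_values if bloom.might_contain(query_value) else None

data_slice = ["DE", "FR", "IT"]
bf = BloomFilter()
for v in data_slice:
    bf.add(v)
print(load_slice_if_needed(data_slice, bf, "FR"))   # slice is loaded
print(load_slice_if_needed(data_slice, bf, "JP"))   # very likely pruned (None)
```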
Example methods and systems relate to the modeling of production processes to include intermediate products. An enterprise resource planning (ERP) system receives user input to define an intermediate product, and generates a master data record for the intermediate product. The ERP system further receives user input to include the intermediate product in a production model within the ERP system. The production model is associated with a real-world production process. The production model is updated based on the master data record to integrate the intermediate product into a sequence of activities. The sequence of activities comprises a first activity that produces the intermediate product and a second activity that consumes the intermediate product. The ERP system executes a production order based on the production model. Execution of the production order includes tracking state information of the intermediate product within the ERP system.
Provided are systems and methods which optimize a validation process performed during training of a time-series forecasting model. The optimization can remove training data that has poor attributes for training (e.g., less error, less fluctuation, fewer patterns, etc.) to improve the quality of the training data and reduce the amount of processing that is performed by the host system. In one example, a method may include storing a plurality of machine learning models and a data set, dividing the data set into k folds of data, training the plurality of machine learning models on a subset of folds from among the k folds of data, determining error values for the plurality of machine learning models, respectively, based on fold errors among the subset of folds, and storing the error values within the storage.
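A toy version of the fold-based validation loop, using trivial forecasters so the example stays self-contained; the fold handling and the plain mean-absolute-error metric are simplifications of what the described optimization would do.

```python
# Sketch of per-model error values computed over a subset of k folds.

def k_folds(series, k):
    fold_size = len(series) // k
    return [series[i * fold_size:(i + 1) * fold_size] for i in range(k)]

def mae(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

models = {
    "last_value": lambda history, horizon: [history[-1]] * horizon,
    "mean": lambda history, horizon: [sum(history) / len(history)] * horizon,
}

series = [10, 12, 11, 13, 15, 14, 16, 18, 17, 19, 21, 20]
folds = k_folds(series, k=4)
subset = folds[:-1]                       # train/validate on a subset of folds
errors = {}
for name, model in models.items():
    fold_errors = []
    for i in range(1, len(subset)):
        history = [x for fold in subset[:i] for x in fold]
        fold_errors.append(mae(model(history, len(subset[i])), subset[i]))
    errors[name] = sum(fold_errors) / len(fold_errors)
print(errors)                             # per-model error values to store
```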
Content can be visualized using a semi-expanded state. A user interface can contain two or more shapes which contain selectable elements that generate popover shapes. A "semi-expanded" popover shape is generated in the user interface in response to a selection of an element of a first shape in the user interface. The popover contains information associated with a data object and is positioned outside of and separate from the first shape. The popover is positioned on top of a second shape such that a portion of the second shape is not covered. When a selection of the popover shape is obtained, it is transformed into an expansion of the first shape, which contains the information associated with the data object represented by the first shape. The second shape is repositioned outside of the expansion of the first shape.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor changing behaviour or appearance, using icons
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensors, or by the nature of the input device, e.g. gestures based on the pressure sensed by a digitiser, using a touch screen or digitiser, e.g. input of commands through traced gestures
A computer implemented method can receive an incoming query statement and search a query hint registry for a patterned query statement that matches the incoming query statement. A wildcard expression contained in the patterned query statement matches one or more characters of the incoming query statement. The query hint registry includes a statement hint paired with the patterned query statement. Responsive to finding the patterned query statement that matches the incoming query statement, the method can append the statement hint to the incoming query statement, obtain a query execution plan for the incoming query statement appended with the statement hint, and execute the query execution plan.
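The registry lookup might be sketched as below, with fnmatch-style wildcards standing in for the registry's pattern syntax and plan compilation stubbed out; the registry entries and hint text are only examples.

```python
# Sketch of a query hint registry: wildcard patterns paired with statement
# hints; a matching incoming statement gets the hint appended.
import fnmatch

HINT_REGISTRY = {
    "SELECT * FROM SALES WHERE REGION = *": "WITH HINT(USE_OLAP_PLAN)",
    "SELECT * FROM CUSTOMERS*":             "WITH HINT(NO_USE_OLAP_PLAN)",
}

def apply_statement_hint(statement):
    for pattern, hint in HINT_REGISTRY.items():
        if fnmatch.fnmatch(statement, pattern):
            return f"{statement} {hint}"     # append the paired statement hint
    return statement                         # no patterned statement matched

incoming = "SELECT * FROM SALES WHERE REGION = 'EMEA'"
hinted = apply_statement_hint(incoming)
print(hinted)    # plan compilation / execution would use the hinted statement
```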
Disclosed herein are system, method, and computer program product embodiments for determining dependency associated with microservices. An embodiment operates by receiving a first request to perform a task associated with the microservices; determining a first microservice and a second microservice responsible for performing the task; prior to transmitting a second request from the first microservice to the second microservice, inserting a first field, a second field, and a third field into the second request; generating tracing data associated with the second request; generating a first dependency graph based on the generated tracing data; determining whether there is a first dependency associated with the microservices based on the first dependency graph; in response to the determination that there is a first dependency, generating a first report; providing for display the first report; and receiving input to modify at least a portion of the microservices.
Techniques and solutions are provided for executing database triggers. In particular, disclosed techniques allow for the creation of database triggers with multiple insert statements. For a trigger that includes first and second insert statements, first and second tables are created, respectively for the first and second insert statements, that each include a sequence column. At least the first insert statement references a sequence that is incremented during trigger execution. The sequence columns of the first and second tables have a number of elements corresponding to a number of times a given insert operation will execute as a result of a database operation that satisfies the conditions of the trigger. The first and second insert operations are executed using the respective first and second tables.
Disclosed herein are system, method, and computer program product embodiments for generating a recommended harvesting schedule. In embodiments, input data is obtained that includes a respective representation of a crop yield curve for each crop zone in a plurality of crop zones and a set of harvesting constraints including at least one harvesting resource constraint. Based on the input data, a local search heuristic iterates over a plurality of candidate harvesting schedules to identify a current best candidate harvesting schedule and outputs the current best harvesting schedule as the recommended harvesting schedule. The iteration may include determining a solution score for each candidate harvesting schedule based at least upon a measure of a degree to which the candidate harvesting schedule satisfies the set of harvesting constraints and a total crop yield associated with the candidate harvesting schedule, and evaluating each candidate harvesting schedule based on the solution score determined therefor.
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
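The local-search iteration in the abstract above can be pictured with the following toy sketch, where candidate schedules are permutations of crop zones scored by modeled yield minus a penalty for violating a daily harvesting-resource constraint; the yield curves, labour figures, capacity constraint, and neighbourhood move are all invented for illustration.

```python
# Toy local search over candidate harvesting schedules.
import random

yield_curves = {            # zone -> yield if harvested on day d (index)
    "zone_a": [5, 9, 7],
    "zone_b": [8, 6, 4],
    "zone_c": [3, 4, 9],
}
labour_needed = {"zone_a": 2, "zone_b": 3, "zone_c": 2}
daily_capacity = [3, 2, 3]  # harvesting resource available on each day

def score(schedule):
    """Solution score: total modeled yield minus a penalty per violation."""
    total = sum(yield_curves[zone][day] for day, zone in enumerate(schedule))
    violations = sum(1 for day, zone in enumerate(schedule)
                     if labour_needed[zone] > daily_capacity[day])
    return total - 10 * violations

def local_search(zones, iterations=200, seed=1):
    rng = random.Random(seed)
    best = list(zones)
    rng.shuffle(best)
    for _ in range(iterations):
        candidate = best[:]
        i, j = rng.sample(range(len(candidate)), 2)   # neighbour: swap two days
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if score(candidate) > score(best):
            best = candidate                          # keep the current best
    return best, score(best)

print(local_search(list(yield_curves)))   # recommended schedule and its score
```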
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes performing a processing action for a data subject for a purpose using a set of data categories that are associated with the purpose. The purpose has a retention period and is a parent purpose in a purpose hierarchy with at least one dependent purpose as a child purpose of the purpose. A dependent purpose retention period and dependent purpose data categories are determined for each dependent purpose, the dependent purpose data categories being respective subsets of the set of data categories. In response to an end of purpose for the purpose, data of the set of data categories is blocked. Data in the set of data categories that are not dependent purpose data categories is retained according to the retention period and data of each dependent purpose data category is retained according to a corresponding dependent retention period.
A system and/or method for spill-to-disk in projection operations includes receiving a query including a projection, receiving a plurality of rows in response to the query processed by a processing thread of a plurality of processing threads, determining whether the query specifies an order for the plurality of rows, determining whether a disk buffer associated with the processing thread contains a stored row in response to the query specifying the order, storing the plurality of rows in the disk buffer in response to determining the disk buffer contains the stored row, storing the plurality of rows in a memory buffer associated with the processing thread in response to determining the disk buffer does not contain the stored row and the memory buffer contains at least a threshold amount of memory to store the plurality of rows, and providing the stored plurality of rows in response to the query.
G06F 12/0888 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using selective caching, e.g. cache flushing
G06F 12/0893 - Caches characterised by their organisation or structure
Systems and methods include reception of a first request to copy a database to a second tenant. In response to the first request, first metadata associating the second tenant with a first shard and a second shard of a database table is generated. A second request to access the first shard is received from a requestor associated with the second tenant and, in response to the second request, it is determined that the first shard is not stored in any of a plurality of storage nodes of a storage layer. In response to the determination, an instruction is issued to recover the first shard to a first storage node from a first backup location, and an identifier of the first storage node is returned to the requestor.
G06F 11/14 - Error detection or correction of data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/27 - Replication, distribution or synchronising of data between databases or within a distributed database system; Distributed database system architectures therefor
Systems and methods include storage of a backup of a first shard of a first database table of a database in a first backup location, the first shard including a first key range of the first database table, storage of a backup of a second shard of the first database table in a second backup location, the second shard including a second key range of the first database table, reception of an instruction to recover the database, and, in response to the instruction, recovery of the first shard to a first storage node from the first backup location and, in parallel, recovery of the second shard to a second storage node from the second backup location.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
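A rough sketch of the parallel recovery step described above, using a thread pool to restore the two shards from their separate backup locations at the same time; recover_shard, the storage-node identifiers, and the backup paths are hypothetical names invented for the example.

```python
# Illustrative only: recover shards of one table in parallel, each from its
# own backup location onto its own storage node.

from concurrent.futures import ThreadPoolExecutor

def recover_shard(shard_id, backup_location, storage_node):
    # Placeholder for copying the shard's backup into the storage node.
    print(f"restoring {shard_id} from {backup_location} onto {storage_node}")
    return (shard_id, storage_node)

def recover_database(shards):
    # shards: list of (shard_id, backup_location, storage_node) tuples,
    # one entry per key range of the table.
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        futures = [pool.submit(recover_shard, *s) for s in shards]
        return [f.result() for f in futures]

recover_database([
    ("shard_keys_0_499", "backup/location/a", "storage-node-1"),
    ("shard_keys_500_999", "backup/location/b", "storage-node-2"),
])
```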
63.
PURPOSE-BASED PROCESSING BY PURPOSE-ACTION ASSOCIATION
The present disclosure involves systems, software, and computer implemented methods for data privacy protocols. One example method includes receiving information defining a purpose for processing personal data of a data category stored in an object. A first mapping is received of a processing action to the purpose. Input data to be obtained for the processing action is identified. A determination is made as to whether the input data is of the data category that has been mapped to the purpose. The processing action is executed using the input data as purpose-based processing of the input data, in response to determining that the input data can be used during execution of the processing action for the purpose. Processing of the input data by the processing action is prevented, in response to determining that the input data cannot be used during execution of the processing action for the purpose.
The present disclosure relates to computer-implemented methods, software, and systems for generating intelligent data reports based on insight into key aspects of the data to provide reports that include identified trends. A first selection for a first data field from a list of data fields exposed for report generation is received. Predictive logic is executed to identify trends in data from a data source associated with i) a first dimension of the data corresponding to the selected first data field and ii) at least one additional dimension corresponding to at least one additional data field. A second data field is identified as corresponding to a second dimension correlated with the first dimension to define trends in the data. A report generated based on data associated with the selected first and second data fields is presented at the interface.
The present disclosure relates to computer-implemented methods, software, and systems for controlling the execution of concurrent threads executing tasks in a database management system. A first task is executed at a first thread from a pool of threads for execution of tasks at the database management system. It can be identified that the execution of the first task is paused and that the first task is in a sleep mode. In response to identifying that the first task is in an awake mode after the sleep mode, it can be determined whether a current number of threads in the pool that are currently processing tasks in parallel is below an allowed number of threads. It can be determined that the allowed number of threads has been reached and a waiting status can be assigned to the first task at the first thread.
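The admission decision described in the abstract above (a task waking from sleep may run only while the number of concurrently processing threads stays below the allowed count, and is otherwise given a waiting status) could look roughly like the following sketch; the TaskPool class and its counter-based check are assumptions for illustration only.

```python
# Minimal sketch of gating awakened tasks against an allowed thread count.

import threading

class TaskPool:
    def __init__(self, allowed_threads):
        self.allowed_threads = allowed_threads
        self.active = 0
        self.lock = threading.Lock()
        self.waiting = []

    def on_task_awake(self, task):
        # Called when a paused (sleeping) task becomes runnable again.
        with self.lock:
            if self.active < self.allowed_threads:
                self.active += 1
                return "RUNNING"
            self.waiting.append(task)       # allowed count already reached
            return "WAITING"

    def on_task_finished(self):
        with self.lock:
            self.active -= 1
            if self.waiting:
                return self.waiting.pop(0)  # next waiting task may resume
            return None
```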
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Data processing software, namely, software for capturing, organizing, analyzing, storing, accessing and reporting data; data processing software, namely, software for monitoring, analyzing and optimizing the use of other computer software; data processing software for online analytical processing for data mining and for regulatory compliance management.
Software as a service (SaaS) services featuring software for managing and optimizing enterprise architecture; computer consultancy services, namely, technical assistance and technical support services in the nature of software implementation and troubleshooting of computer software for others, maintenance and updating of software; application service provider services, namely, hosting, managing, developing, analyzing and maintaining applications, software, and web sites, of others; computer software consultation; software development; software development services.
67.
DIFFERENTIALLY PRIVATE VARIATIONAL AUTOENCODERS FOR DATA OBFUSCATION
Techniques for implementing a differentially private variational autoencoder for data obfuscation are disclosed. In some embodiments, a computer system performs operations comprising: encoding input data into a latent space representation of the input data, the encoding of the input data comprising: inferring latent space parameters of a latent space distribution based on the input data, the latent space parameters comprising a mean and a standard deviation, the inferring of the latent space parameters comprising bounding the mean within a finite space and using a global value for the standard deviation, the global value being independent of the input data; and sampling data from the latent space distribution; and decoding the sampled data of the latent space representation into output data.
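To make the two modifications concrete, the following toy NumPy sketch bounds the inferred latent mean within a finite interval and uses a single global standard deviation that does not depend on the input; the weights, dimensions, and constants are invented for the example and are not the disclosed model.

```python
# Toy sketch: bounded latent mean, global (input-independent) standard
# deviation, and reparameterised sampling from the latent distribution.

import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 2))     # toy encoder weights: 4-dim input -> 2-dim latent
MEAN_BOUND = 1.0                    # finite box for the latent mean
GLOBAL_STD = 0.5                    # global standard deviation, independent of the input

def encode(x):
    raw_mean = x @ W_enc
    mean = MEAN_BOUND * np.tanh(raw_mean)      # bound the mean within [-1, 1]
    std = np.full_like(mean, GLOBAL_STD)       # global value, ignores the input
    return mean, std

def sample(mean, std):
    return mean + std * rng.normal(size=mean.shape)   # sample from the latent distribution

x = rng.normal(size=(1, 4))
z = sample(*encode(x))
print(z)   # latent sample that would be passed to a decoder network
```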
The example embodiments are directed to systems and methods which may provide a guided user interface session for user input to a software process based on annotations added to a process model of the software process. In one example, a method may include receiving runtime data of an instance of a software process from a workflow engine that is executing the instance of the software process, determining a process activity that is expected to happen next within the running instance of the software process, identifying a GUI and a subset of input elements within the GUI which are mapped to the determined process activity based on annotations within a process model of the software process, highlighting the identified subset of input elements and disabling any remaining input elements within the GUI to generate a guided GUI, and displaying the guided GUI via a computing system of a user.
Example methods and systems are directed to the enabling of object history tracking in an object-oriented programming environment. According to some examples, a tracking class includes a tracking function and a tracked class includes a tracked function. A system receives user input to link the tracked function to the tracking function. Executable code is executed to create a target object of the tracked class and to execute the tracked function with respect to the target object. Execution of the tracked function triggers the tracking function to obtain tracking data relating to the target object. The tracking data is stored as part of the target object. According to some examples, the system receives an indication of an error that occurred during execution of the executable code, retrieves the tracking data from the target object, and causes presentation of the tracking data at a user device.
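A minimal Python sketch of the linkage described above, where executing a tracked function triggers a tracking function that appends history to the target object; the decorator-based wiring and the class and attribute names are assumptions chosen for the illustration.

```python
# Sketch: a tracking class records history on objects of a tracked class.

import datetime

class Tracker:                      # "tracking class"
    @staticmethod
    def track(obj, event):          # "tracking function"
        obj.tracking_data.append((datetime.datetime.now(), event))

def tracked(func):
    # User-configured link: executing the tracked function triggers tracking.
    def wrapper(self, *args, **kwargs):
        result = func(self, *args, **kwargs)
        Tracker.track(self, f"{func.__name__} called with {args}")
        return result
    return wrapper

class Order:                        # "tracked class"
    def __init__(self):
        self.items = []
        self.tracking_data = []     # tracking data stored as part of the object

    @tracked
    def add_item(self, name):       # "tracked function"
        self.items.append(name)

order = Order()
order.add_item("keyboard")
print(order.tracking_data)          # retrieved, e.g., when an error is reported
```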
Example methods and systems are directed to inverted indexes. According to some examples, an inverted index is generated based on source data and a posting list threshold. The inverted index comprises one or more restricted posting lists. Each restricted posting list has a maximum size corresponding to the posting list threshold. The method may include receiving a search query comprising a value that identifies a restricted posting list of the one or more restricted posting lists. The value may be used to retrieve and return one or more record identifiers from the identified restricted posting list. A record identifier uniquely identifies one of the plurality of records in the source data.
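The following sketch shows one way such an index might be built and queried, with each posting list capped at the posting list threshold; the cut-off policy (keeping the first record identifiers encountered) and the function names are assumptions for the example.

```python
# Sketch: inverted index whose posting lists never exceed a maximum size.

from collections import defaultdict

def build_inverted_index(records, posting_list_threshold):
    # records: mapping of record id -> value to be indexed
    index = defaultdict(list)
    for record_id, value in records.items():
        postings = index[value]
        if len(postings) < posting_list_threshold:   # restricted posting list
            postings.append(record_id)
    return index

def search(index, value):
    # The query value identifies one restricted posting list.
    return index.get(value, [])

records = {1: "red", 2: "blue", 3: "red", 4: "red", 5: "blue"}
index = build_inverted_index(records, posting_list_threshold=2)
print(search(index, "red"))    # -> [1, 3]; capped at two record identifiers
```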
Techniques and solutions are provided for providing software application functionality allowing users to perform analytical data operations. Software applications typically limit users to interacting with predefined data. Disclosed techniques allow users to retrieve new data, or to process data in different ways, by accessing lower-level objects, such as analytic queries defined in a virtual data model. An object of a data model defined for a collection of graphical user interfaces can be used to identify an analytical data object providing access to data defined by the data model. A query is executed to retrieve data corresponding to at least a portion of attributes defined in the data model. At least a portion of the retrieved data is displayed. User input is received that requests a pivot operation, an operation to add a filter, or an operation to add a multidimensional data element of the analytical data object to the graphical user interface display.
The present disclosure relates to computer-implemented methods, software, and systems for flexibly managing limits for execution of jobs in parallel threads provided by a system. A request for the execution of a first statement at one or more threads associated with parallel processing at the system is received. A first snapshot object of an initial hierarchy of workload classes is obtained. The first snapshot object includes pointers to limiters defined for the workload classes and statements within the initial hierarchy. Each limiter defines an allowable number of threads for parallel execution of jobs. A modification to the hierarchy of workload classes is performed during runtime of the one or more jobs. A second snapshot object of a modified hierarchy of the workload classes is obtained. The execution of one or more created jobs is completed based on complying with limits as aggregately calculated for the first statement.
G06F 9/48 - Program initiating; Program switching, e.g. interrupt
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Embodiments afford recommendations for the accurate modeling of complex process flows. A repository is provided of known models (in graph form) of complex processes. Semantics of the repository models are constrained within an existing vocabulary (e.g., one that does not include a particular term). During an initial training phase, a fine-tuned sequence-to-sequence language model is generated from a pre-trained language model (e.g., T5) and the semantics of the known repository process models, using transfer-learning techniques (e.g., from Natural Language Processing (NLP)). During runtime, an incomplete process model (also in graph form) is received having an unlabeled node. Embodiments provide a node label recommendation based upon the fine-tuned sequence-to-sequence language model. The node label that is recommended is in a vocabulary which extends beyond the repository vocabulary (e.g., includes the particular term). In this manner, the accuracy and/or flexibility of modeling of complex processes (e.g., node label recommendation) can be enhanced.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 40/40 - Processing or translation of natural language
G06Q 10/067 - Enterprise or organisation modelling
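As a hedged illustration of the runtime step described two entries above, the sketch below serialises an incomplete process graph into a text sequence and asks a sequence-to-sequence checkpoint to fill in the missing node label; the serialisation format, the sentinel token, and the model name "fine-tuned-process-t5" are placeholders, not details from the disclosure.

```python
# Sketch under assumptions: the incomplete process model is serialised to text
# and a T5 checkpoint fine-tuned on repository models predicts the node label.

from transformers import T5ForConditionalGeneration, T5Tokenizer

def serialize_process_graph(edges, unlabeled_node):
    # edges: list of (source_label, target_label) pairs; the unlabeled node is
    # replaced by a sentinel token the model was fine-tuned to complete.
    parts = [f"{src} -> {dst}" for src, dst in edges]
    return "predict node label: " + " ; ".join(parts).replace(unlabeled_node, "<extra_id_0>")

tokenizer = T5Tokenizer.from_pretrained("fine-tuned-process-t5")   # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("fine-tuned-process-t5")

prompt = serialize_process_graph(
    [("Receive order", "Check stock"), ("Check stock", "?")], unlabeled_node="?"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # suggested node label
```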
Techniques and solutions are provided for processing query requests from a software application, such as one having a user interface model, using an analytical data protocol that accesses an analytic query. Often, user interface models access data using transactional data protocols, which can limit the analytical actions that can be performed through a user interface, particularly actions that alter the presented data or the data format relative to pre-defined analytical objects. A query request associated with a user interface query model is received and converted to be executable using at least one analytical query model object. The request, in an analytical protocol, is submitted to a virtual data model. The query request in the analytical protocol is converted to be used with an analytic query defined in the virtual data model. The converted query request is executed against a data store and query results are returned to a user interface layer.
The present disclosure involves systems, software, and computer implemented methods for efficiently authorizing parameterized query views. An example method includes parsing a received query to generate a global query parse tree. In response to determining that the query includes a parameterized query view, the parameterized query view is parsed to generate a view parse tree, which is then attached to the global query parse tree. In response to determining that an object in the global query parse tree is a parameterized query view, a view parse tree portion of the global query parse tree is traversed to identify objects associated with the parameterized query view. The parameterized query view and the identified objects are authorized in a single authorization step. Objects in the global query parse tree that are not parameterized query views are authorized individually. In response to all objects being authorized, the query is executed.
Systems and methods are provided for detecting input via a user interface on a computing device, determining that the input triggers a recommended action related to the input and analyzing historical data to extract relevant data for the recommended action. The systems and methods further provide for generating the recommended action based on the extracted relevant data and causing display of the recommended action on the user interface of the computing device.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 16/2457 - Query processing with adaptation to user needs
Systems and methods include determination of caching requirements associated with an application, automatic determination of a caching strategy based on the caching requirements and metadata describing a plurality of available caches, providing the caching strategy to the application, and execution of the application to cache data based on the caching strategy.
In some implementations, there is provided a method that includes detecting, in a query plan, a pipeline that includes a last restart query operator capable of causing a retry of the query plan during execution of the query plan; configuring the pipeline to execute using an open call configuration and configuring at least one subsequent pipeline to execute in a fetch call configuration; executing the query plan, including the pipeline, in the open call configuration; sending, by a send operator, a message indicating that the last restart query operator can no longer cause a retry of the execution of the query plan; and causing at least one operator in the subsequent pipeline to execute in the fetch call configuration, in which streaming of partial results is allowed for the at least one operator.
A generic command request, including a command and command input data, is received by a command line interface (CLI) backend from a client CLI on a client computing device. A platform service for the command is determined by the CLI backend based on command metadata associated with the command. The command input data is mapped by the CLI backend, based on the command metadata associated with the command, to a platform service application programming interface (API) associated with the platform service. The platform service API is called by the CLI backend based on the mapping. A client-side script defined for the generic command request, together with response data received from the platform service API, is returned by the CLI backend to the client CLI for execution in a local script engine.
Techniques and solutions are provided for improving query performance of queries that can dynamically switch between accessing different data sources for a particular operation. The disclosure provides an object type, which can be referred to as a configuration object, that specifies which of multiple data sources should be used in query execution at a particular point in time. Values that specify a data source can be included as data in an instance of the object type, such as values in a relational database table that implements the configuration object. A data source to be used with a query can be changed dynamically by updating contents of the table. During query optimization, a query optimizer can recognize that the configuration object is of a particular type that causes the query optimizer to access contents of the configuration object. The contents can be used to prune portions of a query plan.
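A simplified sketch of the idea: a small configuration table names the active data source, and plan branches that read from any other source are pruned at optimisation time. The table contents, branch names, and pruning helper are invented for the example.

```python
# Sketch: a configuration object selects the data source; other branches of
# the plan are pruned when the query is optimised.

CONFIG_TABLE = {"sales_query": "replicated_table"}   # stands in for the configuration object

QUERY_PLAN_BRANCHES = {
    "primary_table": "SELECT ... FROM sales_primary",
    "replicated_table": "SELECT ... FROM sales_replica",
}

def plan_query(query_name):
    active_source = CONFIG_TABLE[query_name]          # read at optimisation time
    # Prune every branch that does not match the configured source.
    return {src: sql for src, sql in QUERY_PLAN_BRANCHES.items() if src == active_source}

print(plan_query("sales_query"))
# Updating CONFIG_TABLE["sales_query"] switches the data source dynamically
# without changing the query itself.
```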
A table scan predicate with integrated semi-join filter is provided. A method includes receiving a query including: a request to join first data from a first table and second data from a second table, a first predicate for use in a table scan of the second table, and a second predicate including an expression associated with the first data from the first table and a reference to a column associated with the second data from the second table. The method may include transforming the second predicate into a dynamic predicate for execution of the query. The method may include applying the dynamic predicate to at least the first data. The method may include executing the query by at least scanning the second table based on the first predicate and the filtered first data from the application of the dynamic predicate. Related systems and articles of manufacture are provided.
A computer-implemented system and method of extending a workflow. The system translates the workflow into a programming data structure, builds a model structure based on the programming data structure, collects extension instructions related to changing the workflow and orders the extension instructions according to dependencies among the extension instructions, and generates an extended workflow based on applying the extension instructions to the original workflow. In this manner, the system reduces the amount of manual effort in extending the workflow.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
A method, a system, and a computer program product for executing a blocked index join. One or more join values for joining data stored in a database are identified in response to a query for accessing data stored in the database. The database stores data in a plurality of tables. Each table has a plurality of columns and a plurality of rows. A mapping is identified from one or more rows in the plurality of rows that correspond to the one or more join values to a number of rows that include the one or more join values. Based on the mapping, a join of the one or more join values is executed using the rows that include the one or more join values. The joined values are outputted.
A digital assistant can provide support for automated testing of applications. A natural language interface can be provided by which a testing user can specify a request for one or more testing actions. A natural language processing model can recognize intents in the request, and the intents can be used to execute executable code to perform the requested testing actions. Multiple actions per request can be supported. An object repository can be leveraged to determine user interface control identifiers, and a test data container can store values for use during testing. Testing functionality can thus be provided to a wider base of testing users. A real time, scriptless approach can conserve computing resources.
Techniques and solutions are provided for improving query performance using inverse functions. Often a function is used to perform operations such as data type conversions. The use of these functions can be resource intensive, such as if a conversion needs to be performed for all rows of a particular relational database table. The present disclosure allows for the registration of inverse functions that can be used, or at least be considered for use, in place of a function. A given inverse function can be associated with its function using techniques such as maintaining mapping information or using a particular naming convention. A particular syntax is provided for designating and creating an inverse function.
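A minimal sketch, assuming a simple in-memory mapping, of how registering an inverse function lets a predicate be rewritten so the conversion is applied once to the constant instead of to every row; the function and registry names are illustrative.

```python
# Sketch: register an inverse for a function so an optimizer can rewrite
# to_fahrenheit(temp_c) = 212 into temp_c = to_celsius(212).

INVERSE_REGISTRY = {}

def register_inverse(func_name, inverse):
    INVERSE_REGISTRY[func_name] = inverse     # maintained mapping information

def to_fahrenheit(c):
    return c * 9 / 5 + 32

def to_celsius(f):
    return (f - 32) * 5 / 9

register_inverse("to_fahrenheit", to_celsius)

def rewrite_predicate(func_name, constant):
    # Instead of applying func_name to every row, apply the registered inverse
    # once to the constant side of the comparison.
    inverse = INVERSE_REGISTRY.get(func_name)
    if inverse is None:
        return None                            # no inverse known; keep the original plan
    return ("column", "=", inverse(constant))

print(rewrite_predicate("to_fahrenheit", 212))   # -> ('column', '=', 100.0)
```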
Systems and methods described herein relate to techniques for identifying and using context data to prioritize reported issues in a software context. A data record of a reported issue is accessed. The reported issue is associated with a software offering provided to a user. The data record comprises issue metadata that includes a first priority rating for the reported issue. The issue metadata is used to identify a relation between the reported issue and context data associated with the user. A second priority rating for the reported issue is generated based on at least the context data. The second priority rating may differ from the first priority rating. The second priority rating is presented at a computing device, optionally together with the first priority rating via a graphical user interface.
G06Q 10/20 - Administration of product repair or maintenance
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
87.
ALGORITHMIC APPROACH TO HIGH AVAILABILITY, COST EFFICIENT SYSTEM DESIGN, MAINTENANCE, AND PREDICTIONS
A cloud computing design evaluation platform may receive a master variant for a cloud computing design, including a serial sequence of a set of components. The evaluation platform may then determine a maximum number of parallel levels for the master variant and automatically create a plurality of potential variants of the master variant by expanding the master variant with parallel components in accordance with the maximum number of parallel levels. The evaluation platform determines reliability information (e.g., based on MTBF data) and cost information (e.g., a TCO) for each component. An overall reliability score and overall cost score for each of the automatically created potential variants is automatically calculated, and an evaluation result of the calculation is indicated (reflecting an optimum design that meets SLA and TCO goals). Some embodiments may also provide continuous monitoring of design performance and/or predict future design performance based on historical data.
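As a back-of-the-envelope illustration of scoring a variant, the sketch below treats parallel instances of a component as failing only when all instances fail, and sums per-instance cost; the reliability and cost figures are made up and the scoring helpers are not the platform's actual API.

```python
# Sketch: overall reliability and cost of one design variant.

def level_reliability(component_reliability, parallel_count):
    # Probability that at least one of the parallel instances is up.
    return 1 - (1 - component_reliability) ** parallel_count

def score_variant(components):
    # components: list of (reliability, unit_cost, parallel_count)
    overall_reliability = 1.0
    overall_cost = 0.0
    for reliability, unit_cost, parallel_count in components:
        overall_reliability *= level_reliability(reliability, parallel_count)
        overall_cost += unit_cost * parallel_count
    return overall_reliability, overall_cost

# Master variant with every component serial, and a potential variant that
# doubles the second component.
print(score_variant([(0.99, 100, 1), (0.95, 250, 1)]))   # serial baseline
print(score_variant([(0.99, 100, 1), (0.95, 250, 2)]))   # parallel variant
```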
Techniques for automated chaos engineering may run an application on an internal production system. The internal production system is probed to determine an initial health status. If the initial health status indicates that the internal production system is in a stable state, a plurality of tests, such as functional tests or load tests, are conducted on the internal production system which create load at production levels. Chaos engineering actions are performed on the internal production system while the plurality of tests are being conducted. The chaos engineering actions cause system faults in the internal production system. The internal production system is probed after performing the chaos engineering actions to determine a later health status of the internal production system. An admin user is notified if the later health status of the internal production system is not a stable state after performing the chaos engineering actions.
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program may receive a first selection of a data model as a data source for a visualization, wherein the data model specifies data to be organized according to a set of measures and a set of dimensions. The program may receive a second selection of a dimension value for a dimension in the set of dimensions, wherein each measure in the set of measures is a particular dimension value for the dimension. The program may, in response to the second selection, generate a state representation of the visualization that includes the selected dimension value for the dimension. The program may generate the visualization based on the state representation of the visualization.
Disclosed herein are system, method, and computer program product embodiments for securely performing a password change. An embodiment operates by receiving a password change request from a user. The password change request comprises an encrypted version of a new password for the user, a cleartext version of the new password, and a login name for the user. The embodiment then executes a command from a password rotator user account with the cleartext version of the new password, the encrypted version of the new password, and the login name. The embodiment then retrieves a public key associated with the login name. The embodiment then determines, based on the public key, that the password change request comes from the user and that the cleartext version of the new password has not been modified. The embodiment then sets the password of a user login associated with the user to the new password.
A first version of a chatbot using natural language processing conducts conversations with a plurality of users. The chatbot provides responses by triggering a plurality of skills including a fallback skill that is triggered when no other skill corresponds to the intent of the user. A second version of the chatbot is deployed while the first version of the chatbot is concurrently engaged in a set of conversation sessions. The second version of the chatbot is configured to trigger other skills besides the fallback skill for at least a portion of the conversations in which the first version of the chatbot triggered the fallback skill. New conversation sessions are mapped to the second version of the chatbot while the set of conversation sessions the first version of the chatbot is engaged in are still mapped to the first version of the chatbot.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or messages generated by a conversational agent
G06F 8/658 - Incremental updates; Differential updates
G06F 40/40 - Processing or translation of natural language
Methods, systems, and computer-readable storage media for requesting, from a domain name system (DNS) server within an enterprise network, an IP address for a DNS name associated with a computing device, receiving the IP address, storing the IP address in a speculative DNS cache, the speculative DNS cache being operable to store IP addresses for a set of DNS names including the DNS name, providing, by the speculative DNS cache, a refresh period for the IP address, determining that the refresh period of the IP address has elapsed, and, in response, refreshing the IP address in the speculative DNS cache.
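A compact sketch of such a cache: entries whose refresh period has elapsed are re-resolved before the IP address is returned. The resolver is stubbed out here; real code would query the enterprise DNS server, and the class and parameter names are assumptions.

```python
# Sketch: speculative DNS cache that refreshes stale entries.

import time

class SpeculativeDnsCache:
    def __init__(self, refresh_period_seconds, resolver):
        self.refresh_period = refresh_period_seconds
        self.resolver = resolver                 # function: dns_name -> ip address
        self.entries = {}                        # dns_name -> (ip, fetched_at)

    def get(self, dns_name):
        entry = self.entries.get(dns_name)
        if entry is None or time.time() - entry[1] >= self.refresh_period:
            ip = self.resolver(dns_name)         # refresh the entry proactively
            self.entries[dns_name] = (ip, time.time())
        return self.entries[dns_name][0]

cache = SpeculativeDnsCache(refresh_period_seconds=30,
                            resolver=lambda name: "10.0.0.12")   # stubbed resolver
print(cache.get("printer.internal.example"))
```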
A method, a system and a computer program for providing data from a directed graph to a language model are provided. The method comprises defining a plurality of conditions and a plurality of patterns, wherein each of the conditions has at least one corresponding pattern. The method further comprises receiving a subset of the directed graph, wherein the subset of the directed graph includes a plurality of statements, wherein each of the statements includes a subject, an object and a predicate relating the subject to the object. The method further comprises, for each of the statements in the subset of the directed graph, performing the following: when one of the conditions matches a respective statement and the pattern corresponding to the condition can be applied to the respective statement, computing a string from the respective statement using the pattern.
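For illustration, the sketch below pairs each condition (a test on the predicate of a statement) with a pattern (a format string) and verbalises every matching (subject, predicate, object) statement; the specific conditions, patterns, and example triples are invented.

```python
# Sketch: verbalise graph statements via condition/pattern pairs.

CONDITION_PATTERNS = [
    (lambda s: s[1] == "employs",   "{0} employs {2}."),
    (lambda s: s[1] == "locatedIn", "{0} is located in {2}."),
]

def verbalize(statements):
    strings = []
    for statement in statements:
        for condition, pattern in CONDITION_PATTERNS:
            if condition(statement):                        # condition matches the statement
                strings.append(pattern.format(*statement))  # apply its pattern
                break
    return strings

subset = [
    ("SAP", "locatedIn", "Walldorf"),
    ("SAP", "employs", "Alice"),
]
print(verbalize(subset))   # strings that could be passed to the language model
```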
A first service of a microservices architecture may obtain a set of one or more metrics of the microservices architecture. The microservices architecture may include the first service, a second service, and a network, where the first and second services are configured to communicate with each other via the network. The first service may determine that the set of metrics satisfies one or more bulk mode conditions. The first service may identify a plurality of workers of the first service running in parallel to one another, where each worker is configured to execute a task. The first service may send a bulk request to the second service based on the determining that the set of metrics satisfies the one or more bulk mode conditions. The bulk request may comprise a corresponding request body portion for each worker in the plurality of workers.
G06F 9/48 - Program initiating; Program switching, e.g. interrupt
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
95.
CALCULATION OF THE PARALLEL CAPACITY OF COMPUTER SYSTEMS
In an example embodiment, an iterative process is used to calculate the number of parallel work processes to set for a computer system. Specifically, an execution unit is started and the execution time for that execution unit is measured. This execution time for the single execution unit is called “unit time.” Then a fixed number (e.g., 20) of execution units are started, and the execution times of each are measured. If the execution time of each of the execution units is lower than some fixed threshold percentage of the unit time (e.g., 120%), then the maximum parallel processing capacity is higher than the fixed number of execution units. More execution units can then be added and the process repeated until the execution units' execution times exceed that fixed threshold percentage.
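A direct, simplified transcription of this iterative procedure in Python, with the workload replaced by a short sleep; the batch size and the 120% threshold mirror the figures given above, while the helper names and the stopping bound are assumptions of the sketch.

```python
# Sketch: estimate parallel capacity by adding execution units until their
# measured times exceed a threshold percentage of the single-unit time.

import time
from concurrent.futures import ThreadPoolExecutor

def execution_unit():
    start = time.perf_counter()
    time.sleep(0.05)                      # stand-in for the real work
    return time.perf_counter() - start

def parallel_capacity(batch_size=20, threshold=1.2, max_units=200):
    unit_time = execution_unit()          # single execution unit -> "unit time"
    units = batch_size
    while units <= max_units:
        with ThreadPoolExecutor(max_workers=units) as pool:
            times = list(pool.map(lambda _: execution_unit(), range(units)))
        if max(times) >= threshold * unit_time:
            return units                  # contention observed at this level
        units += batch_size               # capacity not yet reached; add more units
    return max_units

print(parallel_capacity())
```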
Methods, systems, and computer-readable storage media for receiving a first request parameter for each of the plurality of tenants, receiving a second request parameter for each of the plurality of tenants, assigning the plurality of tenants to an N plurality of tenant groups based on the first request parameter for each of the plurality of tenants, assigning each tenant in the N plurality of tenant groups to a server group in an M plurality of server groups based on the second request parameter for each of the plurality of tenants, and directing, by a load balancer, tenant requests of tenants in the plurality of tenants to servers based on the M plurality of server groups.
Embodiments of the present disclosure include techniques for performing data fixes. In one embodiment, a list of schemas and access information for the schemas is received from a schema manager. The schemas are batched for processing. During processing, schemas in a batch are processed in parallel. Processing includes applying pre-configured SQL commands. If the data fix is successful, applications may be deployed. If the data fix is not successful, application deployments may be blocked.
Methods, systems, and computer-readable storage media for providing historic compute instance (CI) training data at least partially representative of one or more compute instances executing an application in a cloud computing environment, the one or more compute instances being provided in a tenant namespace for a tenant, the tenant namespace being provided in a cluster of the cloud computing environment, training a CI predictor using the historic CI training data, receiving, from a CI adjuster, a first prediction request, transmitting, in response to the first prediction request, a first prediction generated by the CI predictor based on the first prediction request, and instantiating a first set of compute instances within the tenant namespace in response to the first prediction.
Methods, systems, and computer-readable storage media for receiving, through a set of user interfaces (UIs), user input including job configuration information and deployment information for an application, providing a CI/CD job for the application with a CI/CD service using the job configuration information and the deployment information, and triggering, in response to a commit of changes to the application in a repository, automated build of the application, and in response, automatically: generating a development descriptor file at least partially based on the user input, providing an archive file including the development descriptor file, and processing, by the CI/CD job executed by the CI/CD service, the archive file to deploy the application within the target environment.
A computer-implemented method can create an entity close scheme including a plurality of nodes and edges connecting the plurality of nodes. The nodes represent task objects and the edges define predecessor-successor relationships between pairs of nodes. The method can execute the task objects and monitor the status of the task objects. After flagging the status of a completed task object represented by a selected node as invalid based on evaluation of the results of the completed task object, the method can determine downstream nodes of the selected node, identify contingent downstream nodes among the downstream nodes of the selected node, and flag the status of the task objects represented by the contingent downstream nodes as invalid upon completion of those task objects.
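A small sketch of the downstream-invalidation step, assuming the scheme is held as an adjacency list and the contingent nodes are known up front; the task names and helper function are hypothetical.

```python
# Sketch: flag contingent downstream task objects as invalid after a completed
# task is marked invalid.

from collections import deque

EDGES = {                      # predecessor -> successors
    "close_ledger": ["run_report", "archive"],
    "run_report": ["notify"],
    "archive": [],
    "notify": [],
}
CONTINGENT = {"run_report", "notify"}     # downstream tasks contingent on the result

def flag_contingent_downstream(invalid_node, status):
    # Walk every node reachable from the invalidated task...
    queue = deque(EDGES.get(invalid_node, []))
    seen = set()
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if node in CONTINGENT:
            status[node] = "invalid"      # ...and flag only the contingent ones
        queue.extend(EDGES.get(node, []))
    return status

print(flag_contingent_downstream("close_ledger", {}))
```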