The disclosed embodiments provide a method and system for processing network data. During operation, the system obtains one or more event streams from one or more remote capture agents over one or more networks, wherein the one or more event streams include event data generated from network packets captured by the one or more remote capture agents. Next, the system applies one or more transformations to the one or more event streams to obtain transformed event data from the event data. The system then enables querying of the transformed event data.
Described herein are systems and methods for enhancing an interface for an information technology (IT) environment. In one implementation, an incident service causes display of a first version of a course of action and obtains input indicative of a request for a new action in the course of action. The incident service further determines suggested actions based at least in part on the input and causes display of the suggested actions. Once displayed, the incident service obtains input indicative of a selection of at least one action from the suggested actions, and causes display of the selected action in the course of action.
Techniques are described for providing users of an IT and security operations application with the ability to enable the collection and display of playbook run statistics. Users can selectively enable the generation of playbook run statistics for individual playbooks. Once enabled for a playbook, the IT and security operations application automatically adds source code to the playbook or otherwise enables the collection of function block-level statistics during playbook executions. Users can view the statistics collected for a playbook to compare the performance of individual blocks against one another, to compare the performance of individual playbook runs against other playbook runs or against an average of all playbook runs, and so forth. The ability to obtain playbook run statistics enables users to learn how their playbooks are performing and to troubleshoot potential issues, thereby improving the performance of playbooks and the security and operation of the IT environments in which playbooks are deployed.
Techniques and mechanisms are disclosed for configuring actions to be performed by a network security application in response to the detection of potential security incidents, and for causing a network security application to report on the performance of those actions. For example, users may use such a network security application to configure one or more “modular alerts.” As used herein, a modular alert generally represents a component of a network security application which enables users to specify security modular alert actions to be performed in response to the detection of defined triggering conditions, and which further enables tracking information related to the performance of modular alert actions and reporting on the performance of those actions.
G06F 16/26 - Visual data mining; Browsing structured data
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/55 - Detecting local intrusion or implementing counter-measures
5.
SELECTING A CUSTOM FUNCTION FROM AVAILABLE CUSTOM FUNCTIONS TO BE ADDED INTO A PLAYBOOK
Techniques are described for enabling users of an information technology (IT) and security operations application to create highly reusable custom functions for playbooks. The creation and execution of playbooks using an IT and security operations application generally enables users to automate operations related to an IT environment responsive to the identification of various types of incidents or other triggering conditions. Users can create playbooks to automate operations such as, for example, modifying firewall settings, quarantining devices, restarting servers, etc., to improve users' ability to efficiently respond to various types of incidents and operational issues that arise from time to time in IT environments.
Systems and methods are disclosed for generating a distributed execution model with untrusted commands. The system can receive a query, and process the query to identify the untrusted commands. The system can use data associated with the untrusted command to identify one or more files associated with the untrusted command. Based on the files, the system can generate a data structure and include one or more identifiers associated with the data structure in the distributed execution model. The system can distribute the distributed execution model to one or more nodes in a distributed computing environment for execution.
Described are systems, methods, and techniques for profiled call stack linking. Data relating to functions that are part of call stacks can be captured from a series of snapshots. Frame information for the identified functions (e.g., a span ID, trace ID) can be identified and indexed. Responsive to receiving a query for a visualization specifying one or more criteria (e.g., all frames that are part of a span), all frames corresponding with the criteria can be identified. An action can be performed using the identified frames, such as generating a visualization of the identified frames for use in deriving insights into the functions being executed as part of a call stack.
A method includes displaying events that correspond to search results of a search query, the events comprising data items of event attributes, the events displayed in a table. The table includes columns corresponding to an event attribute, rows corresponding to events, cells populated with data items, and interactive regions corresponding to at least one data item and selectable to add one or more commands to the search query. A reference event attribute is determined based on an analysis of a data object. A supplemental column corresponding to a supplemental event attribute is added to the table based on the reference event attribute. Supplemental interactive regions are added to the table and correspond to supplemental data items.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
A method of dynamic cluster manager failover includes routing data traffic associated with managing a plurality of indexers in a cluster to a first cluster manager, wherein the first cluster manager is associated with an active role and is operable to manage the plurality of indexers in the cluster. The method also includes transmitting periodic heartbeat request messages from a second cluster manager of the cluster to the first cluster manager, wherein the second cluster manager is associated with a standby role. Further, the method includes detecting, at the second cluster manager, a loss of heartbeat response messages from the first cluster manager. Also, the method includes receiving information from a set of indexers regarding a status of the first cluster manager and in response to a determination that the status of the first cluster manager is offline, promoting the second cluster manager to switch over to the active role.
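For illustration only, a minimal Python sketch of the standby-manager heartbeat loop described in the abstract above; the class and method names, the heartbeat interval, the missed-heartbeat limit, and the "all indexers report offline" rule are assumptions, not the claimed implementation.

```python
import time

class StandbyClusterManager:
    """Illustrative standby manager that promotes itself when the active manager fails."""

    def __init__(self, active_manager, indexers, missed_limit=3, interval_s=5):
        self.active_manager = active_manager   # object exposing heartbeat()
        self.indexers = indexers               # objects exposing manager_status()
        self.missed_limit = missed_limit       # consecutive missed heartbeats tolerated
        self.interval_s = interval_s
        self.role = "standby"

    def monitor(self):
        missed = 0
        while missed < self.missed_limit:
            missed = 0 if self._heartbeat_ok() else missed + 1
            time.sleep(self.interval_s)
        # Heartbeats lost: ask the indexers whether the active manager looks offline.
        reports = [idx.manager_status() for idx in self.indexers]
        if reports and all(status == "offline" for status in reports):
            self.role = "active"               # switch over and start managing the indexers

    def _heartbeat_ok(self):
        try:
            return self.active_manager.heartbeat() == "ok"
        except ConnectionError:
            return False

class DeadManager:
    def heartbeat(self):
        raise ConnectionError

class Indexer:
    def manager_status(self):
        return "offline"

standby = StandbyClusterManager(DeadManager(), [Indexer(), Indexer()], interval_s=0)
standby.monitor()
print(standby.role)   # -> "active"
```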
Techniques are described that enable an IT and security operations application to prioritize the processing of selected events for a defined period of time. Data is obtained reflecting activity within an IT environment, wherein the data includes a plurality of events each representing an occurrence of activity within the IT environment. A severity level is assigned to each event of the plurality of events, where the events are processed by the IT and security operations application in an order that is based at least in part on the severity level assigned to each event. Input is received identifying at least one event of the plurality of events for expedited processing to obtain a set of expedited events, and the identified events are processed by the IT and security operations application before processing events that are not in the set of expedited events.
A method of rendering a service graph illustrating dependencies between a frontend and a backend of an application comprises generating a plurality of frontend traces from a plurality of frontend spans and generating a plurality of backend traces from a plurality of backend spans ingested from the application. The method also comprises aggregating frontend metrics data using the plurality of frontend traces and backend metrics data using the plurality of backend traces. The method further comprises determining connection information between one or more frontend traces of the plurality of frontend traces and corresponding backend traces of the plurality of backend traces. The method also comprises rendering the service graph using the connection information and the aggregated frontend and backend metrics data.
Provided are systems and methods for determining and displaying service performance information via a graphical user interface. A method can include visually rendering a service-level dashboard reflecting performance of a service and presenting a visual indication of health of each component service and a list of events each corresponding to a change in performance of one of the component services. The method can further include responsive to receiving, via a graphical user interface (GUI), a selection of a component service, visually rendering a system-level dashboard reflecting performance of the selected component-level service, wherein the component service is performed by one or more machines, and wherein the system-level dashboard presents the machines and one or more events each corresponding to a change in performance of one of the machines.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
14.
Aggregating metrics for workflows associated with a real user session
A method of aggregating metrics associated with a user interaction during a real user session comprises identifying a span comprising a tag associated with a workflow from ingested spans associated with the real user session, where the workflow comprises spans generated in response to the user interaction. The method also comprises identifying other spans associated with the workflow from the ingested spans. The method further comprises grouping the other spans associated with the workflow with the tagged span and aggregating metrics for the workflow over a duration of time.
According to embodiments, a data stream including a plurality of time series data is received and metadata objects are extracted from the data stream. The metadata objects are associated with metrics time series (MTS) objects. The metadata objects and MTS objects are stored via separate in-memory data structures in a logical database. The in-memory data structures include information that correlates the metadata objects with the MTS objects. Any updates to the metadata objects will stay with the metadata objects and do not propagate to the MTS objects. A logical in-memory join may be performed to associate the metadata objects with the appropriate MTS object according to the in-memory data structures when a query for an MTS object is received.
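A rough sketch of the separate in-memory structures and the logical join performed at query time, as described in the abstract above; the dictionary layout and the shared series-ID key are illustrative assumptions, not the actual storage format.

```python
# Separate in-memory structures: metadata keyed by series ID, MTS objects keyed by series ID.
metadata_by_series = {
    "cpu.util|host=web-1": {"host": "web-1", "region": "us-west"},
}
mts_by_series = {
    "cpu.util|host=web-1": {"metric": "cpu.util", "points": [(1700000000, 0.42)]},
}

def query_mts(series_id):
    """Logical in-memory join: attach the current metadata to the MTS object on read.

    Metadata updates only touch metadata_by_series and are picked up lazily here,
    so they never have to be propagated into the stored MTS objects.
    """
    mts = mts_by_series.get(series_id)
    if mts is None:
        return None
    return {**mts, "metadata": metadata_by_series.get(series_id, {})}

print(query_mts("cpu.util|host=web-1"))
```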
Systems and methods are described for generation of a query using a non-textual input. For example, the query can be generated using a point and click input. A selection of a data source can be identified and an initial query can be automatically generated based on the selection of the data source. A graphical user interface can be displayed and populated with one or more selectable parameters based on the initial query. A selection of the one or more selectable parameters can be received as a non-textual input and a query can be automatically generated based on the selection. For example, a query for execution by a data intake and query system can be generated based on the selection. The query can be provided to the data intake and query system. The data intake and query system may then execute the query on a set of data.
Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
A deployment orchestrator is provided that manages package deployments at different hierarchical levels. Each hierarchical level is associated with a particular type of resource object. The deployment orchestrator creates different types of resource objects, each associated with a different hierarchical level, and updates instances of the different resource objects based on information related to a package that is to be deployed. The deployment orchestrator performs processing associated with deploying the package at the hierarchical level based on information stored in the instances of the resource objects associated with the hierarchical level, e.g., information related to the package that is to be deployed.
Various embodiments of the present application set forth a computer-implemented method that includes receiving, by a trusted tunnel bridge and from a first application executing in a first network, a first encrypted data packet, where the first encrypted data packet includes an encrypted portion of data, and a destination device identifier (DDI). The method further includes determining, by the trusted tunnel bridge, a particular device in a second network and associated with the DDI included in the first encrypted data packet. The method further includes sending, by the trusted tunnel bridge directly to the particular device, the first encrypted data packet.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
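A simplified sketch of the routing step the trusted tunnel bridge performs per the abstract above: it consults only the destination device identifier and forwards the still-encrypted payload untouched. The packet layout and the device registry are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class EncryptedPacket:
    destination_device_id: str   # cleartext routing key (DDI)
    ciphertext: bytes            # encrypted portion; the bridge never decrypts it

class TrustedTunnelBridge:
    def __init__(self):
        self._devices = {}       # DDI -> device handle in the second network

    def register_device(self, device_id, device):
        self._devices[device_id] = device

    def forward(self, packet: EncryptedPacket):
        device = self._devices[packet.destination_device_id]
        device.receive(packet.ciphertext)   # sent directly; payload stays opaque

class Device:
    def __init__(self, name):
        self.name = name
    def receive(self, ciphertext):
        print(f"{self.name} received {len(ciphertext)} encrypted bytes")

bridge = TrustedTunnelBridge()
bridge.register_device("device-42", Device("device-42"))
bridge.forward(EncryptedPacket("device-42", b"\x93\x1f\x07\xaa"))
```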
A client device executes a display layout application that receives a size of each display item included in a set of display items. The set of display items is associated with a first frame included in a bounding box associated with a display screen. The display layout application determines a reference size based on the sizes of the set of display items. The display layout application determines a size of the first frame based on the reference size. The display layout application determines a position for a first display item included in the set of display items based on a position of the first frame within the bounding box. The display layout application generates a layout for display on the display screen, where the layout includes the first display item.
Various implementations of the present application set forth a method comprising generating, based on first sensor data captured by a depth sensor on a mobile device, three-dimensional data representing a physical space that includes a real-world asset, generating, based on second sensor data captured by an image sensor on the mobile device, two-dimensional data representing the physical space, combining, based on a correlation between the three-dimensional data and the two-dimensional data, the two-dimensional data and the three-dimensional data into an extended reality (XR) stream, where the XR stream includes a digital representation of the real-world asset, and transmitting, to a remote device, the XR stream for rendering at least a portion of the digital representation of the real-world asset in a remote XR environment.
A system generates a user interface that enables a user to interact with an interactive chart associated with a statement of a data processing package. Via one or more user interactions with the user interface, the system may receive one or more chart parameters for the chart. Using a statement from the data processing package and the one or more chart parameters, the system may generate an additional statement and append the generated statement to the data processing package to form an enriched data processing package. The system may communicate the enriched data processing package to a search service for execution. The system may display the results in the chart.
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
Systems and methods are described for processing ingested data, detecting anomalies in the ingested data, and providing explanations of a possible cause of the detected anomalies as the data is being ingested. For example, a token or field in the ingested data may have an anomalous value. Tokens or fields from another portion of the ingested data can be extracted and analyzed to determine whether there is any correlation between the values of the extracted tokens or fields and the anomalous token or field having an anomalous value. If a correlation is detected, this information can be surfaced to a user.
A multitenant deployment includes a computing cluster that executes multiple containerized instances of a software application. Each containerized instance is associated with one or more datastores that can be assigned to different tenants. A registry store maintains a mapping between tenants and datastores, thereby allowing a registry manager to properly route tenant requests to the correct datastores. A capacity manager tracks tenant usage of datastores in the registry store and then scales computing resources for each tenant in proportion to usage. The capacity manager also migrates tenant resources in response to catastrophic failures or upgrades. In this fashion, the multitenant deployment can adapt a single-tenant software application for multi-tenancy in a manner that is both transparent and secure for the tenant.
A system generates a user interface that enables a user to generate a chart from one or more statements of a data processing package. Via one or more user interactions with the user interface, the system may receive one or more chart parameters for a chart. Using a statement from the data processing package and the one or more chart parameters, the system may generate an additional statement and append the generated statement to the data processing package to form an enriched data processing package. The system may communicate the enriched data processing package to a search service for execution. The system may display the results in an interactive chart.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
26.
VISUALIZATIONS OF QUERY RESULTS USING GENERATED FILES
Systems and methods are disclosed for generating one or more files to visualize query results. The systems and methods can include parsing one or more files that include one or more queries and computer-executable instructions for displaying results of the one or more queries. The one or more queries can identify a set of data to be processed and a manner of processing the set of data. The systems and methods can further include generating one or more files that include the results of the queries and computer-executable instructions for displaying one or more visualizations of the results.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
27.
Generating span related metric data streams by an analytic engine
A method of generating metrics data associated with a microservices-based application comprises ingesting a plurality of spans and mapping an ingested span of the plurality of spans to a span identity, wherein the span identity comprises a tuple of information identifying a type of span associated with the span identity, wherein the tuple of information comprises user-configured dimensions. The method further comprises grouping the ingested span by the span identity, wherein the ingested span is grouped with other spans from the plurality of spans comprising a same span identity. The method also comprises computing metrics associated with the span identity and using the metrics to generate a stream of metric data associated with the span identity.
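A small sketch of grouping spans by a span-identity tuple and emitting per-identity metrics, as described in the abstract above; the tuple fields (service, operation, plus user-configured dimensions) and the particular metrics computed are illustrative assumptions.

```python
from collections import defaultdict

def span_identity(span, configured_dimensions):
    """Identity tuple: fixed fields plus whatever dimensions the user configured."""
    return (span["service"], span["operation"],
            tuple(span["tags"].get(d) for d in configured_dimensions))

def metrics_for_window(spans, configured_dimensions):
    groups = defaultdict(list)
    for span in spans:
        groups[span_identity(span, configured_dimensions)].append(span)
    # One metric record per identity: request count, error count, total duration.
    return {
        identity: {
            "count": len(group),
            "errors": sum(1 for s in group if s["tags"].get("error") == "true"),
            "duration_ms": sum(s["duration_ms"] for s in group),
        }
        for identity, group in groups.items()
    }

spans = [
    {"service": "checkout", "operation": "POST /pay", "duration_ms": 40,
     "tags": {"env": "prod", "error": "false"}},
    {"service": "checkout", "operation": "POST /pay", "duration_ms": 95,
     "tags": {"env": "prod", "error": "true"}},
]
print(metrics_for_window(spans, configured_dimensions=["env"]))
```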
Techniques are described for enabling a cloud-based IT and security operations application to execute playbooks containing custom code in a manner that mitigates types of risk related to the misuse of cloud-based resources and security of user data. Users use a client application to create and modify playbooks and, upon receiving input to save a playbook, the client application determines whether the playbook includes custom code. If the client application determines that the playbook includes custom code, the client application establishes a connection with a proxy application (also referred to as an “automation broker”) running in the user's own on-premises network and sends a representation of the playbook to the proxy application. The client application further sends to the IT and security operations application an identifier of the playbook and an indication that the playbook (or the custom code portions of the playbook) is stored within the user's on-premises network.
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 41/0681 - Configuration of triggering conditions
H04L 41/08 - Configuration management of networks or network elements
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
A computerized method is disclosed for automated handling of data ingestion anomalies. The method features operations of detecting a data ingestion anomaly and determining a cause for the data ingestion anomaly. The causal determination may be conducted by at least (i) determining features of an anomalous data ingestion volume, (ii) training a second data model, after a first data model has been used to detect the data ingestion anomaly, with data sets consistent with the determined features, (iii) applying the second data model to predict whether a data ingestion sub-volume is anomalous, (iv) obtaining system state information during ingestion of the anomalous data ingestion sub-volume, and (v) determining the cause of the anomalous data ingestion volume based on the system state information.
Techniques promote monitoring of hypervisor systems by presenting dynamic representations of hypervisor architectures that include performance indicators. A reviewer can interact with the representation to progressively view select lower-level performance indicators. Higher level performance indicators can be determined based on lower level state assessments. A reviewer can also view historical performance metrics and indicators, which can aid in understanding which configuration changes or system usages may have led to sub-optimal performance.
Computerized methodologies are disclosed that are directed to detecting anomalies within a time-series data set. An aspect of the anomaly detection process includes determining one or more seasonality patterns that correspond to a specific time-series data set by evaluating a set of candidate seasonality patterns (e.g., hourly, daily, weekly, day-start off-sets, etc.). The evaluation of a candidate seasonality pattern may include dividing the time-series data set into a collection of subsequences based on the particular candidate seasonality pattern. Further, the collection of subsequences may be divided into clusters and a silhouette score may be computed to measure the clustering quality of the candidate seasonality pattern. In some instances, the candidate seasonality pattern having the highest silhouette score is selected and utilized in the anomaly detection process. In other instances, a plurality of seasonality patterns may be combined to form a time policy, where the time policy is utilized in the anomaly detection process.
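A sketch of the candidate-evaluation step described above, assuming scikit-learn is available; the candidate periods, the 2-cluster KMeans, the synthetic weekday/weekend signal, and the minimum-subsequence guard are illustrative choices, not the patented procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
hours = np.arange(24 * 28)                                  # 28 days of hourly points
days = np.repeat(np.arange(28), 24)
weekend = np.isin(days % 7, [5, 6])
series = (np.where(weekend, 0.3, 1.0) * np.sin(2 * np.pi * hours / 24)
          + 0.05 * rng.standard_normal(hours.size))

def seasonality_score(series, period):
    """Split the series into period-length subsequences, cluster them, and score the fit."""
    n = (series.size // period) * period
    subsequences = series[:n].reshape(-1, period)
    if len(subsequences) < 4:                               # too few repetitions to judge
        return -1.0
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(subsequences)
    if len(set(labels)) < 2:
        return -1.0
    return silhouette_score(subsequences, labels)

candidates = [12, 24, 168]                                  # half-day, daily, weekly
scores = {p: seasonality_score(series, p) for p in candidates}
best = max(scores, key=scores.get)                          # highest silhouette wins
print(scores, "-> selected period:", best)
```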
Implementations of this disclosure provide a machine learning model training system that receives user input being a natural language description of a search query, and packages and transmits the natural language description as a prompt to a plurality of large language models (LLMs). The model training system also receives responses from the plurality of LLMs, each being a translation of the natural language description into an executable search query, and displays the translations to a user via a graphical user interface. The model training system receives user feedback via the graphical user interface that corresponds to indications as to whether each translation is correct, syntactically and/or semantically, and, in some examples, an indication of which response was preferred. The model training system also generates training data from the user input, the translations generated by the plurality of LLMs, and the user feedback, and subsequently initiates training of an LLM using the training data.
A query coordinator can receive a query and identify a first portion of the query to be processed by a first data processing system and a second portion of the query to be processed by a second data processing system. The query coordinator can obtain a modified query based on identifying the first portion and the second portion of the query. The query coordinator can define a query processing scheme according to the modified query and provide the query processing scheme to the second data processing system. Based on providing the query processing scheme, the query coordinator can obtain an output of the second data processing system. The query coordinator can identify a second query based on the output and provide the second query to a component of the first data processing system.
Computerized methodologies are disclosed that are directed to detecting anomalies within a time-series data set. A first aspect of the anomaly detection process includes analyzing the regularity of the data points of the time-series data set and determining whether a data aggregation process is to be performed based on the regularity of the data points, which results in a time-series data set having data points occurring at regular intervals. A seasonality pattern may be determined for the time-series data set, where a silhouette score is computed to measure the quality of the fit of the seasonality pattern to the time-series data. The silhouette score may be compared to a threshold and based on the comparison, the seasonality pattern or a set of heuristics may be utilized in an anomaly detection process. When the seasonality pattern is utilized, the seasonality pattern may be utilized to generate thresholds indicating anomalous behavior.
A data intake and query system can manage the search of data stored at an external location relative to the data intake and query system using one or more indexers. The data intake and query system can receive data stored at the external location. The data intake and query system can process the data and generate an index using the one or more indexers. The data intake and query system can discard the data and store the index and a location identifier of the external location in one or more buckets. In response to a query, the data intake and query system can identify that at least a subset of the data is responsive to the query using the index and can obtain the at least the subset of the data from the external location using the location identifier.
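A toy sketch of the "index, discard, fetch on demand" flow described in the abstract above; the bucket layout, the keyword index, and the fetch_external() callable stand in for the real indexer, bucket format, and external storage interface.

```python
def build_bucket(records, location_id):
    """Index externally stored records, keeping only the index plus a pointer back to them."""
    index = {}
    for offset, record in enumerate(records):
        for token in record.split():
            index.setdefault(token.lower(), set()).add(offset)
    return {"location": location_id, "index": index}   # raw records are discarded

def search(bucket, term, fetch_external):
    offsets = bucket["index"].get(term.lower(), set())
    # Only now is the responsive subset pulled back from the external location.
    return [fetch_external(bucket["location"], off) for off in sorted(offsets)]

external_store = {"s3://logs/day1": ["error disk full", "login ok", "error timeout"]}

def fetch(location, offset):
    return external_store[location][offset]

bucket = build_bucket(external_store["s3://logs/day1"], "s3://logs/day1")
print(search(bucket, "error", fetch))   # -> ['error disk full', 'error timeout']
```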
Computerized methodologies are disclosed that are directed to detecting anomalies within a time-series data set. An aspect of the anomaly detection process includes determining one or more seasonality patterns that correspond to a specific time-series data set by evaluating a set of candidate seasonality patterns (e.g., hourly, daily, weekly, day-start off-sets, etc.). The evaluation of a candidate seasonality pattern may include dividing the time-series data set into a collection of subsequences based on the particular candidate seasonality pattern. Further, the collection of subsequences may be divided into clusters and a silhouette score may be computed to measure the clustering quality of the candidate seasonality pattern. In some instances, the candidate seasonality pattern having the highest silhouette score is selected and utilized in the anomaly detection process. In other instances, a plurality of seasonality patterns may be combined to form a time policy, where the time policy is utilized in the anomaly detection process.
Techniques, which may be embodied herein as systems, computing devices, methods, algorithms, software, code, computer readable media, or the like, are described herein for comparing a set of metrics generated during a simulated user interaction with a website to metrics generated by observing real user interactions with the website. Simulated user interactions with a website can be used to diagnose a website's performance issues, but it can be difficult to determine whether the simulated interactions reflect the experience of real users. In addition, the simulated user interactions can be challenging to contextualize because the number of observed real user interactions may significantly outnumber the simulated interactions. A graphical user interface can help with the interpretation of these website interactions by using the real user interactions to properly contextualize the simulated results.
Implementations of this disclosure provide a machine learning model training system that receives user input being a natural language description of a search query, and packages and transmits the natural language description as a prompt to a plurality of large language models (LLMs). The model training system also receives responses from the plurality of LLMs, each being a translation of the natural language description into an executable search query, and displays the translations to a user via a graphical user interface. The model training system receives user feedback via the graphical user interface that corresponds to indications as to whether each translation is correct, syntactically and/or semantically, and, in some examples, an indication of which response was preferred. The model training system also generates training data from the user input, the translations generated by the plurality of LLMs, and the user feedback, and subsequently initiates training of an LLM using the training data.
A computer-implemented method of configuring an anomalous behavior detector includes updating a distribution used for modeling anomalous behavior in telemetry data with information associated with observed anomalous behavior to generate an updated distribution representative of the observed anomalous behavior where, prior to the updating, the distribution is representative of theoretical anomalous behavior. The method further includes computing a threshold for a detector operable to alert on anomalous activity using the updated distribution. The method also comprises computing a divergence between the live telemetry data monitored by the detector and the anomalous behavior modeled by the updated distribution. Responsive to a determination that the divergence is above a critical threshold, the method comprises enabling the detector to continue to monitor the live telemetry data in the application.
A computerized method is disclosed that includes operations of obtaining network traffic data between a source device and a destination device, applying a set of one or more security rules to a plurality of metrics of the network traffic data to obtain a subset of network traffic metrics, applying a first trained machine learning model to the subset of network traffic metrics to generate a feature vector through feature extraction of the subset of network traffic metrics, evaluating the feature vector for a presence of beaconing and classifying the subset of network traffic metrics, and, responsive to the classifying of the subset of network traffic metrics, generating a flag for a system administrator. The plurality of metrics includes at least one or more of packet size, packet transmission rate, or a ratio of (i) packet size for inbound packets and (ii) packet size for outbound packets.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
41.
Appending time ranges to distinct statements of a data processing package
A system generates a user interface that enables a user to modify time ranges associated with search-related statements of a data processing package. Via one or more user interactions with the user interface, the system may receive a modified time range for the statement. The modified time range may be appended to the data processing package to form an enriched data processing package. The system may communicate the enriched data processing package to a search service for execution. The system may display the results in the user interface.
Described herein are techniques for integrating external sensors to an edge device, such as for ingesting data into a data intake and query system. The edge device has an internal message broker for communicating with internal (e.g., preconfigured, recognized) sensors, and an external message broker for communicating with external (e.g., customer-configured, otherwise unrecognized) sensors. The external message broker provides access to customer configuration of external sensors, but is logically quarantined from the internal message broker to prevent unwanted customer access to internal configurations. The internal and external message brokers interface only via a bridging service that transforms external sensor data into data based on customer-configurable transformations. The transformed data can be handled by the edge device and/or downstream components (e.g., a data intake and query system) in the same manner as internal sensor data.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/63 - Routing a service request depending on the request content or context
43.
Machine-learning based prioritization of alert groupings
Systems and methods are disclosed that are directed to improving the prioritization, display, and viewing of system alerts through the use of machine learning techniques to group the alerts and further to prioritize the groupings. Additionally, a graphical user interface is generated that illustrates the prioritized listing of the plurality of groupings. Thus, a system administrator or other user receives an improved experience, as the number of notifications provided to the system administrator is reduced due to the grouping of individual alerts into related groupings and further due to the prioritization of the groupings. Previously, system alerts may have been automatically generated and provided immediately to a system administrator. In some instances, any advantage of detecting system errors or of system monitoring provided by the alerts is negated by the vast number of alerts and the provision of minimally important alerts in a manner that conceals more important alerts.
Implementations of this disclosure provide for automated monitoring of configuration parameters of a primary data intake and query system instance operating within a distributed deployment environment. Further implementations provide for automatically generating instructions in response to a detected change in a configuration parameter of the primary data intake and query system instance and transmitting those instructions to one or more secondary data intake and query system instances. The instructions, upon execution by one or more processors, cause the configuration parameters of the one or more secondary data intake and query system instances to be updated in accordance with the detected change in the configuration parameter of the primary data intake and query system instance.
A computerized method is disclosed for grouping alerts through machine learning while implementing certain time constraints. The method includes receiving an alert to be assigned to any of a plurality of existing issues or to a newly created issue, the alert including a temporal field that includes a timestamp of an arrival time of the alert, wherein an issue is a grouping of one or more alerts, determining a subset of existing issues from the plurality of existing issues that each satisfy time constraints, wherein the time constraints correspond to (i) a time elapsed between a most recent alert of a first existing issue and a timestamp of the alert, or (ii) a maximum issue time length of the first existing issue, and deploying a trained machine learning model to assign the alert to either an existing issue of the subset of existing issues or a newly created issue.
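A sketch of the time-constraint filter applied before the model assignment described in the abstract above; the constraint values, the Issue structure, and the stubbed model_score callable are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    issue_id: str
    alert_times: list = field(default_factory=list)   # timestamps of alerts already grouped

def eligible_issues(issues, alert_time, max_gap_s=3600, max_issue_span_s=86400):
    """Apply the two time constraints before the model ever sees a candidate issue."""
    keep = []
    for issue in issues:
        gap = alert_time - max(issue.alert_times)      # time since the issue's newest alert
        span = alert_time - min(issue.alert_times)     # total issue length if the alert joins
        if gap <= max_gap_s and span <= max_issue_span_s:
            keep.append(issue)
    return keep

def assign(alert_time, issues, model_score):
    candidates = eligible_issues(issues, alert_time)
    if not candidates:
        return Issue(issue_id="new", alert_times=[alert_time])   # start a new issue
    best = max(candidates, key=model_score)                      # trained model picks the match
    best.alert_times.append(alert_time)
    return best

issues = [Issue("disk", [1000, 1200]), Issue("auth", [88000, 89000])]
print(assign(90000, issues, model_score=lambda issue: len(issue.alert_times)).issue_id)  # -> auth
```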
A search assistant engine is described that integrates with a data intake and query system and provides an intuitive user interface to assist a user in searching and evaluating indexed event data. Additionally, the search assistant engine provides logic to intelligently provide data to the user through the user interface, such as determining fields of events likely to be of interest based on determining a mutual information score for each field and determining groups of related fields based on determining a mutual information score for each field grouping. Some implementations utilize machine learning techniques in certain analyses, such as when clustering events and determining an event template for each cluster. Additionally, the search assistant engine may import terms or characters from user interaction into predetermined search query templates to generate a tailored search query for the user.
A system is described that receives a query model of a query that includes one or more query commands. The query model includes a command model that corresponds to at least one query command of the one or more query commands. The system uses the command model to generate an interactive action model summary and causes a user interface to display the query and the interactive action model summary in a query actions panel. A modification to the query in the user interface causes an update to the query actions panel, and a modification to the action model summary causes an update to the at least one query command of the query.
Metric time series (MTS) data objects stored within in-memory storage are marked as inactive in response to determining that no MTS data has been received for the MTS objects within a first predetermined time period. In response to determining that an MTS object has been inactive for longer than a second predetermined time period, the MTS data object is migrated from in-memory storage to on-disk storage. Queries directed to MTS objects are first run against MTS object data stored within in-memory storage, and then against MTS object data stored within on-disk storage. In this way, an amount of in-memory storage needed to store MTS objects may be minimized, while optimizing search performance.
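A sketch of the two-tier MTS store described in the abstract above; the two timeouts, the dict-backed "disk" tier, and the sweep method are illustrative stand-ins for the real in-memory and on-disk storage and migration logic.

```python
import time

class TieredMtsStore:
    """Hot MTS objects in memory; inactive ones marked, then migrated to the disk tier."""

    def __init__(self, inactive_after_s=300, migrate_after_s=3600):
        self.in_memory, self.on_disk = {}, {}
        self.last_seen, self.inactive = {}, set()
        self.inactive_after_s = inactive_after_s
        self.migrate_after_s = migrate_after_s

    def ingest(self, series_id, point):
        self.in_memory.setdefault(series_id, []).append(point)
        self.last_seen[series_id] = time.time()
        self.inactive.discard(series_id)

    def sweep(self, now=None):
        now = now if now is not None else time.time()
        for series_id in list(self.in_memory):
            idle = now - self.last_seen[series_id]
            if idle > self.migrate_after_s:          # inactive long enough: move to disk tier
                self.on_disk[series_id] = self.in_memory.pop(series_id)
                self.inactive.discard(series_id)
            elif idle > self.inactive_after_s:       # no new data lately: mark inactive
                self.inactive.add(series_id)

    def query(self, series_id):
        # The in-memory tier is searched first, then the on-disk tier.
        if series_id in self.in_memory:
            return self.in_memory[series_id]
        return self.on_disk.get(series_id)

store = TieredMtsStore()
store.ingest("cpu.util|host=web-1", (1700000000, 0.42))
store.sweep(now=time.time() + 7200)                   # simulate two idle hours
print("cpu.util|host=web-1" in store.on_disk, store.query("cpu.util|host=web-1"))
```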
A system generates a user interface that enables a user to generate a data summarization statement for a data processing package. Via one or more user interactions with the user interface, the system may receive one or more parameters for the summarization statement. Using the parameters, the system may generate a summarization statement for execution by a data service, an action model display object, a statement action model display object, and/or a filter token object for display in the user interface.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
An interface and improved data intake and query system is described herein that allows users to define metrics and that aggregates metric values regardless of the level at which a metric is defined and/or the level at which metric values are available. The improved data intake and query system can initialize a sketch in response to a user providing one or more metric definitions. The initialized sketch includes one or more instances, where each instance produces an output and collects metric value(s), appends the metric value(s) to the output, and forwards the appended data to a process function downstream in a data processing pipeline. The process function separates the output and the metric value(s), sending the output further downstream in the data processing pipeline and sending the metric value(s) to a parallel process function that sits outside the data processing pipeline. The parallel process function can persist the metric value(s).
Implementations of this disclosure provide an anomaly detection system and methods of performing anomaly detection on a time-series dataset. The anomaly detection may include utilization of a forecasting machine learning algorithm to obtain a prediction of points of the dataset and comparing the predicted value of a point in the dataset with the actual value to determine an error value associated with that point. Additionally, the anomaly detection may include determination of a sensitivity threshold that impacts whether points within the dataset associated with certain error values are flagged as anomalies. The forecasting machine learning algorithm may implement a seasonality component determination process that accounts for seasonality or patterns in the dataset. A search query statement may be automatically generated through importing the sensitivity threshold into a predetermined search query statement that implements that forecasting machine learning algorithm.
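A bare-bones sketch of the forecast-and-threshold step described above; the moving-average "forecast" stands in for the forecasting machine learning algorithm, and the sensitivity rule (error greater than sensitivity times recent variability) is an assumption.

```python
import numpy as np

def detect_anomalies(series, window=24, sensitivity=3.0):
    """Flag points whose error against the forecast exceeds sensitivity * recent variability."""
    series = np.asarray(series, dtype=float)
    flagged = []
    for t in range(window, series.size):
        history = series[t - window:t]
        forecast = history.mean()                 # placeholder for the forecasting ML model
        error = abs(series[t] - forecast)
        threshold = sensitivity * (history.std() + 1e-9)
        if error > threshold:
            flagged.append(t)
    return flagged

rng = np.random.default_rng(1)
data = rng.normal(10.0, 0.5, 200)
data[150] = 20.0                                  # injected spike
print(detect_anomalies(data))                     # the spike at index 150 should be flagged
```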
A method of computing real-time metrics for automated workflows includes aggregating a set of ingested spans into a set of traces. The method further includes executing a set of rules to determine a set of workflows associated with the set of traces, wherein each workflow of the set of workflows is associated with a respective trace of the set of traces, and wherein each workflow is operable to group together activity associated with a client process within a respective trace. The method also includes assigning a name to each workflow based on the rules and computing real-time metrics for each of the workflows.
Techniques are described for enabling an application to automatically generate text narratives explaining risk scores assigned to risk objects. The application uses natural language generation (NLG) techniques to automatically create text narratives providing context and explanation for risk scores. The described approaches use data from a variety of data sources (e.g., risk event indexes, correlation search data, attack framework data, etc.) to create compelling and useful explanations of the risk analysis associated with identified risk objects. These automatically generated text narratives can be readily presented in any number of different interfaces without the need for complex visualizations or user effort to derive the same information. The automatically created text narratives enable users to better understand the risk analysis for particular risk objects, obtain storylines detailing risk objects' activity patterns over time, and to better analyze, triage, and mitigate IT environment risks based on such information.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Techniques are described for providing a threat analysis platform capable of automating actions performed to analyze security-related threats affecting IT environments. Users or applications can submit objects (e.g., URLs, files, etc.) for analysis by the threat analysis platform. Once submitted, the threat analysis platform routes the objects to dedicated engines that can perform static and dynamic analysis processes to determine a likelihood that an object is associated with malicious activity such as phishing attacks, malware, or other types of security threats. The automated actions performed by the threat analysis platform can include, for example, navigating to submitted URLs and recording activity related to accessing the corresponding resource, analyzing files and documents by extracting text and metadata, extracting and emulating execution of embedded macro source code, performing optical character recognition (OCR) and other types of image analysis, submitting objects to third-party security services for analysis, among many other possible actions.
A method for deployment of machine-learning based operators within a query is described. For this embodiment, a sequence of operators associated with a query is identified, which includes at least a first operator and at least a second operator. The second operator is configured to perform operations, in accordance with a machine learning (ML) component, on data received as input from execution of the first operator. Schemas associated with the machine learning component are retrieved along with schemas associated with other operators within the sequence. Compatibility between at least an output schema associated with the first operator and an input schema associated with the second operator associated with the ML component is determined. Thereafter, a portion of the sequence of operators including at least the second operator and another operator of the sequence of operators successive to the second operator may be stored within a data store for subsequent use.
A graphical user interface (GUI) for presentation of network security risk and threat information is disclosed. A listing is generated of incidents identified by use of event data obtained from a networked computing environment. A particular incident is determined to be associated with a risk object, wherein a risk object is a component of the networked computing environment. The listing is populated with a name associated with the risk object. Risk events associated with the incident are determined, wherein each risk event contributes to a risk score for the incident. The risk score indicates a potential security issue associated with the risk object. The listing is populated with the risk score and a summary of the events. An action is associated with the listing, for triggering display of additional information associated with the risk object. The listing can be displayed in a first display screen of the GUI.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
58.
System and method for identifying resource access faults based on webpage assessment
A method for identifying and indicating resource access faults associated with a webpage. The method includes receiving a machine-readable file that includes a plurality of instructions defining at least content and structure of a webpage. The method further comprises causing a browser to load the webpage based at least in part on the machine-readable file; determining resource utilization associated with the load of the webpage; identifying one or more resource access faults associated with the machine-readable file based at least in part on the determined resource utilization and a resource access instruction policy; for each of the one or more resource access faults, identifying an instruction of the plurality of instructions that corresponds to the particular resource access fault; and causing display of the one or more instructions.
An example method of utilizing shared search queries for defining multiple key performance indicators (KPIs) comprises: receiving input specifying one or more service definitions, each service definition of the one or more service definitions specifying an entity definition for an entity providing a service of one or more services executing in an information technology (IT) environment, wherein the IT environment is monitored by the service monitoring system, wherein the service monitoring system uses first machine data of a first entity specified by a first service definition of the one or more service definitions to monitor a first KPI for a first service of the one or more services, and wherein the service monitoring system uses second machine data of a second entity specified by a second service definition of the one or more service definitions to monitor a second KPI for a second service of the one or more services; determining that the first machine data and the second machine data include common machine data; defining, based on the first machine data and the second machine data including common machine data, a shared base search query for the first KPI and the second KPI; executing the shared base search query to generate shared base search query results for the first KPI and the second KPI; and generating, using results from executing the shared base search query, a first value for the first KPI and a second value for the second KPI.
Described are systems, methods, and techniques for collecting, analyzing, processing, and storing time series data and for evaluating and dynamically estimating a resolution of one or more streams of data points and updating an output resolution. Responsive to receiving a stream of data points, a data resolution can be derived and an output resolution can be set to a first value. When a change to the data resolution is detected, the output resolution can be changed, modifying a frequency at which output data points are generated and/or transmitted. In some instances, a detector can be implemented to trigger an alert responsive to ingested data points corresponding with triggering parameters. An output resolution for the detector can be dynamically modified based on dynamically detecting a change to the data resolution of the stream of data.
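A sketch of deriving a stream's native resolution and adjusting the output resolution when it drifts, as described in the abstract above; the most-common-interval estimate, the window size, and the "change only on a 2x shift" test are assumptions made for illustration.

```python
import statistics

class ResolutionTracker:
    """Track incoming timestamps, derive the data resolution, and update the output resolution."""

    def __init__(self, window=20):
        self.timestamps = []
        self.window = window
        self.output_resolution_s = None

    def observe(self, ts):
        self.timestamps = (self.timestamps + [ts])[-self.window:]
        if len(self.timestamps) < 3:
            return self.output_resolution_s
        intervals = [b - a for a, b in zip(self.timestamps, self.timestamps[1:])]
        estimated = statistics.mode(intervals)            # most common spacing = data resolution
        if (self.output_resolution_s is None
                or estimated > 2 * self.output_resolution_s
                or estimated < self.output_resolution_s / 2):
            self.output_resolution_s = estimated          # change how often outputs are emitted
        return self.output_resolution_s

tracker = ResolutionTracker()
for ts in [0, 10, 20, 30, 40, 100, 160, 220, 280, 340, 400]:
    print(ts, tracker.observe(ts))                        # resolution shifts from 10s to 60s
```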
Aspects described herein provide security actions based on a current state of a security threat. In one example, a computer-implemented method includes identifying a security threat within a computing environment comprising a plurality of computing assets. The method further includes obtaining state information for the security threat within the computing environment from computing assets of the plurality of computing assets in the computing environment. The method further includes determining a current state for the security threat within the computing environment based on the state information. The method further includes obtaining enrichment information for the security threat that relates kill-state information to an identity of the security threat. The method further includes determining one or more security actions for the security threat based on the enrichment information and the current state for the security threat.
Disclosed herein is a method that supports queries deploying operators based on multiple programming languages at least through determining schema compatibility between neighboring operators within a query. Upon receipt of a query, a sequence of operators of the query is identified, where the sequence of operators includes at least two neighboring operators including a first operator and a second operator representing a machine learning model. By determining schema compatibility between at least the first and second operators, the method either alerts a user to schema incompatibility before attempting to execute the query or determine that the schemas are compatible such that the query may be executed without the occurrence of errors due to schema incompatibility between neighboring operators. Advantageously, the method enables the integration of a machine learning model into the query while still ensuring schema compatibility.
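A minimal sketch of the neighboring-operator schema check described above; the schema representation (field name to type) and the "downstream inputs must be a typed subset of upstream outputs" rule are simplifying assumptions.

```python
def compatible(output_schema, input_schema):
    """The downstream operator's required fields must exist upstream with matching types."""
    return all(output_schema.get(field) == dtype for field, dtype in input_schema.items())

def validate_pipeline(operators):
    """operators: list of dicts with 'name', 'input_schema', and 'output_schema'."""
    for upstream, downstream in zip(operators, operators[1:]):
        if not compatible(upstream["output_schema"], downstream["input_schema"]):
            raise ValueError(
                f"schema mismatch between {upstream['name']} and {downstream['name']}")
    return True

search_op = {"name": "search", "input_schema": {},
             "output_schema": {"bytes": "float", "host": "str"}}
ml_op = {"name": "predict_anomaly", "input_schema": {"bytes": "float"},
         "output_schema": {"bytes": "float", "score": "float"}}
print(validate_pipeline([search_op, ml_op]))   # True: the ML operator's inputs are satisfied
```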
Techniques are described for providing a built-in “app” editor for an information technology (IT) and security operations application that enables users to create, modify, and test operation of apps under development within the editor. Some IT and security operations applications enable users to extend the applications by adding connectivity to third party technologies to run custom actions. For example, a user might create a custom app to enable an IT and security operations application to connect to an external service providing information about malicious Internet Protocol (IP) addresses, to connect to a relevant cloud provider service, or to interact with a firewall or other type of computing device used in a user's computing environment. Given the broad set of technologies that can be orchestrated by an IT and security operations application, apps broadly enable users to add custom functionality to interface with virtually any technology of interest.
A machine data validation system can track and validate the integrity of machine data generated by machines. The system can generate item hashes and batch hashes that can be validated using an immutable data store, such as one or more blockchains in a tiered blockchain structure. The system can store machine data and additional associated data in a first lightweight blockchain, and store grouped sets of the data in a second robust blockchain. The system can implement the tiered blockchain structure to efficiently store and reference the hashes to validate the machine data at different times or upon request from an end-user.
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
H04L 9/00 - Arrangements for secret or secure communicationsNetwork security protocols
H04L 9/32 - Arrangements for secret or secure communicationsNetwork security protocols including means for verifying the identity or authority of a user of the system
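As an illustration of the item-and-batch hashing described in the machine data validation entry above, here is a minimal Python sketch; it assumes SHA-256 and uses plain in-memory lists to stand in for the lightweight and robust blockchains. Block linkage, consensus, and the TieredLedger name are illustrative, not part of the disclosure.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TieredLedger:
    """Illustrative two-tier store: per-item hashes in a lightweight tier,
    batch hashes over groups of items in a robust tier."""
    def __init__(self, batch_size: int = 4):
        self.batch_size = batch_size
        self.light_tier = []    # one hash per machine-data item
        self.robust_tier = []   # one hash per batch of items
        self._pending = []

    def append(self, item: dict) -> None:
        item_hash = sha256(json.dumps(item, sort_keys=True).encode())
        self.light_tier.append(item_hash)
        self._pending.append(item_hash)
        if len(self._pending) == self.batch_size:
            batch_hash = sha256("".join(self._pending).encode())
            self.robust_tier.append(batch_hash)
            self._pending = []

    def validate(self, index: int, item: dict) -> bool:
        """Re-hash the item and compare against the stored lightweight entry."""
        return self.light_tier[index] == sha256(json.dumps(item, sort_keys=True).encode())

ledger = TieredLedger()
events = [{"host": "web01", "msg": f"login {i}"} for i in range(4)]
for e in events:
    ledger.append(e)
print(ledger.validate(2, events[2]))                            # True
print(ledger.validate(2, {"host": "web01", "msg": "tampered"}))  # False
```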
67.
Managing event group definitions in service monitoring systems
Network connected devices are controlled via the transmission of action messages to prevent or correct conditions that impair the operation of networked information technology (IT) assets. The service monitoring system (SMS) monitoring the IT environment groups together related notable events that are received during system operation. Automatic processes dynamically determine grouping operations that automatically correlate the events without requiring, for example, a set of declarative grouping rules. Event grouping may be performed on a by-service basis to facilitate the complex processing of predicting undesirable system conditions that may be prevented or reduced by transmission of the action messages to the appropriate assets. Event grouping operations may be directed with control information maintained via a user interface.
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
Queries may be resolved against large quantities of collected data (such as traces) by dividing the collected data into multiple time intervals and incrementally assigning the query, together with the portion of collected data for each interval, to multiple workers. For each time interval, these workers may conditionally update one or more summary data structures within the worker based on the query and the portion of collected data assigned to the worker. The summary data structures for each time interval may then be incrementally returned and merged with results from earlier time intervals to create a final merged query result.
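A minimal sketch of the incremental approach is shown below, assuming trace spans carry service and duration_ms fields and using a per-service count/total as the summary data structure; the field names and the two-interval split are illustrative only.

```python
from collections import defaultdict

def summarize(spans):
    """Per-worker summary: count and total duration per service (illustrative)."""
    summary = defaultdict(lambda: {"count": 0, "total_ms": 0.0})
    for span in spans:
        s = summary[span["service"]]
        s["count"] += 1
        s["total_ms"] += span["duration_ms"]
    return summary

def merge(into, new):
    """Merge one interval's summary into the running result."""
    for service, stats in new.items():
        acc = into.setdefault(service, {"count": 0, "total_ms": 0.0})
        acc["count"] += stats["count"]
        acc["total_ms"] += stats["total_ms"]
    return into

# Collected spans split into two time intervals (hypothetical data);
# each interval could be handled by a separate worker.
intervals = [
    [{"service": "checkout", "duration_ms": 120}, {"service": "auth", "duration_ms": 40}],
    [{"service": "checkout", "duration_ms": 90}],
]
merged = {}
for spans in intervals:
    merged = merge(merged, summarize(spans))
print(merged)
```

Because each interval's summary is merged as soon as it returns, partial results can be surfaced before all intervals have been processed.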
Techniques are described for providing a threat analysis platform capable of automating actions performed to analyze security-related threats affecting IT environments. Users or applications can submit objects (e.g., URLs, files, etc.) for analysis by the threat analysis platform. Once submitted, the threat analysis platform routes the objects to dedicated engines that can perform static and dynamic analysis processes to determine a likelihood that an object is associated with malicious activity such as phishing attacks, malware, or other types of security threats. The automated actions performed by the threat analysis platform can include, for example, navigating to submitted URLs and recording activity related to accessing the corresponding resource, analyzing files and documents by extracting text and metadata, extracting and emulating execution of embedded macro source code, performing optical character recognition (OCR) and other types of image analysis, submitting objects to third-party security services for analysis, among many other possible actions.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating a 3D representation of the 3D environment that includes one or more 3D meshes. The method also includes determining at least a portion of the 3D environment that falls within a current frame captured by an image sensor. The method further includes generating one or more additional 3D meshes representing the at least a portion of the 3D environment and combining the one or more additional 3D meshes with the one or more 3D meshes into an update to the 3D representation of the 3D environment.
A computerized method is disclosed that includes operations of detecting user input to a first webpage rendered within a web browser, wherein the user input corresponds to closure of the first webpage; providing an indication of the user input corresponding to the closure of the first webpage to a web browser extension operating in accordance with the web browser, wherein the indication includes an identifier; performing, by the web browser extension, a search for the identifier within a URL of each webpage currently opened by the web browser in order to determine that a second webpage is associated with the first webpage based on inclusion of the identifier in a URL of the second webpage; and initiating, by the web browser extension, closure of the second webpage associated with the first webpage following the user input corresponding to closure of the first webpage.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Exploratory data analysis system for automated generation of search queries using machine learning techniques to identify certain log fields and correlation thereof
Implementations of this disclosure provide a search assistant engine that integrates with a data intake and query system and provides an intuitive user interface to assist a user in searching and evaluating indexed event data. Additionally, the search assistant engine provides logic to intelligently present data to the user through the user interface, such as determining fields of events likely to be of interest based on a mutual information score for each field and determining groups of related fields based on a mutual information score for each field grouping. Some implementations utilize machine learning techniques in certain analyses, such as when clustering events and determining an event template for each cluster. Additionally, the search assistant engine may import terms or characters from user interaction into predetermined search query templates to generate a tailored search query for the user.
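As an illustration of scoring field relatedness with mutual information, the following sketch uses scikit-learn's mutual_info_score over a handful of hypothetical extracted field values; the field names and the pairwise-scoring approach are illustrative, not the engine's actual logic.

```python
from itertools import combinations
from sklearn.metrics import mutual_info_score

# Hypothetical extracted field values for a handful of indexed events.
events = [
    {"status": "500", "component": "db",  "region": "us-east"},
    {"status": "500", "component": "db",  "region": "us-west"},
    {"status": "200", "component": "web", "region": "us-east"},
    {"status": "200", "component": "web", "region": "us-west"},
]
fields = ["status", "component", "region"]
columns = {f: [e[f] for e in events] for f in fields}

# Score each field pair; higher mutual information suggests the fields are related
# and could be grouped together in the interface.
for a, b in combinations(fields, 2):
    print(f"{a} <-> {b}: {mutual_info_score(columns[a], columns[b]):.3f}")
```

In this toy data, status and component vary together (high score) while region is independent of both (score near zero), which is the kind of signal a field-grouping view could surface.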
A device that includes an extended reality (XR) application is employed by a user to access an XR environment. A selection of a first subset of dashboard panels included in a plurality of dashboard panels is received via an input device associated with the XR environment. Each dashboard panel included in the plurality of dashboard panels includes a visual representation of data. The first subset of dashboard panels is displayed in a foreground area of a workspace of the XR environment. A second subset of dashboard panels included in the plurality of dashboard panels is displayed in a background area of the workspace of the XR environment.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 9/451 - Execution arrangements for user interfaces
77.
Collaboration spaces in extended reality conference sessions
Extended reality (XR) software application programs establish remote collaboration sessions in which a host device and one or more remote devices can interact. When initiating a remote collaboration session, an XR application in a host device determines a collaboration area. The collaboration area corresponds to a portion of a real-world environment that is shared by the host device with the one or more remote devices. In some embodiments, the collaboration area can be determined automatically and/or based on user input. The XR application causes sensors associated with the host device to scan the collaboration area. Then, the XR application transmits, to the one or more remote devices, a three-dimensional representation of the collaboration area for rendering in one or more remote XR environments.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components
H04L 65/1069 - Session establishment or de-establishment
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
Techniques are described for providing a threat analysis platform capable of automating actions performed to analyze security-related threats affecting IT environments. Users or applications can submit objects (e.g., URLs, files, etc.) for analysis by the threat analysis platform. Once submitted, the threat analysis platform routes the objects to dedicated engines that can perform static and dynamic analysis processes to determine a likelihood that an object is associated with malicious activity such as phishing attacks, malware, or other types of security threats. The automated actions performed by the threat analysis platform can include, for example, navigating to submitted URLs and recording activity related to accessing the corresponding resource, analyzing files and documents by extracting text and metadata, extracting and emulating execution of embedded macro source code, performing optical character recognition (OCR) and other types of image analysis, submitting objects to third-party security services for analysis, among many other possible actions.
A data intake and query system receives raw machine data over a network via a protocol such as the hypertext transfer protocol (HTTP). The system has configurable global settings for the received raw machine data that determine properties such as the metadata that is associated with the raw machine data. Each event is associated with a token, which is also configurable and provides settings such as metadata settings for the raw machine data. The raw machine data is stored as events based on the metadata. Electronic devices that generate raw machine data may transmit the raw machine data to the data intake and query system within HTTP messages. The HTTP messages may also include settings such as metadata for the raw machine data. The raw machine data is stored as events based on the global metadata settings, token metadata settings, and HTTP message metadata settings.
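A minimal sketch of layering the three kinds of settings is shown below; the precedence order (HTTP message settings over token settings over global settings) and the setting names are assumptions made for illustration, not a statement of the system's actual precedence rules.

```python
def resolve_event_metadata(global_settings, token_settings, message_settings):
    """Later (more specific) layers override earlier ones: global, then token, then message."""
    merged = dict(global_settings)
    merged.update(token_settings)
    merged.update(message_settings)
    return merged

event_metadata = resolve_event_metadata(
    {"index": "main", "sourcetype": "generic"},      # hypothetical system-wide defaults
    {"index": "web", "source": "token:frontend"},    # hypothetical settings tied to the token
    {"sourcetype": "nginx:access"},                   # hypothetical settings carried in the HTTP message
)
print(event_metadata)
# {'index': 'web', 'sourcetype': 'nginx:access', 'source': 'token:frontend'}
```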
Various implementations set forth a computer-implemented method for scanning a three-dimensional (3D) environment. The method includes generating, in a first time interval, a first extended reality (XR) stream based on a first set of meshes representing a 3D environment, transmitting, to a remote device, the first XR stream for rendering a 3D representation of a first portion of the 3D environment in a remote XR environment, determining that the 3D environment has changed based on a second set of meshes representing the 3D environment and generated subsequent to the first time interval, generating a second XR stream based on the second set of meshes, and transmitting, to the remote device, the second XR stream for rendering a 3D representation of at least a portion of the changed 3D environment in the remote XR environment.
Systems and methods are described for processing ingested data, detecting anomalies in the ingested data, and providing explanations of a possible cause of the detected anomalies as the data is being ingested. For example, a token or field in the ingested data may have an anomalous value. Tokens or fields from another portion of the ingested data can be extracted and analyzed to determine whether there is any correlation between the values of the extracted tokens or fields and the anomalous token or field having an anomalous value. If a correlation is detected, this information can be surfaced to a user.
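One simple way to surface such a correlation is a lift-style comparison between how often a field value appears alongside the anomaly versus how often it appears overall; the sketch below is illustrative, and the min_lift threshold and field names are assumptions.

```python
from collections import Counter

def correlated_fields(events, anomaly_field, anomaly_value, min_lift=2.0):
    """Surface (field, value) pairs that co-occur with the anomalous value
    noticeably more often than they occur overall (lift >= min_lift)."""
    anomalous = [e for e in events if e.get(anomaly_field) == anomaly_value]
    findings = []
    for field in {k for e in events for k in e} - {anomaly_field}:
        overall = Counter(e.get(field) for e in events)
        within = Counter(e.get(field) for e in anomalous)
        for value, count in within.items():
            p_within = count / len(anomalous)
            p_overall = overall[value] / len(events)
            if p_overall and p_within / p_overall >= min_lift:
                findings.append((field, value, round(p_within / p_overall, 2)))
    return findings

events = [
    {"status": "500", "host": "web03"}, {"status": "500", "host": "web03"},
    {"status": "200", "host": "web01"}, {"status": "200", "host": "web02"},
    {"status": "200", "host": "web03"}, {"status": "200", "host": "web01"},
]
print(correlated_fields(events, "status", "500"))   # host=web03 stands out
```

A finding like the one above ("the anomalous status values all come from web03") is the kind of explanation that could be surfaced to the user as the data is ingested.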
A method of persisting and querying Real User Monitoring (RUM) data comprises grouping together spans associated with a user-interaction with a webpage or application that are ingested during a given time duration. The method also comprises generating one or more data sets each associated with an analysis modality using the grouped spans, wherein each analysis modality extracts a different level of detail from the spans. Further, the method comprises selecting, based on a first user query, a first analysis modality for generating a response to the first user query and accessing a data set that is associated with the first analysis modality. The method also comprises generating the response to the first user query using the data set associated with the first analysis modality, wherein the first user query requests information pertaining to a performance of the application in response to the user-interaction.
A computing device can receive a query that identifies a set of data to be processed and determine that a portion of the set of data resides in an external data system. The query system can request data identifiers associated with data objects of the set of data from the external data system and communicate the data identifiers to a data queue. The computing device can instruct one or more search nodes to retrieve the identifiers from the data queue. The search nodes can use the data identifiers to retrieve data objects from the external data system and process the data objects according to instructions received from the computing device. The search nodes can provide results of the processing to the computing device.
A computerized method is disclosed for grouping alerts through machine learning. The method includes receiving an alert to be assigned to any of a plurality of existing issues or to a newly created issue, wherein an issue is a grouping of alerts; determining a temporal distance between the alert and each of the existing issues; determining either (i) a numerical distance between the alert and each of the existing issues for a particular numerical field, or (ii) a categorical distance between the alert and each of the existing issues for a particular categorical field; determining an overall distance between the alert and each of the existing issues; and assigning the alert to either (i) an existing issue having the shortest overall distance to the alert that satisfies one or more time constraints, or (ii) the newly created issue.
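The following Python sketch illustrates the general shape of such distance-based assignment, with temporal, numerical, and categorical components combined as a weighted sum; the weights, the six-hour time constraint, and the alert fields (time, severity, source) are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    alerts: list = field(default_factory=list)

    @property
    def last_time(self):
        return max(a["time"] for a in self.alerts)

def distance(alert, issue, w_time=0.5, w_num=0.3, w_cat=0.2):
    """Illustrative overall distance: weighted sum of temporal, numerical,
    and categorical components against the issue's most recent alert."""
    ref = max(issue.alerts, key=lambda a: a["time"])
    d_time = abs(alert["time"] - ref["time"]) / 3600.0        # hours apart
    d_num = abs(alert["severity"] - ref["severity"]) / 10.0   # severity scaled 0-10
    d_cat = 0.0 if alert["source"] == ref["source"] else 1.0  # same source or not
    return w_time * d_time + w_num * d_num + w_cat * d_cat

def assign(alert, issues, max_distance=1.0, max_age_s=6 * 3600):
    """Assign to the nearest existing issue that satisfies the time constraint,
    or open a new issue when none qualifies."""
    candidates = [(distance(alert, i), i) for i in issues
                  if alert["time"] - i.last_time <= max_age_s]
    candidates = [(d, i) for d, i in candidates if d <= max_distance]
    if candidates:
        best = min(candidates, key=lambda c: c[0])[1]
    else:
        best = Issue()
        issues.append(best)
    best.alerts.append(alert)
    return best

issues = [Issue(alerts=[{"time": 1000, "severity": 7, "source": "fw"}])]
assign({"time": 1600, "severity": 8, "source": "fw"}, issues)    # joins the existing issue
assign({"time": 90000, "severity": 2, "source": "ids"}, issues)  # too old/far: new issue
print(len(issues))  # 2
```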
Various implementations set forth a computer-implemented method for scanning a three-dimensional (3D) environment. The method includes generating, in a first time interval, a first extended reality (XR) stream based on a first set of meshes representing a 3D environment, transmitting, to a remote device, the first XR stream for rendering a 3D representation of a first portion of the 3D environment in a remote XR environment, determining that the 3D environment has changed based on a second set of meshes representing the 3D environment and generated subsequent to the first time interval, generating a second XR stream based on the second set of meshes, and transmitting, to the remote device, the second XR stream for rendering a 3D representation of at least a portion of the changed 3D environment in the remote XR environment.
A computerized method is disclosed that includes operations of obtaining historical network traffic data and preparing a training set of data by: applying security rules to the historical network traffic data to obtain a first filtered subset of network transmissions representing a first set of beaconing candidates, which is labeled to form a first set of labeled results; applying clustering logic to the historical network traffic data to obtain a second filtered subset of network transmissions representing a second set of beaconing candidates, which is labeled to form a second set of labeled results; and applying a machine learning model to the historical network traffic data to label the historical network traffic data, forming a third set of labeled results, wherein the first, second, and third sets of labeled results are combined to form an augmented labeled training set. The method further includes training a machine learning model using the augmented labeled training set.
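As a toy illustration of augmenting the three label sources into one training set, the sketch below concatenates rule-derived, cluster-derived, and model-derived labels and fits a scikit-learn logistic regression; the two features (interval regularity and byte variance), the label values, and the classifier choice are assumptions made purely for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical feature rows: [interval_regularity, bytes_variance]; label 1 = beaconing candidate.
rule_labeled    = [([0.95, 0.02], 1), ([0.10, 0.90], 0)]   # from security rules
cluster_labeled = [([0.90, 0.05], 1), ([0.20, 0.80], 0)]   # from clustering similar transmissions
model_labeled   = [([0.85, 0.10], 1), ([0.15, 0.70], 0)]   # from an earlier model's labels

# Augment the three label sources into a single training set.
augmented = rule_labeled + cluster_labeled + model_labeled
X = [features for features, _ in augmented]
y = [label for _, label in augmented]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.92, 0.03]]))  # highly regular, low-variance traffic: likely flagged
```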
A computerized method is disclosed for grouping alerts and providing remediation recommendations. The method includes receiving an alert to be assigned to an existing open issue or a newly created issue, wherein an issue is a grouping of one or more alerts; assigning the alert to either a first existing open issue or the newly created issue by determining a weighted sum of the distances between the feature vectors of the alert and each existing open issue; determining a weighted sum of the distances between the feature vectors of the alert and each closed issue; and generating a user interface that illustrates an assignment of the alert and at least one of (i) a closed issue having the shortest distance to the alert or (ii) recommended remediation efforts associated with the closed issue having the shortest distance to the alert.
The present invention is related to a method for providing dynamic indexer discovery. The method comprises receiving, from an index manager, a status indication associated with a plurality of indexers, wherein each of the plurality of indexers indexes events of raw machine-generated data received from a plurality of data collectors. The method further comprises determining a weight associated with each of the plurality of indexers and selecting an indexer from the plurality of indexers. Subsequently, the method comprises allocating data to the indexer in accordance with a respective weight assigned to the indexer and transmitting the allocated data to the indexer.
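A minimal sketch of weight-proportional indexer selection is shown below, using Python's random.choices; the weight values and indexer names are hypothetical, and a real forwarder would transmit the allocated data rather than print the choice.

```python
import random

def pick_indexer(indexer_weights):
    """Weighted selection: indexers with higher weight receive proportionally more data."""
    names = list(indexer_weights)
    weights = [indexer_weights[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Hypothetical weights derived from each indexer's reported status (capacity, load, etc.).
weights = {"indexer-a": 5, "indexer-b": 3, "indexer-c": 1}
batch = ["event-1", "event-2", "event-3", "event-4"]
for event in batch:
    target = pick_indexer(weights)
    print(f"{event} -> {target}")   # in practice, transmit the event to this indexer
```

Because the status indications arrive periodically, the weights can be recomputed as indexers join, leave, or change load, and subsequent allocations follow the updated weights automatically.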
Information retrieved from monitoring agents currently installed on instrumented entities within a system is analyzed to discover additional entities within the system that are connected to the instrumented entities. Each of these discovered entities is analyzed to determine whether a monitoring agent is able to be installed within the entity; if installation is possible, such installation is automatically performed (or a guided manual installation is implemented utilizing an interface). After a monitoring agent is installed within a discovered entity, information is retrieved from that monitoring agent and is used to discover additional entities within the system that are connected to that discovered entity. In this way, an iterative discovery of all entities within a system may be performed.
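A breadth-first loop captures the iterative flavor of this discovery process; in the sketch below the CONNECTIONS map stands in for information retrieved from installed agents and INSTALLABLE stands in for the installability check, both of which are illustrative.

```python
from collections import deque

# Hypothetical connectivity reported by monitoring agents: entity -> connected entities.
CONNECTIONS = {
    "host-a": ["host-b", "db-1"],
    "host-b": ["db-1", "printer-1"],
    "db-1": [],
    "printer-1": [],
}
INSTALLABLE = {"host-a", "host-b", "db-1"}  # entities that can accept a monitoring agent

def discover(seed_entities):
    """Iteratively install agents and use each new agent's view to find more entities."""
    instrumented, seen = set(), set(seed_entities)
    frontier = deque(seed_entities)
    while frontier:
        entity = frontier.popleft()
        if entity not in INSTALLABLE:
            continue  # would instead fall back to a guided manual installation
        instrumented.add(entity)
        for neighbor in CONNECTIONS.get(entity, []):  # info retrieved from the new agent
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return instrumented, seen

print(discover(["host-a"]))  # instrumented hosts plus every entity discovered along the way
```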
A computer system displays a graphical user interface (GUI) that includes data visualizations corresponding to data having timestamps within a time interval. A first type of input signal is mapped to a second type of input signal. The first type of input signal is associated with an input device communicatively coupled to the computer system. The second type of input signal is configured to operate a graphical user control of the GUI. Before mapping, the first type of input signal is configured to perform a function that is different from operation of the graphical user control. After receiving an input signal of the first type, an input signal of the second type is applied to the graphical user control based on the mapping. The time interval is adjusted, and the data visualizations are updated automatically to correspond to updated data having timestamps within the adjusted time interval.
A graphical user interface (GUI) includes multiple data visualizations and an adjustable graphical user control. The data underlying the data visualizations are timestamped, and the graphical user control enables a user to select a time interval. When a time interval is selected or modified via the graphical user control, the multiple data visualizations update automatically in real time to reflect data that correspond to the currently selected time interval.
G06F 16/26 - Visual data miningBrowsing structured data
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
93.
Unhandled data protection for programmatic input/output routing to datasets with user-defined partitions
Systems and methods are described for implementing programmatic input/output (I/O) routing to datasets with user-defined partitions while providing unhandled data protection. As disclosed herein, a user may define a dataset as including one or more partitions, each partition including criteria for storing data objects written to the partitioned dataset in the individual partitions. Data objects written to the dataset can then be evaluated according to the criteria, and routed to an appropriate partition. To provide unhandled data protection, a dataset definition can include a default partition to which data objects are routed when the data object fails to satisfy the criteria of any of the set of user-defined partitions identified in the specification. Processing I/O operations according to a user-defined partitioning schema can enable data objects to be arranged according to any partitioning schema without tethering the partitioning to a particular underlying storage system.
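A minimal sketch of criteria-based routing with a default partition is shown below; representing each partition's criteria as a predicate function and naming the default partition _unrouted are assumptions made for illustration.

```python
def route(data_object, partitions, default="_unrouted"):
    """Return the first user-defined partition whose criteria the object satisfies,
    falling back to the default partition so no object goes unhandled."""
    for name, criteria in partitions.items():
        if criteria(data_object):
            return name
    return default

# Hypothetical user-defined partitioning schema for a log dataset.
partitions = {
    "errors":  lambda obj: obj.get("level") == "ERROR",
    "us_east": lambda obj: obj.get("region") == "us-east-1",
}
print(route({"level": "ERROR", "region": "eu-west-1"}, partitions))  # errors
print(route({"level": "INFO",  "region": "us-east-1"}, partitions))  # us_east
print(route({"level": "INFO",  "region": "ap-south-1"}, partitions)) # _unrouted (default)
```

The default partition is what provides the "unhandled data protection": objects matching none of the user-defined criteria still land somewhere queryable instead of being dropped.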
Systems, methods, and software described herein provide for validating security actions before they are implemented in a computing network. In one example, a computing network may include a plurality of computing assets that provide a variety of different operations. During the operations of the network, administration systems may generate and provide security actions to prevent or mitigate the effect of a security threat on the network. However, prior to implementing the security actions within the network, computing assets may exchange security parameters with the administration systems to verify that the security actions are authentic.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A data intake and query system can generate local data enrichment objects and receive federated data enrichment objects from another data intake and query system. In response to receiving a query, the data intake and query system can determine whether the query is a subquery of a federated query. If the query is a subquery, the data intake and query system can use the federated data enrichment objects to execute the query.
A computerized method is disclosed for retraining machine learning models based on user feedback. The method includes receiving user feedback indicating a change is to be made to an assignment of one or more alerts to issues, wherein the one or more alerts were assigned by a machine learning model implementing a distance metric and an issue is a grouping of at least one alert; constructing a convex optimization procedure to minimize the adjustment of the weights of the distance metric; retraining the machine learning model by adjusting the weights of the distance metric in accordance with the convex optimization procedure; and evaluating one or more subsequently received alerts using the retrained machine learning model. Changes to be made to the assignment include any of merging two issues, splitting an issue based on time or an alert field, or reassigning an alert from a first issue to a second issue.
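The sketch below gives a simplified stand-in for such a procedure using SciPy: it finds weights as close as possible to the current ones (a convex objective) subject to a constraint that the corrected alert-to-issue distance falls below a threshold. The specific constraint form, the threshold, and the component distances are assumptions for illustration, not the disclosed formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Current weights of the distance metric and the per-component distances between an
# alert and the issue the user says it should have joined (illustrative values).
w_old = np.array([0.5, 0.3, 0.2])    # weights for time, numerical, categorical distances
d_pair = np.array([0.8, 0.6, 1.0])   # component distances for the misassigned alert
threshold = 0.5                       # the weighted distance must drop below this

# Convex objective: stay as close as possible to the existing weights.
objective = lambda w: np.sum((w - w_old) ** 2)
constraints = [
    {"type": "ineq", "fun": lambda w: threshold - w @ d_pair},  # enforce the merge feedback
]
bounds = [(0.0, None)] * len(w_old)   # weights stay non-negative

result = minimize(objective, w_old, bounds=bounds, constraints=constraints)
print(np.round(result.x, 3))          # minimally adjusted weights honoring the feedback
```

Keeping the adjustment minimal means past assignments that the user did not complain about are disturbed as little as possible while the corrected case is honored.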
Systems and methods are described for display of metric data and log data in a graphical user interface. Metric data can be ingested from a first data source via a first ingestion path and log data can be ingested from a second data source via a second ingestion path. The first data source and the second data source may be distinct, disparate data sources and the first ingestion path and the second ingestion path may be distinct, disparate ingestion paths. The metric data can be displayed in a first area of the graphical user interface and the log data can be displayed in a second area of the graphical user interface. Input can be received identifying a selection of a portion of the metric data for display and the log data can be filtered based on the selection to identify a portion of the log data for display.
A client device that includes a camera and an extended reality client application program is employed by a user in a physical space, such as an industrial or campus environment. The user aims the camera within the mobile device at a real-world asset, such as a computer system, classroom, or vehicle. The client device acquires a digital image via the camera and detects textual and/or pictorial content included in the acquired image that corresponds to one or more anchors. The client device queries a data intake and query system for asset content associated with the detected anchors. Upon receiving the asset content from the data intake and query system, the client device generates visualizations of the asset content and presents the visualizations via a display device.
Techniques are described for providing a threat analysis platform capable of automating actions performed to analyze security-related threats affecting IT environments. Users or applications can submit objects (e.g., URLs, files, etc.) for analysis by the threat analysis platform. Once submitted, the threat analysis platform routes the objects to dedicated engines that can perform static and dynamic analysis processes to determine a likelihood that an object is associated with malicious activity such as phishing attacks, malware, or other types of security threats. The automated actions performed by the threat analysis platform can include, for example, navigating to submitted URLs and recording activity related to accessing the corresponding resource, analyzing files and documents by extracting text and metadata, extracting and emulating execution of embedded macro source code, performing optical character recognition (OCR) and other types of image analysis, submitting objects to third-party security services for analysis, among many other possible actions.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A computerized method is disclosed that includes operations of training a machine learning model using a labeled training set of data, wherein the machine learning model is configured to classify domain name system (DNS) records; obtaining DNS record data including at least a first DNS TXT record; applying the trained machine learning model to the first DNS TXT record to classify the first DNS TXT record; and, responsive to the classification of the first DNS TXT record, generating a flag for a system administrator. The trained machine learning model may classify the first DNS TXT record using logistic regression. In some instances, applying the trained machine learning model to the first DNS TXT record includes performing a tokenizing operation on the first DNS TXT record to generate a tokenized first DNS TXT record.
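A toy version of such a classifier can be put together with scikit-learn, tokenizing TXT record payloads and fitting a logistic regression; the tiny training set, the token pattern, and the labels below are purely illustrative and not drawn from the disclosure.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: TXT record payloads labeled 1 (suspicious) or 0 (benign).
records = [
    "v=spf1 include:_spf.example.com ~all",
    "google-site-verification=abc123",
    "eval base64 decode payload exec",
    "powershell -enc aGVsbG8gd29ybGQ=",
]
labels = [0, 0, 1, 1]

# Tokenize the record text, then classify with logistic regression.
model = make_pipeline(
    CountVectorizer(token_pattern=r"[A-Za-z0-9_=\-]+"),
    LogisticRegression(),
)
model.fit(records, labels)

new_record = "cmd exec base64 payload"
if model.predict([new_record])[0] == 1:
    print("flagging TXT record for administrator review")
```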