Techniques of implementing partition level operations with concurrent activities are disclosed. A first operation can be performed on a first partition of a table of data. The first partition can be one of a plurality of partitions of the table, where each partition has a plurality of rows. A first partition level lock can be applied to the first partition for a period in which the first operation is being performed on the first partition, thereby preventing any operation other than the first operation from being performed on the first partition during the period the first partition level lock is being applied to the first partition. A second operation can be performed on a second partition of the table at a point in time during which the first operation is being performed on the first partition.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/176 - Support for shared access to files; File sharing support
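The partition-level locking described in the entry above can be pictured with a minimal Python sketch: each partition carries its own lock, so an operation holding the lock on one partition blocks only other operations on that same partition, while a second operation on a different partition proceeds concurrently. The PartitionedTable class, the insert_rows and truncate_partition helpers, and the list-of-rows layout are illustrative assumptions, not the patented implementation.

import threading
from collections import defaultdict

class PartitionedTable:
    """A table split into partitions, each guarded by its own partition-level lock."""

    def __init__(self, num_partitions):
        self.partitions = defaultdict(list)                      # partition id -> rows
        self.locks = [threading.Lock() for _ in range(num_partitions)]

    def run_partition_op(self, partition_id, operation):
        # The lock blocks every other operation on *this* partition for the
        # duration of the operation; other partitions use different locks,
        # so concurrent activity on them is unaffected.
        with self.locks[partition_id]:
            operation(self.partitions[partition_id])

def insert_rows(rows):
    return lambda partition: partition.extend(rows)

def truncate_partition():
    return lambda partition: partition.clear()

if __name__ == "__main__":
    table = PartitionedTable(num_partitions=4)
    t1 = threading.Thread(target=table.run_partition_op, args=(0, truncate_partition()))
    t2 = threading.Thread(target=table.run_partition_op, args=(1, insert_rows([("a", 1)])))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(dict(table.partitions))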
A system for managing database logging comprises a processor; and a user task executing in a database server process and executable by the processor, the user task to: receive in a database management system on a database server, a command to manipulate a portion of a database managed by the database management system; obtain a lock on the portion of the database; create a first log record in a first private log cache associated with the user task, the first log record recording a data manipulation to the portion of the database; enqueue the first log record to a queue; and release the lock on the portion of the database after copying the first log record to the queue.
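As a rough sketch of the private-log-cache flow in the entry above: the task obtains the lock, performs the manipulation, records it in its own private log cache, copies the record to the shared queue, and only then releases the lock. The dict-based "portion" of the database, the UserTask class, and the Python queue standing in for the shared log queue are assumptions for illustration, not the patented design.

import queue
import threading

shared_log_queue = queue.Queue()          # queue drained by a log-writer task
portion_lock = threading.Lock()           # lock on the manipulated portion of the database

class UserTask:
    def __init__(self, name):
        self.name = name
        self.private_log_cache = []       # per-task PLC: appended to without contention

    def manipulate(self, portion, new_value):
        with portion_lock:                               # obtain lock on the portion
            old_value = portion.get("value")
            portion["value"] = new_value                 # the data manipulation
            record = {"task": self.name, "old": old_value, "new": new_value}
            self.private_log_cache.append(record)        # create the log record in the PLC
            shared_log_queue.put(record)                 # enqueue before releasing the lock
        # lock released only after the record has been copied to the queue

if __name__ == "__main__":
    db_portion = {"value": 0}
    UserTask("task-1").manipulate(db_portion, 42)
    print(shared_log_queue.get_nowait())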
Techniques are provided for providing multi-factor authentication with Uniform Resource Locator (URL) validation (MFAUV). One of the multiple authentication factors used may include a unique, user-specific URL that is sent to the user within a message. In this way, the user may simply click on, or otherwise execute or select, the provided URL, directly from within the message in which the URL is provided.
G06F 21/35 - User authentication involving the use of external additional devices, e.g. dongles or smart cards communicating wirelessly
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A method for reliable data synchronization within a network is disclosed. The producer system stores data in a persistent data store and produces one or more data updates. The producer system simultaneously transmits the data updates to a consumer system and initiates storage of the data updates at the producer system. When storage of the data updates at the producer system is complete, the producer system transmits a first acknowledgment to the consumer system. The producer system determines whether a second acknowledgment has been received from the consumer system, wherein the second acknowledgment indicates that the consumer system has successfully stored the data updates at the consumer system. In accordance with a determination that the second acknowledgment has been received from the consumer system, the producer system changes the temporary status of the data updates stored at the producer system to a permanent status.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
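The acknowledgement flow in the reliable-synchronization entry above can be sketched as follows. The sketch is sequential rather than truly simultaneous, and the Producer and Consumer classes, the publish method, and the status strings are illustrative assumptions only.

TEMPORARY, PERMANENT = "temporary", "permanent"

class Consumer:
    def __init__(self):
        self.store = []

    def receive(self, updates):
        # The consumer persists the updates and acknowledges success.
        self.store.extend(updates)
        return True                      # the "second acknowledgment"

class Producer:
    def __init__(self, consumer):
        self.consumer = consumer
        self.store = {}                  # update id -> (payload, status)

    def publish(self, update_id, payload):
        # Transmit to the consumer and store locally (conceptually in parallel).
        consumer_ack = self.consumer.receive([(update_id, payload)])
        self.store[update_id] = (payload, TEMPORARY)   # local storage complete
        # the first acknowledgment would be sent to the consumer at this point
        if consumer_ack:                 # second acknowledgment received
            payload, _ = self.store[update_id]
            self.store[update_id] = (payload, PERMANENT)

if __name__ == "__main__":
    producer = Producer(Consumer())
    producer.publish(1, {"row": 7, "value": "x"})
    print(producer.store)                # {1: ({'row': 7, 'value': 'x'}, 'permanent')}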
Techniques are provided for providing multi-factor authentication with Uniform Resource Locator (URL) validation (MFAUV). One of the multiple authentication factors used may include a unique, user-specific URL that is sent to the user within a message. In this way, the user may simply click on, or otherwise execute or select, the provided URL, directly from within the message in which the URL is provided.
G06F 21/35 - User authentication involving the use of external additional devices, e.g. dongles or smart cards communicating wirelessly
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 29/06 - Communication control; Communication processing characterised by a protocol
Disclosed herein are methods for retrieving data from a database. An embodiment operates by searching for a key in a first index. The method determines that the searching will require a storage access request and issues the storage access request. The method continues searching for the key in a second index.
An operator tree is formed for a data processing plan, the operator tree containing a plurality of interconnected nodes and including a grouping of two or more duplicative portions, each of the two or more duplicative portions having identical nodes and structure such that when the operator tree is executed, operators executed in a first duplicative portion using a first thread perform the same functions, using different data, as operators in a second duplicative portion using a second thread. One or more operators in the first portion and one or more operators in the second portion to be synchronized with each other are identified. A synchronization point is created for the identified operators in the first thread and one or more subsequent threads, wherein the synchronization point receives information from each of the identified operators to build an artifact to deliver to one or more operators that depend on the artifact.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/21 - Design, administration or maintenance of databases
G06F 9/52 - Program synchronisation; Mutual exclusion, e.g. by means of semaphores
8.
Optimizing performance in CEP systems via CPU affinity
In an example embodiment, performance is optimized in a complex event stream (CEP) system. Information about a plurality of CEP threads is obtained. Then nearness among the plurality of CEP threads is determined, wherein nearness between a first and a second CEP thread indicates how much interaction is expected to occur between the first and second CEP thread. Based on the determined nearness, the plurality of CEP threads are organized into a plurality of CEP thread groups. Then, each of the plurality of CEP thread groups are assigned to a different processing node, with each processing node having one or more processors and a memory.
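A minimal sketch of the nearness-based grouping described above: pairs of CEP threads with the highest expected interaction are greedily placed in the same group, and groups are then pinned to different processing nodes. The greedy pairing heuristic, the group-size limit, and the node names are assumptions for illustration, not the claimed method.

from itertools import combinations

def group_by_nearness(threads, nearness, group_size):
    """Greedily co-locate the thread pairs that interact the most."""
    pairs = sorted(combinations(threads, 2),
                   key=lambda p: nearness.get(frozenset(p), 0), reverse=True)
    group_of, groups = {}, []
    for a, b in pairs:
        if a not in group_of and b not in group_of:
            groups.append([a, b])
            group_of[a] = group_of[b] = len(groups) - 1
        elif a in group_of and b not in group_of and len(groups[group_of[a]]) < group_size:
            groups[group_of[a]].append(b); group_of[b] = group_of[a]
        elif b in group_of and a not in group_of and len(groups[group_of[b]]) < group_size:
            groups[group_of[b]].append(a); group_of[a] = group_of[b]
    for t in threads:                       # threads with no strong interaction
        if t not in group_of:
            groups.append([t]); group_of[t] = len(groups) - 1
    return groups

def assign_to_nodes(groups, nodes):
    # Each CEP thread group is pinned to a different processing node
    # (wrapping around if there are more groups than nodes).
    assignment = {node: [] for node in nodes}
    for i, grp in enumerate(groups):
        assignment[nodes[i % len(nodes)]].append(grp)
    return assignment

if __name__ == "__main__":
    threads = ["t1", "t2", "t3", "t4"]
    nearness = {frozenset(("t1", "t2")): 9, frozenset(("t3", "t4")): 7,
                frozenset(("t1", "t3")): 1}
    groups = group_by_nearness(threads, nearness, group_size=2)
    print(assign_to_nodes(groups, nodes=["node-0", "node-1"]))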
Techniques of implementing partition level operations with concurrent activities are disclosed. A first operation can be performed on a first partition of a table of data. The first partition can be one of a plurality of partitions of the table, where each partition has a plurality of rows. A first partition level lock can be applied to the first partition for a period in which the first operation is being performed on the first partition, thereby preventing any operation other than the first operation from being performed on the first partition during the period the first partition level lock is being applied to the first partition. A second operation can be performed on a second partition of the table at a point in time during which the first operation is being performed on the first partition.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/176 - Support for shared access to files; File sharing support
Disclosed in some examples is a method of database replication, the method including at a Relational Database Management System (RDMS), determining a first replication mode; identifying a triggering event; determining that the triggering event indicates a change in the first replication mode; responsive to determining that the triggering event indicates a change in the first replication mode, determining a second replication mode, the second replication mode being a different replication mode than the first replication mode; identifying a database change made by one or more database tasks; and replicating the database change to an external replication component according to the second replication mode.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
11.
Splitting of a join operation to allow parallelization
A system and method for processing a database query is described. In response to detection that a database query involves a star or snowflake join operation, a join operator in a preliminary query plan can be split into a build operator and a probe operator. The probe operator can be placed in a final query plan in the same place as the join operator in the preliminary query plan, while the build operator can be placed beneath the probe operator in the final query plan, between an exchange operator and the exchange operator's child from the preliminary query plan.
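The build/probe split described above is essentially the two halves of a hash join expressed as separate operators; a compact sketch follows, with table layouts and column names invented for illustration. Splitting the join this way lets the build side run below an exchange operator while several probe workers consume the same hash table in parallel.

def build(rows, key):
    """Build operator: materialize a hash table on the join key."""
    table = {}
    for row in rows:
        table.setdefault(row[key], []).append(row)
    return table

def probe(rows, key, hash_table):
    """Probe operator: stream the other input and emit matching joined rows."""
    for row in rows:
        for match in hash_table.get(row[key], []):
            yield {**match, **row}

if __name__ == "__main__":
    dimension = [{"d_id": 1, "name": "red"}, {"d_id": 2, "name": "blue"}]
    fact = [{"d_id": 1, "qty": 10}, {"d_id": 2, "qty": 3}, {"d_id": 1, "qty": 5}]
    ht = build(dimension, "d_id")
    print(list(probe(fact, "d_id", ht)))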
In an example embodiment, a method of operating a task scheduler for one or more processors is provided. A topology of one or more processors is obtained, the topology indicating a plurality of execution units and physical resources associated with each of the plurality of execution units. A task to be performed by the one or more processors is received. Then a plurality of available execution units from the plurality of execution units is identified. An optimal execution unit is then determined, from the plurality of execution units, to which to assign the task, based on the topology. The task is then assigned to the optimal execution unit, after which the task is sent to the optimal execution unit for execution.
A method can include receiving a request to execute a database command identifying a target table; identifying a plurality of rows to insert into the target table based in part on the database command; writing rows, from the plurality of rows, into a data page until the data page is full; determining, by an index thread manager, a number of threads to use for updating indexes defined for the target table; and upon determining the data page is full, updating, in parallel, the indexes defined for the target table using the number of threads.
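A small sketch of the parallel index maintenance described above, with a thread pool standing in for the index thread manager: once a data page fills up, every index defined for the target table is updated concurrently. The page capacity, the dict-based indexes, and the helper names are assumptions, not the claimed implementation.

from concurrent.futures import ThreadPoolExecutor

PAGE_CAPACITY = 4      # rows per data page (illustrative)

def update_index(index, rows, key_column):
    # Insert every row from the filled page into one index structure.
    for row in rows:
        index.setdefault(row[key_column], []).append(row)

def flush_page(page_rows, indexes, num_threads):
    """Once a data page is full, update all indexes for it in parallel."""
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        futures = [pool.submit(update_index, idx, page_rows, col)
                   for col, idx in indexes.items()]
        for f in futures:
            f.result()

if __name__ == "__main__":
    indexes = {"id": {}, "name": {}}               # one dict per index on the target table
    page = []
    for row in [{"id": i, "name": f"n{i}"} for i in range(10)]:
        page.append(row)
        if len(page) == PAGE_CAPACITY:             # the data page is full
            flush_page(page, indexes, num_threads=2)
            page = []
    if page:
        flush_page(page, indexes, num_threads=2)
    print(sorted(indexes["id"]))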
In some example embodiments, a method comprises: receiving, by a first node of a plurality of nodes in a distributed database system on a shared disk cluster infrastructure, a transaction request to perform a user database transaction on a data item in a user database on a shared disk; acquiring, by the first node, a transaction lock for the data item; storing a lock file for the user database transaction in a lock information database on the shared disk, the lock file comprising lock information for the transaction lock and an indication of a status of the user database transaction, and the lock information comprising an identification of a location of the data item; and storing a transaction record of the user database transaction in the user database on the shared disk subsequent to the storing of the lock file in the lock information database on the shared disk.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 17/30 - Information retrieval; Database structures therefor
A database system provides a non-volatile cache memory layer for caching pages for a set of databases from the database system. The non-volatile cache memory layer may include a non-volatile cache for caching pages for a database from the set of databases on the database system. The non-volatile cache may be configured through invoking a configuring stored procedure persistent on the database system. A request is received at the non-volatile cache memory layer for performing an operation on a page from the database on the database system. Based on the received request and an identification of the page, a caching operation is performed on the non-volatile cache memory layer. The caching operation is associated with the request. Data associated with the requested operation on the page is stored and organized on the non-volatile cache memory layer.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 12/0875 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
16.
Generating a native access plan for dynamic entity calling
Disclosed herein are system, method, and computer program product embodiments for generating a native access plan from a query execution plan for dynamic entity calling. An embodiment operates by receiving the query execution plan comprising at least one call to an entity, the entity being implemented by a plurality of classes, and generating source code of a native access plan that implements the query execution plan. The source code of the native access plan includes instructions to translate a run-time call to the entity to a call to a corresponding implementation of the entity based on an identifier of the called implementation of the entity.
Disclosed herein are system, method, and computer program product embodiments for generating a native access plan for semi join operators. An embodiment operates by generating a plurality of variables based upon the positions of a plurality of operators in a compiled query plan, opening and traversing tables as the query plan is executed, and closing those tables based on the rows queried and the plurality of variables.
Disclosed herein are system, method, and computer program product embodiments for eliminating redundancy when generating intermediate representation code. An embodiment operates by traversing a query execution plan, and for at least one operator in the query execution plan, determining whether the operator is derived from a parent class operator. If it is determined that the operator is derived from the parent class operator, source code for the native access plan is generated using one or more code generator functions corresponding to the parent class operator and/or one or more generator functions specifically corresponding to the child class operator. If it is determined that the operator is not derived from the parent class operator, source code for the native access plan is generated using one or more code generator functions corresponding to the operator.
A transaction descriptor associated with a vertical chain of row versions is received. The vertical chain of row versions is traversed. The vertical chain is part of a grid structure formed by a number of vertical chains intersected with a number of horizontal chains. A link to a current row version is terminated. A link from the current row version to an older row version in a horizontal chain is locally stored and terminated. The older row version is set as ready for garbage collection. The current row version is set as ready for garbage collection. A link from the current row version to a next row version in the horizontal chain is locally stored and terminated. The next row version is appointed as current.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives a query for a set of records in a database system having values in a field of a table that fall within a range of values. The program also determines a number of bits used to represent the values in the field of the table. The program further determines a set of operations to perform on the values in the field of the table based on the determined number of bits. The program also performs the determined set of operations on the values in the field of the table in order to identify the set of records in the database.
An original query execution plan of a database query is received. The original query execution plan represents a tree of operators. Source code for the original query execution plan is generated by a single traversal of the tree of operators. The generated source code is compiled into native machine code. The native machine code represents a simplified native access plan (SNAP).
Various embodiments of systems and methods to generate native access plan source code are described herein. In one aspect, a database query is received. A query execution plan, including a parent operator and one or more descendent operators, corresponding to the database query is retrieved. Further, a check is made to determine whether the parent operator and the one or more descendent operators include at least one loop. When both the parent operator and the one or more descendent operators include at least one loop, consume points for the at least one loop are defined. The parent operator and the one or more descendent operators are merged based on consume point types to generate native access plan source code.
A method may include accepting a database query including an operator requesting two or more incoming tuple streams be combined into a result tuple stream. At least one data value in the incoming tuple streams may be represented by an enumeration value. The method may include generating a query execution plan for the database query. The query execution plan may include encoding the enumeration value and a corresponding source identifier into a composite union enumeration. The source identifier may identify which of the two or more tuple streams corresponds to the enumeration value. The method may further include executing the database query according to the query execution plan to obtain the data value and providing the data value in response to the database query.
A database query may include an operator requesting two or more incoming tuple streams be combined into a result tuple stream. Generating a query execution plan may include constructing an equivalence union enumeration lookup table for a result domain of an element within the result tuple stream by taking a set union of incoming tuple domains, wherein each distinct value within that result domain is assigned an enumeration value. Generating the query execution plan may include constructing a secondary enumeration for each incoming tuple stream, wherein each secondary enumeration maps enumerated values within the incoming tuple stream into secondary ordinal values that correspond to equivalence union enumeration values. Generating the query execution plan may include mapping an incoming enumeration value through the secondary enumeration to produce an equivalence union enumeration value, and/or mapping, with the equivalence union enumeration lookup table, the equivalence union enumeration value to a cell value.
Disclosed herein are system, method, and computer program product embodiments for rollover strategies in an n-bit dictionary compressed column store. An embodiment operates by receiving a new value for addition to a compressed column store. It is determined that a maximum storage capacity for tokens in the compressed column store has been reached for the data dictionary. The compressed column store is converted into a composite store including the existing compressed column store and a newly created flat store. The new value is stored in the flat storage portion of the composite store.
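The rollover to a composite store described above can be sketched as follows: once the n-bit dictionary has handed out its maximum number of tokens, new values go to a newly created flat store while the existing compressed portion is kept as-is. The class and field names are illustrative only.

class CompressedColumn:
    """n-bit dictionary-encoded column with rollover to a composite store."""

    def __init__(self, n_bits):
        self.max_tokens = 2 ** n_bits      # token capacity of the data dictionary
        self.dictionary = {}               # value -> token
        self.tokens = []                   # dictionary-encoded portion
        self.flat = None                   # flat store, created on rollover

    def append(self, value):
        if self.flat is not None:          # already converted to a composite store
            self.flat.append(value)
            return
        token = self.dictionary.get(value)
        if token is None:
            if len(self.dictionary) == self.max_tokens:
                # Maximum token capacity reached: keep the existing compressed
                # column and add a newly created flat store alongside it.
                self.flat = [value]
                return
            token = len(self.dictionary)
            self.dictionary[value] = token
        self.tokens.append(token)

if __name__ == "__main__":
    col = CompressedColumn(n_bits=2)                 # at most 4 distinct tokens
    for v in ["a", "b", "c", "d", "a", "e", "f"]:
        col.append(v)
    print(col.tokens, col.flat)                      # [0, 1, 2, 3, 0] ['e', 'f']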
Increasing the efficiency of performing queries on databases by eliminating partitions during a database query. The database query is configured to access a database table having one or more columns and one or more rows, and includes a condition on a specified basis column. The database table is partitioned on the basis of the specified column, the specified column having one or more distinct values, and the partitioning includes mapping, by the at least one programmable processor, individual ones of the one or more distinct values to individual partitions, causing each row in the table to be mapped to a specific partition. Candidate partitions and guaranteed partitions can be identified. The database query can be applied only to candidate partitions. All rows which satisfy the database query and all the rows of the guaranteed partitions can be forwarded for processing.
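A sketch of the partition-elimination idea above: the value-to-partition mapping classifies each partition as guaranteed (every mapped value satisfies the condition, so all of its rows are forwarded without filtering), candidate (the condition is applied row by row), or eliminated. The function names and the dict-based layout are assumptions for illustration.

def classify_partitions(value_to_partition, predicate):
    """Split partitions into guaranteed (all mapped values match) and
    candidate (some mapped values match); the rest are eliminated."""
    values_by_partition = {}
    for value, pid in value_to_partition.items():
        values_by_partition.setdefault(pid, []).append(value)
    guaranteed, candidates = set(), set()
    for pid, values in values_by_partition.items():
        matches = [predicate(v) for v in values]
        if all(matches):
            guaranteed.add(pid)
        elif any(matches):
            candidates.add(pid)
    return guaranteed, candidates

def run_query(partitions, value_to_partition, column, predicate):
    guaranteed, candidates = classify_partitions(value_to_partition, predicate)
    result = []
    for pid in guaranteed:                       # forward every row, no filtering
        result.extend(partitions[pid])
    for pid in candidates:                       # apply the query only here
        result.extend(r for r in partitions[pid] if predicate(r[column]))
    return result

if __name__ == "__main__":
    value_to_partition = {"US": 0, "DE": 0, "FR": 1, "JP": 2}
    partitions = {0: [{"country": "US"}, {"country": "DE"}],
                  1: [{"country": "FR"}],
                  2: [{"country": "JP"}]}
    print(run_query(partitions, value_to_partition, "country",
                    lambda c: c in ("US", "DE")))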
In an example embodiment, a method of operating a task scheduler for one or more processors is provided. A topology of one or more processors is obtained, the topology indicating a plurality of execution units and physical resources associated with each of the plurality of execution units. A task to be performed by the one or more processors is received. Then a plurality of available execution units from the plurality of execution units is identified. An optimal execution unit is then determined, from the plurality of execution units, to which to assign the task, based on the topology. The task is then assigned to the optimal execution unit, after which the task is sent to the optimal execution unit for execution.
A plurality of reserve and commit log operations are initiated in a database system. Thereafter, at least a portion of the database operations are logged in a log such that transient data structures are kept in the memory of the database system and persistent data structures are kept in byte-addressable memory. Next, each of one or more clients concurrently accessing the log is registered to enable such clients to access the log.
Techniques of implementing partition level operations with concurrent activities are disclosed. A first operation can be performed on a first partition of a table of data. The first partition can be one of a plurality of partitions of the table, where each partition has a plurality of rows. A first partition level lock can be applied to the first partition for a period in which the first operation is being performed on the first partition, thereby preventing any operation other than the first operation from being performed on the first partition during the period the first partition level lock is being applied to the first partition. A second operation can be performed on a second partition of the table at a point in time during which the first operation is being performed on the first partition.
Methods, systems, and computer program products for decompressing data are described. An ordinal column number of columnar data to be accessed is obtained, the ordinal column number identifying a location of the columnar data in a corresponding uncompressed row, the columnar data being stored in a first data structure. A breakpoint value in a breakpoint field of the at least partially compressed row is determined, the breakpoint value indicating a location of an end of a common prefix in the corresponding uncompressed row, the common prefix being stored in a second data structure. The ordinal column number of the columnar data to be accessed and a column number indicated by the breakpoint value are compared, the comparison identifying one or more locations of the columnar data to be accessed.
A system includes a gateway that is configured to receive a message from a source for transmission to a destination and multiple communication channels on which to transmit the message to the destination, where the communication channels include different types of communication channels. The system includes a decision engine that is operably coupled to the gateway and the communication channels. The decision engine is configured to select a first communication channel from the communication channels to route the message for transmission to the destination. The decision engine is configured to select a second communication channel from the communication channels to route the message for transmission to the destination in response to a period of time expiring without receiving an acknowledgement from the destination via the first communication channel, where the second communication channel is a different type of communication channel than the first communication channel.
Disclosed in some examples is a method, the method including detecting that an RDMS is recovering from a failure; sending a request for a last committed transaction on a replication component to the replication component; receiving, from the replication component, the last committed transaction which identifies a transaction that was the last committed transaction at a replication component at a time of RDMS failure; determining that a transaction log on the RDMS includes a transaction that had not yet been replicated at the time of RDMS failure which was committed on the transaction log subsequent to the last committed transaction received from the replication component; and based on that determination rolling back the transaction that had not yet been replicated at the time of RDMS failure.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
Disclosed herein are system, method, and computer program product embodiments for providing point in time recovery on a database. An embodiment operates by determining that one or more values were written to one of a plurality of database nodes of a database as part of a write transaction. The one or more data pages to which the one or more values were written are copied to a storage location of a backup corresponding to the write transaction. The storage location of the one or more data pages in the backup is written to a location in a transaction log corresponding to the write transaction.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Disclosed herein are system, method, and computer program product embodiments for stream optimized data processing. An embodiment operates by receiving a stream of data in a streaming data format. A query associated both with the stream of data and one or more records of a database is determined. It is determined whether the one or more records of the database are stored in a local cache. Those records not stored in the local cache are retrieved from the database and converted into the streaming data format. A query response, including references to each of the one or more records stored in the local cache in the streaming data format, is provided for execution of the query.
A method and corresponding apparatus configured to collect raw data from a plurality of wireless devices. The raw data includes activity recorded when the wireless devices are at a selected topographic region. The raw data is combined to produce aggregated data representative of the activities of the individual wireless devices. Either the aggregated data or the raw data is selected for analysis depending on whether the raw data meets a threshold activity or subscriber density level. The selected data are analyzed to identify activity patterns of users of the wireless devices.
As individuals increasingly engage in different types of transactions they face a growing threat from, possibly among other things, identity theft, financial fraud, information misuse, etc. and the serious consequences or repercussions of same. Leveraging the ubiquitous nature of wireless devices and the popularity of (Short Message Service, Multimedia Message Service, etc.) messaging, an infrastructure is described that, through a Second Factor Authentication facility, enhances the security of the different types of transactions in which a wireless device user may participate. The infrastructure may optionally leverage the capabilities of a centrally-located Messaging Inter-Carrier Vendor.
A message identifier collector may collect message identifiers identifying sent messages having been sent by originating devices and identifying received messages of the sent messages that have been received at corresponding recipient devices. A message identifier matcher may match a sent message identifier for a sent message of the sent messages with a received message identifier for a corresponding received message of the received messages at a corresponding recipient device, and a delivery notification generator may send a delivery notification to an originating device of the originating devices that originally sent the sent message, thereby indicating receipt of the message at the corresponding recipient device. A delivery notification network path along which the message identifiers and the delivery notification are sent is different from a message delivery network path along which the message is sent.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
Systems and methods are presented for load balancing databases in a cloud server environment. In some embodiments, a method can include accessing, by a server in a network-based system, one or more system configuration parameters of the network-based system, with the one or more system configuration parameters defining one or more performance capabilities of the network-based system. The method may also include accessing performance characteristics of a query of a queried database; generating a quadtree decomposition, with the quadtree decomposition modeling a cost estimate of the database query as a function of a range of the performance capabilities of the one or more system configuration parameters; and generating a proposed packing of databases based on the modeled cost estimate of the query including the queried database and defining a configuration of a plurality of databases to be stored in the server.
Various embodiments of systems and methods for replicating data included in a portable electronic device to a new portable electronic device are described herein. Initially a copy of data, including an application, stored in the portable electronic device is generated. Next a determination is made whether the application is included in an application distribution platform corresponding to an operating system of the new portable electronic device. Finally based on the determination, the application is downloaded from the application distribution platform to the new portable electronic device.
Various embodiments of systems and methods for dynamically switching device configuration based upon context are described herein. In an aspect, the method includes reading a tag attached to an entry gate of a restricted area through a device. Upon reading the tag, an application is executed to connect the device to a mobile device management (MDM) server. Upon establishing the connection, the restricted area identifier (ID) is sent to the MDM server. The device receives one or more policies applicable for the restricted area from the MDM server. The received one or more policies are executed on the device to change the device configuration. After execution, the device sends a confirmation message to the MDM server to indicate that the device is policy compliant. Upon receiving the confirmation, the MDM server instructs the entry gate to open to allow the device into the restricted area.
Various embodiments of systems and methods to provide memory management of a device accessing applications are described herein. In one aspect, a request is received to access an application on a device. Further, a check is performed to determine whether the application is an enterprise application or a personal use application. When the application is a personal use application, access to the application is provided by installing the personal use application on the device. The personal use application utilizes at least a portion of an available general memory and a portion of an available corporate memory in the device.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 9/44 - Arrangements for executing specific programs
A system includes a database that stores data on one or more memory devices and a business object layer that receives a request for data associated with a user stored on the database. The system includes a first cache that reads and stores the requested data from the database in response to the request from the business object layer, where the first cache is partitioned into different segments and the different segments are stored across multiple different computing devices. The system includes a second cache that reads and stores the requested data from the first cache. The business object layer filters and applies business logic to the data before the second cache reads the requested data from the first cache. The second cache is stored on a single computing device that received the request. The business object layer delivers the requested data from the second cache.
A method and system for transforming a serial schedule of transactions into a parallel schedule of transactions is disclosed. In one example, a computer system stores a list of data transactions in a transaction log. The computer system then reads a respective data transaction from the transaction log. The computer system determines whether the respective data transaction is dependent on any other currently pending data transaction. In accordance with a determination that the respective data transaction is not dependent on any other currently pending data transaction, the computer system applies the data changes to a reconstructed data set. In accordance with a determination that the respective data transaction is dependent on a currently pending second data transaction, the computer system delays commitment of the respective data transaction until the second data transaction has been applied to the reconstructed data set.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
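One way to picture the dependency test in the serial-to-parallel scheduling entry above is to batch the transaction log: a transaction that shares data with a currently pending transaction is delayed until the pending batch has been applied. Representing each transaction's footprint as a set of touched keys is an illustrative assumption, not the patented mechanism.

def parallelize(transaction_log):
    """Turn a serial schedule into batches that can be applied in parallel.

    Each transaction is a (txn_id, set_of_touched_keys) pair; a transaction
    that depends on a currently pending transaction (shares a key with it)
    is delayed until that transaction's batch has been applied."""
    batches, pending_keys, batch = [], set(), []
    for txn_id, keys in transaction_log:
        if keys & pending_keys:              # dependent on a pending transaction
            batches.append(batch)            # apply the current batch first
            batch, pending_keys = [], set()
        batch.append(txn_id)
        pending_keys |= keys
    if batch:
        batches.append(batch)
    return batches

if __name__ == "__main__":
    log = [("t1", {"a"}), ("t2", {"b"}), ("t3", {"a", "c"}), ("t4", {"d"})]
    # t1 and t2 touch disjoint keys and can be applied in parallel;
    # t3 depends on t1 (key "a") and must wait for the first batch.
    print(parallelize(log))                  # [['t1', 't2'], ['t3', 't4']]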
A clustered server system and a method for maintaining a server cluster involve a plurality of servers that collectively form a server cluster. A master database stores configuration information concerning the server cluster. Each server stores a local copy of the configuration information in a file system. The servers automatically update their respective file system using a database copy of the configuration information if the configuration information changes in the database.
Methods, systems, computer program products, and articles of manufacture for processing events are described. An event is obtained and the event is processed to generate data using a first set of one or more operators. The generated data is stored in a first column store with a first row/transaction identifier and the first row/transaction identifier is stored in one or more first processing queues to enable further processing of the event using a second set of one or more operators.
Systems and methods are presented for auto-starting and auto-stopping databases in a cloud server environment. In some embodiments, a method includes accessing, by an initial server in a network-based system, a request to connect to a target database located in a target server of the network-based system. The method can include determining, by an administrative database residing in the initial server, a location of the target database residing in the target server, switching an execution context from no database in the target server to a copy of the administrative database in the target server, performing an auto-start procedure to auto-start the target database in the target server, switching the execution context from the administrative database in the target server to the target database in the target server, and transmitting a completion acknowledgement indicating the target server is connected to the target database.
A system includes a gateway that is configured to receive a message from a source for transmission to a destination and multiple communication channels on which to transmit the message to the destination, where the communication channels include different types of communication channels. The system includes a decision engine that is operably coupled to the gateway and the communication channels. The decision engine is configured to select a first communication channel from the communication channels to route the message for transmission to the destination. The decision engine is configured to select a second communication channel from the communication channels to route the message for transmission to the destination in response to a period of time expiring without receiving an acknowledgement from the destination via the first communication channel, where the second communication channel is a different type of communication channel than the first communication channel.
A method can include receiving a request to execute a database command identifying a target table; identifying a plurality of rows to insert into the target table based in part on the database command; writing rows, from the plurality of rows, into a data page until the data page is full; determining, by an index thread manager, a number of threads to use for updating indexes defined for the target table; and upon determining the data page is full, updating, in parallel, the indexes defined for the target table using the number of threads.
Disclosed in some examples is a method, the method including detecting that an RDMS is recovering from a failure; sending a request for a last committed transaction on a replication component to the replication component; receiving, from the replication component, the last committed transaction which identifies a transaction that was the last committed transaction at a replication component at a time of RDMS failure; determining that a transaction log on the RDMS includes a transaction that had not yet been replicated at the time of RDMS failure which was committed on the transaction log subsequent to the last committed transaction received from the replication component; and based on that determination rolling back the transaction that had not yet been replicated at the time of RDMS failure.
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
50.
Transaction completion in a synchronous replication environment
Systems and methods are presented for completing transactions in a synchronous replication environment. In some embodiments, a computer-implemented method can include generating, in a database server, an identifier to identify a database transaction. The method can also include transmitting the identifier to a replication server; receiving acknowledgement that the identifier is acknowledged by the replication server; storing the transaction in the database server; and executing the transaction after receiving acknowledgement from the replication server and after determining the transaction is stored in the database server; wherein transmitting the identifier to the replication server occurs in parallel with storing the transaction in the database server.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A method for reliable data synchronization within a network is disclosed. The producer system stores data in a persistent data store and produces one or more data updates. The producer system simultaneously transmits the data updates to a consumer system and initiates storage of the data updates at the producer system. When storage of the data updates at the producer system is complete, the producer system transmits a first acknowledgment to the consumer system. The producer system determines whether a second acknowledgment has been received from the consumer system, wherein the second acknowledgment indicates that the consumer system has successfully stored the data updates at the consumer system. In accordance with a determination that the second acknowledgment has been received from the consumer system, the producer system changes the temporary status of the data updates stored at the producer system to a permanent status.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
Disclosed in some examples is a method of database replication, the method including at a Relational Database Management System (RDMS), determining a first replication mode; identifying a triggering event; determining that the triggering event indicates a change in the first replication mode; responsive to determining that the triggering event indicates a change in the first replication mode, determining a second replication mode, the second replication mode being a different replication mode than the first replication mode; identifying a database change made by one or more database tasks; and replicating the database change to an external replication component according to the second replication mode.
A method can include initiating execution of a database command, the database command associated with a base table with at least one row to copy to a target table, the database command associated with a non-bulk insert mode; making a run-time decision on whether to automatically convert the insert mode from the non-bulk insert mode to a BULK insert mode based on the number of row buffers filled with rows from the base table during execution of the database command; and inserting at least one row into the target table using an insert mode based on the run-time decision.
A method for inserting rows into a target table can include receiving a database command, the database command associated with a base table with at least one row to copy to a target table; receiving an indication that use of a BULK insert mode is feasible for the database command; based on the indication, and determining that an insert mode for the database command has been converted from a non-bulk insert mode to the BULK insert mode: reading a row from the base table; building the row read from the base table into an allocated row buffer; inserting the row into the target table in the BULK insert mode; and if it is determined that the allocated row buffer is full, updating at least one index in parallel with the inserting.
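A sketch of the run-time conversion described in the two entries above: the copy starts in a non-bulk insert mode and switches to a BULK insert mode once enough row buffers have been filled from the base table. The buffer capacity, the threshold, and the flush helper are invented for illustration; a real system would write whole pages and defer index maintenance on the BULK path.

BUFFER_CAPACITY = 100          # rows per row buffer (illustrative)
BULK_THRESHOLD = 3             # convert once this many buffers fill up

def flush(buffer, target_table, mode):
    # Both paths simply append here; the mode decides the code path in a real system.
    target_table.extend(buffer)

def insert_select(base_table_rows, target_table):
    """SELECT ... INTO-style copy that may switch to a BULK insert mode at run time."""
    mode = "non-bulk"
    filled_buffers, buffer = 0, []
    for row in base_table_rows:
        buffer.append(row)
        if len(buffer) == BUFFER_CAPACITY:
            filled_buffers += 1
            if mode == "non-bulk" and filled_buffers >= BULK_THRESHOLD:
                # Run-time decision: enough data is flowing to justify the
                # cheaper BULK insert path.
                mode = "BULK"
            flush(buffer, target_table, mode)
            buffer = []
    if buffer:
        flush(buffer, target_table, mode)
    return mode

if __name__ == "__main__":
    target = []
    print(insert_select(({"n": i} for i in range(450)), target), len(target))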
In an example embodiment, performance is optimized in a complex event stream (CEP) system. Information about a plurality of CEP threads is obtained. Then nearness among the plurality of CEP threads is determined, wherein nearness between a first and a second CEP thread indicates how much interaction is expected to occur between the first and second CEP thread. Based on the determined nearness, the plurality of CEP threads are organized into a plurality of CEP thread groups. Then, each of the plurality of CEP thread groups are assigned to a different processing node, with each processing node having one or more processors and a memory.
Seamless failover in a database replication environment, which has a primary database server and a plurality of standby database servers, is described. An example method includes orderly terminating transactions on the primary database server, where the transactions are originated from client applications. The transaction logs of the primary database server are drained and the transaction logs are replicated from the primary database server to the plurality of standby database servers. One of the standby database servers is designated as a new primary database server processing user transactions.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
57.
Deferring and/or eliminating decompressing database data
Disclosed herein are system, method, and computer program product embodiments for deferring or eliminating a need to decompress database data. An embodiment operates by decompressing a column of a database. The column may be represented by a predicate of a query. A row of the database may be determined to satisfy the predicate based on decompressed information from the column. Decompression of an additional column of the row may be deferred during execution of the query until the row is determined to satisfy the predicate. The additional column may satisfy the query.
Bloom filter cost estimation engine for improved performance and accuracy is described. An example method includes building an execution plan for a join operation having a plurality of levels, where the execution plan includes a top join operator at a top level, a leaf scan operator on a bottom level, and one or more intermediate operators between the top level and the bottom level. A row reduction effect of applying a Bloom filter is determined by simulating a semi-join operation over a table statistics representation at each of the plurality of levels of the execution plan. A cost savings of the join operation is calculated based on the row reduction effect at each of the plurality of levels.
Disclosed herein are technologies that give a disproportionate amount of screen real estate (or container real estate) to one of a group of user interface (UI) subcontainers to which a user is giving his or her attention. More particularly, in response to an indication that the user is focused on and/or interested in a particular subcontainer, the device enlarges that subcontainer to occupy more (and perhaps all) of the available screen (or container) real estate. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
In an example embodiment, one or more pages from a database are stored in a page cache stored in a shared memory, the one or more pages stored in a packed format. One or more rows from the database are stored in a row cache stored in the shared memory, the one or more rows stored in an unpacked format. A request for a row of the database is received. Then, the row cache is searched for the row. In response to a determination that the row cannot be found in the row cache, the page cache is searched for the row. Finally, the row is returned.
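The two-level lookup described above (unpacked row cache first, packed page cache second) might look roughly like this; the packed row format, the page-id arithmetic, and the promotion into the row cache are illustrative assumptions rather than the described embodiment.

class RowStore:
    """Lookup path: unpacked row cache first, then packed page cache."""

    def __init__(self, page_size):
        self.page_size = page_size
        self.page_cache = {}       # page id -> list of packed (encoded) rows
        self.row_cache = {}        # row id  -> unpacked row dict

    def _unpack(self, packed):
        row_id, name = packed.split("|")          # packed format is invented here
        return {"id": int(row_id), "name": name}

    def get_row(self, row_id):
        row = self.row_cache.get(row_id)          # 1. search the row cache
        if row is not None:
            return row
        page_id = row_id // self.page_size        # 2. fall back to the page cache
        for packed in self.page_cache.get(page_id, []):
            row = self._unpack(packed)
            if row["id"] == row_id:
                self.row_cache[row_id] = row      # keep the unpacked form around
                return row
        return None

if __name__ == "__main__":
    store = RowStore(page_size=2)
    store.page_cache[0] = ["0|alice", "1|bob"]
    print(store.get_row(1))        # unpacked from the page cache
    print(store.get_row(1))        # served directly from the row cache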
Disclosed herein are system, method, and computer program product embodiments for multilevel synchronization of database table partition states. An embodiment operates by retrieving a partition from a partition lookup structure and determining whether the partition is in an active state. Based on a determination that the partition is in the active state, an embodiment increments a counter associated with the partition using a compare-and-swap instruction and accesses the partition.
A system for managing database logging comprises a processor; and a user task executing in a database server process and executable by the processor, the user task to: receive in a database management system on a database server, a command to manipulate a portion of a database managed by the database management system; obtain a lock on the portion of the database; create a first log record in a first private log cache associated with the user task, the first log record recording a data manipulation to the portion of the database; enqueue the first log record to a queue; and release the lock on the portion of the database after copying the first log record to the queue.
A system and method for processing a database query is described. The method can, in response to detection that a database query involves a star or snowflake join operation, determine a selectivity ratio for each of a plurality of dimension tables. The selectivity ratio having a lower value can correspond to a more restrictive dimension table. Thereafter, a table ordering can be created beginning with a fact table and continuing with each of the dimension tables in ascending order of their corresponding selectivity ratios. Then a query plan involving join operations between successive tables in the table ordering can be created.
Methods, systems, and computer program products for compressing a row are described. A common prefix may be obtained and data in the row matching the common prefix may be identified. A column number of a column corresponding to a breakpoint of the common prefix may be determined and data matching the common prefix may be deleted from the row. An identifier of the common prefix may be inserted into the row and a breakpoint field in the row may be set to the determined column number.
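A sketch of the common-prefix compression described above, paired with the breakpoint-based column access from the decompression entry earlier in this list: columns up to the breakpoint are read from the shared prefix, the rest from the row's stored suffix. The data structures and helper names are assumptions for illustration.

def compress_row(row, prefix_id, common_prefix):
    """Replace the leading columns that match a common prefix with a
    (prefix id, breakpoint) pair; remaining columns are stored as-is."""
    breakpoint_col = 0
    while (breakpoint_col < len(common_prefix)
           and breakpoint_col < len(row)
           and row[breakpoint_col] == common_prefix[breakpoint_col]):
        breakpoint_col += 1                       # column number of the breakpoint
    return {"prefix_id": prefix_id,
            "breakpoint": breakpoint_col,         # end of the common prefix
            "suffix": row[breakpoint_col:]}       # matching data deleted from the row

def read_column(compressed, prefixes, ordinal):
    """Decompress only as much as needed to fetch one column."""
    if ordinal < compressed["breakpoint"]:        # column lives in the shared prefix
        return prefixes[compressed["prefix_id"]][ordinal]
    return compressed["suffix"][ordinal - compressed["breakpoint"]]

if __name__ == "__main__":
    prefixes = {0: ["2024", "EU", "DE"]}
    row = ["2024", "EU", "DE", "Berlin", 17]
    packed = compress_row(row, prefix_id=0, common_prefix=prefixes[0])
    print(packed)                                  # breakpoint=3, suffix=['Berlin', 17]
    print(read_column(packed, prefixes, 1))        # 'EU' from the common prefix
    print(read_column(packed, prefixes, 4))        # 17 from the stored suffix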
Disclosed herein are system, method, and computer program product embodiments for accelerating database queries containing bitmap-based conditions. An embodiment operates by determining a bitmap, where the bitmap represents a set of rows that have satisfied a conjunct that precedes a negated condition in a query expression and restricting the evaluation of the negated condition to the set of rows represented by the bitmap.
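The bitmap restriction described above can be sketched in a few lines: the bitmap records which rows passed the preceding conjunct, and the negated condition is evaluated only for those rows. The predicate functions and list-of-dicts rows are illustrative.

def evaluate(rows, conjunct, negated_condition):
    """Evaluate `conjunct AND NOT negated_condition`, restricting the
    potentially expensive negated condition to rows already passing the conjunct."""
    # Bitmap of rows satisfying the conjunct that precedes the negated condition.
    bitmap = [conjunct(row) for row in rows]
    result = []
    for i, row in enumerate(rows):
        if not bitmap[i]:
            continue                        # negated condition never evaluated here
        if not negated_condition(row):
            result.append(row)
    return result

if __name__ == "__main__":
    rows = [{"qty": q, "status": s} for q, s in
            [(5, "open"), (50, "open"), (70, "closed"), (90, "open")]]
    # WHERE qty > 10 AND NOT (status = 'closed')
    print(evaluate(rows, lambda r: r["qty"] > 10, lambda r: r["status"] == "closed"))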
Systems and methods are presented for reducing load database time in a database backup process. In some embodiments, a computer-implemented method may include marking a checkpoint in a log of the database; generating a backup of the database for data up to the checkpoint; recording first changes in the database while generating the backup of the database; adding to the backup of the database an additional backup of the recording of the first changes; recording second changes in the database while adding the additional backup; determining if a number of second changes satisfies a criterion; and if the number of second changes satisfies the criterion, then adding to the backup of the database a backup of the recorded second changes. Recording these changes can enable a database dump process to contain more recent page images, so that the amount of recovery at load time is reduced.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
67.
Splitting of a join operation to allow parallelization
A system and method for processing a database query is described. In response to detection that a database query involves a star or snowflake join operation, a join operator in a preliminary query plan can be split into a build operator and a probe operator. The probe operator can be placed in a final query plan in the same place as the join operator in the preliminary query plan, while the build operator can be placed beneath the probe operator in the final query plan, between an exchange operator and the exchange operator's child from the preliminary query plan.
Systems, methods and computer program products for tracking objects in an area of interest are described. According to an embodiment, an object is tracked as follows. First position information relating to the object is received from a first sensor and translated to a coordinate system of a map. The object is displayed on the map in accordance with the translated first position information. Second position information relating to the object is received from a second sensor, where the second sensor is based on a second positioning technology different from the first positioning technology. The second position information is translated to the coordinate system of the map, and the object is displayed on the map in accordance with the translated second position information. In an embodiment, other information relating to the object is also received from sensors.
Existing algorithms to build balanced tree structures (“b-trees”) compare a data element (e.g., a key) to be inserted with the data elements that have already been inserted to find the correct position to insert the data element. Additionally, the algorithms balance and/or rebalance the b-tree when any individual node gets over-filled. As part of this balancing, data elements stored in the various nodes are moved to other nodes. These operations can incur both time and resource costs. We propose an algorithm to build a b-tree in a bottom up manner and a technique to modify trees built using the aforementioned algorithm so that they are balanced. We also propose a method to allow for adding more data into the thus-built b-tree as long as it follows a certain set of pre-conditions.
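A minimal bottom-up build in the spirit of the entry above, assuming the pre-condition that the input keys arrive already sorted: leaves are filled left to right and each parent level is built from the first key of its children, so no per-key rebalancing is needed. The fanout, node layout, and search helper are illustrative assumptions, not the proposed algorithm itself.

FANOUT = 3        # maximum number of keys / children per node (illustrative)

def build_bottom_up(sorted_keys):
    """Build a balanced tree from already-sorted keys without top-down
    inserts or rebalancing: fill leaves left to right, then build each
    parent level from the first key of every child."""
    leaves = [sorted_keys[i:i + FANOUT] for i in range(0, len(sorted_keys), FANOUT)]
    nodes = [{"keys": chunk, "children": None} for chunk in leaves]
    while len(nodes) > 1:
        parents = []
        for i in range(0, len(nodes), FANOUT):
            children = nodes[i:i + FANOUT]
            parents.append({"keys": [c["keys"][0] for c in children],
                            "children": children})
        nodes = parents
    return nodes[0]

def search(node, key):
    if node["children"] is None:
        return key in node["keys"]
    # descend into the right-most child whose separator key is <= key
    child = node["children"][0]
    for sep, candidate in zip(node["keys"][1:], node["children"][1:]):
        if key >= sep:
            child = candidate
    return search(child, key)

if __name__ == "__main__":
    tree = build_bottom_up(list(range(0, 40, 2)))   # pre-sorted input
    print(search(tree, 18), search(tree, 19))       # True False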
Various embodiments of systems and methods for managing a plurality of nodes in a distributed computing environment are described herein. Initially, a request to process a to-be-processed request is received. Next, one or more nodes from a plurality of nodes included in a cluster are identified to process the to-be-processed request. Next, the to-be-processed request is divided into a plurality of sub-requests. Next, the plurality of sub-requests are assigned to the identified one or more nodes and the generated additional node. A node failure of one of the one or more identified nodes is identified. Finally, one or more of the plurality of sub-requests assigned to the failed node are re-assigned to another node of the plurality of nodes.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
71.
Replication description model for data distribution
A system, method, and computer-readable medium for replicating data are provided. A replication logic description describing how data is replicated in a replication path and a resource description describing a replication environment are specified. The replication logic description is bound to at least one resource in the resource description. Once bound, an object representing the bound replication logic description and the resource description is generated and deployed in the replication environment. Once deployed, the object replicates data in the replication path while ensuring transaction consistency and delivery during replication of the data.
The current subject matter describes static partitioning and sub-partitioning of a row identifier space associated with a table in a delta memory store of a database so as to allow data to be concurrently inserted into rows identified by the corresponding sub-partitions. A server system associated with the database can receive data to be inserted into the database. The server system can select a sub-fragment of a row identifier space that identifies rows stored in the database for the table. The sub-fragment can be selected based on a preference specified by the insert operation used for insertion of the data into the columnar database and on availability of the sub-fragment. The server system can insert the data into rows identified by the selected sub-fragment while other data is being concurrently inserted into rows identified by one or more other sub-fragments of the row identifier space.
Disclosed herein are system, method, and computer program product embodiments for storing data in a database using a tiered index architecture. An embodiment operates by creating a first tier and assigning a first threshold size to the first tier. When the first tier exceeds the first threshold size, the system pushes data from the first tier into a second tier.
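A minimal sketch of that tier-push behavior follows: a small first tier absorbs inserts and, once it grows past its threshold, its contents are pushed down into a larger second tier. The dict-based tiers and the threshold value are assumptions made for the example.

```python
# Illustrative two-tier index: tier 0 is small and hot, tier 1 receives pushed data.
class TieredIndex:
    def __init__(self, first_tier_threshold=4):
        self.tiers = [{}, {}]                  # tier 0 and tier 1
        self.threshold = first_tier_threshold

    def insert(self, key, value):
        self.tiers[0][key] = value
        if len(self.tiers[0]) > self.threshold:
            # First tier exceeded its threshold size: push its data into tier 1.
            self.tiers[1].update(self.tiers[0])
            self.tiers[0].clear()

    def lookup(self, key):
        for tier in self.tiers:                # newest tier first
            if key in tier:
                return tier[key]
        return None

idx = TieredIndex()
for k in range(10):
    idx.insert(k, f"row-{k}")
print(idx.lookup(3), len(idx.tiers[0]), len(idx.tiers[1]))
```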
A locking mechanism in a delta-store-based database to support long-running transactions across multiple RID spaces is described. An example method includes establishing a column-based in-memory database including a main store and a delta store. A delete or an update statement is executed within a transaction on a table having a plurality of table versions. The table versions are represented by bitmaps in the delta store, and the bitmaps and table fragments corresponding to the table versions implement RID spaces for the table. A lock on a row of the table manipulated by the delete or the update statement is requested to preclude other transactions from deleting or updating an obsolete version of data. Upon a successful validation that the row to be locked is not an obsolete version in the RID spaces of the table, the lock is granted to the transaction.
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
75.
Set-oriented locking based on in-memory bitmaps for a column-oriented database
A set-oriented locking scheme based on in-memory bitmaps for a column store database is described. An example method includes establishing a column-based in-memory database including a main store and a delta store, where the delta store has a plurality of row-visibility lock bitmaps visible to transactions at various points in time. The lock bitmaps represent a bit encoding indicating whether row locks have been granted on tables in the database. A delete or an update statement is executed within a transaction on a table. A set of row locks on the rows of the table manipulated by the delete or the update statement is requested to preclude other transactions from concurrently deleting or updating the same rows. Accordingly, set operations are performed on the lock bitmap to manage the set of row locks associated with the transaction.
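The sketch below shows the set-oriented flavor of such locking: each transaction's granted row locks are held as one integer bitmap, so requesting or releasing a whole set of rows is a handful of bitwise operations rather than per-row bookkeeping. The conflict rule and the data layout are assumptions for illustration only.

```python
# Illustrative set-oriented row locking over an in-memory bitmap per transaction.
class RowLockManager:
    def __init__(self):
        self.locks = {}                         # txn id -> bitmap of locked row positions

    def try_lock_rows(self, txn, rows):
        wanted = 0
        for r in rows:
            wanted |= 1 << r                    # encode the requested row set as a bitmap
        others = 0
        for t, bitmap in self.locks.items():
            if t != txn:
                others |= bitmap
        if wanted & others:                     # overlap: another txn holds one of the rows
            return False
        self.locks[txn] = self.locks.get(txn, 0) | wanted   # grant the whole set at once
        return True

    def release(self, txn):
        self.locks.pop(txn, None)

mgr = RowLockManager()
print(mgr.try_lock_rows("T1", [3, 7, 9]))       # True
print(mgr.try_lock_rows("T2", [9, 10]))         # False: row 9 is already locked by T1
```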
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
76.
Robust communication system for guaranteed message sequencing with the detection of duplicate senders
Guaranteed message sequencing between a first and a second database is described. An example method includes maintaining first state information associated with the first database at the first database, where second state information associated with the first database is maintained at the second database. A client sends, to the second database, the first state information together with a message describing rows changed between the first database and the second database since a last synchronization. The client subsequently receives, from the second database, the status of the last synchronization, where the status is determined by the second database based on the first state information and the second state information.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
77.
Routing replicated data based on the content of the data
Disclosed herein are system, method, and computer program product embodiments for routing data to be replicated based on the content of the data. An embodiment operates by retrieving a row from a database transaction log and receiving a filtering condition. The embodiment evaluates whether the content of the row satisfies the filtering condition and selects a replication path for transmitting the transaction for replication.
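A minimal sketch of that content-based routing decision follows: each row read from the transaction log is evaluated against filtering conditions, and the first condition that the row's content satisfies selects the replication path. The log format, the predicates, and the path names are assumptions for the example.

```python
# Illustrative content-based routing of replicated rows.
ROUTES = [
    (lambda row: row["region"] == "EMEA", "path-emea"),
    (lambda row: row["region"] == "APJ",  "path-apj"),
]
DEFAULT_PATH = "path-default"

def route_row(row):
    for condition, path in ROUTES:
        if condition(row):              # evaluate the filtering condition on the row content
            return path
    return DEFAULT_PATH

transaction_log = [
    {"id": 1, "region": "EMEA", "amount": 10},
    {"id": 2, "region": "US",   "amount": 20},
]
for row in transaction_log:
    print(row["id"], "->", route_row(row))
```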
Methods and systems configured to facilitate smart pre-fetching for sequentially accessing tree structures such as balanced trees (b-trees) are described herein. According to various described embodiments, a pre-fetch condition can be determined to have been met for a first cache associated with a first level of a tree such as a b-tree. A link to a block of data to be read into the cache can be retrieved by accessing a second level of the tree. The data elements associated with the retrieved link can subsequently be read into the cache.
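The sketch below illustrates the idea on a two-level tree: when the leaf-level cache is nearly drained (the pre-fetch condition), the link to the next leaf is looked up one level higher and its block is read into the cache before the scan needs it. The two-level layout and the threshold are assumptions for the example.

```python
# Illustrative pre-fetching during a sequential scan of a two-level tree.
PREFETCH_THRESHOLD = 2   # start pre-fetching when this few unread keys remain

LEAVES = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]   # leaf "pages" on disk
PARENT = list(range(len(LEAVES)))                          # links to leaves, in key order

def scan():
    cache, next_link = [], 0
    while True:
        if len(cache) <= PREFETCH_THRESHOLD and next_link < len(PARENT):
            leaf_id = PARENT[next_link]        # follow the link found in the parent level
            cache.extend(LEAVES[leaf_id])      # read the linked block into the cache
            next_link += 1
        if not cache:
            return
        yield cache.pop(0)

print(list(scan()))
```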
Enhanced shared memory based communication driver for improved performance and scalability is described. An example method includes creating a shared memory segment for a database server instance. The database server instance and a client reside on a same computing device. A first database connection is established to the database server instance using a pre-configured communication end point. An identifier of the shared memory segment for the database server instance is sent to the database server instance and the database server instance listens to subsequent connection requests generated on the shared memory segment. Moreover, a second database connection to the database server instance is established using the shared memory segment as a communication end point. Upon a successful connection of the second database connection, the first database connection is closed.
Disclosed herein are methods for retrieving data from a database. An embodiment operates by searching for a key in a first index. The method determines that the searching will require a storage access request and issues the storage access request. The method continues searching for the key in a second index.
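A minimal sketch of that overlap, under stated assumptions, is shown below: when the first-index probe would need a slow storage access, the access is issued asynchronously and the search continues in an in-memory second index while the I/O is in flight. The two toy indexes, the sleep standing in for storage latency, and the thread pool are illustrative assumptions, not the method's actual machinery.

```python
# Illustrative overlap of a storage access with a search in a second index.
import time
from concurrent.futures import ThreadPoolExecutor

IN_MEMORY_PART = {"a": 1, "b": 2}        # portion of the first index already cached
ON_DISK_PART = {"x": 42}                 # portion that would require a storage access
SECOND_INDEX = {"x": 42, "y": 7}         # secondary in-memory index

def storage_lookup(key):
    time.sleep(0.05)                     # stands in for the storage access request
    return ON_DISK_PART.get(key)

def lookup(key):
    if key in IN_MEMORY_PART:
        return IN_MEMORY_PART[key]
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(storage_lookup, key)   # issue the storage access request
        if key in SECOND_INDEX:                      # keep searching the second index
            pending.cancel()                         # best effort; result no longer needed
            return SECOND_INDEX[key]
        return pending.result()                      # otherwise wait for the storage result

print(lookup("x"))
```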
Disclosed herein are system, method, and computer program product embodiments for sorting disarranged index keys in an index. First, an operation is performed on a table that includes an index set on at least one column, where the operation causes the index keys in the index to become disarranged. The disarranged index keys are rearranged into a proper order using an in-place index sort. To rearrange the index keys in the index, a determination is made whether the index is a tail-end index and whether the index is a fixed-size index. Based on the determination, the in-place index sort is performed on the index, where the in-place index sort arranges the index keys in the index into the proper order.
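The sketch below shows one generic way to locate a disarranged tail and sort it in place rather than rebuilding the whole index. The boundary detection used here is an assumption for illustration; it does not reproduce the tail-end and fixed-size determinations described in the abstract.

```python
# Illustrative in-place sort of a disarranged tail of index keys.
def sort_tail_in_place(keys):
    # Locate the first position where the key order breaks.
    d = next((i for i in range(1, len(keys)) if keys[i] < keys[i - 1]), None)
    if d is None:
        return keys                              # already in proper order
    # Back the boundary up so every key before it is <= every key after it.
    tail_min = min(keys[d:])
    start = d
    while start > 0 and keys[start - 1] > tail_min:
        start -= 1
    keys[start:] = sorted(keys[start:])          # in-place sort of the disarranged tail
    return keys

index_keys = [10, 20, 30, 40, 75, 55, 60, 50]    # tail disarranged by the operation
print(sort_tail_in_place(index_keys))
```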
A system, method, and computer-readable medium for reducing contentious access of data in a memory storage system by simulating an online transaction processing business lifecycle are provided. The memory storage system determines a type of data, where the type of data corresponds to the access frequency of the data. The data is stored in a row-based format in a row-based storage, a page-based format in a page-based storage, or a compressed format in a compressed storage based on the determined type of data. The data is also transferred between the row-based storage, the page-based storage and the compressed storage according to predefined criteria.
Disclosed herein are system, method, and computer program product embodiments for generating a histogram used to optimize a query plan. An embodiment operates by initializing a first thread and a second thread, such that the first thread processes a first section of a column and the second thread processes a second section of the column, concurrently with the first thread. The first thread generates a first hash table and the second thread generates a second hash table. The first and second hash tables represent the data distribution stored in the respective first and second sections of the column. The first and second hash tables are merged into a histogram that represents the data distribution in the column.
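A minimal sketch of that two-thread build follows: each thread scans its own section of the column into a private hash table of value frequencies, and the two tables are then merged into one histogram. The thread count, the Counter representation, and the sample column are assumptions for the example.

```python
# Illustrative parallel histogram construction with per-thread hash tables.
import threading
from collections import Counter

column = [1, 2, 2, 3, 3, 3, 1, 2, 4, 4, 1, 3]
mid = len(column) // 2
sections = [column[:mid], column[mid:]]
tables = [Counter(), Counter()]

def scan(section, table):
    for value in section:              # build the per-thread hash table of frequencies
        table[value] += 1

threads = [threading.Thread(target=scan, args=(s, t)) for s, t in zip(sections, tables)]
for t in threads:
    t.start()
for t in threads:
    t.join()

histogram = tables[0] + tables[1]      # merge the two hash tables into one histogram
print(dict(histogram))
```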
A system, computer-implemented method, and computer program product for determining a cardinality estimate for a query are provided. A cardinality estimator identifies a predicate in a query, where the predicate is split into a plurality of equivalence classes. The cardinality estimator then generates a plurality of equivalence graphs from the plurality of equivalence classes, one equivalence graph per equivalence class. Spanning trees are identified from the plurality of equivalence graphs, and the cardinality estimator then determines the cardinality estimate for the query from the spanning trees.
A delta store giving row-level versioning semantics to a non-row-level-versioning underlying store is described. An example method includes establishing a column-based in-memory database including a main store and a delta store, where the main store allows only non-concurrent transactions on a same table and the delta store has a plurality of row-visibility bitmaps implementing a row-level versioning mechanism that allows concurrent transactions on the same table. A local RID space is established for each table fragment, such that for each table in the database, the data of the table is stored in one or more main table fragments in the main store and in one or more delta table fragments in the delta store. Each table fragment has a local RID space, and the local RID space is a collection of one-based contiguous integer local RIDs (Row IDs) describing local positions of the rows of the table fragment.
G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G06F 17/30 - Information retrieval; Database structures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
86.
Client-side directed commands to a loosely coupled database
Dynamically directing a command to a node in a distributed database is described. An example method includes receiving the command from a client application to access data in the distributed database, where the command contains a set of parameters. A primary key is constructed from at least some of the parameters. The client further generates routing information from a node-partition table based on a comparison of the primary key with an entry in the node-partition table, where the node-partition table maps the primary key to a node in the distributed database. Accordingly, the command is directed to the node in the distributed database based on the routing information.
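A minimal sketch of that client-side routing follows: the primary key is constructed from the command's parameters and compared against a locally held node-partition table to pick the target node. The map contents, the hash-based partitioning, and the command shape are assumptions made for the example.

```python
# Illustrative client-side routing of a command using a node-partition table.
NODE_PARTITION_TABLE = {0: "node-A", 1: "node-B", 2: "node-C"}   # partition -> node

def route_command(command):
    # Construct the primary key from (some of) the command's parameters.
    primary_key = (command["customer_id"], command["order_id"])
    partition = hash(primary_key) % len(NODE_PARTITION_TABLE)
    return NODE_PARTITION_TABLE[partition]       # routing information from the map

cmd = {"op": "select", "customer_id": 42, "order_id": 7}
print("directing command to", route_command(cmd))
```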
Disclosed herein are system, method, and computer program product embodiments for optimizing a query plan reuse in a database server system accessible by a plurality of client connections. An embodiment comprises determining if a query plan in a global cache storage is reserved by a client connection of a plurality of client connections, generating a cloned query plan from the query plan based on the determining, and associating the cloned query plan with a second client connection of the plurality of client connections.
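The sketch below illustrates the reuse decision: a plan in the global cache that is already reserved by one connection is cloned for the next connection instead of being shared, so both can proceed. The cache layout and clone-by-deep-copy are assumptions for the example, not the embodiment's actual mechanism.

```python
# Illustrative query plan reuse with cloning when the cached plan is reserved.
import copy

GLOBAL_PLAN_CACHE = {}    # query text -> {"plan": ..., "reserved_by": connection or None}

def get_plan(query_text, connection, compile_fn):
    entry = GLOBAL_PLAN_CACHE.get(query_text)
    if entry is None:
        entry = {"plan": compile_fn(query_text), "reserved_by": connection}
        GLOBAL_PLAN_CACHE[query_text] = entry
        return entry["plan"]
    if entry["reserved_by"] is None:
        entry["reserved_by"] = connection        # free plan: reserve and reuse it
        return entry["plan"]
    return copy.deepcopy(entry["plan"])          # reserved: clone for this connection

plan = get_plan("SELECT 1", "conn-1", lambda q: {"steps": ["scan", "project"]})
clone = get_plan("SELECT 1", "conn-2", lambda q: {"steps": ["scan", "project"]})
print(plan is clone)    # False: conn-2 received its own cloned plan
```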
In an example embodiment, event stream processing is performed by first parsing an input query into a directed acyclic graph (DAG) including a plurality of operator nodes. Then a grouping of one or more of the operator nodes is created. One or more partitions are created, either by the user or automatically, in the DAG by forming one or more duplicates of the grouping. A splitter node is created in the DAG, the splitter node splits data from one or more event streams and distributes it among the grouping and the duplicates of the grouping. Then, the input query is resolved by processing data from one or more event streams using the DAG.
In an example embodiment, a method for performing event stream processing is provided. An event stream is received, the event stream comprising a real time indication of one or more events occurring. Then it is determined that the event stream is identified in a streaming publish service inside a database. The event stream may then be inserted directly into one or more database tables in the database based on the determining.
Disclosed herein are system, method, and computer program product embodiments for calibrating and using a stable storage model. An embodiment operates by generating, by a central computer, an access request for a stable storage, wherein the access request comprises a plurality of page accesses; measuring a cost to execute the access request on the stable storage; amortizing the cost over the plurality of page accesses; and calibrating, by the central computer, a stable storage model based on the amortized cost.
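A minimal sketch of the calibration loop follows: a synthetic request touches many pages, the elapsed cost is measured and amortized per page, and the per-page figure becomes the calibrated value in the storage model. The temporary file, page size, and page count are assumptions standing in for real stable storage.

```python
# Illustrative calibration of a per-page read cost for a stable storage model.
import os
import tempfile
import time

PAGE_SIZE = 4096
NUM_PAGES = 256

def calibrate_page_read_cost():
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(PAGE_SIZE * NUM_PAGES))     # synthetic stable storage
        path = f.name
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(NUM_PAGES):                     # the plurality of page accesses
            f.read(PAGE_SIZE)
    cost = time.perf_counter() - start
    os.unlink(path)
    return cost / NUM_PAGES                            # amortize the cost over the pages

storage_model = {"page_read_cost": calibrate_page_read_cost()}
print(storage_model)
```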
Disclosed herein are system, method, and computer program product embodiments for storing and accessing data in a shared disk database system using a timestamp range to improve cache efficiency. An embodiment operates by retrieving, by a node, from a shared storage, a blockmap identity and a root page associated with a data request, based on a determination that the blockmap identity associated with the data request is present in a cache. The embodiment continues by retrieving, by the node, the logical page by copying a stored logical page from the shared storage and setting a lower timestamp value of the logical page to a timestamp associated with the stored logical page and an upper timestamp value of the logical page to a timestamp associated with the data request, based on a determination that the logical page is not present in the cache.
Multi-pass parallel merging in a database includes identifying characteristics of non-final pages during database query operations. A phase of page consolidation is triggered based on the identified characteristics, and a final page is stored.
Disclosed herein are system, method, and computer program product embodiments for rollover strategies in an n-bit dictionary compressed column store. An embodiment operates by receiving a new value for addition to a compressed column store and determining that the current memory block of the most recently added token in the compressed column store is the insertion block. It is determined that the maximum token value has been reached for the current memory block. A new virtual memory block is created using the current insertion block, and a token corresponding to the new value is stored in the new virtual memory block. In another embodiment, when it is determined that the maximum number of token values that may be stored in a compressed column store has been reached for a data dictionary, the compressed column store is converted into a composite store that includes a flat store where the new value is stored.
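The sketch below illustrates the rollover idea: values are dictionary-encoded into n-bit tokens per block, and when a block's dictionary has used every token value, a new block is started for further inserts. The block layout and N_BITS are assumptions for illustration, not the store's actual format.

```python
# Illustrative rollover when an n-bit dictionary block runs out of token values.
N_BITS = 2
MAX_TOKENS = 1 << N_BITS       # 4 distinct values per block with 2-bit tokens

class Block:
    def __init__(self):
        self.dictionary = {}   # value -> token
        self.tokens = []       # encoded column values

class CompressedColumn:
    def __init__(self):
        self.blocks = [Block()]

    def append(self, value):
        block = self.blocks[-1]            # current insertion block
        if value not in block.dictionary and len(block.dictionary) == MAX_TOKENS:
            block = Block()                # maximum token value reached: roll over
            self.blocks.append(block)
        token = block.dictionary.setdefault(value, len(block.dictionary))
        block.tokens.append(token)

col = CompressedColumn()
for v in ["red", "green", "blue", "red", "cyan", "mauve", "red"]:
    col.append(v)
print(len(col.blocks), [b.tokens for b in col.blocks])
```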
A method, a system and a computer program product for maintaining a pre-computed result set are disclosed. A server coupled to a data source determines whether an object stored in the data source received an update. The server identifies at least one identifier associated with a pre-computed result set based on that determination. The pre-computed result set is computed based on the object. The server computes an updated pre-computed result set using the identifier by applying the received update to the pre-computed result set.
Disclosed herein are system, method, and computer program product embodiments for replicating data in a distributed database system. Data containing a replicated truncation point associated with a replicating system is received via a data path. It can then be determined that the truncation point represents the point at which all data in a transaction log has been replicated (e.g., successfully or safely), and the transaction log can then be truncated at the truncation point (i.e., the data up to the truncation point deleted). Data containing an additional replicated truncation point associated with an additional replicating system may be received via an additional data path. It can then be determined that the additional replicated truncation point represents the point at which all data in the transaction log has been replicated, and the transaction log can then be truncated at the additional replicated truncation point.
Various embodiments of systems and methods for recommending applications to portable electronic devices are described herein. Initially a context change of an application identification parameter is detected. Based on the detected context change, a target application, from a plurality of applications, may be identified. A similarity value is then computed between the identified target application and another application. Finally, an application to be recommended to a portable electronic device is determined based on the computed similarity value and a rate value of another application.
Freeing memory safely with low performance overhead in a concurrent environment is described. An example method includes creating a reference count for each sub block in a global memory block, where each global memory block includes a plurality of sub blocks aged based on their respective allocation times. A reference count for a first sub block is incremented when a thread operates on a collection of data items and accesses the first sub block for the first time. Reference counts for the first sub block and a second sub block are lazily updated. Subsequently, the sub blocks are scanned through in the order of their age until a sub block with a non-zero reference count is encountered. Accordingly, one or more sub blocks whose corresponding reference counts are equal to zero are freed safely and with low performance overhead.
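A minimal sketch of the age-ordered sweep follows: sub blocks carry reference counts, and the sweep walks them from oldest to newest, freeing until it meets the first sub block that is still referenced. The SubBlock fields and the single-threaded sweep are assumptions made for the example.

```python
# Illustrative age-ordered freeing of reference-counted sub blocks.
class SubBlock:
    def __init__(self, age):
        self.age = age
        self.refcount = 0
        self.freed = False

def sweep(sub_blocks):
    # Scan in order of age; stop at the first sub block still in use.
    for sb in sorted(sub_blocks, key=lambda s: s.age):
        if sb.refcount != 0:
            break
        sb.freed = True          # safe to free: no thread references it any more

blocks = [SubBlock(age) for age in range(5)]
blocks[3].refcount = 1           # a thread still holds a reference into this sub block
sweep(blocks)
print([(b.age, b.freed) for b in blocks])
```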
Disclosed herein are system, method, and computer program product embodiments for constructing an index for a database table. An index that comprises a data structure may be created. The index can then be populated with data from the database table. When a request to modify the database table is received, the method may determine that the request to modify the database table relates to a portion of the database table corresponding to a portion of the index that has yet to be populated. An entry indicating the requested modification can be inserted into the portion of the index that has yet to be populated.
Various embodiments of systems and methods for enhancing consumer engagement using advanced communication exchange services are described herein. The method involves receiving, by a consumer device, an address book entry from an enterprise device. The consumer device is enabled with enhanced address book capability provided by an advanced communication exchange system. Further, in an aspect, the received address book entry is activated to enable the enterprise device to push business information to the consumer device. In another aspect, selecting the address book entry invokes the advanced communication services supported by the enterprise device. By accessing one or more of the advanced communication services, business information from the enterprise device is received via the selected communication exchange service. In an aspect, the received business information is customized based on online presence information of the consumer device.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06Q 30/02 - MarketingPrice estimation or determinationFundraising
100.
High performance index creation on sorted data using parallel query plans
Creation of an index for a table of sorted data for use by a data storage application is initiated. Thereafter, N+1 logical partitions of rows of the table are defined so that each logical partition has a corresponding worker process. Each worker process then builds a sub-index based on its corresponding logical partition, and the sub-indexes are later merged to form the index. Related apparatus, systems, techniques and articles are also described.
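A minimal sketch of that parallel build follows: the sorted rows are split into N+1 logical partitions, one worker builds a sub-index per partition, and the sub-indexes are merged into the final index. The worker pool, the value of N, and the (key, row position) index entries are assumptions made for the example.

```python
# Illustrative parallel index creation over N+1 logical partitions of sorted rows.
import heapq
from concurrent.futures import ThreadPoolExecutor

N = 3
sorted_rows = [(k, f"row-{k}") for k in range(20)]   # table already sorted on the key

def build_sub_index(partition):
    # Each worker indexes only its own logical partition of rows.
    return [(key, pos) for pos, (key, _row) in partition]

# Define N+1 logical partitions of the numbered rows.
numbered = list(enumerate(sorted_rows))
chunk = -(-len(numbered) // (N + 1))                 # ceiling division
partitions = [numbered[i:i + chunk] for i in range(0, len(numbered), chunk)]

with ThreadPoolExecutor(max_workers=N + 1) as pool:
    sub_indexes = list(pool.map(build_sub_index, partitions))

index = list(heapq.merge(*sub_indexes))              # merge sub-indexes into the index
print(len(index), index[:3])
```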