Patent classifications
G06F16/1744
DATA PROCESSING DEVICE, DATA PROCESSING PROGRAM, AND DATA PROCESSING METHOD
A device (1) for processing transaction data (D1) including a plurality of records includes: a storage unit (2) that stores the transaction data; and a compressed data generation unit (5) configured to generate compressed data (D6) corresponding to the transaction data, based on a value of a transaction quantity included in the transaction data stored in the storage unit (2), wherein each of the records includes a value of at least one item, the item includes the transaction quantity, and the value of the transaction quantity includes a natural number other than 1.
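The abstract does not specify the compression scheme, but a plausible reading is that repeated items within a record are collapsed into (item, quantity) pairs, so that transaction quantities greater than 1 shrink the representation. The sketch below is a minimal illustration of that reading; the function names and the list-of-lists record format are assumptions, not taken from the patent.

```python
from collections import Counter

def compress_transactions(records):
    """Collapse repeated items in each transaction record into
    (item, quantity) pairs, a run-length-style encoding in which a
    quantity that is a natural number other than 1 replaces several
    duplicate item entries."""
    compressed = []
    for record in records:
        counts = Counter(record)
        compressed.append(sorted(counts.items()))
    return compressed

def decompress_transactions(compressed):
    """Expand (item, quantity) pairs back into flat records."""
    return [[item for item, qty in rec for _ in range(qty)]
            for rec in compressed]
```

A record such as `["milk", "milk", "milk", "bread"]` becomes `[("bread", 1), ("milk", 3)]`, storing each item name once regardless of quantity.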
Platform for managing mobile applications
Embodiments of the invention make consumer application adoption more efficient by giving suppliers access to their desired target audience, displaying applications to the right users. Suppliers can provide criteria, expressed as constraints, for the kinds of users they want to target. Embodiments of the invention enable viewers that match the constraints to see the service. The user sees an automatically generated, instantly available application set with a high probability of containing the application the user is likely to seek. Identity and payment support are built into the platform, so the user no longer needs to register or set up payment with each application. In addition, the platform reduces bandwidth consumption, storage footprint, and power consumption of the user device by choosing when and which modules to download to the user device.
File system reorganization in the presence of inline compression
A method for file system reorganization in the presence of inline compression includes obtaining a virtual block pointer for an original compressed segment to be reorganized, the original compressed segment comprising compressed allocation units of data stored in a storage system, wherein the virtual block pointer comprises an extent list identifying the compressed allocation units in the original compressed segment and a pointer to where the original compressed segment is stored; copying only the referenced compressed allocation units in the original compressed segment to a new compressed segment in a substantially contiguous manner; updating the extent list to identify the referenced compressed allocation units in the new compressed segment, and the pointer to where the new compressed segment is stored; and freeing the original compressed segment.
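The steps of the method can be sketched in a few lines: walk the extent list, copy only the referenced allocation units into a new contiguous segment, rebuild the extent list against the new segment, and free the original. The dict-of-lists segment store and the tuple layout of the extent entries below are illustrative assumptions, not the patent's on-disk format.

```python
import zlib

def reorganize_segment(segments, seg_id, extent_list):
    """Reorganize one compressed segment in the presence of inline compression.

    segments: dict mapping segment id -> list of compressed allocation
              unit (AU) byte strings
    extent_list: list of (logical_block, au_index, referenced) tuples,
                 the virtual block pointer's view of the segment
    Returns the new segment id (the updated pointer) and the updated
    extent list.
    """
    old = segments[seg_id]
    new_seg, new_extents = [], []
    for logical, au_idx, referenced in extent_list:
        if referenced:
            # copy only referenced AUs, packed contiguously in the new segment
            new_extents.append((logical, len(new_seg), True))
            new_seg.append(old[au_idx])
    new_id = max(segments) + 1
    segments[new_id] = new_seg   # pointer now targets the new segment
    del segments[seg_id]         # free the original segment
    return new_id, new_extents
```

Unreferenced (e.g. overwritten or deleted) allocation units are simply not copied, so the new segment reclaims their space.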
CONTENT-BASED DYNAMIC HYBRID DATA COMPRESSION
An information handling system includes a processor configured to process a training data file to determine an optimal data compression algorithm. The processor may also perform a compression ratio analysis that includes compressing the training data file using data compression algorithms, calculating a compression ratio associated with each of the data compression algorithms, determining an optimal compression ratio from the compression ratios associated with each of the data compression algorithms, and determining a desirable data compression algorithm associated with the training data file based on the optimal compression ratio. The processor may also perform a probability analysis that includes generating a symbol transition matrix based on the desirable data compression algorithm, extracting statistical feature data based on the symbol transition matrix, and generating probability matrices based on the statistical feature data to determine the optimal data compression algorithm for each segment of a working data file.
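The compression ratio analysis described above can be sketched directly with standard-library compressors. The three candidate algorithms here (zlib, bz2, lzma) are stand-ins chosen for illustration; the patent does not name specific algorithms.

```python
import bz2
import lzma
import zlib

# candidate compressors standing in for the patent's algorithm set
CANDIDATES = {
    "zlib": zlib.compress,
    "bz2": bz2.compress,
    "lzma": lzma.compress,
}

def best_algorithm(training_data: bytes):
    """Compress the training data with each candidate algorithm,
    compute each compression ratio (original size / compressed size),
    and return the algorithm achieving the optimal (highest) ratio."""
    ratios = {name: len(training_data) / len(fn(training_data))
              for name, fn in CANDIDATES.items()}
    best = max(ratios, key=ratios.get)
    return best, ratios
```

The selected algorithm then seeds the probability analysis, which the patent uses to pick a compressor per segment of the working file rather than one compressor for the whole file.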
SYSTEM AND METHOD FOR FILE SYSTEM METADATA FILE REGION SEGMENTATION FOR DEDUPLICATION
A method for managing file based backups (FBBs) includes obtaining, by a backup agent, a backup request for a FBB, in response to the backup request, generating a FBB, generating a FBB metadata file corresponding to the FBB, wherein the FBB metadata file comprises a set of attribute regions, performing, using the set of attribute regions, a deduplication on the FBB metadata file to obtain a deduplicated FBB metadata file, and storing the deduplicated FBB metadata file in a backup storage system.
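Region-based deduplication of the metadata file can be sketched as fingerprinting each attribute region and storing identical regions only once. The SHA-256 fingerprint and the store/recipe split below are common deduplication conventions assumed for illustration, not details taken from the patent.

```python
import hashlib

def deduplicate_regions(regions):
    """Deduplicate a FBB metadata file split into attribute regions:
    each region is fingerprinted, identical regions are stored once,
    and the file is represented as an ordered list of fingerprints."""
    store = {}   # fingerprint -> unique region bytes
    recipe = []  # ordered fingerprints reconstructing the file
    for region in regions:
        fp = hashlib.sha256(region).hexdigest()
        store.setdefault(fp, region)
        recipe.append(fp)
    return store, recipe

def restore(store, recipe):
    """Rebuild the original metadata file from the deduplicated form."""
    return b"".join(store[fp] for fp in recipe)
```

Because backup metadata repeats heavily across files (owners, permissions, timestamps), segmenting by attribute region exposes many identical regions that whole-file hashing would miss.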
Concurrent computations operating on same data for CPU cache efficiency
Techniques for CPU cache efficiency may include performing concurrent processing, such as for first and second data operations, in a synchronized manner that prevents loading the same data chunk into the CPU cache more than once. Processing may include synchronizing the first and second data operations with respect to a first data chunk to ensure that processing of both the first and second data operations has completed prior to performing such processing on a second data chunk. The first and second data operations may be any two of deduplication, encryption, and compression, performed inline as part of the data path. In one embodiment, the first and second data operations for the first data chunk may be performed in parallel or sequentially, where neither data operation proceeds with another data chunk until processing of both operations is complete for the first data chunk.
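The chunk-level synchronization can be sketched with a barrier: two threads each apply their operation to the current chunk, and neither advances until both have finished, so the chunk is resident in cache exactly once. Here a SHA-256 digest stands in for deduplication fingerprinting and zlib for compression; both choices are illustrative assumptions.

```python
import hashlib
import threading
import zlib

def process_chunks(chunks):
    """Run two inline data operations over the same chunks concurrently,
    using a barrier so neither thread moves to the next chunk until both
    have completed the current one (keeping the chunk cache-hot)."""
    barrier = threading.Barrier(2)
    digests, compressed = [], []

    def dedup_op():
        for chunk in chunks:
            digests.append(hashlib.sha256(chunk).hexdigest())
            barrier.wait()  # wait until compression also finishes this chunk

    def compress_op():
        for chunk in chunks:
            compressed.append(zlib.compress(chunk))
            barrier.wait()  # wait until fingerprinting also finishes this chunk

    t1 = threading.Thread(target=dedup_op)
    t2 = threading.Thread(target=compress_op)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return digests, compressed
```

In a real data path the barrier would be per-chunk bookkeeping inside the I/O pipeline rather than OS threads, but the invariant is the same: one chunk fully processed by both operations before the next is loaded.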
File layer to block layer communication for block organization in storage
A method performed by a block-storage server, of storing data is described. The method includes (1) receiving, from a remote file server, data blocks to be written to persistent block storage managed by the block-storage server; (2) receiving, from the remote file server, metadata describing a placement of the data blocks in a filesystem managed by the remote file server; and (3) organizing the data blocks within the persistent block storage based, at least in part, on the received metadata. An apparatus, system, and computer program product for performing a similar method are also provided.
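Step (3), organizing blocks using the file server's placement metadata, amounts to laying out blocks so that those belonging to the same file sit adjacently in persistent storage. The sketch below assumes a simple (file id, offset) placement record per block; the actual metadata format is not specified in the abstract.

```python
def organize_blocks(blocks, placement):
    """Order received data blocks so that blocks the remote file server
    says belong to the same file are stored adjacently, in file order.

    blocks: dict block_id -> bytes, as received from the file server
    placement: dict block_id -> (file_id, offset_in_file), the
               filesystem metadata describing each block's placement
    Returns an ordered list of block ids as the on-disk layout.
    """
    return sorted(blocks, key=lambda b: placement[b])
```

Grouping by file keeps sequential reads of a file sequential on disk even though the block-storage server never sees the filesystem itself.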
Deduplication-adapted CaseDB for edge computing
Disclosed is a data deduplication method for an edge computer. The method is performed in a key-value store, and may include receiving a compaction request issued from the key-value store to a metadata layer, checking whether deduplication for removing duplicated data is required when compaction of a metadata file is performed in response to the received compaction request, and removing the duplicated data when the check determines that deduplication is required.
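The check-then-deduplicate flow during compaction can be sketched as follows: merge the metadata files, estimate the duplicate fraction from value fingerprints, and only rewrite values as references when that fraction crosses a threshold. The 20% threshold, the SHA-256 fingerprint, and the dict-based file format are illustrative assumptions.

```python
import hashlib

def compact_with_dedup(metadata_files, dedup_threshold=0.2):
    """Merge key-value metadata files during compaction; before writing
    the merged file, check whether deduplication is required (duplicate
    fraction above a threshold) and, if so, replace duplicate values
    with fingerprints referencing a shared value store."""
    merged = {}
    for f in metadata_files:  # later files win, as in LSM-style compaction
        merged.update(f)
    fingerprints = [hashlib.sha256(v).hexdigest() for v in merged.values()]
    dup_fraction = 1 - len(set(fingerprints)) / len(fingerprints)
    if dup_fraction < dedup_threshold:
        return merged, None   # deduplication not required; write as-is
    store, result = {}, {}
    for k, v in merged.items():
        fp = hashlib.sha256(v).hexdigest()
        store.setdefault(fp, v)
        result[k] = fp        # key now references the shared value
    return result, store
```

Skipping deduplication when the duplicate fraction is low matters on edge hardware, where the hashing cost can outweigh the space saved.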
ENCODING LIDAR SCANNED DATA FOR GENERATING HIGH DEFINITION MAPS FOR AUTONOMOUS VEHICLES
Embodiments relate to methods for efficiently encoding sensor data captured by an autonomous vehicle and building a high definition map using the encoded sensor data. The sensor data can be LiDAR data expressed as multiple image representations. Image representations that include important LiDAR data undergo a lossless compression, while image representations whose LiDAR data is more error-tolerant undergo a lossy compression. The compressed sensor data can then be transmitted to an online system for building a high definition map. When building a high definition map, entities such as road signs and road lines are constructed such that, when encoded and compressed, the high definition map consumes less storage space. The positions of entities are expressed in relation to a reference centerline in the high definition map. As a result, each position of an entity can be expressed in fewer numerical digits than with conventional methods.
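The centerline-relative encoding can be illustrated with a small sketch: each entity position is stored as the index of its nearest centerline sample plus a small residual offset, which needs far fewer digits than an absolute coordinate. Storing the residual as integer centimeters is an assumption made here for the example; the patent does not fix a unit.

```python
def encode_positions(positions, centerline):
    """Express each entity position as (centerline index, dx, dy), where
    dx/dy are small offsets in integer centimeters from the nearest
    centerline sample, instead of as absolute coordinates."""
    encoded = []
    for x, y in positions:
        # pick the nearest centerline sample by squared distance
        idx = min(range(len(centerline)),
                  key=lambda i: (centerline[i][0] - x) ** 2
                              + (centerline[i][1] - y) ** 2)
        cx, cy = centerline[idx]
        encoded.append((idx, round((x - cx) * 100), round((y - cy) * 100)))
    return encoded

def decode_positions(encoded, centerline):
    """Recover absolute positions from centerline-relative offsets."""
    return [(centerline[i][0] + dx / 100, centerline[i][1] + dy / 100)
            for i, dx, dy in encoded]
```

An entity at (10.05, -0.20) near centerline sample (10.0, 0.0) is stored as the three small integers (1, 5, -20) rather than two multi-digit floating-point coordinates.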
HYBRID FILE COMPRESSION MODEL
An archive file that includes an archive start point and an archive end point is received to be segmented and compressed. A first set of compression start points to segment the archive file according to a first function and a second set of compression start points to partition the archive file according to a second function are created. The first set of compression start points and the second set of compression start points are combined to create a set of merged compression start points that partition the archive file into portions between the archive start point and the archive end point. Each portion between the archive start point and the archive end point is compressed to create a compressed archive file.
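The merge-and-compress flow above can be sketched briefly: take the union of the two start-point sets, cut the archive at the merged points, and compress each portion independently. The abstract does not name the two functions (a fixed stride and a content-defined chunker are plausible candidates) or the compressor; zlib is used here purely for illustration.

```python
import zlib

def hybrid_compress(archive: bytes, points_a, points_b):
    """Merge two sets of compression start points, partition the archive
    at the merged points (bounded by its start and end), and compress
    each portion independently."""
    cuts = sorted(set(points_a) | set(points_b) | {0, len(archive)})
    portions = [archive[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
    return [zlib.compress(p) for p in portions]

def hybrid_decompress(compressed_portions):
    """Concatenate the independently decompressed portions."""
    return b"".join(zlib.decompress(p) for p in compressed_portions)
```

Because each portion is compressed on its own, any single portion can later be decompressed without touching the rest of the archive.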