G06F16/9017

MULTIPLE VIRTUAL NAMESPACES ON A SINGLE PHYSICAL NAMESPACE TO AVOID FILE SYSTEM RESTARTS AND IMPROVE AVAILABILITY
20230025994 · 2023-01-26

One example method includes defining a physical namespace, determining a number of virtual namespaces, virtualizing the physical namespace by defining the virtual namespaces on the physical namespace, and generating a modified lookup key that is a function of a name of one of the virtual namespaces. The modified lookup key may be moved between virtual namespaces without requiring interruption of a backup or restore process, and without requiring an associated file system to be taken offline. Movement of the modified lookup key may be transparent to a user and may permit preservation of scripts that were in place prior to the move.
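The key idea above can be sketched in a few lines: derive the modified lookup key from the virtual namespace name so that a key can be "moved" between virtual namespaces by rewriting only its namespace component. This is a minimal illustration, not the patented implementation; the `":"` delimiter and function names are assumptions.

```python
def make_lookup_key(vns_name: str, original_key: str) -> str:
    # Hypothetical scheme: the modified key is a function of the
    # virtual namespace name, so entries from different virtual
    # namespaces cannot collide in the shared physical namespace.
    return f"{vns_name}:{original_key}"

def move_key(key: str, target_vns: str) -> str:
    # Moving a key between virtual namespaces rewrites only the
    # namespace component; the underlying data is untouched, so
    # no file system restart is needed.
    _, original = key.split(":", 1)
    return make_lookup_key(target_vns, original)
```

Because only the key changes, a backup script that addresses objects by their original names can keep working after the move.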

METHOD, APPARATUS, ELECTRONIC DEVICE, AND MEDIUM FOR FILE MANAGEMENT
20230229626 · 2023-07-20

File management is enabled for large numbers of files. Example file management includes setting a grouping identifier for a file. The method further includes determining a storage address of a data block of the file, the storage address indicating an extent where the data block is located and an offset. The method further includes setting, in a storage region of the extent corresponding to the grouping identifier, a flag for the data block based on the offset. In this manner, a large number of files in a distributed file system can be managed more efficiently.
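The flag-setting step described above can be pictured as a per-extent bitmap indexed by the block's offset, with one bitmap per (grouping identifier, extent) pair. The sketch below assumes such a bitmap layout; the class and method names are illustrative, not from the patent.

```python
class ExtentFlags:
    """Per-extent flag bitmaps, keyed by grouping identifier (hypothetical layout)."""

    def __init__(self):
        # (group_id, extent_id) -> int used as a bitmap over block offsets
        self.bitmaps = {}

    def set_flag(self, group_id, extent_id, offset):
        # Set the flag for the data block at `offset` within the extent,
        # in the storage region corresponding to the grouping identifier.
        key = (group_id, extent_id)
        self.bitmaps[key] = self.bitmaps.get(key, 0) | (1 << offset)

    def has_flag(self, group_id, extent_id, offset):
        return bool((self.bitmaps.get((group_id, extent_id), 0) >> offset) & 1)
```

Grouping many per-block flags into one bitmap per extent is what makes scanning large numbers of files cheap: a whole extent's state is read in one word-sized operation per group.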

Online trained object property estimator
11561983 · 2023-01-24

This disclosure describes systems and methods for using an estimator to produce values for dependent variables of streaming objects based on values of independent variables of the objects. The systems and methods may include continuously tuning the estimator based on any objects received with pre-populated values for the dependent variables.
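One simple way to realize "continuous tuning" of such an estimator is an incremental gradient step taken whenever a streaming object arrives with its dependent value already populated. The linear model and SGD update below are a stand-in chosen for brevity; the patent does not specify this particular estimator.

```python
class OnlineEstimator:
    """Linear estimator tuned online with SGD (illustrative, not the patented model)."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Estimate the dependent variable from the independent variables.
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def observe(self, x, y):
        # Called only for objects that arrive with a pre-populated
        # dependent value y: one gradient step on the squared error.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

Objects without a pre-populated dependent value are only predicted on, never trained on, so the estimator improves opportunistically as labeled objects stream in.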

Systems and methods for multiresolution priority queues
11704153 · 2023-07-18

A system for storing and extracting elements according to their priority takes into account not only the priorities of the elements but also three additional parameters, namely, a priority resolution p_Δ and two priority limits p_min and p_max. By allowing an ordering error when the difference between element priorities is within the priority resolution, an improvement in performance is achieved.
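The performance gain can be illustrated with a bucket-based queue: priorities within the same resolution-p_Δ bucket are treated as equal (FIFO within a bucket), which makes push O(1) instead of the O(log n) of an exactly ordered heap. This is a sketch of the idea under that bucketing assumption, not the patented data structure.

```python
class MultiResolutionPQ:
    """Priority queue that tolerates ordering error within a resolution
    p_delta, for priorities in [p_min, p_max) (illustrative sketch)."""

    def __init__(self, p_min, p_max, p_delta):
        self.p_min, self.p_delta = p_min, p_delta
        n = round((p_max - p_min) / p_delta)
        self.buckets = [[] for _ in range(n)]  # FIFO within each bucket
        self.size = 0

    def push(self, priority, item):
        # O(1): drop the item into its resolution bucket.
        idx = int((priority - self.p_min) / self.p_delta)
        self.buckets[idx].append(item)
        self.size += 1

    def pop(self):
        # Return an item from the lowest nonempty bucket; items whose
        # priorities differ by less than p_delta may come out reordered.
        for b in self.buckets:
            if b:
                self.size -= 1
                return b.pop(0)
        raise IndexError("empty queue")
```

A production version would track the lowest nonempty bucket to make pop O(1) as well; the linear scan here keeps the sketch short.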

Monitoring asset hierarchies based on asset group metrics

An asset monitoring and reporting system (AMRS) implements an interface to establish an asset hierarchy to be monitored and reported against. The interface employs a search query of extant asset data from which definitional aspects of the asset hierarchy can be identified, and from these the interface automatically determines control information reflective of the asset hierarchy to direct the ongoing operation of the AMRS. The interface further allows for configuration of a metric definition for a metric of an asset node of the asset hierarchy, the metric representing a point in time or a period of time and derived from a metric-time search of machine data produced by or about the asset node. The interface also receives an identification of a metric determination specification for the metric definition, the specification comprising at least an identification of a metric component and an identification of a calculation operation to apply to that component.
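The last element above, a metric determination specification pairing a metric component with a calculation operation, can be sketched as a tiny evaluator over machine-data events. The event shape, component names, and operation table below are assumptions for illustration.

```python
# Calculation operations a metric determination specification may name
# (an illustrative subset, not the AMRS's actual operation set).
STAT_FUNCS = {
    "avg": lambda vals: sum(vals) / len(vals),
    "max": max,
    "count": len,
}

def evaluate_metric(events, metric_component, calculation):
    """Apply the named calculation operation to one component of the
    machine data produced by or about an asset node."""
    values = [e[metric_component] for e in events if metric_component in e]
    return STAT_FUNCS[calculation](values)
```

A metric definition would additionally scope `events` to a point in time or a period of time via the metric-time search before the calculation runs.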

Performing multiple point table lookups in a single cycle in a system on chip

In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller when performing dynamic region-based data movement operations.
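A two-point lookup, as named above, fetches two neighboring table entries so a fractional index can be interpolated; the two-by-two variant does the same in two dimensions. The software analogue below shows what the single-cycle hardware operation computes (the interpolation step is an assumption about how the two fetched points are typically used).

```python
def two_point_lookup(table, x):
    """Software analogue of a two-point table lookup: fetch the two
    neighboring entries around a fractional index and linearly
    interpolate. In the SoC this pair of fetches happens in one cycle."""
    i = int(x)          # integer part selects the first entry
    frac = x - i        # fractional part weights the two entries
    return table[i] * (1.0 - frac) + table[i + 1] * frac
```

Per-bank load caching and the memory layout described above are what let both entries be fetched without a bank conflict.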

Offload of data lookup operations

A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare signatures against the generated signature, retrieve a key associated with the signature, and apply the comparator to compare the key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the results queue. Acceleration of operations in tree traversal and tuple search can also occur.
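The two-stage hash lookup described above, filtering on a short signature before doing the full key comparison, can be sketched as follows. The signature width and table layout are assumptions; the point is that most non-matching slots are rejected by the cheap signature compare alone.

```python
class SignatureHashTable:
    """Sketch of the offload engine's two-stage lookup: compare short
    signatures first, then confirm with a full key comparison."""

    def __init__(self):
        self.slots = []  # list of (signature, key, data_pointer)

    @staticmethod
    def signature(key):
        # Hash the key down to a short signature (8 bits here, arbitrarily).
        return hash(key) & 0xFF

    def insert(self, key, data_pointer):
        self.slots.append((self.signature(key), key, data_pointer))

    def lookup(self, key):
        sig = self.signature(key)
        for s, k, ptr in self.slots:
            # Cheap signature filter first; full key compare only on a hit.
            if s == sig and k == key:
                return ptr  # data pointer goes to the results queue
        return None
```

In hardware the signature comparisons run in parallel across a cache line of slots, which is why associating the engine with the last-level cache pays off.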

Information processing apparatus, information processing method, and storage medium

An information processing apparatus includes an obtaining unit configured to obtain information on a plurality of labels applied to learning data by a plurality of users, information regarding the reliability of each applied label itself, and information regarding the reliability of the user who applied the label, where the label information concerns the result to be recognized when a predetermined recognition is performed on the learning data. The apparatus further includes a determination unit configured to determine a label for the learning data from among the plurality of labels based on the reliability of each label itself and the reliability of the user who applied it.
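One natural way to combine the two reliabilities is a weighted vote, with each applied label scored by the product of its own reliability and its user's reliability. The combination rule below is an assumption for illustration; the patent only requires that both reliabilities inform the determination.

```python
def determine_label(applied_labels):
    """Determine a label for the learning data from labels applied by
    multiple users. Each entry is (label, label_reliability,
    user_reliability); the product-weighted vote here is illustrative."""
    scores = {}
    for label, label_rel, user_rel in applied_labels:
        scores[label] = scores.get(label, 0.0) + label_rel * user_rel
    return max(scores, key=scores.get)
```

For example, two moderately reliable "cat" annotations can outweigh one confident "dog" annotation, which is the behavior a per-annotation reliability model is meant to capture.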

Techniques and apparatus for coarse granularity scalable lifting for point-cloud attribute coding

A method, computer system, and computer-readable medium are provided for point cloud attribute coding by at least one processor. Data associated with a point cloud is received. The received data is transformed through a lifting decomposition based on enabling a scalable coding of attributes associated with the lifting decomposition. The point cloud is reconstructed based on the transformed data.
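A lifting decomposition splits a signal into a coarse approximation plus detail coefficients via predict and update steps, and scalability comes from decoding only the coarse levels. The Haar-style one-level lifting below is a simplified stand-in for the attribute transform described above, operating on a 1D list of attribute values rather than a real point cloud.

```python
def lifting_forward(values):
    """One level of a lifting decomposition (Haar-style predict/update)
    on a list of attribute values; a stand-in for the patented transform."""
    even, odd = values[0::2], values[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def lifting_inverse(approx, detail):
    """Reconstruct the attribute values; dropping `detail` instead
    yields a coarse (scalable) reconstruction from `approx` alone."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Applying `lifting_forward` recursively to the approximation gives the multi-level decomposition whose truncation points define the coarse-granularity scalability layers.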

Using delayed autocorrelation to improve the predictive scaling of computing resources

Techniques are described for filtering and normalizing training data used to build a predictive auto scaling model used by a service provider network to proactively scale users' computing resources. Further described are techniques for identifying collections of computing resources that exhibit suitably predictable usage patterns such that a predictive auto scaling model can be used to forecast future usage patterns with reasonable accuracy and to scale the resources based on such generated forecasts. The filtering of training data and the identification of suitably predictable collections of computing resources are based in part on autocorrelation analyses, and in particular on “delayed” autocorrelation analyses, of time series data, among other techniques described herein.
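The autocorrelation test at the heart of this approach is simple: compute the correlation of the usage series with itself at a delayed lag (for example, one day of data points) and treat a high value as evidence of a predictable pattern. The threshold and lag choice below are assumptions for illustration.

```python
def autocorrelation(series, lag):
    """Autocorrelation of a usage time series at a given lag. A high
    value at a 'delayed' lag (e.g., one day back) suggests the
    resource's usage is predictable enough for forecast-based scaling."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var
```

A daily-periodic CPU trace scores near 1.0 at a one-day lag and near 0.0 at unrelated lags, so thresholding this value can both filter training data and gate which resource collections get predictive scaling.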