Patent classifications
G06F9/3891
SYSTEMS AND METHODS FOR ESTABLISHING AND MANAGING FAST DATA CHANNELS AMONG MODERN WORKSPACES
Systems and methods for establishing and managing fast data channels among modern workspaces are described. In an embodiment, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: detect access by a first workspace and a second workspace to an IHS resource, and establish a fast data channel between the first and second workspaces, at least in part, based upon context information.
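The claimed flow — detect two workspaces touching the same resource, then wire a channel between them gated on context — can be sketched as below. All names (`ChannelManager`, `record_access`, the `trusted` context flag) are illustrative assumptions, not terms from the patent.

```python
# Hypothetical sketch: watch which workspaces access a shared IHS resource
# and, based on context information, establish a fast data channel between
# the first two workspaces that share it.

class ChannelManager:
    def __init__(self):
        self._accessors = {}   # resource id -> workspace ids seen so far
        self.channels = []     # established (workspace_a, workspace_b) pairs

    def record_access(self, workspace_id, resource_id, context):
        """Detect access by a workspace to an IHS resource."""
        seen = self._accessors.setdefault(resource_id, [])
        if workspace_id not in seen:
            seen.append(workspace_id)
        # Establish the channel once two distinct workspaces share the
        # resource, gated on context (here: a trust flag).
        if len(seen) >= 2 and context.get("trusted", False):
            pair = (seen[0], seen[1])
            if pair not in self.channels:
                self.channels.append(pair)
        return self.channels

mgr = ChannelManager()
mgr.record_access("ws1", "clipboard", {"trusted": True})
result = mgr.record_access("ws2", "clipboard", {"trusted": True})
```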
Systems and methods related to resource distribution for a fleet of machines
Systems and methods related to resource distribution for a fleet of machines are disclosed. A system may include a fleet of machines, each having an associated resource capacity and a resource requirement to perform a task. The system may further include a controller having a resource requirement circuit to determine an aggregated amount of the resource requirement and an aggregated amount of the resource capacity. A resource distribution circuit may adaptively improve, in response to the aggregated amount of the resource capacity, an aggregated resource delivery of the resource.
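One simple reading of the aggregation step is sketched below: sum requirements and capacities across the fleet, then scale each machine's delivery down proportionally when aggregate capacity falls short of aggregate demand. The function name and the proportional-scaling rule are our assumptions, not the patent's.

```python
# Illustrative sketch: aggregate the fleet's resource requirements and
# capacities, then adapt per-machine delivery to the aggregate capacity.

def distribute(requirements, capacities):
    total_req = sum(requirements)    # aggregated resource requirement
    total_cap = sum(capacities)      # aggregated resource capacity
    if total_req == 0:
        return [0.0] * len(requirements)
    # Deliver in full when capacity suffices, else scale adaptively.
    scale = min(1.0, total_cap / total_req)
    return [r * scale for r in requirements]

deliveries = distribute([10, 20, 30], [15, 15, 15])  # 45 capacity vs 60 demand
```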
SYSTEMS, DEVICES, AND METHODS FOR SELECTING A DISTRIBUTED FRAMEWORK
A method of selecting a distributed framework includes identifying, by a selection device coupled to a memory, at least a first cryptographic evaluator of a plurality of cryptographic evaluators, wherein identifying the at least a first cryptographic evaluator further comprises evaluating a secure proof generated by the at least a first cryptographic evaluator and identifying the at least a first cryptographic evaluator as a function of the secure proof, assigning, by the selection device, a confidence level of the at least a first cryptographic evaluator, selecting, by the selection device, a distributed framework from the plurality of cryptographic evaluators as a function of the confidence level, and assigning a task to the distributed framework.
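The selection loop — verify each evaluator's secure proof, assign a confidence level, pick the highest-confidence one — can be sketched as follows. The patent does not specify the proof scheme, so an HMAC stands in for the secure proof here; all names and the "history" confidence source are our assumptions.

```python
# Hedged sketch of the selection flow, with HMAC as a stand-in secure proof.
import hmac, hashlib

def verify_proof(key, challenge, proof):
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)

def select_framework(evaluators, challenge):
    best, best_conf = None, -1.0
    for ev in evaluators:
        proof = hmac.new(ev["key"], challenge, hashlib.sha256).hexdigest()
        if not verify_proof(ev["key"], challenge, proof):
            continue  # evaluator failed to produce a valid secure proof
        conf = ev["history"]  # confidence level, e.g. past task success rate
        if conf > best_conf:
            best, best_conf = ev["name"], conf
    return best  # the framework the task would be assigned to

evaluators = [
    {"name": "eval-a", "key": b"ka", "history": 0.7},
    {"name": "eval-b", "key": b"kb", "history": 0.9},
]
chosen = select_framework(evaluators, b"challenge-1")
```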
APPLICATION PROGRAMMING INTERFACE TO CONFIGURE PROCESSOR PARTITIONING
Apparatuses, systems, and techniques to configure processor partitioning for a multi-process service. In at least one embodiment, a multi-process service configures a set of streaming multiprocessors of one or more parallel processing units to perform one or more threads in response to an application programming interface (API).
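A real-world analogue of this claim is the CUDA Multi-Process Service (MPS), where a client process can be limited to a fraction of the device's streaming multiprocessors via the `CUDA_MPS_ACTIVE_THREAD_PERCENTAGE` environment variable. The helper below, whose name is ours, only constructs the environment for such a launch and needs no GPU; it is a sketch of the partitioning mechanism, not the patented API.

```python
# Sketch: cap an MPS client's SM usage before launching it.
import os

def mps_client_env(sm_percentage, base_env=None):
    env = dict(base_env if base_env is not None else os.environ)
    # Limit this client's threads to a percentage of the device's SMs.
    env["CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"] = str(sm_percentage)
    return env

env = mps_client_env(50, base_env={})
```

The returned mapping would be passed as the `env=` argument of `subprocess.Popen` when starting the client process.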
Computer Vision Systems and Methods for Modeling Three-Dimensional Structures Using Two-Dimensional Segments Detected in Digital Aerial Images
A system for modeling a three-dimensional structure utilizing two-dimensional segments comprising a memory and a processor in communication with the memory. The processor extracts a plurality of two-dimensional segments corresponding to the three-dimensional structure from a plurality of images indicative of different views of the three-dimensional structure. The processor determines a plurality of three-dimensional candidate segments based on the extracted plurality of two-dimensional segments and adds the plurality of three-dimensional candidate segments to a three-dimensional segment cloud. The processor transforms the three-dimensional segment cloud into a wireframe indicative of the three-dimensional structure by performing a wireframe extraction process on the three-dimensional segment cloud.
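The final wireframe-extraction step can be illustrated with a much-simplified sketch of our own construction (not the patent's algorithm): collapse a cloud of 3D candidate segments into a wireframe by snapping endpoints that lie within a tolerance onto shared vertices.

```python
# Simplified wireframe extraction: merge near-duplicate segment endpoints
# into shared vertices and deduplicate the resulting edges.

def segments_to_wireframe(segments, tol=0.1):
    vertices, edges = [], set()

    def vertex_id(p):
        for i, v in enumerate(vertices):
            if all(abs(a - b) <= tol for a, b in zip(p, v)):
                return i
        vertices.append(p)
        return len(vertices) - 1

    for a, b in segments:
        i, j = vertex_id(a), vertex_id(b)
        if i != j:
            edges.add((min(i, j), max(i, j)))
    return vertices, sorted(edges)

# Two noisy duplicates of the same roof edge collapse to a single edge.
verts, edges = segments_to_wireframe([
    ((0, 0, 3), (4, 0, 3)),
    ((0.05, 0, 3), (4.02, 0.01, 3)),
])
```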
Hypervisor and container placement and cost optimization utilizing machine learning
According to some embodiments, an automated provisioning system may receive a customer demand associated with an application to be executed in a cloud-based computing environment. The automated provisioning system may include a process allocator to communicate with Virtual Machine (“VM”) and container provisioners and determine cluster data. A machine learning based microservice setup platform, coupled to the automated provisioning system, may receive the cluster data and information about the customer demand. The machine learning based microservice setup platform may then execute policy rules based on the cluster data (and information about the customer demand) and generate a recommendation for the customer demand. The automated provisioning system may then assign the customer demand to one of a VM-based infrastructure and a container-based infrastructure in accordance with the recommendation generated by the machine learning based microservice setup platform.
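The "execute policy rules, then recommend VM-based or container-based infrastructure" step might look like the sketch below. The rule contents are assumptions of ours for illustration; the patent leaves the rules to the machine-learning platform.

```python
# Illustrative policy rules mapping a customer demand plus cluster data to
# a VM-vs-container recommendation.

def recommend(demand, cluster):
    # Rule 1: stateful or license-bound legacy apps go to VMs.
    if demand.get("stateful") or demand.get("legacy_license"):
        return "vm"
    # Rule 2: replicated microservices go to containers when the container
    # cluster has spare CPU capacity for them.
    if (demand.get("replicas", 1) > 1
            and cluster.get("container_free_cpus", 0) >= demand.get("cpus", 1)):
        return "container"
    # Fallback: place where more capacity is free.
    if cluster.get("vm_free_cpus", 0) >= cluster.get("container_free_cpus", 0):
        return "vm"
    return "container"

rec = recommend({"replicas": 4, "cpus": 2},
                {"container_free_cpus": 8, "vm_free_cpus": 3})
```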
Two-server privacy-preserving clustering
Described herein are systems and techniques for privacy-preserving unsupervised learning. The disclosed systems and methods can enable separate computers, operated by separate entities, to perform unsupervised learning jointly based on a pool of their respective data, while preserving privacy. The system improves efficiency and scalability while preserving privacy and avoiding leakage of a cluster identification. The system can jointly compute a secure distance via privacy-preserving multiplication of respective data values x and y from the computers based on a 1-out-of-N oblivious transfer (OT). In various embodiments, N may be 2, 4, or some other number of shares. A first computer can express its data value x in base-N, with digits x_i. A second computer can form a matrix with N columns whose column-0 entries are random numbers m_{i,0} and whose remaining elements are m_{i,j} = (y · j · N^i − m_{i,0}) mod …. The first computer can receive an output vector from the OT, having components m_i = (y · x_i · N^i − m_{i,0}) mod ….
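The share arithmetic can be checked numerically: if the second computer keeps the sum of its masks m_{i,0} and the first computer sums the OT outputs m_i, the two shares add up to x·y. The modulus q below is our choice (the abstract elides it), and the OT itself is simulated by directly computing the selected column.

```python
# Numeric check of the base-N multiplication shares described above.
import random

def secret_product_shares(x, y, N=4, q=2**32):
    digits = []
    while x:
        digits.append(x % N)   # base-N digits x_i, least significant first
        x //= N
    first_share, second_share = 0, 0
    for i, x_i in enumerate(digits):
        m_i0 = random.randrange(q)           # second computer's random mask
        m_i = (y * x_i * N**i - m_i0) % q    # what the OT reveals for column x_i
        first_share = (first_share + m_i) % q
        second_share = (second_share + m_i0) % q
    return first_share, second_share

a, b = secret_product_shares(57, 123)
product = (a + b) % 2**32   # shares recombine to x * y (mod q)
```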
AUTOMATED DISCOVERY OF DATABASES
In some examples, a networked computing system comprises a backup node cluster of a backup service in communication with a host database node cluster of a host, a host database at least initially undiscovered by the backup node cluster, one or more processors coupled with memory storing instructions that, when executed, perform operations comprising at least installing a backup agent on at least one node of the host database node cluster, registering the host at the backup service, based on the host registration, triggering a host database discovery process to discover the undiscovered database automatically, the discovery process including a discovery call, in response to the discovery call, receiving metadata relating to the discovered database, and communicating with the discovered database.
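The registration-triggered discovery flow might be sketched as below; the class, method names, and metadata shape are all hypothetical, invented here for illustration.

```python
# Hypothetical sketch: registering a host at the backup service triggers the
# discovery call, which returns metadata about the previously undiscovered
# database.

class BackupService:
    def __init__(self):
        self.hosts = {}       # host name -> discovered database metadata

    def install_agent(self, node):
        node["agent"] = True  # backup agent installed on a cluster node

    def register_host(self, host):
        # Registration triggers the discovery process automatically.
        metadata = self._discovery_call(host)
        self.hosts[host["name"]] = metadata
        return metadata

    def _discovery_call(self, host):
        # The agent answers the discovery call with database metadata.
        return {"databases": [db["name"] for db in host.get("databases", [])]}

svc = BackupService()
host = {"name": "pg-cluster", "databases": [{"name": "orders"}]}
svc.install_agent(host)
meta = svc.register_host(host)
```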
EXECUTING MULTIPLE PROGRAMS SIMULTANEOUSLY ON A PROCESSOR CORE
Systems and methods are disclosed for allocating resources to contexts in block-based processor architectures. In one example of the disclosed technology, a processor is configured to spatially allocate resources between multiple contexts being executed by the processor, including caches, functional units, and register files. In a second example of the disclosed technology, a processor is configured to temporally allocate resources between multiple contexts, for example, on a clock cycle basis, including caches, register files, and branch predictors. Each context is guaranteed access to its allocated resources to avoid starvation from contexts competing for resources of the processor. A results buffer can be used for folding larger instruction blocks into portions that can be mapped to smaller-sized instruction windows. The results buffer stores operand results that can be passed to subsequent portions of an instruction block.
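The folding idea in the last two sentences can be sketched with a deliberately simplified model of our own: a large instruction block is split into portions sized to the instruction window, and a results buffer carries operand results produced in one portion to consumers in a later portion.

```python
# Sketch: fold an oversized instruction block into window-sized portions,
# passing operand results between portions through a results buffer.

def fold_block(instructions, window_size):
    portions = [instructions[i:i + window_size]
                for i in range(0, len(instructions), window_size)]
    results_buffer = []
    consumed = []
    for portion in portions:
        for op, value in portion:
            if op == "produce":
                results_buffer.append(value)   # result for later portions
            elif op == "consume":
                consumed.append(results_buffer.pop(0))
    return portions, consumed

# A 4-instruction block folded into two 2-instruction windows: the second
# portion consumes results produced by the first.
block = [("produce", 7), ("produce", 9), ("consume", None), ("consume", None)]
portions, consumed = fold_block(block, window_size=2)
```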
TRAVERSING AN ADJACENCY LIST ON DISTRIBUTED PROCESSORS
A distributed system including multiple processors associated with non-transitory computer-readable media storing computing instructions. The computing instructions, when collectively executed on the multiple processors, cause the multiple processors collectively to perform certain acts. The acts can include executing multiple iterations until a stopping condition is satisfied, by, for each of the multiple iterations: (i) processing a set of input nodes at the multiple processors using a set of criteria to generate first data at the multiple processors, wherein the set of input nodes is different at each of the multiple iterations; (ii) determining a list of output nodes using adjacency rows of an adjacency list at different ones of the multiple processors, such that each output node of the list of output nodes is one hop from a respective input node of the set of the input nodes; and (iii) updating the set of the input nodes for a subsequent iteration of the multiple iterations based on the list of output nodes when the stopping condition is not satisfied. The acts also can include outputting second data based at least in part on the first data. Other embodiments are disclosed.
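A single-machine sketch of the iterative traversal follows; the patent spreads the adjacency rows across multiple processors, while here one dictionary stands in for them, and the criteria function and names are our own.

```python
# Sketch of the claimed loop: filter the input frontier by criteria (first
# data), hop one step through the adjacency list to get output nodes, and
# make those the next iteration's inputs until a stopping condition holds.

def traverse(adjacency, start_nodes, criteria, max_iters=10):
    inputs = set(start_nodes)
    first_data = []
    for _ in range(max_iters):                 # stopping condition: iteration cap
        first_data.extend(n for n in inputs if criteria(n))
        # Each output node is one hop from some input node (its adjacency row).
        outputs = {nbr for n in inputs for nbr in adjacency.get(n, [])}
        if not outputs:                        # stopping condition: empty frontier
            break
        inputs = outputs                       # updated inputs for next iteration
    return sorted(set(first_data))             # second data based on first data

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}
reachable_even = traverse(adj, [1], criteria=lambda n: n % 2 == 0)
```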