G06F2209/509

PROCESSING SYSTEM, AND PROCESSING METHOD

A processing system performs processing using an edge device and a server device, wherein the edge device includes processing circuitry configured to: process processing target data and output a processing result of the processing target data; determine that the server device is to execute processing related to the processing target data when an evaluation value, used to decide which of the edge device and the server device is to process the processing target data, satisfies a condition; determine that the evaluation value falls within a range indicating that processing is to be executed by the edge device when the processing result of the processing target data satisfies a predetermined evaluation, and in that case output the processed result; and transmit data that causes the server device to execute the processing related to the processing target data when determining that the server device is to execute the processing.
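The edge/server decision loop described above can be sketched in Python. The function names, the 0.9 quality threshold, and the concrete edge range are illustrative assumptions, not values from the abstract:

```python
# Hedged sketch of the evaluation-value decision: if the edge's last result met a
# predetermined quality bar, the evaluation value stays in the "edge" range and
# further data is processed locally; otherwise processing goes to the server.

EDGE_RANGE = (0.0, 0.5)  # assumed range: evaluation values here keep work on the edge

def evaluate(result_quality: float) -> float:
    """Map a processing result's quality to an evaluation value (illustrative)."""
    return 0.2 if result_quality >= 0.9 else 0.8

def choose_executor(evaluation_value: float) -> str:
    """Return which device should process the next processing target data."""
    lo, hi = EDGE_RANGE
    return "edge" if lo <= evaluation_value <= hi else "server"
```

A caller would run `choose_executor(evaluate(q))` after each edge-side result to decide whether to transmit the data onward to the server.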

PROCESSING CHAINING IN VIRTUALIZED NETWORKS

To dynamically allow chaining of logical processing units, endpoints are configured with at least an endpoint type and the address information to which the endpoint connects, wherein the endpoint type is either a host port type or a logical processing unit port type. When one or more functions are offloaded from a central processing unit to at least one further processing unit, the central processing unit interacts with the one or more logical processing units via endpoints of the host port type, and logical processing units interact with one another via endpoints of the logical processing unit port type, the interaction using the configured address information.
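The two endpoint types and address-based chaining can be illustrated with a minimal Python sketch; the class and field names are hypothetical, not from the abstract:

```python
# Hedged sketch: a logical processing unit carries endpoints of two types.
# Host-port endpoints face the CPU; LPU-port endpoints link units to each other.
from dataclasses import dataclass

HOST_PORT = "host_port"
LPU_PORT = "lpu_port"

@dataclass
class Endpoint:
    kind: str      # HOST_PORT or LPU_PORT
    address: str   # address information to which this endpoint connects

@dataclass
class LogicalProcessingUnit:
    name: str
    endpoints: list

def chain(units):
    """Connect consecutive units via their LPU-port endpoints,
    returning the (source address, destination address) links."""
    links = []
    for a, b in zip(units, units[1:]):
        src = next(e for e in a.endpoints if e.kind == LPU_PORT)
        dst = next(e for e in b.endpoints if e.kind == LPU_PORT)
        links.append((src.address, dst.address))
    return links
```

The CPU would talk only to `HOST_PORT` endpoints, while `chain` wires the units together through their `LPU_PORT` addresses.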

Composable information handling systems in an open network using access control managers

A method for managing composed information handling systems includes obtaining a composition request for a composed information handling system, making a first determination that a first information handling system is not capable of servicing the composition request locally, and, based on the first determination: allocating an available resource on the first information handling system to the composed information handling system; sending a resource allocation request to a system control processor manager for access to an additional resource; obtaining, in response to the allocation request, a notification granting access to a second information handling system of the information handling systems that provides the additional resource; setting up management services for the available resource and the additional resource to obtain logical hardware resources; and presenting the logical hardware resources to at least one compute resource set as bare metal resources.
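The allocation flow above can be sketched as follows; the function names and the resource-count abstraction are illustrative assumptions:

```python
# Hedged sketch: service a composition request locally where possible, and ask a
# system control processor manager (modeled as a callable) for the remainder.
def compose(request_size, local_free, control_processor_manager):
    """Return the local and remote allocations for a composition request.

    request_size: resources the composed system needs.
    local_free: resources available on the first information handling system.
    control_processor_manager: callable taking the shortfall and returning the
    amount granted from a second information handling system.
    """
    local_alloc = min(request_size, local_free)
    remote_alloc = 0
    if local_alloc < request_size:
        remote_alloc = control_processor_manager(request_size - local_alloc)
    return {"local": local_alloc, "remote": remote_alloc}
```

Both allocations would then be wrapped in management services and presented to the compute resource set as bare metal resources.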

Methods and arrangements for robot device control in a cloud

The present disclosure relates to a first Web server (102, 204, 60, 70) and a second Web server (108, 214, 80, 90), and to methods therein for controlling a robot device over a cloud interface. A hyper-text transfer protocol, HTTP, request for a trajectory between a start position and a goal position is sent (S120, S230, 302, 402) towards the second Web server. One or more calculated trajectories are obtained (S122, 304) based on the information received encoded in the request. An HTTP response comprising the one or more calculated trajectories is sent (306) towards the first Web server. Execution (S126, S266; 308, 406) of a trajectory based at least on the one or more received trajectories is performed by the first Web server (102, 204, 60, 70). A scalable robot device control method is thus proposed, which advantageously reuses stored calculated trajectories between start and goal positions for the robot device.
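The key scalability idea, reusing stored trajectories keyed by start and goal position, can be sketched server-side in Python; the straight-line planner and the store layout are illustrative assumptions:

```python
# Hedged sketch of the second Web server's trajectory lookup: return a stored
# trajectory for a (start, goal) pair, computing and caching one on a miss.
_trajectory_store = {}  # (start, goal) -> list of waypoints

def request_trajectory(start, goal):
    """Return a trajectory between start and goal, reusing stored results."""
    key = (start, goal)
    if key not in _trajectory_store:
        # Placeholder planner: straight-line interpolation in 5 waypoints.
        sx, sy = start
        gx, gy = goal
        _trajectory_store[key] = [(sx + (gx - sx) * t / 4, sy + (gy - sy) * t / 4)
                                  for t in range(5)]
    return _trajectory_store[key]
```

In the disclosure this exchange is carried over HTTP; here the transport is omitted so only the store-and-reuse behavior is shown.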

METHOD AND SYSTEM FOR PERFORMING COMPUTATIONAL OFFLOADS FOR COMPOSED INFORMATION HANDLING SYSTEMS

Techniques described herein relate to a method for performing computational offloads for composed information handling systems. The method includes obtaining, by a system control processor associated with a composed information handling system, a computational offload request associated with a dataset from an application executing on at least one compute resource set; and, in response to obtaining the computational offload request: identifying a dataset location associated with the dataset in the composed information handling system; identifying resources of the composed information handling system capable of performing the computational offload request; selecting a resource of the resources to perform the computational offload; and initiating performance of the computational offload request on the selected resource.
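The resource-selection step can be sketched as below; the dictionary fields and the locality-first policy are illustrative assumptions, as the abstract does not specify the selection criterion:

```python
# Hedged sketch: among resources capable of the offload, prefer one co-located
# with the dataset to avoid moving the data across the composed system.
def select_offload_resource(dataset_location, resources):
    """resources: list of dicts with 'name', 'location', and 'capable' keys."""
    capable = [r for r in resources if r["capable"]]
    if not capable:
        raise RuntimeError("no resource can service the offload request")
    local = [r for r in capable if r["location"] == dataset_location]
    return (local or capable)[0]["name"]
```

The system control processor would then initiate the offload on the returned resource.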

APPARATUS AND METHOD FOR TREE STRUCTURE DATA REDUCTION
20230215091 · 2023-07-06

Apparatus and method for tree structure data reduction. For example, one embodiment of an apparatus comprises: a plurality of compute units; bounding volume hierarchy (BVH) processing logic to update a BVH responsive to changes associated with leaf nodes of the BVH, the BVH processing logic comprising: treelet generation logic to arrange nodes of the BVH into a plurality of treelets, the treelets including a plurality of bottom treelets and a tip treelet, each treelet having a number of nodes selected based on workgroup processing resources of the compute units; a dispatcher to dispatch workgroups to compute units to process the treelets, wherein a separate workgroup comprising a separate plurality of threads is dispatched to process each treelet.
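Sizing treelets to the workgroup can be illustrated crudely in Python. A real treelet builder follows the BVH topology, producing bottom treelets plus a tip treelet at the root; this flat partition of leaf indices only shows the sizing rule and the one-workgroup-per-treelet dispatch, and all names are assumptions:

```python
# Hedged sketch: partition BVH leaf indices into bottom "treelets" whose size
# matches the workgroup, with the remainder standing in for the tip treelet.
def build_treelets(leaf_count, workgroup_size):
    leaves = list(range(leaf_count))
    full = (leaf_count // workgroup_size) * workgroup_size
    bottoms = [leaves[i:i + workgroup_size] for i in range(0, full, workgroup_size)]
    tip = leaves[full:]
    return bottoms, tip
```

A dispatcher would then launch one workgroup (one group of threads) per returned treelet.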

Distributing processing resources across local and cloud-based systems with respect to autonomous navigation

Embodiments herein include a method executable by a processor coupled to a memory. The processor, local to a vehicle, is operable to determine initial location and direction information associated with the vehicle at the origin of a trip request. The processor receives one or more frames captured while the vehicle is traveling along a navigable route relative to the trip request and estimates an execution time for each of one or more computations associated with analyzing the one or more frames. The processor also offloads the one or more computations to processing resources of a cloud-based system that is in communication with the processor of the vehicle, in accordance with the corresponding execution times.
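The execution-time-driven offload can be sketched as a simple partition; the budget threshold policy is an illustrative assumption, since the abstract says only that offloading happens "in accordance with" the estimated times:

```python
# Hedged sketch: keep fast per-frame computations on the vehicle's local
# processor and offload slow ones to the cloud-based system.
def partition_computations(computations, local_budget_s):
    """computations: list of (name, estimated_time_s) pairs.
    Returns (run_locally, offload_to_cloud) lists of names."""
    local, offloaded = [], []
    for name, estimated_time_s in computations:
        (local if estimated_time_s <= local_budget_s else offloaded).append(name)
    return local, offloaded
```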

Application computation offloading for mobile edge computing

Systems, apparatuses, methods, and computer-readable media are provided for offloading computationally intensive tasks from one computer device to another computer device, taking into account, inter alia, energy consumption and latency budgets for both computation and communication. Embodiments may also exploit multiple radio access technologies (RATs) in order to find opportunities to offload computational tasks by taking into account, for example, network/RAT functionalities, processing, offloading coding/encoding mechanisms, and/or differentiating traffic between different RATs. Other embodiments may be described and/or claimed.
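One way to combine the latency budget and energy cost across RATs is sketched below; the rate/energy model and the pick-minimum-energy policy are illustrative assumptions, not the claimed mechanism:

```python
# Hedged sketch: choose the radio access technology that meets the latency
# budget for transmitting the task at the lowest transmit energy.
def pick_rat(task_bits, rats, latency_budget_s):
    """rats: dict of name -> (rate_bps, energy_j_per_bit).
    Returns the chosen RAT name, or None if no RAT meets the budget."""
    feasible = []
    for name, (rate_bps, energy_j_per_bit) in rats.items():
        latency_s = task_bits / rate_bps
        if latency_s <= latency_budget_s:
            feasible.append((task_bits * energy_j_per_bit, name))
    return min(feasible)[1] if feasible else None
```

A fuller model would also add the remote computation time and energy to each candidate before comparing.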

Detecting execution hazards in offloaded operations

Detecting execution hazards in offloaded operations is disclosed. A second offload operation is compared to a first offload operation that precedes the second offload operation. It is determined whether the second offload operation creates an execution hazard on an offload target device based on the comparison of the second offload operation to the first offload operation. If the execution hazard is detected, an error handling operation may be performed. In some examples, the offload operations are processing-in-memory operations.
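The pairwise comparison can be sketched with classic read/write hazard classes; modeling each offload operation as a (kind, address) pair is an illustrative assumption:

```python
# Hedged sketch: compare a second offload operation against the first one that
# precedes it and flag RAW, WAR, and WAW hazards on the same target address.
def detect_hazard(first_op, second_op):
    """Each op is a (kind, address) tuple with kind 'read' or 'write'.
    Returns the hazard class, or None if the ops cannot conflict."""
    k1, a1 = first_op
    k2, a2 = second_op
    if a1 != a2:
        return None  # different addresses on the offload target: no hazard
    if k1 == "write" and k2 == "read":
        return "RAW"
    if k1 == "read" and k2 == "write":
        return "WAR"
    if k1 == "write" and k2 == "write":
        return "WAW"
    return None  # read-after-read is safe
```

On a detected hazard, the caller would invoke an error handling operation instead of issuing the second operation, e.g. to the processing-in-memory device.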

Graphics processing unit systems for performing data analytics operations in data science

Systems and methods are provided for efficiently performing processing-intensive operations, such as those involving large volumes of data, enabling accelerated processing of these operations. In at least one embodiment, a system includes a graphics processing unit (GPU) including a memory and a plurality of cores. The plurality of cores perform a plurality of data analytics operations on a respectively allocated portion of a dataset, each of the plurality of cores using only the memory to store data input for each of the plurality of data analytics operations performed by the plurality of cores. The data storage for the plurality of data analytics operations performed by the plurality of cores is also provided solely by the memory.
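The per-core allocation can be mimicked in plain Python (real GPU kernels would run the slices in parallel with all buffers resident in GPU memory); the chunking scheme and default reduction are illustrative assumptions:

```python
# Hedged sketch: give each (simulated) core one contiguous slice of the dataset
# and apply the same analytics operation to every slice.
def analyze_on_cores(dataset, num_cores, op=sum):
    """Split dataset into up to num_cores contiguous slices and reduce each."""
    n = len(dataset)
    chunk = -(-n // num_cores)  # ceiling division: slice size per core
    slices = [dataset[i:i + chunk] for i in range(0, n, chunk)]
    return [op(s) for s in slices]
```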