Patent classifications
G06T1/60
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
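The logical-to-physical routing the abstract describes can be sketched as a simple page-table lookup. This is an illustrative model only; the class and names (`MemoryController`, `address_map`, the page size) are assumptions, not terms from the patent.

```python
class MemoryController:
    """Toy model: map a logical address to a memory device and physical address."""

    def __init__(self, address_map, devices):
        # address_map: logical page number -> (device_id, physical page number)
        self.address_map = address_map
        self.devices = devices

    def access(self, logical_addr, page_size=4096):
        # Split the logical address into a page and an offset, look up the
        # page's physical location, and route to the owning device.
        page, offset = divmod(logical_addr, page_size)
        device_id, phys_page = self.address_map[page]
        phys_addr = phys_page * page_size + offset
        return self.devices[device_id], phys_addr

devices = {0: "DIMM-0", 1: "DIMM-1"}
ctrl = MemoryController({0: (0, 7), 1: (1, 2)}, devices)
dev, phys = ctrl.access(4096 + 128)  # logical page 1, offset 128
```

Because the map lives in the sled's memory controller rather than in each accelerator, every accelerator device can share the same logical view of the pooled memory.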
MEMORY INTERFACE WITH REDUCED ENERGY TRANSMIT MODE
Pulse-amplitude modulation (PAM) encoding techniques that leverage unused idle periods in channels between data transmissions to apply longer but more energy-efficient codes. To improve energy savings, multiple sparse encoding schemes may be utilized selectively to fit different-sized gaps in the traffic. These approaches may provide energy reductions, for example with memory READ and WRITE traffic, when transferring 4-bit data using 3-symbol sequences.
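The scheme-selection step can be illustrated as picking the longest (and thus lowest-energy) code that still fits the idle gap that follows a transmission. The thresholds, scheme names, and table below are assumptions for illustration; the abstract only says that multiple sparse schemes are selected to fit different-sized gaps.

```python
# Each entry: (idle symbols required, symbols per 4-bit word, scheme name).
# A denser code needs no gap; a sparser, lower-energy code borrows idle time.
SCHEMES = [
    (0, 2, "dense PAM-4"),      # baseline: 2 PAM-4 symbols carry 4 bits
    (1, 3, "sparse 3-symbol"),  # longer, lower-average-energy code
]

def pick_scheme(idle_symbols):
    # Choose the longest code whose extra symbols still fit into the gap.
    best = SCHEMES[0]
    for min_idle, length, name in SCHEMES:
        if idle_symbols >= min_idle:
            best = (min_idle, length, name)
    return best[2]
```

With no idle time the baseline dense code is used; once the traffic leaves at least one idle symbol, the encoder can stretch the word into the gap and lower per-symbol energy.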
Data transformation for a machine learning model
Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
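The caching behavior described above — transform once, store the result, and serve every later request without re-running the transformations — can be sketched in a few lines. The storage systems and GPU servers are stood in for by an in-memory dict; all names here are illustrative assumptions.

```python
class TransformCache:
    """Toy model of transformation caching for ML training data."""

    def __init__(self):
        self.store = {}          # stands in for the storage systems
        self.transform_runs = 0  # counts how often transforms actually ran

    def _transform(self, dataset, transforms):
        self.transform_runs += 1
        result = list(dataset)
        for t in transforms:
            result = [t(x) for x in result]
        return result

    def fetch(self, dataset_id, dataset, transforms):
        # Transform on first request only; later requests hit the store.
        if dataset_id not in self.store:
            self.store[dataset_id] = self._transform(dataset, transforms)
        return self.store[dataset_id]

cache = TransformCache()
transforms = [lambda x: x * 2, lambda x: x + 1]
first = cache.fetch("ds1", [1, 2, 3], transforms)   # runs the transforms
second = cache.fetch("ds1", [1, 2, 3], transforms)  # served from the store
```

The point of the design is that GPU servers repeatedly requesting the same transformed dataset (e.g., across training epochs) pay the transformation cost once rather than per request.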
Image processing apparatus and computer-readable recording medium storing screen transfer program
An image processing apparatus for transferring a display image to a client machine, the display image being an image to be displayed on a display device associated with the client machine, the image processing apparatus including: a memory; and a processor coupled to the memory, the processor being configured to perform processing, the processing including: executing a first transfer process configured to transfer only moving image data as the display image; executing a second transfer process configured to transfer moving image data and still image data as the display image; and executing a control process configured to select either the executing of the first transfer process or the executing of the second transfer process, by using a frame rate of the display image and a state of graphics processing unit (GPU) circuitry configured to perform a process related to an image.
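The control process selects between the two transfer modes from the frame rate and the GPU state. A minimal sketch of that decision is below; the 30 fps threshold and the busy/idle criterion are assumptions for illustration, not values from the abstract.

```python
def select_transfer(frame_rate, gpu_busy):
    """Toy version of the control process choosing a transfer mode.

    High frame rates on a busy GPU favor the first process (moving image
    data only); otherwise the second process mixes moving and still data.
    """
    if frame_rate >= 30 and gpu_busy:
        return "first"   # moving image data only
    return "second"      # moving image data + still image data
```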
Storage of levels for bottom level bounding volume hierarchy
Aspects presented herein relate to methods and devices for graphics processing including an apparatus, e.g., a GPU. The apparatus may configure a BVH structure including a plurality of levels and a plurality of nodes, the BVH structure being associated with geometry data for a plurality of primitives in a scene. The apparatus may also identify an amount of storage in a GMEM that is available for storing at least some of the plurality of nodes in the BVH structure. Further, the apparatus may allocate the BVH structure into a first BVH section including a plurality of first nodes and a second BVH section including a plurality of second nodes. The apparatus may also store first data associated with the plurality of first nodes in the GMEM and second data associated with the plurality of second nodes in a system memory.
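The split described above — upper BVH levels pinned in fast on-chip GMEM until its capacity is exhausted, lower levels spilled to system memory — can be sketched as a greedy allocation over levels. Node counts, capacities, and the contiguous-top-levels policy are illustrative assumptions.

```python
def allocate_bvh(nodes_per_level, gmem_capacity):
    """Place top BVH levels in GMEM while they fit; spill the rest.

    Returns (levels stored in GMEM, levels stored in system memory).
    """
    gmem_levels, sys_levels, used = [], [], 0
    for level, count in enumerate(nodes_per_level):
        # Once one level spills, all deeper levels spill too, keeping the
        # GMEM-resident section a contiguous top portion of the tree.
        if not sys_levels and used + count <= gmem_capacity:
            gmem_levels.append(level)
            used += count
        else:
            sys_levels.append(level)
    return gmem_levels, sys_levels

# A 4-level BVH (1 root, 4, 16, 64 nodes) with room for 8 nodes in GMEM
gmem_levels, sys_levels = allocate_bvh([1, 4, 16, 64], 8)
```

Keeping the top of the tree on-chip is attractive because ray traversal touches those nodes most often, so the levels least likely to fit are also the ones accessed least frequently per ray.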
Imaging system and control method for imaging system
An imaging system comprising a shooting operation interface that is operated to form an image of a subject, and a processor that has a bio-information acquisition section and a stress determination section, wherein the bio-information acquisition section acquires bio-information of an operator when the shooting operation interface is operated during a shooting-awaiting action, in which an instant for acquiring still images is awaited, and the stress determination section determines stress conditions that shooting actions place on the operator based on the bio-information acquired by the bio-information acquisition section.
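The stress-determination step can be illustrated as comparing bio-information captured during the shooting-awaiting action against a baseline. The choice of heart rate as the bio-information and the 20% threshold are assumptions for illustration; the abstract does not specify either.

```python
def determine_stress(resting_hr, shooting_hr, threshold=1.2):
    """Toy stress determination: flag stress when the operator's heart rate
    while awaiting the shot rises noticeably above their resting rate."""
    return shooting_hr > resting_hr * threshold
```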