Patent classifications
G06F15/167
Multi-threaded processing of search responses
Multi-threaded processing of search responses returned by search peers is disclosed. An example method may include transmitting, by a computer system, a search request to a plurality of search peers of a data aggregation and analysis system; receiving, by a first processing thread, a plurality of data packets from the plurality of search peers; parsing, by a second processing thread, one or more data packets of the plurality of data packets to produce a first partial response to the search request; parsing, by a third processing thread, the one or more data packets to produce a second partial response to the search request; and generating, based on the first partial response and the second partial response, an aggregated response to the search request.
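The pipeline the abstract describes (one receiving thread, multiple parsing threads, then aggregation) can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the queue layout, the `search` helper, and the stand-in "parsing" step are all assumptions.

```python
import queue
import threading

def receive(packets, packet_q, n_parsers):
    # First thread: receive data packets from the search peers.
    for pkt in packets:
        packet_q.put(pkt)
    for _ in range(n_parsers):
        packet_q.put(None)  # one sentinel per parser thread

def parse(packet_q, partial_q):
    # Second and third threads: parse packets into partial responses.
    while True:
        pkt = packet_q.get()
        if pkt is None:
            break
        partial_q.put(pkt.upper())  # stand-in for real parsing work

def search(packets, n_parsers=2):
    packet_q, partial_q = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=receive,
                                args=(packets, packet_q, n_parsers))]
    threads += [threading.Thread(target=parse, args=(packet_q, partial_q))
                for _ in range(n_parsers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Generate an aggregated response from the partial responses.
    parts = []
    while not partial_q.empty():
        parts.append(partial_q.get())
    return sorted(parts)

print(search(["peer1:hit", "peer2:hit"]))  # ['PEER1:HIT', 'PEER2:HIT']
```

Sorting at the end only makes the toy output deterministic; a real aggregator would merge partial responses by result rank instead.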
Analytics, Algorithm Architecture, and Data Processing System and Method
A system and method employing a distributed hardware architecture, either independently or in cooperation with an attendant data structure, in connection with various data processing strategies and data analytics implementations are disclosed. A compute node may be implemented independently of a host compute system to manage and to execute data processing operations. Additionally, a unique algorithm architecture and processing system and method are also disclosed. Different types of nodes may be implemented, either independently or in cooperation with an attendant data structure, in connection with various data processing strategies and data analytics implementations.
Friend capability caching
Friend capability caching is designed to allow an application to improve a user's experience based on the shared capabilities of a set of friends. Communication between client devices can be improved by knowing shared capabilities such as device type, media format, and media size. The client devices store capabilities of friends' devices such that a client device can communicate with other client devices as a function of those friend device capabilities.
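A minimal sketch of such a cache, with all names and formats assumed for illustration: each client stores the media formats its friends' devices support and picks a mutually supported format before sending.

```python
class CapabilityCache:
    """Local cache of friends' device capabilities (illustrative)."""

    def __init__(self):
        self._caps = {}  # friend_id -> set of supported media formats

    def store(self, friend_id, formats):
        # Record (or refresh) the capabilities of a friend's device.
        self._caps[friend_id] = set(formats)

    def choose_format(self, friend_id, preferred):
        # Pick the sender's most-preferred format the friend supports.
        known = self._caps.get(friend_id, set())
        for fmt in preferred:
            if fmt in known:
                return fmt
        return None  # capabilities unknown or incompatible

cache = CapabilityCache()
cache.store("alice", ["h264", "vp9"])
print(cache.choose_format("alice", ["av1", "vp9", "h264"]))  # vp9
```

Returning `None` when nothing matches lets the caller fall back to a capability-discovery round trip, which is exactly the traffic the cache is meant to avoid.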
AUTOMATIC COALESCING OF GPU-INITIATED NETWORK COMMUNICATION
Apparatuses, systems, and techniques are directed to automatic coalescing of GPU-initiated network communications. In one method, a communication engine receives, from a shared memory application executing on a first graphics processing unit (GPU), a first communication request to be processed that has a second GPU as its destination. The communication engine determines that the first communication request satisfies a coalescing criterion and stores the first communication request in association with a group of requests that have a common property. The communication engine coalesces the group of requests into a coalesced request and transports the coalesced request to the second GPU over a network.
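The grouping-and-flush logic can be sketched as below. The common property here is the destination GPU and the coalescing criterion is a fixed group size; both are assumptions made for the sketch, not details from the disclosure.

```python
from collections import defaultdict

MAX_GROUP = 3  # assumed coalescing threshold

def coalesce_stream(requests):
    """Group requests by destination GPU and flush full groups."""
    groups = defaultdict(list)   # destination GPU -> pending requests
    sent = []                    # coalesced requests "transported"
    for dest, payload in requests:
        groups[dest].append(payload)
        if len(groups[dest]) >= MAX_GROUP:
            # Coalesce the full group into one request and send it.
            sent.append((dest, groups.pop(dest)))
    for dest, pending in groups.items():
        # Flush any partially filled groups at end of stream.
        sent.append((dest, pending))
    return sent

reqs = [(1, "a"), (1, "b"), (2, "x"), (1, "c"), (2, "y")]
print(coalesce_stream(reqs))
# [(1, ['a', 'b', 'c']), (2, ['x', 'y'])]
```

A real engine would also flush on a timeout or byte budget so small groups do not wait indefinitely; a size-only trigger keeps the sketch short.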
Transportation vehicle for providing infotainment content in areas of limited coverage
A method for providing a user of a transportation vehicle with infotainment content. An area of insufficient coverage of a network along a route ahead of the transportation vehicle is determined. Infotainment content to be made available to the user in the area of insufficient network coverage is determined based on at least one user input. This determined infotainment content to be provided is loaded into the transportation vehicle via the network and is finally made available to the user in the area of insufficient network coverage. A transportation vehicle to carry out the method and a system having a transportation vehicle and a network server.
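A rough sketch of the two determination steps under assumed data shapes (route segments with a signal-quality estimate, a user-selected playlist): segments below a threshold form the area of insufficient coverage, and enough content is preloaded to span it.

```python
def find_gap_minutes(segments, threshold=0.3):
    """Total minutes of the route with insufficient network coverage.

    segments: list of (duration_minutes, expected_signal_quality).
    """
    return sum(d for d, q in segments if q < threshold)

def preload(playlist, gap_minutes):
    """Pick user-selected items to load before the gap is reached.

    playlist: list of (title, duration_minutes) from user input.
    """
    chosen, total = [], 0
    for title, dur in playlist:
        if total >= gap_minutes:
            break
        chosen.append(title)
        total += dur
    return chosen

route = [(10, 0.9), (25, 0.1), (5, 0.8)]
gap = find_gap_minutes(route)      # 25 minutes without coverage
print(preload([("podcast", 20), ("album", 40), ("news", 5)], gap))
# ['podcast', 'album']
```

The greedy fill is just one possible cache rule; the thresholds, field names, and selection order are illustrative assumptions.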
Restful method and apparatus to import content by geo-aware content caching service
Representational state transfer (REST) based geo-aware content transfer includes a REST server configured for receiving an application programming interface (API) request from a client device. The REST server obtains an upload uniform resource locator (URL) targeting a caching server that is geographically closest to the client device, constructs an upload link containing the upload URL and a completion callback, and sends the upload link to the client device. The client device uses the upload URL to upload content to the caching server. The caching server interprets the completion callback to obtain a completion URL and, upon completion of content uploading, makes a REST API call using the completion URL. Responsive to the REST API call, the REST server executes an inbound operation to complete the uploading to a content management system and returns the content URL to the caching server, which sends the content URL to the client device.
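The link-construction step on the REST server can be sketched as below. The endpoint paths, server names, and the region lookup are all hypothetical stand-ins, not the actual API of the disclosure.

```python
# Assumed registry of geographically distributed caching servers.
CACHING_SERVERS = {
    "eu": "https://cache-eu.example.com",
    "us": "https://cache-us.example.com",
}

def nearest_region(client_region):
    # Stand-in for a real geo-IP lookup: match the client's region,
    # falling back to a default caching server.
    return client_region if client_region in CACHING_SERVERS else "us"

def build_upload_link(client_region, content_id):
    """Construct the upload link: upload URL plus completion callback."""
    region = nearest_region(client_region)
    upload_url = f"{CACHING_SERVERS[region]}/upload/{content_id}"
    # Callback the caching server calls once the upload finishes.
    completion_url = f"https://rest.example.com/api/complete/{content_id}"
    return {"upload_url": upload_url, "completion_callback": completion_url}

link = build_upload_link("eu", "doc-42")
print(link["upload_url"])
# https://cache-eu.example.com/upload/doc-42
```

Embedding the completion callback in the link keeps the caching server stateless about the flow: it only needs to echo the URL it was handed.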
PROCESSING ELEMENT AND NEURAL PROCESSING DEVICE INCLUDING SAME
The present disclosure describes a processing element and a neural processing device including the processing element. The processing element includes: a weight register configured to store a weight; an input activation register configured to store an input activation; a flexible multiplier configured to receive a first sub-weight of a first precision included in the weight, receive a first sub-input activation of the first precision included in the input activation, and generate result data by performing a multiplication calculation of the first sub-weight and the first sub-input activation at the first precision or at a second precision different from the first precision, according to the first sub-weight and the first sub-input activation; and a saturating adder configured to generate a partial sum by using the result data.
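The two datapath pieces can be illustrated with a toy model: a multiply that widens to a second precision only when the operands need it, and a saturating add for the partial sum. The bit widths and the widening rule here are assumptions for the sketch, not the hardware's actual selection logic.

```python
LOW_BITS, HIGH_BITS = 8, 16  # assumed first and second precisions

def flexible_mul(w, a):
    """Multiply at low precision when both operands fit, else widen."""
    fits_low = max(abs(w), abs(a)) < (1 << (LOW_BITS // 2))
    bits = LOW_BITS if fits_low else HIGH_BITS
    mask = (1 << bits) - 1
    return (w * a) & mask, bits

def saturating_add(acc, x, bits=16):
    """Accumulate into the partial sum, clamping instead of wrapping."""
    hi = (1 << (bits - 1)) - 1
    lo = -(1 << (bits - 1))
    return max(lo, min(hi, acc + x))

prod, used_bits = flexible_mul(7, 9)
print(prod, used_bits)             # 63 8
print(saturating_add(32760, 100))  # 32767 (clamped, not wrapped)
```

Saturation matters in accumulators because a wrapped overflow flips the sign of the partial sum, which is far more damaging to a neural network's output than a clamped value.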
Pooled memory address translation
A shared memory controller receives, from a computing node, a request associated with a memory transaction involving a particular line in a memory pool. The request includes a node address according to an address map of the computing node. An address translation structure is used to translate the first address into a corresponding second address according to a global address map for the memory pool, and the shared memory controller determines that a particular one of a plurality of shared memory controllers is associated with the second address in the global address map and causes the particular shared memory controller to handle the request.
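The two-step lookup can be sketched with a toy address layout. The base-offset node maps and the fixed-size controller regions are assumptions chosen to keep the example small; real address maps would be table-driven and non-contiguous.

```python
# Assumed layout: each node's address map is a base offset into the
# global map, and the global map is striped across shared memory
# controllers in fixed-size regions.
NODE_BASE = {0: 0x0000, 1: 0x4000}   # node id -> global base offset
SMC_REGION_SIZE = 0x2000             # global addresses per controller

def translate(node_id, node_addr):
    # First address (node's map) -> second address (global map).
    return NODE_BASE[node_id] + node_addr

def owning_controller(global_addr):
    # Determine which shared memory controller handles this line.
    return global_addr // SMC_REGION_SIZE

g = translate(1, 0x0800)
print(hex(g), owning_controller(g))  # 0x4800 2
```

With this layout the request from node 1 for line `0x0800` is forwarded to controller 2, which then performs the memory transaction on the node's behalf.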
Method and apparatus for edge computing service
Methods and apparatuses for edge computing services are provided, and a method of caching, by an edge data network, data from a service server includes obtaining information about a location of a terminal from a 3rd Generation Partnership Project (3GPP) network, generating movement information of the terminal in a region of interest based on information about correspondence between the information about the location of the terminal and a configured region of interest, and caching data from the service server, the data being determined based on the movement information of the terminal in the region of interest and a configured cache rule.
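The flow above can be sketched under assumed data shapes: location reports arrive as cell identifiers, the region of interest is a configured set of cells, and a cache rule maps the derived movement state to the data worth fetching. All names and the rule table are illustrative.

```python
REGION_OF_INTEREST = {"cell-7", "cell-8"}            # configured region
CACHE_RULE = {"entering": ["map-tiles", "menus"],    # assumed cache rule
              "leaving": [],
              "staying": []}

def movement(prev_cell, curr_cell):
    """Derive movement information relative to the region of interest."""
    was_in = prev_cell in REGION_OF_INTEREST
    now_in = curr_cell in REGION_OF_INTEREST
    if not was_in and now_in:
        return "entering"
    if was_in and not now_in:
        return "leaving"
    return "staying"

def data_to_cache(prev_cell, curr_cell):
    # Apply the configured cache rule to the movement information.
    return CACHE_RULE[movement(prev_cell, curr_cell)]

print(data_to_cache("cell-3", "cell-7"))  # ['map-tiles', 'menus']
```

Caching on "entering" means the edge network fetches from the service server just before the terminal is likely to request the data, trading a small amount of speculative traffic for lower user-visible latency.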