G06F15/17

PARALLEL-SERIAL CONVERSION CIRCUIT, INFORMATION PROCESSING APPARATUS AND TIMING ADJUSTMENT METHOD
20170244426 · 2017-08-24

A parallel-serial conversion circuit including a data transmission unit to output first data and second data of a prescribed pattern in accordance with a second clock obtained by dividing a first clock, a first flip flop to receive the first data so as to output the first data in accordance with the first clock, a second flip flop to receive the second data so as to output the second data in accordance with the first clock, a selector to select one of the first data and the second data so as to output the selected data in accordance with the first clock, and an adjustment unit to compare the second data to be received by the second flip flop and the first data output from the first flip flop so as to adjust, based on a comparison result, a timing for the first flip flop to receive the first data.
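The timing-adjustment idea can be illustrated with a small behavioral model (hypothetical, not the patented circuit): the first flip-flop samples a prescribed pattern with some unknown phase skew, and the adjustment unit sweeps a correction until the captured data matches the reference data on the second path.

```python
# Behavioral sketch of the adjustment unit (all names and the pattern are
# assumptions for illustration): find the receive-timing correction that
# makes the first flip-flop's output match the prescribed pattern.

PATTERN = [0, 1, 1, 0]    # assumed prescribed test pattern on the divided clock

def capture(pattern, phase):
    """Model the first flip-flop sampling the pattern stream with a
    phase offset (in pattern positions)."""
    return [pattern[(i + phase) % len(pattern)] for i in range(len(pattern))]

def adjust_timing(pattern, actual_skew):
    """Adjustment unit: try each candidate correction until the captured
    data matches the reference data seen at the second flip-flop's input."""
    for trial in range(len(pattern)):
        # net offset is the circuit's real skew minus the trial correction
        captured = capture(pattern, (actual_skew - trial) % len(pattern))
        if captured == pattern:        # comparison result: aligned
            return trial               # correction that cancels the skew
    raise RuntimeError("no alignment found")

print(adjust_timing(PATTERN, actual_skew=2))   # prints 2
```

The real circuit performs this comparison in hardware on the divided-clock pattern; the sketch only shows the search-until-match control loop.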

ACKNOWLEDGEMENT-LESS CANARY-BASED COMPLETION PROTOCOL
20170242821 · 2017-08-24

A method and system for performing operations in a canary-based communication protocol; specifically, an acknowledgment-less scheme to reduce completion latency and increase effective bandwidth utilization on a computer expansion bus is disclosed. In one embodiment, a host selects a canary to represent whether a data stream of unknown content has been received. The host sends the canary to the target over a communication protocol and then marks a portion of a memory buffer with the same canary. Since the data may be unknown, the canary chosen could be the same value as the data. As such, when processing a request and transmitting data back to the host, the target can do real-time detection to determine whether a canary collision will occur. If a collision does occur, the target can remedy the collision without the need to time out and retry the operation.
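A minimal sketch of the canary mechanism, with all names and the collision remedy assumed for illustration: the host marks a buffer slot with a canary value, and the target detects in real time whether the payload would collide with that canary before transmitting.

```python
# Hypothetical sketch of the acknowledgment-less canary scheme.  A real
# implementation would carry side information describing the remedy; here
# the escape transform is just a placeholder.

CANARY = 0xA5  # assumed canary byte chosen by the host

def host_prepare(buffer_len):
    """Host marks the completion slot of the buffer with the canary."""
    buf = bytearray(buffer_len)
    buf[-1] = CANARY          # sentinel meaning "data not yet received"
    return buf

def target_transmit(payload, canary):
    """Target scans the payload as it goes out; on a collision it remedies
    it in-line instead of timing out and retrying."""
    collision = canary in payload
    if collision:
        # placeholder remedy: escape the clashing bytes (assumed scheme)
        payload = bytes(b if b != canary else (b ^ 0xFF) for b in payload)
    return payload, collision

def host_poll(buf, canary):
    """Completion is detected when the canary has been overwritten."""
    return buf[-1] != canary
```

The point of the scheme is the last function: the host never waits for an explicit acknowledgment, it only watches the marked memory for the canary to disappear.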

Deploying a portion of a streaming application to one or more virtual machines according to hardware type

A streams manager monitors performance of a streaming application, and when the performance needs to be improved, the streams manager requests that a cloud manager provision one or more VMs on a server that has a specified hardware type, and optionally has specified available hardware capacity. In response, the cloud manager determines which available servers have the specified hardware type, and when available hardware capacity is specified, further determines which of the available servers with the specified hardware type have the specified available capacity. When there are multiple servers that satisfy the request from the streams manager, the cloud manager determines from historical performance logs for the servers which server is preferred. The cloud manager then provisions the requested VM(s) on the specified hardware type and returns the requested VM(s) to the streams manager. The streams manager then deploys a portion of the streaming application to the VM(s).
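The cloud manager's server selection reduces to a filter-then-rank step. A minimal sketch, with the server record fields (`hw_type`, `capacity`, `perf_score`) assumed for illustration:

```python
# Hypothetical selection logic: filter by hardware type, optionally by
# available capacity, then prefer the server with the best score derived
# from its historical performance log.

def pick_server(servers, hw_type, min_capacity=None):
    candidates = [s for s in servers if s["hw_type"] == hw_type]
    if min_capacity is not None:
        candidates = [s for s in candidates if s["capacity"] >= min_capacity]
    if not candidates:
        return None
    # historical performance log reduced to one number (assumed field)
    return max(candidates, key=lambda s: s["perf_score"])

servers = [
    {"name": "a", "hw_type": "gpu", "capacity": 8,  "perf_score": 0.7},
    {"name": "b", "hw_type": "gpu", "capacity": 16, "perf_score": 0.9},
    {"name": "c", "hw_type": "cpu", "capacity": 32, "perf_score": 0.95},
]
print(pick_server(servers, "gpu", min_capacity=8)["name"])   # prints b
```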

Artificial reality system using a multisurface display protocol to communicate surface data

This disclosure describes efficient communication of surface texture data between system on a chip (SOC) integrated circuits. An example system includes a first integrated circuit, and at least one second integrated circuit communicatively coupled to the first integrated circuit by a communication interface. The first integrated circuit, upon determining that surface texture data of a frame to be rendered for display by the second SoC integrated circuit is to be updated, (a) transmits the surface texture data in one or more update packets to the second integrated circuit using the communication interface, and (b) transmits a command to the second integrated circuit indicating that the surface texture data of the frame has been updated using the communication interface. The second integrated circuit, upon receipt of the command, (a) sets a pointer to a location in the display buffer storing the surface texture data of the frame, and (b) renders the surface texture data of the frame for display on a display device.
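The two-step protocol (update packets first, then a command that flips a pointer) can be sketched at the message level. All class and method names below are hypothetical:

```python
# Toy model of the display-side SoC: it accumulates surface texture data
# from update packets, and only on receipt of the command does it point
# the frame at the updated surface and render it.

class DisplaySoC:
    def __init__(self):
        self.display_buffer = {}   # surface_id -> accumulated texture bytes
        self.frame_pointer = None  # which surface the frame renders from

    def on_update_packet(self, surface_id, chunk):
        self.display_buffer.setdefault(surface_id, bytearray()).extend(chunk)

    def on_command(self, surface_id):
        # command says the surface data is updated: repoint the frame at it
        self.frame_pointer = surface_id

    def render(self):
        if self.frame_pointer is None:
            return None
        return bytes(self.display_buffer[self.frame_pointer])

soc2 = DisplaySoC()
soc2.on_update_packet("hud", b"tex0")   # update packets over the interface
soc2.on_update_packet("hud", b"tex1")
soc2.on_command("hud")                  # command: surface has been updated
print(soc2.render())                    # prints b'tex0tex1'
```

Separating bulk data transfer from the pointer-flipping command is what lets the display SoC keep rendering a stable surface until the update is complete.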

Efficient CPU mailbox read access to GPU memory

Techniques are disclosed for peer-to-peer data transfers where a source device receives a request to read data words from a target device. The source device creates a first and a second read command for reading a first portion and a second portion of a plurality of data words from the target device, respectively. The source device transmits the first read command to the target device, and, before a first read operation associated with the first read command is complete, transmits the second read command to the target device. The first and second portions of the plurality of data words are stored in a first and a second portion of a buffer memory, respectively. Advantageously, an arbitrary number of multiple read operations may be in progress at a given time without using multiple peer-to-peer memory buffers. Performance for large data block transfers is improved without consuming peer-to-peer memory buffers needed by other peer GPUs.
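The key move is issuing the second read command without waiting for the first to complete, with each portion landing in its own region of a single buffer. A rough concurrency sketch (the hardware read commands are modeled here as threads, which is an assumption, not the actual mechanism):

```python
# Sketch of overlapped reads: two read commands are outstanding at once,
# each filling a disjoint portion of one shared buffer.

import threading

def read_words(target, offset, count, buf, buf_off, done):
    """Simulated read command: copy words from the target into one
    portion of the shared buffer, then signal completion."""
    buf[buf_off:buf_off + count] = target[offset:offset + count]
    done.set()

def pipelined_read(target, count):
    half = count // 2
    buf = [0] * count
    d1, d2 = threading.Event(), threading.Event()
    t1 = threading.Thread(target=read_words, args=(target, 0, half, buf, 0, d1))
    t2 = threading.Thread(target=read_words,
                          args=(target, half, count - half, buf, half, d2))
    t1.start()
    t2.start()            # second read issued without waiting on the first
    d1.wait(); d2.wait()  # both portions complete before returning
    return buf

print(pipelined_read(list(range(8)), 8))   # prints [0, 1, 2, 3, 4, 5, 6, 7]
```

Because the portions are disjoint, any number of reads can be in flight against the one buffer, which is the abstract's claimed advantage over holding multiple peer-to-peer buffers.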

ELECTRONIC DEVICE AND CO-PROCESSING CHIP

An electronic device includes: a display screen; a main processing chip configured to output first display data; and a co-processing chip configured to output second display data; wherein the co-processing chip includes a switching module, the switching module is electrically connected to the display screen, and the switching module is configured, under the control of the co-processing chip, to output the first display data to the display screen so that the display screen works in a first display mode, or to output the second display data to the display screen so that the display screen works in a second display mode; and the main processing chip is further configured to enter a hibernate state when the display screen works in the second display mode. A co-processing chip is further provided.
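The switching behavior reduces to a two-way multiplexer with a power side effect. A toy model, with mode numbers and names assumed:

```python
# Hypothetical model of the switching module: route either the main chip's
# or the co-processing chip's display data to the screen, and let the main
# chip hibernate whenever the co-chip is driving the display.

class Device:
    def __init__(self):
        self.main_hibernating = False

    def switch(self, mode, first_data, second_data):
        """Switching module inside the co-processing chip."""
        if mode == 1:
            self.main_hibernating = False
            return first_data        # main processing chip drives the screen
        if mode == 2:
            self.main_hibernating = True   # main chip may enter hibernate
            return second_data       # co-processing chip drives the screen
        raise ValueError("unknown display mode")
```

The power saving comes from mode 2: the screen stays usable on the co-chip's data while the main chip sleeps.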

Managing electronic mail for an end-user that is unavailable

A first computer sends an electronic message transparently to a second computer of intended recipients of an electronic mail (e-mail), in response to the e-mail addresses of the intended recipients being entered, by a first end-user on the first computer, into a ‘To’ message header field of the e-mail. The second computer sends a Boolean value to the first computer, wherein one of the intended recipients is not available to respond to the e-mail. The first computer queries a repository to return to the first end-user contact information of backup contact entities to respond to the e-mail on behalf of the intended recipients that are not available to respond to the e-mail. The first computer sends the e-mail to the backup contact entities that are available to respond to the e-mail. The first computer deletes the e-mail from each inbox of the backup contact entities that received but did not read the e-mail before the intended recipients read the e-mail.
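The routing-and-retraction workflow can be sketched in two steps (all names hypothetical; the availability check and repository are modeled as plain dicts):

```python
# Sketch of the backup-contact workflow: deliver to available recipients,
# copy available backup contacts for unavailable ones, and later retract
# unread backup copies.

def route_email(recipients, availability, backup_repo):
    """Return (direct, backups): who receives the e-mail directly, and
    which available backup contacts are copied for unavailable recipients."""
    direct = [r for r in recipients if availability.get(r, False)]
    backups = []
    for r in recipients:
        if not availability.get(r, False):
            backups.extend(b for b in backup_repo.get(r, [])
                           if availability.get(b, False))
    return direct, backups

def retract_unread(backups, read_by):
    """Return the backup inboxes to delete the e-mail from: those that
    received it but never read it."""
    return [b for b in backups if b not in read_by]
```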

Classifying user-provided code

Processes for classifying, and dynamically adjusting, tiers for web services are described. Depending on the classification of the web service, support resources (e.g., servers, storage, bandwidth or other communications resources, etc.) may be configured in different ways, such as, for example, sharing resources among one or more of the web services, or isolating the resources for particular web services from those of other web services. Various electronic storefronts may be provided by a service provider to merchants/customers of the service provider. The service provider may classify each of the electronic storefronts for the merchants into one of a plurality of tiers. Such classifying may be performed, for example, during an enrollment of the merchant with the service provider, and/or during operation of the electronic storefront.
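A minimal tier-classification sketch; the tier names, thresholds, and the use of request rate as the classification signal are all assumptions for illustration:

```python
# Hypothetical tiering: map a storefront's observed load to a tier, decide
# shared vs. isolated resources, and re-run the mapping during operation.

TIERS = [            # (minimum requests/sec, tier name, isolated resources?)
    (10_000, "dedicated", True),
    (1_000,  "premium",   True),
    (0,      "shared",    False),
]

def classify(requests_per_sec):
    """Pick the first (highest) tier whose threshold the load meets."""
    for threshold, tier, isolated in TIERS:
        if requests_per_sec >= threshold:
            return {"tier": tier, "isolated": isolated}

def reclassify(current, observed_rps):
    """Dynamic adjustment during operation: move tiers when load changes."""
    new = classify(observed_rps)
    return new if new != current else current

print(classify(50))   # prints {'tier': 'shared', 'isolated': False}
```

Running `classify` at enrollment and `reclassify` during operation mirrors the two classification points the abstract mentions.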