Patent classifications
G06F15/167
System for content continuation and handoff
A content delivery system and method for use with a plurality of digital multimedia data processing systems and legacy systems spanning one or more network environments. The system and method provide users with freedom of mobility while maintaining access to the user's selected content as the user transitions from one device in one location to a different device in a different location, substantially without interruption and without the need for user action to turn these target data processing systems on and off. The instant invention can provide high-bandwidth content delivery solutions based upon hardware and software components by activating a target device while the system is proximate to the target device and, in one embodiment, automatically redirecting the content while the system is proximate to a new target device without user intervention. The target devices include digital multimedia data processing systems and legacy systems including, but not limited to, HDTVs, TVs, personal computers, digital music systems, printers, radios, and fax machines.
Selecting computing resources
Systems and methods are described for distributing pool resources. One method includes maintaining a plurality of groups of computing resources, wherein each group of the plurality of groups includes computing resources that share a respective combination of resource characteristics; receiving a first request to perform a first test on a computing resource; determining, from the plurality of groups of computing resources, a subset of groups of computing resources that include a respective combination of resource characteristics that satisfy the required characteristics of the first test; shuffling the subset of groups and selecting a first group from the shuffled subset of groups; selecting an available computing resource from the first group; and causing the first test to be performed on the selected available computing resource from the first group.
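The filter-shuffle-select flow above can be sketched in a few lines. This is an illustrative model only: the group structure, characteristic keys, and the fallback to later groups when the first shuffled group has no available resource are assumptions, not the claimed implementation.

```python
import random

def select_resource(groups, required, rng=random):
    """groups: dict of group id -> {'traits': set of characteristics,
    'resources': list of {'id': str, 'available': bool}}.
    required: set of characteristics the test needs."""
    # Keep only groups whose characteristic combination satisfies the test.
    candidates = [g for g in groups.values() if required <= g["traits"]]
    if not candidates:
        return None
    # Shuffle the qualifying subset and walk it in shuffled order.
    rng.shuffle(candidates)
    for group in candidates:
        # Select an available resource from the chosen group.
        for res in group["resources"]:
            if res["available"]:
                return res["id"]
    return None  # no available resource in any qualifying group
```

Shuffling before selection spreads load across equally qualified groups instead of always hitting the first match.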
MSS headend caching strategies
Systems and methods for caching data and generating responses to data requests are disclosed. A content delivery system for sending requested data to a client based on a client request in accordance with one or more embodiments of the present invention comprises a server for compiling data into a data cache, and a headend, coupled to the server, for obtaining the data, categorizing and storing the data in groups in an object cache, receiving a client request, picking data from the object cache and generating a response, and returning the response while caching the response in a response cache that is used to directly respond to future client requests.
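The two-tier arrangement described above (a grouped object cache fed by the server, plus a response cache that short-circuits repeat requests) can be modeled minimally as follows; the class and method names and the request shape are assumptions for illustration.

```python
class Headend:
    def __init__(self):
        self.object_cache = {}    # category -> {key: data}, data stored in groups
        self.response_cache = {}  # request -> previously generated response

    def store(self, category, key, data):
        # Categorize and store obtained data in the object cache.
        self.object_cache.setdefault(category, {})[key] = data

    def handle(self, request):
        # A repeat request is answered directly from the response cache,
        # without re-picking data or regenerating a response.
        if request in self.response_cache:
            return self.response_cache[request]
        category, key = request
        data = self.object_cache.get(category, {}).get(key)
        response = {"request": request, "body": data}
        self.response_cache[request] = response  # cache while returning
        return response
```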
Providing faster data access using multiple caching servers
A method and system for identifying an optimal server to receive requests for network content requested by a user of a network device is provided. A browser application in a network device receives a request for network content from a user and transmits the request to a server. The browser application receives the network content from the server and renders the network content to the user on the network device. Executable code in the rendered network content enables the browser application to identify an optimal server to receive subsequent items of network content requested by the user. When the user selects an item of network content in the rendered network page, the browser application connects to the optimal server to receive subsequent items of network content for the user.
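The abstract does not say how the executable code judges a server "optimal"; one plausible interpretation is latency probing of candidate servers, sketched below. The class name, probe callable, and latency metric are all assumptions.

```python
class OptimalServerPicker:
    def __init__(self, servers, probe):
        self.servers = servers   # candidate server hostnames
        self.probe = probe       # callable: server -> measured latency (ms)
        self.optimal = None

    def select(self):
        # Probe each candidate and remember the fastest responder;
        # subsequent content requests go to this server.
        self.optimal = min(self.servers, key=self.probe)
        return self.optimal
```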
METHOD OF CONTROLLING A VIRTUAL MACHINE, INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A method of controlling a first virtual machine and a second virtual machine, the method includes detecting that the second virtual machine is in a suspended state, storing one or more first packets into a first buffer during the suspended state, inputting the one or more first packets stored in the first buffer into a second buffer after the suspended state is ended, generating one or more second packets by replicating the one or more first packets input from the first buffer to the second buffer, transmitting the one or more first packets stored in the second buffer to the first virtual machine, and transmitting the one or more second packets to the second virtual machine.
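The buffering-and-replication flow can be sketched as below. Names are illustrative; the mirroring of packets to both VMs during normal operation is an assumption inferred from the replication step, not stated in the abstract.

```python
from collections import deque

class PacketRelay:
    def __init__(self):
        self.first_buffer = deque()   # holds first packets during suspension
        self.second_buffer = deque()
        self.vm1_rx, self.vm2_rx = [], []
        self.vm2_suspended = False

    def receive(self, packet):
        if self.vm2_suspended:
            self.first_buffer.append(packet)  # store during suspended state
        else:
            self.vm1_rx.append(packet)        # assumed mirroring in normal operation
            self.vm2_rx.append(dict(packet))

    def resume_vm2(self):
        self.vm2_suspended = False
        # Input held first packets into the second buffer after suspension ends.
        while self.first_buffer:
            self.second_buffer.append(self.first_buffer.popleft())
        while self.second_buffer:
            pkt = self.second_buffer.popleft()
            replica = dict(pkt)       # second packet generated by replication
            self.vm1_rx.append(pkt)   # first packets -> first VM
            self.vm2_rx.append(replica)  # replicas -> second VM
```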
Processor, accelerator, and direct memory access controller within a processor core that each reads/writes a local synchronization flag area for parallel execution
A processor system is provided comprising at least one processor core that includes a processor, a memory, and an accelerator. The memory includes an instruction area, a synchronization flag area, and a data area. Even while the processor is executing other processing, the accelerator starts acceleration processing and executes a read instruction in a case where the read instruction is a flag-checking instruction and a flag indicating the completion of predetermined processing has been written; after completing the acceleration processing, the accelerator stores the resulting data and writes a flag indicating the completion of the acceleration processing. Even while the accelerator is executing other processing, the processor starts a read instruction corresponding to a flag in a case where the read instruction is the flag-checking instruction and it is confirmed that the flag indicating the completion of the acceleration processing has been written.
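The flag-checking handshake can be modeled with two threads polling a shared flag area. This is a behavioral sketch only (hardware would poll local memory, not a Python dict); all names and the doubling workload are assumptions.

```python
import threading
import time

# Shared "memory" with a synchronization flag area and a data area.
memory = {"flags": {"input_ready": False, "accel_done": False},
          "data": {"in": None, "out": None}}

def accelerator():
    # Flag-checking read: proceed only once the input-ready flag is written.
    while not memory["flags"]["input_ready"]:
        time.sleep(0.001)
    memory["data"]["out"] = memory["data"]["in"] * 2  # acceleration processing
    memory["flags"]["accel_done"] = True              # write completion flag

def processor():
    memory["data"]["in"] = 21
    memory["flags"]["input_ready"] = True             # signal the accelerator
    # Flag-checking read: wait for the accelerator's completion flag.
    while not memory["flags"]["accel_done"]:
        time.sleep(0.001)
    return memory["data"]["out"]

t = threading.Thread(target=accelerator)
t.start()
result = processor()
t.join()
```

Because each side blocks only on its own flag check, the processor and accelerator otherwise run in parallel, which is the point of the local synchronization flag area.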
Distributed caching cluster management
A management system may enable and monitor a cache or other cluster to make the cluster configuration-aware such that initialization and changes to the underlying structure of the cluster can be dynamically updated. For example, a distributed memory caching system may provide initial configuration to a client from a memory caching node referenced by an alias provided by a configuration endpoint. Updates of configuration may be retrieved from memory caching nodes, each storing current configuration of the cache cluster. A management system monitors changes to the cache cluster, such as provisioning of new caching nodes, and updates the configuration stored in the caching nodes for retrieval by a client.
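The discovery flow above (a stable configuration endpoint alias resolves to one caching node, which stores the current cluster configuration) can be sketched as follows; the function names and the version-numbered config format are illustrative assumptions.

```python
def discover_cluster(resolve_alias, fetch_config, endpoint_alias):
    # The alias points at a memory caching node, not at the config itself.
    node = resolve_alias(endpoint_alias)
    # Each caching node stores the current configuration of the cluster.
    return fetch_config(node)

def refresh_config(current, fetch_config, node):
    # Updates can later be retrieved from any node; a newer version wins.
    latest = fetch_config(node)
    return latest if latest["version"] > current["version"] else current
```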
Local client discovery for content via cache
Systems and techniques are disclosed for predictively selecting media content items and providing the predicted media content items to a cache. A media client may be in communication with a cache and detect the media content items stored on the cache. Based on the detection, a media content user interface may be modified and may contain the cached media content items or links to the cached media content items.
MULTI-CORE PROCESSOR AND STORAGE DEVICE
A multi-core processor includes a plurality of cores, a shared memory, a plurality of address allocators, and a bus. The shared memory has a message queue including a plurality of memory regions for transmitting messages between the plurality of cores. The plurality of address allocators are configured to, each time addresses in a predetermined range corresponding to a reference memory region among the plurality of memory regions are received from a corresponding core among the plurality of cores, control the plurality of memory regions to be accessed in sequence by applying an offset determined according to an access count of the reference memory region to the addresses in the predetermined range. The bus is configured to connect the plurality of cores, the shared memory, and the plurality of address allocators to one another.
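The address allocator's rotation can be modeled directly: each access within the reference region's address range is remapped by an offset derived from the access count modulo the number of regions, so the queue's regions are visited in sequence. Class and parameter names are assumptions.

```python
class AddressAllocator:
    def __init__(self, base, region_size, num_regions):
        self.base = base                # start of the reference memory region
        self.region_size = region_size  # size of each memory region
        self.num_regions = num_regions  # regions in the message queue
        self.access_count = 0           # access count of the reference region

    def translate(self, addr):
        # Only addresses in the predetermined range of the reference
        # region are remapped.
        assert self.base <= addr < self.base + self.region_size
        offset = (self.access_count % self.num_regions) * self.region_size
        self.access_count += 1
        return addr + offset  # regions are accessed in rotation
```

A core can thus keep issuing the same base-range addresses while the allocator steers successive messages into successive queue slots.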