Patent classifications
G06F8/45
Voice command integration for local network connected devices
Various arrangements for facilitating smart television content receivers in a local network are provided. A primary television receiver executing a first operating system can receive audio data including human voice from a voice enabled remote control. The primary television receiver can transmit the audio data to a secondary television receiver executing a second operating system and that includes a voice command component. The secondary television receiver can convert the audio data into voice command data and transmit the voice command data to the primary television receiver. The primary television receiver can transmit the voice command data to a voice processing server via the Internet and receive, in response, a command generated based on the voice command data. The primary television receiver can transmit the command to the secondary television receiver. The voice command component can then control an operation of the secondary television receiver based on the command.
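As a rough illustration only (not the patented implementation), the Python sketch below models the described message flow between the primary receiver, the secondary receiver's voice command component, and the voice processing server; all class and method names are invented for the example.

```python
# Minimal sketch of the described message flow; PrimaryReceiver, SecondaryReceiver,
# and VoiceServer are hypothetical names, not taken from the patent.

class VoiceServer:
    def resolve(self, voice_command_data: str) -> str:
        # Stand-in for the Internet-hosted voice processing server.
        return f"COMMAND({voice_command_data})"

class SecondaryReceiver:
    """Runs the second operating system and hosts the voice command component."""

    def to_voice_command_data(self, audio_data: bytes) -> str:
        # Placeholder conversion of raw audio into voice command data.
        return audio_data.decode("utf-8", errors="ignore").strip()

    def apply_command(self, command: str) -> None:
        print(f"secondary receiver executing: {command}")

class PrimaryReceiver:
    """Runs the first operating system and owns the Internet connection."""

    def __init__(self, secondary: SecondaryReceiver, server: VoiceServer):
        self.secondary = secondary
        self.server = server

    def handle_remote_audio(self, audio_data: bytes) -> None:
        # 1. Forward audio from the voice-enabled remote to the secondary receiver.
        vcd = self.secondary.to_voice_command_data(audio_data)
        # 2. Send the resulting voice command data to the voice processing server.
        command = self.server.resolve(vcd)
        # 3. Relay the returned command back to the secondary receiver.
        self.secondary.apply_command(command)

PrimaryReceiver(SecondaryReceiver(), VoiceServer()).handle_remote_audio(b"change to channel 7")
```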
Computer Processing and Outcome Prediction Systems and Methods
Computer processing and outcome prediction systems and methods are used to generate algorithm time prediction polynomials and inverse algorithm time prediction polynomials, determine race conditions, determine when a non-linear algorithm can be treated as if it were linear, and automatically generate parallel and quantum solutions from classical software or from the relationship between monotonic attribute values.
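The sketch below, with made-up runtime measurements and thresholds, shows one way a few of the pieces named above could fit together: fitting a time prediction polynomial to observed runtimes, inverting it against a time budget, and checking whether the algorithm can reasonably be treated as linear.

```python
# Illustrative only: example data and tolerances are assumptions of the sketch.
import numpy as np

sizes    = np.array([1_000, 2_000, 4_000, 8_000, 16_000], dtype=float)
runtimes = np.array([0.012, 0.025, 0.052, 0.110, 0.230])   # seconds (example data)

# Time prediction polynomial t(n), here of degree 2.
predict = np.poly1d(np.polyfit(sizes, runtimes, deg=2))

def max_size_within(budget_s: float) -> float:
    """Inverse use of the polynomial: largest n whose predicted time fits the budget."""
    roots = (predict - budget_s).roots
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return max(real) if real else 0.0

def roughly_linear(rel_tol: float = 0.05) -> bool:
    """Treat the algorithm as linear if a degree-1 fit stays within rel_tol."""
    linear = np.poly1d(np.polyfit(sizes, runtimes, deg=1))
    return float(np.max(np.abs(linear(sizes) - runtimes) / runtimes)) < rel_tol

print(predict(32_000), max_size_within(1.0), roughly_linear())
```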
DEVICES, SYSTEMS, AND METHODS FOR TYPE INFERENCING CODE SCRIPTED IN A DYNAMIC LANGUAGE
A system configured to convert human-readable source code into computer-readable source code is disclosed herein. The system can include a processor and a memory configured to store a compiling engine that, when executed by the processor, causes the processor to: receive an input program comprising human-readable source code, wherein the human-readable source code comprises a complex function; type-infer the complex function, thereby inferring a first set of potentially partial and imprecise data types for the input program; transform the type-inferred complex function and repeat the type inference on the transformed complex function a number of times, thereby inferring a full set of precise data types for the input program; and generate an output program comprising machine-readable code, wherein the machine-readable code is fully optimized using the full set of precise data types.
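A toy Python illustration of the repeat-until-precise idea follows; the tiny three-statement IR and the infer_pass/infer_to_fixpoint helpers are invented for the sketch and are not the claimed compiling engine.

```python
# Run inference passes over a tiny invented IR until the inferred types stop changing.

Stmt = tuple  # (target, op, args)

def infer_pass(stmts, types):
    new = dict(types)
    for target, op, args in stmts:
        arg_types = [new.get(a, "unknown") if isinstance(a, str) else type(a).__name__
                     for a in args]
        if op == "const":
            new[target] = arg_types[0]
        elif op == "add":
            new[target] = "int" if set(arg_types) == {"int"} else "unknown"
    return new

def infer_to_fixpoint(stmts):
    types = {}
    while True:
        refined = infer_pass(stmts, types)
        if refined == types:            # no more precision to be gained
            return refined
        types = refined

program = [("a", "const", (1,)), ("b", "const", (2,)), ("c", "add", ("a", "b"))]
print(infer_to_fixpoint(program))       # {'a': 'int', 'b': 'int', 'c': 'int'}
```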
AUTOMATIC WORKFLOW GENERATION BASED ON ANNOTATED CODE STATEMENTS
Automatic workflow generation is described. One or more files containing code statements for accessing and modifying information in a destination database are received. The code statements are parsed from the one or more files and dependencies between the code statements are determined. A dependency graph is built by arranging the code statements according to the dependencies between the code statements. The dependency graph is partitioned by identifying at least one barrier code statement having an unclear dependency and dividing the dependency graph between code statements occurring prior to the at least one barrier code statement and code statements occurring after the at least one barrier code statement. Jobs are scheduled based on the partitioned dependency graph, and the code statements are annotated according to the scheduled jobs. A workflow is then automatically generated based on the annotated code statements.
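The hedged sketch below walks the same pipeline on an invented mini format in which each code statement declares the tables it reads and writes; a statement whose reads cannot be determined acts as the barrier that splits the dependency graph.

```python
# Illustrative only: the Statement record and the example statements are made up.
from dataclasses import dataclass, field

@dataclass
class Statement:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)
    barrier: bool = False            # dependency could not be determined

def build_dependencies(stmts):
    deps = {s.name: set() for s in stmts}
    for i, later in enumerate(stmts):
        for earlier in stmts[:i]:
            if earlier.writes & later.reads:
                deps[later.name].add(earlier.name)
    return deps

def partition_at_barriers(stmts):
    parts, current = [], []
    for s in stmts:
        if s.barrier and current:    # split the graph at the barrier statement
            parts.append(current)
            current = []
        current.append(s)
    parts.append(current)
    return parts

stmts = [
    Statement("load_users",  writes={"users"}),
    Statement("load_orders", writes={"orders"}),
    Statement("custom_sql",  barrier=True),               # unclear dependency
    Statement("join",        reads={"users", "orders"}, writes={"report"}),
]
deps = build_dependencies(stmts)
for job_id, part in enumerate(partition_at_barriers(stmts)):
    for s in part:                   # annotate each statement with its scheduled job
        print(f"job {job_id}: {s.name} depends on {deps[s.name]}")
```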
COMPUTE ELEMENT PROCESSING USING CONTROL WORD TEMPLATES
Techniques for task processing based on compute element processing using control word templates are disclosed. One or more control word templates are generated for use in a two-dimensional array of compute elements. Each compute element within the array is known to a compiler and is coupled to its neighboring compute elements within the array of compute elements. Each control word template designates a topological set of compute elements from the array of compute elements. The one or more control word templates are customized with a specific set of compute element operations. The one or more control word templates that were customized are stored. The specific set of compute element operations is executed on the topological set of compute elements. The one or more control word templates that were stored are reused. The one or more control word templates that were stored are modified and executed using compute elements.
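Purely as an illustration, the following Python model treats a control word template as a record of the compute elements it targets plus the operations it is customized with; the 2x2 element set, the template store, and all names are assumptions of the sketch.

```python
from dataclasses import dataclass, replace
from typing import Dict, Tuple

@dataclass(frozen=True)
class ControlWordTemplate:
    elements: Tuple[Tuple[int, int], ...]          # topological set within the 2-D array
    ops: Tuple[str, ...] = ()                      # filled in when customized

    def customize(self, ops):
        return replace(self, ops=tuple(ops))

template_store: Dict[str, ControlWordTemplate] = {}

# Generate a template covering a 2x2 corner of the array, customize it, and store it.
corner = ControlWordTemplate(elements=((0, 0), (0, 1), (1, 0), (1, 1)))
template_store["mac_2x2"] = corner.customize(["load", "multiply", "accumulate"])

def execute(t: ControlWordTemplate):
    for row, col in t.elements:
        print(f"CE({row},{col}) runs {t.ops}")

execute(template_store["mac_2x2"])                               # reuse the stored template
execute(template_store["mac_2x2"].customize(["load", "add"]))    # modified variant
```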
GRAPH INSTRUCTION PROCESSING METHOD AND APPARATUS
Disclosed are a graph instruction processing method and apparatus, which relate to the field of computer technologies. One example method includes: detecting whether a first graph instruction has a conditional instruction element; and when the first graph instruction has the conditional instruction element, determining that the first graph instruction is a conditional execution instruction, and processing the first graph instruction when both data flow information and control flow information of the first graph instruction are in a ready state; or when the first graph instruction does not have a conditional instruction element, determining that the first graph instruction is a non-conditional execution instruction, and processing the first graph instruction when data flow information of the first graph instruction is in a ready state.
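The dispatch rule reduces to a small predicate; the sketch below restates it over an invented instruction record.

```python
from dataclasses import dataclass

@dataclass
class GraphInstruction:
    name: str
    has_conditional_element: bool
    data_ready: bool
    control_ready: bool = False

def can_process(instr: GraphInstruction) -> bool:
    if instr.has_conditional_element:           # conditional execution instruction
        return instr.data_ready and instr.control_ready
    return instr.data_ready                     # non-conditional execution instruction

print(can_process(GraphInstruction("cmov", True,  data_ready=True, control_ready=False)))  # False
print(can_process(GraphInstruction("add",  False, data_ready=True)))                        # True
```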
RESOURCE RESETTABLE DEEP NEURAL NETWORK ACCELERATOR, SYSTEM, AND METHOD
A resource resettable deep neural network accelerator according to an embodiment of the present disclosure includes: a memory layer including a scratchpad memory layer configured to divide deep neural network parameter data (hereinafter, data) in an external memory layer into a plurality of tiles and to load the divided tiles, and a register file memory layer configured to load tiled data of the scratchpad memory layer; and a plurality of cores configured to process an inference operation for the data loaded in the register file memory layer, wherein the memory layer includes a virtual tiling layer added to a certain location for loading the tiled data from a previous memory layer so as to correspond to a specific tiling size.
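A rough software model of the tiling path (external memory to scratchpad tiles, a virtual re-tiling to the register-file tile size, then per-core work) is sketched below; the tile sizes, the core count, and the flat NumPy representation are assumptions for illustration.

```python
import numpy as np

def tile(data: np.ndarray, tile_len: int):
    """Split a flat parameter array into fixed-size tiles (last tile may be short)."""
    return [data[i:i + tile_len] for i in range(0, data.size, tile_len)]

params = np.arange(32, dtype=np.float32)        # DNN parameter data in external memory

scratchpad_tiles = tile(params, 8)              # load into the scratchpad memory layer
# Virtual tiling layer: re-tile the scratchpad contents to the register-file tile size.
register_tiles = [t for big in scratchpad_tiles for t in tile(big, 4)]

for i, reg_tile in enumerate(register_tiles):
    # Each core processes an inference operation on the data in its register tile.
    print(f"core {i % 4}: partial sum = {float(reg_tile.sum())}")
```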
Apparatus and method for secondary offloads in graphics processing unit
The invention relates to an apparatus for secondary offloads in a graphics processing unit (GPU). The apparatus includes an engine and a compute unit (CU). The engine is arranged operably to store an operation table including entries. The CU is arranged operably to fetch computation codes, including execution codes and synchronization requests; execute each execution code; and send requests to the engine in accordance with the synchronization requests for instructing the engine to allow components inside or outside of the GPU to complete operations in accordance with the entries of the operation table.
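As a loose software analogy rather than the hardware described, the sketch below has a compute unit walk the computation codes, execute the execution codes itself, and forward synchronization requests to an engine that completes the matching operation-table entry; every name and entry in it is illustrative.

```python
class Engine:
    def __init__(self, operation_table):
        self.operation_table = operation_table       # entries describing offloaded operations

    def handle_sync_request(self, entry_id):
        op = self.operation_table[entry_id]
        print(f"engine: completing '{op}' via a component inside or outside the GPU")

class ComputeUnit:
    def __init__(self, engine):
        self.engine = engine

    def run(self, computation_codes):
        for kind, payload in computation_codes:
            if kind == "exec":                        # execution code
                print(f"CU: executing {payload}")
            elif kind == "sync":                      # synchronization request
                self.engine.handle_sync_request(payload)

engine = Engine(operation_table={0: "DMA copy to host", 1: "atomic update in L2"})
ComputeUnit(engine).run([("exec", "vector add"), ("sync", 0), ("exec", "reduce"), ("sync", 1)])
```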
Pointer alignment computation in program code according to code pattern analyses
Pointer alignment is computed in program code to obtain information enabling a compiler to optimize the program code. Equivalence classes of pointers are collected in a program using a flow-insensitive yet field-sensitive pointer analysis operation iterating through the entire program code of the program. The equivalence classes of pointers, once collected, are mapped to and recorded in an equivalence class mapping table (ECTable). A portion of the collected equivalence classes of pointers is identified, from the ECTable, as pointer candidates for a pointer alignment computation according to a code pattern analysis of each pointer candidate. The code pattern analysis is based on the available alignment information and on whether the alignment information would enable a compiler to optimize pointer references of the pointer candidate. The pointer alignment computation is then performed for each identified pointer candidate to obtain the alignment information used to optimize execution of the program.
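The sketch below illustrates the two ingredients on an invented toy IR: a flow-insensitive, field-sensitive union of pointers that are copied into one another, followed by a per-class alignment taken as the gcd of the alignments known for its members. The copy list and alignment facts are made up for the example and are not the claimed analysis.

```python
from math import gcd

parent = {}

def find(p):
    parent.setdefault(p, p)
    while parent[p] != p:
        parent[p] = parent[parent[p]]        # path compression
        p = parent[p]
    return p

def union(a, b):
    parent[find(a)] = find(b)

# "p = q" and "s.f = p" style copies collected from the whole program,
# keyed by (base, field) to keep the analysis field-sensitive:
copies = [(("p", None), ("q", None)), (("s", "f"), ("p", None))]
for dst, src in copies:
    union(dst, src)

known_alignment = {("q", None): 16, ("s", "f"): 8}   # e.g. from allocation sites

ec_table = {}                                        # equivalence class -> alignment
for ptr, align in known_alignment.items():
    root = find(ptr)
    ec_table[root] = gcd(ec_table.get(root, 0), align)

print(ec_table)   # pointers in the class of q/p/s.f may assume 8-byte alignment
```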
APPLICATION PROGRAMMING INTERFACE TO MODIFY INCOMPLETE GRAPH CODE
Apparatuses, systems, and techniques to modify one or more portions of incomplete graph code. In at least one embodiment, one or more portions of incomplete graph code are modified based on, for example, CUDA or other parallel computing platform code.
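The fragment below is not the CUDA Graph API; it is a hypothetical in-memory stand-in suggesting what modifying a portion of incomplete graph code could look like, i.e. filling in a node whose parameters are not yet known.

```python
# Hypothetical graph structure; node names and parameters are invented.
incomplete_graph = {
    "nodes": {
        "copy_in":  {"op": "memcpy", "params": {"bytes": 4096}},
        "kernel_a": {"op": "launch", "params": None},      # incomplete portion
        "copy_out": {"op": "memcpy", "params": {"bytes": 4096}},
    },
    "edges": [("copy_in", "kernel_a"), ("kernel_a", "copy_out")],
}

def modify_incomplete_nodes(graph, fill):
    """Apply 'fill' to every node whose parameters are still missing."""
    for name, node in graph["nodes"].items():
        if node["params"] is None:
            node["params"] = fill(name)
    return graph

modify_incomplete_nodes(incomplete_graph,
                        fill=lambda name: {"grid": (128, 1, 1), "block": (256, 1, 1)})
print(incomplete_graph["nodes"]["kernel_a"])
```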