Patent classifications
G06F8/45
DATA ANALYSIS APPARATUS AND DATA ANALYSIS METHOD
A data analysis apparatus includes a control unit that receives an operation selecting analysis data, receives an operation selecting a plurality of scripts that perform analysis on the selected analysis data, and executes the plurality of selected scripts on the selected analysis data in parallel. The data analysis apparatus also includes a display unit configured to display the analysis results of the analysis data produced by the control unit on the same screen.
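The parallel-script workflow described here lends itself to a short illustration. The sketch below runs several user-selected scripts on the same selected data concurrently and then presents all results together; the script names, the subprocess-based runner, and the use of a thread pool are assumptions for illustration, not the apparatus's actual implementation.

```python
# Minimal sketch of the described workflow: run several user-selected analysis
# scripts on the same selected data in parallel, then present all results
# together. Script names and the subprocess-based runner are illustrative
# assumptions, not the patented apparatus itself.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_script(script_path: str, data_path: str) -> str:
    """Run one analysis script on the selected data and capture its output."""
    result = subprocess.run(
        ["python", script_path, data_path], capture_output=True, text=True
    )
    return result.stdout.strip()

def analyze_in_parallel(data_path: str, script_paths: list[str]) -> dict[str, str]:
    """Execute every selected script on the selected data concurrently."""
    with ThreadPoolExecutor(max_workers=len(script_paths)) as pool:
        futures = {s: pool.submit(run_script, s, data_path) for s in script_paths}
        return {s: f.result() for s, f in futures.items()}

if __name__ == "__main__":
    # Hypothetical user selections: one data file, two analysis scripts.
    results = analyze_in_parallel("measurements.csv",
                                  ["mean_analysis.py", "trend_analysis.py"])
    for script, output in results.items():   # "same screen": print side by side
        print(f"{script}: {output}")
```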
DEPENDENCY-AWARE SERVER PROCESSING OF DATAFLOW APPLICATIONS
A computer implemented method comprises a server processing work requests of a work requester. The work requester can communicate to the server a processing dependency of one work request on a second work request. The server can associate the dependency with the work requests and/or a queue of work requests. A dependency can include a condition to be met in association with processing the work requests, and the condition can include an action for the server to take in association with processing a work request. A computing system can comprise a work requester, a server, and a set of dependency-aware queues for processing a set of work requests. A queue and/or work requests on the queues can be associated with a processing dependency, and the server can process work requests enqueued to the queues in an order based on the dependencies. A work requester/server interface can comprise a dependency framework.
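A minimal sketch of the dependency-aware queue idea follows, assuming each work request can name one other request that must complete before it runs; the WorkRequest fields and the completion-based condition are illustrative, not the disclosed work requester/server interface.

```python
# Minimal sketch of dependency-aware work-request processing: each request may
# declare a dependency on another request, and the server only executes a
# request once its dependency's condition (here, completion) is satisfied.
# Assumes dependencies are acyclic and refer to requests that are in the queue.
from collections import deque
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class WorkRequest:
    req_id: str
    action: Callable[[], None]        # the work itself
    depends_on: Optional[str] = None  # ID of a request that must complete first

def process_queue(requests: list[WorkRequest]) -> list[str]:
    """Process requests in an order that respects their declared dependencies."""
    queue = deque(requests)
    completed: set[str] = set()
    order: list[str] = []
    while queue:
        req = queue.popleft()
        if req.depends_on and req.depends_on not in completed:
            queue.append(req)         # condition not met yet: requeue and revisit
            continue
        req.action()
        completed.add(req.req_id)
        order.append(req.req_id)
    return order

if __name__ == "__main__":
    done = process_queue([
        WorkRequest("write", lambda: print("write result"), depends_on="compute"),
        WorkRequest("compute", lambda: print("compute result")),
    ])
    print("processed order:", done)   # "compute" runs before "write"
```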
PARALLELIZATION METHOD, PARALLELIZATION TOOL, AND IN-VEHICLE DEVICE
A computer generates a parallel program, based on an analysis of a single program that includes a plurality of tasks written for a single-core microcomputer, by parallelizing parallelizable tasks for a multi-core processor having multiple cores. The computer includes a macro task (MT) group extractor that finds a resource commonly accessed by the plurality of tasks and extracts a plurality of MTs that access the commonly-accessed resource. Then, the computer uses an allocation restriction determiner to allocate the extracted MTs to the same core of the multi-core processor. With the parallelization method described above, the overhead in the execution time of the parallel program on the multi-core processor is reduced, and an in-vehicle device is enabled to execute each of the MTs in the program optimally.
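The grouping step can be illustrated with a small sketch: macro tasks that touch the same shared resource are pinned to one core so that cross-core contention on that resource is avoided. The task and resource names, and the round-robin placement of the resulting groups, are assumptions for illustration only.

```python
# Minimal sketch of the grouping idea: macro tasks (MTs) that access the same
# shared resource are allocated to the same core so that cross-core
# synchronization overhead on that resource is avoided. MTs that share no
# listed resource simply fall into separate groups.
from collections import defaultdict

def allocate_mts(mt_accesses: dict[str, set[str]], num_cores: int) -> dict[str, int]:
    """Map each MT to a core, keeping MTs that share a resource on one core."""
    by_resource: dict[str, list[str]] = defaultdict(list)
    for mt, resources in mt_accesses.items():
        for res in resources:
            by_resource[res].append(mt)

    allocation: dict[str, int] = {}
    next_core = 0
    for res, mts in by_resource.items():
        # Reuse a core already chosen for one of these MTs, otherwise take a new one.
        core = next((allocation[mt] for mt in mts if mt in allocation), None)
        if core is None:
            core = next_core % num_cores
            next_core += 1
        for mt in mts:
            allocation[mt] = core
    return allocation

if __name__ == "__main__":
    accesses = {"MT1": {"sensor_buf"}, "MT2": {"sensor_buf"}, "MT3": {"log_buf"}}
    print(allocate_mts(accesses, num_cores=2))  # MT1 and MT2 land on the same core
```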
FINGERPRINTING OF REDUNDANT THREADS USING COMPILER-INSERTED TRANSFORMATION CODE
A first processing element is configured to execute a first thread, and one or more second processing elements are configured to execute one or more second threads that are redundant to the first thread. The first thread and the one or more second threads selectively bypass comparisons of the results of operations they perform, depending on whether an event trigger for the comparison has occurred a configurable number of times since a previous comparison of encoded values of the results. In some cases, the comparison can be performed based on hashed (or encoded) values of the results of a current operation and one or more previous operations.
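The interval-based comparison can be sketched as follows, assuming results from the main and redundant threads are folded into running hashes and compared only once the event trigger has fired a configurable number of times; the hash choice and bookkeeping class are illustrative, not the compiler-inserted transformation code itself.

```python
# Conceptual sketch of interval-based result comparison between a main thread
# and a redundant thread: rather than comparing after every operation, results
# are accumulated into a running hash and only compared once the event trigger
# has fired a configurable number of times.
import hashlib

class RedundantChecker:
    def __init__(self, interval: int):
        self.interval = interval      # configurable number of triggers per check
        self.count = 0
        self.digests = [hashlib.sha256(), hashlib.sha256()]

    def record(self, thread_idx: int, result: int) -> None:
        """Fold one operation's result into the running hash for a thread."""
        self.digests[thread_idx].update(result.to_bytes(8, "little", signed=True))

    def on_event_trigger(self):
        """Return a comparison outcome every `interval` triggers, else None (bypassed)."""
        self.count += 1
        if self.count % self.interval != 0:
            return None               # comparison bypassed this time
        match = self.digests[0].digest() == self.digests[1].digest()
        self.digests = [hashlib.sha256(), hashlib.sha256()]   # start a new window
        return match

if __name__ == "__main__":
    checker = RedundantChecker(interval=4)
    for i in range(8):
        checker.record(0, i * i)      # main thread's result
        checker.record(1, i * i)      # redundant thread's result (identical here)
        outcome = checker.on_event_trigger()
        if outcome is not None:
            print("results match" if outcome else "divergence detected")
```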
Method and system for software enhancement and management
A software enhancement and management system (E&M System) can include two ways to decompose existing software so that new functionality can be added: functional decomposition and time-affecting linear pathway (TALP) decomposition. Functional decomposition captures the inputs and outputs of the existing software's functions and attaches new algorithmic constructs, presented as additional functions, that receive the outputs of the existing software's functions. TALP decomposition allows, for each TALP, the generation of time-prediction polynomials that approximate time-complexity functions, the prediction of speedup, and automatic dynamic loop-unrolling-based parallelization.
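One way to picture the time-prediction polynomials is a small curve-fitting sketch: measured runtimes at several input sizes are fit to a polynomial, which is then used to extrapolate a runtime and an idealized speedup. The quadratic fit, the sample timings, and the perfectly-divisible speedup model are assumptions, not the E&M System's actual procedure.

```python
# Minimal sketch of building a time-prediction polynomial for one code pathway
# (TALP) from measured runtimes, then using it to estimate runtime and an
# idealized speedup at a larger input size. Timings and the speedup model are
# illustrative assumptions.
import numpy as np

# Hypothetical measurements: input size n vs. observed runtime in seconds.
sizes = np.array([100, 200, 400, 800])
times = np.array([0.012, 0.045, 0.180, 0.730])

coeffs = np.polyfit(sizes, times, deg=2)       # time-prediction polynomial
predict = np.poly1d(coeffs)

n = 1600
serial_estimate = predict(n)
workers = 4
parallel_estimate = serial_estimate / workers  # idealized loop-unrolled split
print(f"predicted serial time at n={n}: {serial_estimate:.3f} s")
print(f"predicted speedup with {workers} workers: "
      f"{serial_estimate / parallel_estimate:.1f}x")
```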
PARALLELIZATION METHOD, PARALLELIZATION TOOL, AND IN-VEHICLE DEVICE
A computer is configured to generate a parallel program for a multi-core microcomputer from a single program for a single-core microcomputer, based on a dependency analysis of a bundle of unit processes in the single program. The computer obtains dependency information that enables dependency determination for dependency un-analyzable unit processes. Further, the computer performs a dependency analysis of dependency analyzable unit processes. Then, the computer assigns the dependency un-analyzable unit processes and the dependency analyzable unit processes respectively to multiple cores of the multi-core microcomputer, while fulfilling the dependencies among those processes, based on the obtained dependency information of the dependency un-analyzable unit processes and an analysis result of the dependency analyzable unit processes.
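A rough sketch of the assignment step, assuming dependencies are known for every unit process: processes are ordered topologically and distributed across cores, with each process recording which processes must be fulfilled before it starts. The dependency table and the round-robin two-core split are illustrative only.

```python
# Minimal sketch of assigning unit processes to cores while respecting their
# dependencies: processes are ordered topologically, then distributed across
# cores, with each process recording the processes it must wait for.
from graphlib import TopologicalSorter

def assign_to_cores(deps: dict[str, set[str]], num_cores: int) -> dict[str, dict]:
    """deps maps a unit process to the processes it depends on."""
    order = list(TopologicalSorter(deps).static_order())
    plan = {}
    for i, proc in enumerate(order):
        plan[proc] = {
            "core": i % num_cores,                      # simple round-robin placement
            "wait_for": sorted(deps.get(proc, set())),  # fulfilled before start
        }
    return plan

if __name__ == "__main__":
    # P3 depends on P1 and P2; P2 depends on P1; P1 has no dependencies.
    deps = {"P1": set(), "P2": {"P1"}, "P3": {"P1", "P2"}}
    for proc, info in assign_to_cores(deps, num_cores=2).items():
        print(proc, info)
```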
PROCESSOR THAT INCLUDES A SPECIAL STORE INSTRUCTION USED IN REGIONS OF A COMPUTER PROGRAM WHERE MEMORY ALIASING MAY OCCUR
Processor hardware detects when memory aliasing occurs, and assures proper operation of the code even in the presence of memory aliasing. The processor defines a special store instruction that is different from a regular store instruction. The special store instruction is used in regions of the computer program where memory aliasing may occur. Because the hardware can detect and correct for memory aliasing, this allows a compiler to make optimizations such as register promotion even in regions of the code where memory aliasing may occur.
PROCESSOR THAT DETECTS MEMORY ALIASING IN HARDWARE AND ASSURES CORRECT OPERATION WHEN MEMORY ALIASING OCCURS
Processor hardware detects when memory aliasing occurs, and assures proper operation of the code even in the presence of memory aliasing. Because the hardware can detect and correct for memory aliasing, this allows a compiler to make optimizations such as register promotion even in regions of the code where memory aliasing can occur. The result is code that is more optimized and therefore runs faster.
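The interaction between register promotion and memory aliasing, which both of these abstracts address, can be simulated conceptually in software; the sketch below is not the processor's hardware mechanism, and the flat memory array, integer addresses, and explicit alias check are all assumptions made for illustration.

```python
# Conceptual simulation (not the processor's mechanism): register promotion
# hoists a load out of a loop and keeps the value in a register. If a store in
# the loop aliases that address, the promoted copy goes stale; detecting the
# alias and reloading restores the unoptimized program's result.
memory = [0] * 16
memory[5] = 7                          # initial value at the promoted address

def promoted_sum(acc_addr: int, out_addr: int, n: int) -> int:
    reg = memory[acc_addr]             # load hoisted out of the loop (promotion)
    total = 0
    for i in range(n):
        memory[out_addr] = i * 10      # store that may or may not alias acc_addr
        if out_addr == acc_addr:       # alias detected: promoted copy is stale
            reg = memory[acc_addr]     # correct by reloading, as the hardware would
        total += reg
    return total

print(promoted_sum(acc_addr=5, out_addr=9, n=4))   # no aliasing: 7 * 4 = 28
print(promoted_sum(acc_addr=5, out_addr=5, n=4))   # aliasing: 0+10+20+30 = 60
```

Without the reload, the second call would keep using the stale promoted value and return 28, which is why an unchecked compiler must normally forgo promotion wherever aliasing may occur.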
PERFORMANCE ESTIMATION-BASED RESOURCE ALLOCATION FOR RECONFIGURABLE ARCHITECTURES
The technology disclosed relates to allocating available physical compute units (PCUs) and/or physical memory units (PMUs) of a reconfigurable data processor to operation units of an operation unit graph for execution thereof. In particular, it relates to selecting, for evaluation, an intermediate stage compute processing time between lower and upper search bounds of a generic stage compute processing time, and determining the pipeline number of PCUs and/or PMUs required to process the operation unit graph at that stage time. It further relates to iteratively initializing new lower and upper search bounds of the generic stage compute processing time and selecting, for evaluation in the next iteration, a new intermediate stage compute processing time, taking into account whether the pipeline number of PCUs and/or PMUs produced for the prior intermediate stage compute processing time in the previous iteration is lower or higher than the number of available PCUs and/or PMUs.
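The iterative bound-tightening can be sketched as a simple binary search, assuming a cost function that maps a candidate stage compute processing time to the number of PCUs the operation unit graph would need; the per-operation work estimates and the ceiling-based cost model are illustrative, not the disclosed allocator.

```python
# Minimal sketch of the iterative search: pick a candidate stage compute time
# between the current bounds, compute how many physical compute units (PCUs)
# the operation unit graph would need at that stage time, and tighten the
# bounds depending on whether that number exceeds the available PCUs.
import math

def pcus_needed(op_work: list[float], stage_time: float) -> int:
    """Required PCUs if every operation must finish within stage_time."""
    return sum(math.ceil(work / stage_time) for work in op_work)

def find_stage_time(op_work: list[float], available_pcus: int,
                    lower: float, upper: float, iters: int = 32) -> float:
    for _ in range(iters):
        candidate = (lower + upper) / 2          # intermediate stage compute time
        if pcus_needed(op_work, candidate) > available_pcus:
            lower = candidate                    # too many PCUs: allow longer stages
        else:
            upper = candidate                    # fits: try a shorter stage time
    return upper

if __name__ == "__main__":
    work = [8.0, 4.0, 6.0, 2.0]                  # per-operation compute estimates
    best = find_stage_time(work, available_pcus=10, lower=0.5, upper=8.0)
    print(f"smallest feasible stage time = {best:.3f}, "
          f"PCUs used: {pcus_needed(work, best)}")
```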
VOICE COMMAND INTEGRATION FOR LOCAL NETWORK CONNECTED DEVICES
Various arrangements for facilitating smart television content receivers in a local network are provided. In an example, a secondary television receiver receives audio data, converts the audio data into voice command data, and transmits the voice command data to a primary television receiver. In response, the primary television receiver transmits the voice command data to a voice processing server via the Internet, receives a command generated based on the voice command data, and transmits the command to the secondary television receiver. Based on the command, an operation of the secondary television receiver is controlled.
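The message flow described above can be mocked very roughly; every function in the sketch below is hypothetical and merely stands in for device firmware and a cloud voice-processing service.

```python
# Illustrative sketch of the described relay only: the secondary receiver
# converts audio into voice command data, the primary receiver stands in for
# the round trip to a voice processing server, and the resulting command is
# applied back on the secondary receiver. All names here are hypothetical.
def apply_command(command: str) -> None:
    print(f"secondary receiver executing: {command}")

class PrimaryReceiver:
    def handle_voice_command(self, voice_command_data: str) -> str:
        # Stand-in for sending the data to a voice processing server over the
        # Internet and receiving a generated command back.
        return {"turn up the volume": "VOLUME_UP"}.get(voice_command_data, "NOOP")

def secondary_receiver(audio_data: bytes, primary: PrimaryReceiver) -> None:
    voice_command_data = audio_data.decode("utf-8").strip()   # "convert" the audio
    command = primary.handle_voice_command(voice_command_data)
    apply_command(command)

secondary_receiver(b"turn up the volume", PrimaryReceiver())
```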