G06F5/01

METHOD AND APPARATUS FOR CALCULATING DISTANCE BASED WEIGHTED AVERAGE FOR POINT CLOUD CODING
20220392114 · 2022-12-08

Aspects of the disclosure provide methods and apparatuses for point cloud compression and decompression. In some examples, an apparatus for point cloud compression/decompression includes processing circuitry. The processing circuitry determines to use a prediction mode for coding (encoding/decoding) information associated with a current point in a point cloud. In the prediction mode, the information associated with the current point is predicted based on one or more neighbor points of the current point. The processing circuitry calculates, using integer operations, a distance-based weighted average value based on distances of the one or more neighbor points to the current point, and determines the information associated with the current point based on the distance-based weighted average value.
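The integer-only weighted average the abstract describes can be sketched with fixed-point arithmetic: scale inverse-distance weights by a power of two, accumulate, and divide with rounding. The function name, the shift amount, and the inverse-distance weighting are illustrative assumptions, not the patent's specific scheme.

```python
# Hypothetical sketch: distance-based weighted average using only integer
# operations, via fixed-point inverse-distance weights (shift is assumed).

def weighted_average_int(neighbor_values, distances, shift=8):
    """Predict an attribute from neighbor points, weighting each neighbor by
    the inverse of its (non-zero) integer distance, in fixed-point integers."""
    scale = 1 << shift                         # fixed-point scale factor
    weights = [scale // d for d in distances]  # integer inverse-distance weights
    total_w = sum(weights)
    acc = sum(w * v for w, v in zip(weights, neighbor_values))
    # Rounded integer division: add half the divisor before dividing.
    return (acc + total_w // 2) // total_w

print(weighted_average_int([100, 120, 140], [1, 2, 4]))  # → 111
```

The nearest neighbor (distance 1) dominates, so the result lands closer to 100 than a plain mean would.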

Computer processing and outcome prediction systems and methods
11520560 · 2022-12-06

Computer processing and outcome prediction systems and methods used to generate algorithm time prediction polynomials, inverse algorithm time prediction polynomials, determine race conditions, determine when a non-linear algorithm can be treated as if it were linear, as well as automatically generate parallel and quantum solutions from classical software or from the relationship between monotonic attribute values.
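One way to picture an "algorithm time prediction polynomial" is a low-degree polynomial fitted to measured (input size, runtime) pairs and then evaluated at unseen sizes. The fitting approach and the sample timings below are assumptions for illustration, not the patent's method.

```python
# Illustrative sketch: fit a polynomial to measured runtimes, then predict.
import numpy as np

sizes = np.array([100, 200, 400, 800])
times = np.array([0.010, 0.041, 0.160, 0.650])  # hypothetical seconds

coeffs = np.polyfit(sizes, times, deg=2)        # quadratic fit suggests O(n^2)
predict = np.poly1d(coeffs)
print(round(float(predict(1600)), 2))           # extrapolated runtime at n=1600
```

Inverting such a polynomial (the "inverse algorithm time prediction polynomial") would answer the converse question: the largest input size that fits a given time budget.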

Neural network accelerator with compact instruction set
11520561 · 2022-12-06

Described herein is a neural network accelerator with a set of neural processing units and an instruction set for execution on the neural processing units. The instruction set is a compact instruction set including various compute and data move instructions for implementing a neural network. Among the compute instructions are an instruction for performing a fused operation comprising sequential computations, one of which involves matrix multiplication, and an instruction for performing an elementwise vector operation. The instructions in the instruction set are highly configurable and can handle data elements of variable size. The instructions also implement a synchronization mechanism that allows asynchronous execution of data move and compute operations across different components of the neural network accelerator as well as between multiple instances of the neural network accelerator.
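A fused instruction of the kind the abstract mentions performs a matrix multiplication and a dependent elementwise step as one operation. The sketch below is a software analogy with an assumed fusion (bias add plus ReLU); it illustrates the idea, not the accelerator's actual ISA.

```python
# Minimal sketch of a "fused" compute instruction: matmul followed by
# fused elementwise stages (bias add, ReLU), as one logical operation.
import numpy as np

def fused_matmul_bias_relu(a, b, bias):
    """Single fused op: out = relu(a @ b + bias)."""
    acc = a @ b                 # matrix-multiply stage
    acc = acc + bias            # elementwise vector stage, fused in
    return np.maximum(acc, 0)   # activation stage, fused in

a = np.array([[1.0, -2.0]])
b = np.array([[3.0], [4.0]])
print(fused_matmul_bias_relu(a, b, np.array([1.0])))  # → [[0.]]
```

Fusing avoids writing the intermediate matmul result back to memory between stages, which is the usual motivation for such instructions.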

Computer-Implemented Method of Executing SoftMax
20220383077 · 2022-12-01

The present disclosure concerns a method of executing a SoftMax function, the method comprising: (i) pre-storing in memory M fraction components (fc_j) in binary form, derived from the expression 2^(j/M), said fc_j forming a lookup table (T) of size M; (ii) calculating, for each z_i, an exponent y_i of a number of the form 2^(y_i); (iii) separating y_i into an integral part (int_i) and a fractional part (fract_i); (iv) determining a lookup index (ind_i) that corresponds to fract_i scaled by the size M; (v) retrieving a fraction component fc_i from T with ind_i; (vi) generating, in a result register, a binary number representative of the exponential value of said z_i, by combining said fc_i retrieved from T and said int_i; (vii) adding the K result registers corresponding to z_i into a sum register R7; and (viii) determining the K probability values p_i from the K result registers and the sum register.
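The claimed steps can be sketched directly: pre-store the M fraction components 2^(j/M), rewrite exp(z_i) as 2^(y_i), split y_i into integral and fractional parts, look up 2^fract from the table, and combine with the integer part as a power of two. Variable names follow the claim; the floating-point bookkeeping here is an illustrative assumption (a hardware version would use fixed-point registers and shifts).

```python
# Runnable sketch of the lookup-table SoftMax described in the claim.
import math

M = 256
T = [2 ** (j / M) for j in range(M)]          # step (i): fraction components

def softmax_lut(z):
    results = []
    for z_i in z:
        y_i = z_i * math.log2(math.e)         # step (ii): exp(z) = 2^y
        int_i = math.floor(y_i)               # step (iii): integral part
        fract_i = y_i - int_i                 #             fractional part
        ind_i = int(fract_i * M)              # step (iv): index = fract * M
        fc_i = T[ind_i]                       # step (v): table lookup
        results.append(fc_i * 2.0 ** int_i)   # step (vi): combine fc and int
    s = sum(results)                          # step (vii): sum register
    return [r / s for r in results]           # step (viii): probabilities

print([round(p, 2) for p in softmax_lut([1.0, 2.0, 3.0])])
```

With M = 256 the table quantization error per exponential is below 2^(1/256) − 1 ≈ 0.27%, so the probabilities track a reference SoftMax closely.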

EXTENSIBLE ENVIRONMENTAL DATA COLLECTION PACK

An environmental data collection system includes one or more smart sensors, a controller coupled to the one or more smart sensors, the controller including one or more modular decoders having a processor and a memory storing computer readable program code, that when executed by the processor, causes the modular decoder to configure communication and data retrieval between the one or more smart sensors and the controller, perform signal processing on data retrieved from the one or more smart sensors specific to the sensing capabilities of the one or more smart sensors, convert the signal processed data to a fixed bit format, and convey the fixed bit data to the controller.
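One decoder step from the abstract, converting a processed sensor reading to a fixed bit format, can be sketched as a clamp-and-quantize routine. The function name, the 16-bit width, and the temperature range are illustrative assumptions.

```python
# Hypothetical sketch of a modular-decoder step: condition a raw smart-sensor
# reading and convert it to a fixed-width unsigned integer for the controller.

def decode_to_fixed16(raw, lo, hi, bits=16):
    """Clamp a reading to [lo, hi] and quantize it to an unsigned integer."""
    clamped = min(max(raw, lo), hi)              # simple signal conditioning
    span = (1 << bits) - 1                       # 65535 for 16 bits
    return round((clamped - lo) / (hi - lo) * span)

# e.g. a temperature sensor spanning -40..85 degrees C
print(decode_to_fixed16(25.0, -40.0, 85.0))  # → 34078
```

Emitting every sensor's data in the same fixed bit format is what lets the controller treat heterogeneous smart sensors uniformly.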

Neural network method and apparatus with floating point processing

A processor-implemented method includes receiving a first floating point operand and a second floating point operand, each having an n-bit format comprising a sign field, an exponent field, and a significand field, normalizing a binary value obtained by performing arithmetic operations for fields corresponding to each other in the first and second floating point operands for an n-bit multiplication operation, determining whether the normalized binary value is a number that is representable in the n-bit format or an extended normal number that is not representable in the n-bit format, according to a result of the determining, encoding the normalized binary value using an extension bit format in which an extension pin identifying whether the normalized binary value is the extended normal number is added to the n-bit format, and outputting the encoded binary value using the extension bit format, as a result of the n-bit multiplication operation.
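The core check behind the "extended normal number" idea is whether a product's exponent still fits the n-bit format. The sketch below mimics FP16's normal exponent range and flags out-of-range results with an extra bit; the function and encoding are assumptions for illustration, not the patent's exact scheme.

```python
# Illustrative sketch: multiply at higher precision, then test whether the
# result's exponent fits an FP16-like normal range; flag it otherwise.
import math

EXP_MAX, EXP_MIN = 15, -14           # FP16 normal (unbiased) exponent range

def multiply_with_extension(a, b):
    product = a * b                  # exact enough at double precision here
    if product == 0:
        return product, 0
    e = math.frexp(product)[1] - 1   # unbiased exponent of the product
    extension = 0 if EXP_MIN <= e <= EXP_MAX else 1
    return product, extension        # extension=1 marks an "extended normal"

print(multiply_with_extension(1024.0, 1024.0))  # exponent 20 exceeds 15
```

Carrying the extension bit lets later operations recover values that a plain n-bit round-trip would have flushed to infinity or zero.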