Patent classifications
G06K9/36
Device and method for determining the action of active ingredients on nematodes and other organisms in aqueous tests
The invention relates to a device (1) and a method for determining the action of active ingredients on nematodes and other organisms in aqueous tests. The device (1) according to the invention comprises: a holder (13) for a cell culture plate (30) having multiple wells (31) into which the nematodes can be introduced together with the active ingredients, the cell culture plate (30) having a bottom side (33), a top side (32), and side walls extending between the bottom side (33) and the top side (32); a camera (11) for recording images, preferably of the bottom side (33) of the cell culture plate (30); and a lighting mechanism (14) having at least a first light source (15) which illuminates the cell culture plate (30), a first optical unit being arranged, in the installed state, between the first light source (15) and a first side wall (34) of the cell culture plate (30), which directs the light of the first light source (15) through the first side wall (34) toward the bottom side (33) of the cell culture plate (30). The method according to the invention makes it possible to investigate many active ingredients simultaneously within a very short time.
Systems and methods for compressing three-dimensional image data
Disclosed is a compression system for compressing image data. The compression system receives an uncompressed image file with data points that are defined with absolute values for elements representing the data point position in a space. The compression system stores the absolute values defined for a first data point in a compressed image file, determines a difference between the absolute values of the first data point and the absolute values of a second data point, derives a relative value for the absolute values of the second data point from the difference, and stores the relative value in place of the absolute values of the second data point in the compressed image file.
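The relative-value idea in this abstract can be sketched as follows (a chained-difference variant in which every point after the first is stored relative to its predecessor; the function names are illustrative, not from the patent):

```python
def compress_points(points):
    """Delta-encode a list of (x, y, z) points: the first point is stored
    with absolute values; each later point is stored as the difference
    from the point before it."""
    if not points:
        return []
    out = [tuple(points[0])]
    for prev, cur in zip(points, points[1:]):
        out.append(tuple(c - p for c, p in zip(cur, prev)))
    return out

def decompress_points(compressed):
    """Reverse the delta encoding back to absolute coordinates."""
    if not compressed:
        return []
    out = [tuple(compressed[0])]
    for delta in compressed[1:]:
        out.append(tuple(a + d for a, d in zip(out[-1], delta)))
    return out
```

The gain comes from the deltas typically being small numbers that take fewer bits to store than full absolute coordinates.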
Identifying objects within images from different sources
Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive from a user device an image that shows a face of a first person, the image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of a face of a second person, the second camera having a viewable area showing a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide a notification to the user device based on the determined score.
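One common way to score the similarity between two sets of face characteristics is cosine similarity over feature vectors (an illustrative metric; the abstract does not commit to a particular one, and the threshold below is an assumption):

```python
def similarity_score(features_a, features_b):
    """Cosine similarity between two face-characteristic vectors,
    ranging from -1 (opposite) to 1 (identical direction)."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = sum(a * a for a in features_a) ** 0.5
    norm_b = sum(b * b for b in features_b) ** 0.5
    return dot / (norm_a * norm_b)

def should_notify(features_a, features_b, threshold=0.8):
    """Trigger a notification when the similarity score clears a threshold."""
    return similarity_score(features_a, features_b) >= threshold
```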
Method and encoder relating to encoding of pixel values to accomplish lossless compression of a digital image
Encoder and method for encoding pixel values of a digital image comprising multiple lines of pixels to accomplish lossless compression of the digital image. For each of said multiple lines, the encoder obtains unencoded pixel values of the line. Further, for each of said multiple lines, the encoder determines, for each of one or more pixels of the line, which encoding is to be used for encoding the unencoded pixel value of the pixel (x) in said lossless compression of the digital image. The determination is based on how said unencoded pixel value relates to the unencoded pixel values of other, closest neighboring pixels (N1, N2) of said line.
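A toy illustration of choosing a per-pixel encoding based on the closest neighboring pixels on the same line (the thresholds, token format, and the two candidate encodings are assumptions, not the patent's actual rules; the round trip stays lossless either way):

```python
def encode_line(pixels):
    """For each pixel, pick an encoding based on its two closest preceding
    neighbors on the line: a small delta token ('D', diff) when the pixel
    is close to both neighbors, otherwise a raw token ('R', value)."""
    encoded = []
    for i, x in enumerate(pixels):
        n1 = pixels[i - 1] if i >= 1 else 0   # nearest neighbor
        n2 = pixels[i - 2] if i >= 2 else n1  # second-nearest neighbor
        if abs(x - n1) <= 2 and abs(x - n2) <= 4:
            encoded.append(('D', x - n1))
        else:
            encoded.append(('R', x))
    return encoded

def decode_line(encoded):
    """Invert encode_line exactly (lossless)."""
    pixels = []
    for i, (kind, value) in enumerate(encoded):
        n1 = pixels[i - 1] if i >= 1 else 0
        pixels.append(n1 + value if kind == 'D' else value)
    return pixels
```

In a real entropy-coded bitstream the 'D' tokens would cost fewer bits than raw values, which is where the compression comes from.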
Verification pipette and vision apparatus
Manually operated pipettors, widely used in clinical, forensics, pharmaceutical research, hospital and biotech laboratories to transfer small volumes of liquid, may be subject to positional errors, operator use errors and hidden performance degradation. Manual pipette performance cannot be accepted as valid without monitoring and reporting. This invention concerns a computer-controlled vision tracking and lighting system that works in conjunction with a sensor-controlled fluid dispensing device and controller to confirm pipette tip positions during aspiration and dispensing operations. Liquids entering and leaving the pipette apparatus are monitored automatically, so that a manual pipetting operation is digitally tracked and a digital output file of validated liquid transfer results is produced. The invention may also monitor possible error conditions and prevent improper liquid transfers during the manual process.
Video encoding method and video decoding method
Provided is a video encoding/decoding technique that improves compression efficiency by reducing the motion vector code amount. In the video decoding process, the prediction vector calculation method is switched from one to another in accordance with differences between predetermined motion vectors of already-decoded peripheral blocks of the block to be decoded. The calculated prediction vector is added to a difference vector decoded from the encoded stream to calculate the motion vector. The inter-image prediction process is then executed using the calculated motion vector.
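The switch between prediction-vector calculation methods might be sketched like this (the two candidate methods, simple average versus component-wise median, and the agreement threshold are illustrative choices, not the patent's specific rules):

```python
def predict_mv(neighbors, threshold=4):
    """neighbors: list of (mvx, mvy) motion vectors from already-decoded
    peripheral blocks. If the neighbors agree closely, average them;
    otherwise fall back to a component-wise median, which is more robust
    to one outlier vector."""
    xs = [v[0] for v in neighbors]
    ys = [v[1] for v in neighbors]
    if max(xs) - min(xs) <= threshold and max(ys) - min(ys) <= threshold:
        return (round(sum(xs) / len(xs)), round(sum(ys) / len(ys)))
    return (sorted(xs)[len(xs) // 2], sorted(ys)[len(ys) // 2])

def decode_mv(neighbors, difference_vector):
    """Motion vector = prediction vector + difference vector decoded
    from the encoded stream."""
    px, py = predict_mv(neighbors)
    return (px + difference_vector[0], py + difference_vector[1])
```

Only the small difference vector needs to be transmitted, which is what reduces the motion vector code amount.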
Image coding device, image decoding device, methods thereof, and programs
An image coding device including: an edge detecting section configured to perform edge detection using an image signal of a reference image for a coding object block; a transform block setting section configured to set transform blocks by dividing the coding object block, on the basis of the edge detection result, such that the boundaries between the blocks after division do not include an edge; and a coding processing section configured to generate coded data by performing processing, including an orthogonal transform, on each of the transform blocks.
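A 1-D sketch of setting transform block boundaries so that they avoid detected edge positions (real codecs split 2-D blocks; the candidate sizes and the greedy strategy here are assumptions for illustration):

```python
def split_avoiding_edges(block_size, edges, sizes=(8, 4, 2)):
    """Divide a coding block of length block_size into transform
    sub-blocks, greedily preferring larger sizes but rejecting any split
    whose boundary would land on a detected edge position."""
    out, pos = [], 0
    while pos < block_size:
        for s in sizes:
            end = pos + s
            if end <= block_size and (end == block_size or end not in edges):
                out.append(s)
                pos = end
                break
        else:
            # No candidate avoids an edge: force the smallest size.
            out.append(sizes[-1])
            pos += sizes[-1]
    return out
```

With an edge at position 8 inside a 16-sample block, the split [4, 8, 4] places its boundaries at 4 and 12, so the edge stays in the interior of one transform block rather than on a boundary, which avoids ringing artifacts across the edge.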
Method, arrangement, and computer program product for coordinating video information with other measurements
The pertinence of digital image material is analysed with respect to how well it matches a given reference. A color of the reference constitutes a reference record in a perceptual color space. Pixels of a piece of digital image material are converted into the perceptual color space and labelled according to how their converted pixel values fall within the environments of principal colors in that space. A connected set of pixels having at least one common label is selected. A subset of the connected set of pixels is then determined such that the pixels of the subset are those whose color similarity distance to the reference record is extremal (for example, minimal). For the connected set of pixels, a representative color is selected from among, or derived from, the colors of the pixels belonging to the subset.
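A minimal sketch of the labelling and extremal-subset steps, using HSV as a stand-in for the perceptual color space (a real implementation would more likely use CIELAB, and the connected-component analysis is omitted here; all names are illustrative):

```python
import colorsys

def to_perceptual(rgb):
    """Stand-in perceptual conversion: RGB (0..1 components) -> HSV."""
    return colorsys.rgb_to_hsv(*rgb)

def distance(a, b):
    """Euclidean color-similarity distance in the converted space."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def label(pixel, principals):
    """Label a pixel with the name of its nearest principal color.
    principals: dict mapping a label name -> an RGB triple."""
    p = to_perceptual(pixel)
    return min(principals, key=lambda name: distance(p, to_perceptual(principals[name])))

def extremal_subset(pixels, reference):
    """Among a connected set of same-label pixels, keep those whose
    distance to the reference record is minimal."""
    ref = to_perceptual(reference)
    dists = [distance(to_perceptual(p), ref) for p in pixels]
    best = min(dists)
    return [p for p, d in zip(pixels, dists) if d == best]
```

The representative color for the connected set would then be taken from, or derived from, the pixels this subset returns.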
Automatic selection of optimum algorithms for high dynamic range image processing based on scene classification
A method for processing high dynamic range (HDR) images by selecting preferred tone mapping operators and gamut mapping algorithms based on scene classification. Scenes are classified into indoor scenes, outdoor scenes, and scenes with people, and tone mapping operators and gamut mapping algorithms are selected on that basis. Prior to scene classification, multiple images taken at various exposure values are fused into a low dynamic range (LDR) image using an exposure fusion algorithm, and scene classification is performed on the fused LDR image. The HDR image generated from the multiple images is then tone mapped into an LDR image using the selected tone mapping operator and gamut mapped to the color space of the output device, such as a printer.
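The scene-class-to-algorithm selection could be as simple as a lookup table (the operator and algorithm names below are placeholders, not those used in the patent):

```python
# Illustrative mapping from scene class to a (tone mapping operator,
# gamut mapping algorithm) pair; names are hypothetical.
SCENE_PIPELINE = {
    'indoor':  ('local_tmo', 'perceptual_gma'),
    'outdoor': ('global_tmo', 'clipping_gma'),
    'people':  ('skin_preserving_tmo', 'perceptual_gma'),
}

def select_algorithms(scene_class):
    """Return the preferred (TMO, GMA) pair for a classified scene,
    falling back to a generic default for unrecognized classes."""
    return SCENE_PIPELINE.get(scene_class, ('global_tmo', 'clipping_gma'))
```

The classifier runs on the fused LDR image, and the selected pair is then applied to the HDR image before output.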
Electronic apparatus, and method and computer-readable medium for the same
An electronic apparatus including: an input unit configured to separate image data into pixel blocks and serially input the luminance data and color difference data of each pixel block as individual pieces of block data; a quantization unit configured to serially convert each piece of block data into quantized data; an encoding unit configured to serially convert each piece of quantized data into encoded data; a color generation unit configured to generate compressed color image data using the encoded data; and a monochrome generation unit configured to generate compressed monochrome image data from the encoded data by either deleting the pieces of encoded data corresponding to the color difference data from the data sequence generated by the encoding unit, or replacing those pieces with encoded data corresponding to monochrome color difference data.
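Both alternatives for producing monochrome data from the already-encoded sequence can be sketched with simple tagged tuples (the 'Y'/'C' tags and the 'neutral' placeholder are assumptions for illustration):

```python
def to_monochrome(encoded_blocks):
    """encoded_blocks: list of (kind, payload) tuples, where kind is 'Y'
    for luminance data or 'C' for color difference data. Returns both
    monochrome variants described in the abstract: one with the chroma
    pieces deleted, and one with them replaced by a fixed neutral-chroma
    code."""
    dropped = [b for b in encoded_blocks if b[0] == 'Y']
    neutral = [b if b[0] == 'Y' else ('C', 'neutral') for b in encoded_blocks]
    return dropped, neutral
```

Either way, the color and monochrome outputs share one pass through the quantization and encoding units, so the chroma data never has to be re-encoded.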