Patent classifications
G06T2207/20068
WAFER ALIGNMENT IMPROVEMENT THROUGH IMAGE PROJECTION-BASED PATCH-TO-DESIGN ALIGNMENT
Image alignment or image-to-design alignment can be improved using normalized cross-correlation. A setup image and a runtime image are aligned, and a normalized cross-correlation score is determined. Image projections of the images can be determined and aligned in the perpendicular x and y directions. Aligning the image projections can include finding projection peak locations and adjusting them in the x and y directions.
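As a rough illustration of projection-based alignment (not the patented method itself; the helper name, toy images, and normalization choices are assumptions), the sketch below projects two images onto the x and y axes and recovers each shift from the peak of a normalized cross-correlation of the 1D projections:

```python
import numpy as np

def projection_offset(setup, runtime, axis):
    """Estimate the shift between two images along one axis by
    normalized cross-correlation of their 1D projections.
    Illustrative helper, not the patent's algorithm."""
    p1 = setup.sum(axis=axis).astype(float)
    p2 = runtime.sum(axis=axis).astype(float)
    p1 = (p1 - p1.mean()) / (p1.std() + 1e-12)   # normalize each projection
    p2 = (p2 - p2.mean()) / (p2.std() + 1e-12)
    corr = np.correlate(p2, p1, mode="full")     # 1D cross-correlation
    return int(np.argmax(corr)) - (len(p1) - 1)  # peak location -> shift

# toy example: runtime image is the setup image shifted by (2, 3)
setup = np.zeros((32, 32)); setup[10:14, 8:12] = 1.0
runtime = np.roll(np.roll(setup, 2, axis=0), 3, axis=1)
dy = projection_offset(setup, runtime, axis=1)  # row sums -> y shift
dx = projection_offset(setup, runtime, axis=0)  # column sums -> x shift
print(dy, dx)
```

Summing an image along one axis reduces the 2D alignment problem to two independent 1D peak searches, which is what makes the projection approach cheap.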
Method and system for vision-based defect detection
A method and a system for vision-based defect detection are proposed. The method includes the following steps. A test audio signal is output to a device-under-test (DUT), and a response signal of the DUT with respect to the test audio signal is received to generate a received audio signal. Signal processing is performed on the received audio signal to generate a spectrogram, and whether the DUT has an unacceptable defect with respect to a predefined auditory standard is determined by analyzing a distribution of signal strength in the spectrogram.
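A minimal sketch of the spectrogram-based check, assuming a single expected test tone; the function name, frame and hop sizes, band tolerance, and energy-ratio threshold are all invented for illustration:

```python
import numpy as np

def has_defect(received, fs, expected_hz, tol_hz=50.0, ratio_thresh=0.2):
    """Flag a DUT as defective when too much spectrogram energy lies
    outside the expected tone band. Illustrative stand-in for the
    patent's auditory-standard check; names and thresholds are assumed."""
    n, hop = 256, 128
    window = np.hanning(n)
    frames = [received[i:i + n] * window
              for i in range(0, len(received) - n, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power spectrogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    in_band = np.abs(freqs - expected_hz) <= tol_hz
    out_energy = spec[:, ~in_band].sum()
    return out_energy / (spec.sum() + 1e-12) > ratio_thresh

fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 1000 * t)                 # pure 1 kHz response
buzzy = clean + 0.8 * np.sin(2 * np.pi * 3000 * t)   # rattle adds 3 kHz energy
print(has_defect(clean, fs, 1000.0), has_defect(buzzy, fs, 1000.0))
```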
VIDEO STREAM PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM
A video stream processing method and apparatus are provided. The method includes obtaining an image set of a target person from a multi-channel video stream, the multi-channel video stream being obtained for a same scene by a plurality of cameras, and an image in the image set includes a front face image of the target person; determining a virtual viewpoint in a target-person view mode based on the image in the image set; and projecting, based on a depth map of a target image and a pose of a real viewpoint corresponding to the target image, the target image onto an imaging plane corresponding to the virtual viewpoint to obtain a video stream in the target-person view mode, the target image intersecting with a vision field of the target person in the multi-channel video stream.
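The depth-based projection step can be sketched with a plain pinhole camera model; the intrinsics, poses, and single-pixel API below are assumptions, not the patent's pipeline:

```python
import numpy as np

def reproject(u, v, depth, K, pose_real, pose_virt):
    """Reproject one pixel from a real camera into a virtual viewpoint
    using its depth, a shared intrinsic matrix K, and 4x4 camera-to-world
    poses. A minimal pinhole-model sketch, not the patented pipeline."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth                           # back-project to 3D
    p_world = pose_real @ np.append(p_cam, 1.0)   # into world frame
    p_virt = np.linalg.inv(pose_virt) @ p_world   # into virtual camera frame
    uvw = K @ p_virt[:3]
    return uvw[:2] / uvw[2]                       # pixel in the virtual image

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
I = np.eye(4)
shifted = np.eye(4); shifted[0, 3] = 0.1          # virtual camera 10 cm right
u2, v2 = reproject(320, 240, 2.0, K, I, shifted)
print(u2, v2)
```

Applying this per pixel over a whole depth map, with occlusion handling, is what warps the target image onto the virtual viewpoint's imaging plane.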
METHOD AND SYSTEM FOR OPTIMIZING SAMPLING IN SPOT TIME-OF-FLIGHT (TOF) SENSOR
A method for optimizing sampling in a spot Time-of-Flight (ToF) sensor includes receiving an image of a scene, dividing the image into plural rectangular regions based on an edge feature in the image, computing an edge region alignment for each rectangular region by analyzing a Histogram of Oriented Gradients (HOG) distribution corresponding to the rectangular region, re-projecting ToF data on a Complementary Metal Oxide Semiconductor (CMOS) Image Sensor (CIS) image plane according to the edge region alignment, sampling one or more rectangular regions from among the plural rectangular regions by comparing a regional depth variance of each rectangular region with a threshold depth variance, and reconfiguring an illumination pattern for a spot ToF sensor image frame using the one or more rectangular regions that are sampled.
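The depth-variance sampling step can be illustrated as follows; the grid size, threshold, and toy depth map are assumed values, and the HOG/edge-alignment stages are omitted:

```python
import numpy as np

def select_regions(depth, grid=(4, 4), var_thresh=0.05):
    """Split a depth map into a grid of rectangular regions and keep those
    whose regional depth variance exceeds a threshold, as candidates for
    denser spot-ToF sampling. Grid size and threshold are assumed values."""
    h, w = depth.shape
    rh, rw = h // grid[0], w // grid[1]
    picked = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            region = depth[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            if region.var() > var_thresh:   # high variance -> edges/detail
                picked.append((r, c))
    return picked

depth = np.full((64, 64), 2.0)                                   # flat scene
depth[0:16, 0:16] = np.linspace(1.0, 3.0, 256).reshape(16, 16)   # one busy region
print(select_regions(depth))
```

Only the high-variance regions would then receive a denser illumination pattern in the next frame.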
SHOT-PROCESSING DEVICE
The invention relates to a shot-processing device which comprises a memory (10), a detector (20), a preparer (30), a combiner (40), an estimator (50) and a selector (60). The memory (10) is arranged to receive, on the one hand, scene data (12) that comprise three-dimensional object pairs each associating an object identifier, and ellipsoid data which define an ellipsoid and its orientation and a position of its centre in a common frame of reference and, on the other hand, shot data (14) defining a two-dimensional image of the scene associated with the scene data (12), from a viewpoint corresponding to a desired pose. The detector (20) is arranged to receive shot data (14) and to return one or more two-dimensional object pairs (22) each comprising an object identifier present in the scene data, and a shot region associated with this object identifier. The preparer (30) is arranged to determine, for at least some of the two-dimensional object pairs (22) from the detector (20), a set of positioning elements (32) whose number is less than or equal to the number of three-dimensional object pairs in the scene data (12) that comprise the object identifier of the two-dimensional object pair (22) in question, each positioning element (32) associating the object identifier and the ellipsoid data of a three-dimensional object pair comprising this object identifier, and ellipse data which define an ellipse approximating the shot region of the two-dimensional object pair in question and its orientation as well as a position of its centre in the two-dimensional image. The combiner (40) is arranged to generate a list of candidates (42) each associating one or more positioning elements (32) and a shot orientation, and/or the combination of at least two positioning elements (32), the positioning elements (32) of a single candidate (42) being taken from separate two-dimensional object pairs (22) and not relating to the same three-dimensional object pair. 
The estimator (50) is arranged to calculate, for at least one of the candidates, a pose (52) comprising a position and an orientation in the common frame of reference, either from the ellipse data and the ellipsoid data of the positioning elements, or from the ellipse data and the ellipsoid data of the one or more positioning elements and the shot orientation. The selector (60) is arranged, for at least some of the poses, to project all of the ellipsoid data of the scene data onto the shot data from the pose, and to determine a measurement …
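The preparer's ellipse approximation can be sketched with standard second-order image moments; the helper name and the 2σ axis scaling are illustrative choices, not the patent's exact construction:

```python
import numpy as np

def ellipse_from_region(mask):
    """Approximate a binary shot region with an ellipse (centre, axes,
    orientation) from its second-order image moments. A standard
    moments-based sketch, not the patent's code."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    dx, dy = xs - cx, ys - cy
    cov = np.array([[(dx * dx).mean(), (dx * dy).mean()],
                    [(dx * dy).mean(), (dy * dy).mean()]])
    evals, evecs = np.linalg.eigh(cov)             # principal axes of the region
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])   # orientation of major axis
    axes = 2.0 * np.sqrt(evals)                    # ~ semi-axes (2-sigma scaling)
    return (cx, cy), axes, angle

mask = np.zeros((40, 40), dtype=bool)
mask[18:22, 5:35] = True                           # wide, flat region
(cx, cy), axes, angle = ellipse_from_region(mask)
print(round(cx, 1), round(cy, 1))
```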
Object Recognition Method and Object Recognition Device
An object recognition method including: acquiring a group of points at a plurality of positions of objects in surroundings of an own vehicle; generating a captured image of the surroundings of the own vehicle; grouping points in the group of points into a group of object candidate points; extracting, as a boundary position candidate, a position at which the change in distance from the own vehicle between adjacent object candidate points increases from a value equal to or less than a threshold value to a value greater than the threshold value; extracting a partial region in which a person is detected in the captured image; and, when, in the captured image, a position of the boundary position candidate coincides with a boundary position of the partial region in a predetermined direction, recognizing that a pedestrian exists in the partial region.
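The boundary-candidate rule can be sketched directly from the abstract's wording; the threshold value and toy distances are assumptions:

```python
def boundary_candidates(distances, threshold=0.5):
    """Mark positions where the gap in distance-to-vehicle between adjacent
    object candidate points jumps from <= threshold to > threshold,
    following the abstract's rule. Threshold value is assumed."""
    gaps = [abs(b - a) for a, b in zip(distances, distances[1:])]
    return [i + 1 for i in range(1, len(gaps))
            if gaps[i - 1] <= threshold < gaps[i]]

# points along a wall, then a jump to an object standing apart
pts = [5.0, 5.1, 5.2, 5.3, 7.0, 7.1]
print(boundary_candidates(pts))
```

Each returned index marks where one physical object plausibly ends and another begins, which is then cross-checked against the person-detection region in the image.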
Selecting defect detection methods for inspection of a specimen
Methods and systems for selecting defect detection methods for inspection of a specimen are provided. One system includes one or more computer subsystems configured for separating polygons in a care area into initial sub-groups based on a characteristic of the polygons on the specimen and determining a characteristic of noise in output generated by a detector of an inspection subsystem for the polygons in the different initial sub-groups. The computer subsystem(s) are also configured for determining final sub-groups for the polygons by combining any two or more of the different initial sub-groups having substantially the same values of the characteristic of the noise. In addition, the computer subsystem(s) are configured for selecting first and second defect detection methods for application to the output generated by the detector of the inspection subsystem during inspection of the specimen or another specimen.
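A greedy sketch of merging initial sub-groups whose noise characteristic is substantially the same; the group names, noise values, and tolerance are invented for illustration:

```python
def merge_subgroups(noise_by_group, tol=0.1):
    """Combine initial polygon sub-groups whose noise statistics agree
    within tol, yielding final sub-groups for which defect detection
    methods are then selected. A greedy sketch; tol is assumed."""
    finals = []   # list of (representative_noise, [group names])
    for name, noise in sorted(noise_by_group.items(), key=lambda kv: kv[1]):
        if finals and abs(noise - finals[-1][0]) <= tol:
            finals[-1][1].append(name)      # substantially the same noise
        else:
            finals.append((noise, [name]))  # start a new final sub-group
    return [members for _, members in finals]

noise = {"dense_lines": 0.82, "pads": 0.30, "contacts": 0.33, "dummy_fill": 0.85}
print(merge_subgroups(noise))
```

Each final sub-group can then be assigned its own detection method (e.g. a more sensitive one for the low-noise group).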
SYSTEMS AND METHODS FOR MEASURING PROXIMITY OF OBJECTS ACROSS SERIAL SECTIONS OF OVERLAYED STAINED SLIDE PATHOLOGY IMAGES
Determining spatial analysis data in images each depicting a proximate serial tissue section obtained from a tissue sample. The stained tissue sections include markers (objects, markers, etc.) that correspond to material in the tissue sample. The images each include data depicting the markers of the stained tissue sections corresponding to the image. The images are registered and “overlayed” to form a representation of the tissue sample such that intra-section and inter-section measurements between markers in the images correspond to measurements of corresponding material in the tissue sample. A process determines intra-section and/or inter-section measurements between locations of data depicting markers in the images, the measurements representing a distance in 3D space between objects in the tissue sample associated with the markers, and the measurements are stored for use in subsequent processing, for example, spatial analysis of the markers.
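Once the sections are registered, an inter-section measurement reduces to a 3D distance whose z component comes from the section spacing; the coordinate convention and thickness value below are assumptions:

```python
import math

def marker_distance(m1, m2, section_thickness=4.0):
    """Distance in 3D between two markers given their (x, y) locations in
    registered section images and their section indices; the z gap is the
    number of sections apart times an assumed section thickness (microns)."""
    (x1, y1, s1), (x2, y2, s2) = m1, m2
    dz = (s2 - s1) * section_thickness
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + dz ** 2)

# intra-section: same section index, plain 2D distance
print(marker_distance((0.0, 0.0, 0), (3.0, 4.0, 0)))
# inter-section: two sections apart adds an 8-micron z component
print(marker_distance((0.0, 0.0, 0), (0.0, 6.0, 2)))
```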
Image handling and display in X-ray mammography and tomosynthesis
A method and system for acquiring, processing, storing, and displaying x-ray mammograms Mp, tomosynthesis images Tr representative of breast slices, and x-ray tomosynthesis projection images Tp taken at different angles to a breast, where the Tr images are reconstructed from the Tp images.
Methods and systems for detecting and combining structural features in 3D reconstruction
A method for updating camera poses includes receiving a set of captured depth maps associated with a scene and detecting a first shape and a second shape present in the scene. The method also includes, for each of the first shape and the second shape, creating a 3D mesh, creating a virtual camera associated with the 3D mesh, and rendering a depth map associated with the virtual camera, yielding a first depth map and a second depth map. The method further includes identifying a subset of the captured depth maps such that each captured depth map in the subset contains at least a portion of either the first shape or the second shape. The method additionally includes updating the physical camera poses by jointly solving for them, optimizing an alignment between the first depth map, the second depth map, and the subset of the set of captured depth maps.
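A deliberately reduced sketch of the pose-update idea: with known point correspondences and translation-only drift, the alignment-optimal correction is simply the mean offset. The full method jointly optimizes rotations and multiple depth maps; everything below is an assumption for illustration:

```python
import numpy as np

def align_translation(captured_pts, shape_pts):
    """Update a camera pose (translation only) by optimizing alignment
    between its captured points and points rendered from a detected shape.
    With known correspondences the least-squares shift is the mean offset."""
    return (shape_pts - captured_pts).mean(axis=0)

shape = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])    # rendered shape points
captured = shape + np.array([0.2, -0.1, 0.05])           # pose drifted by this offset
correction = align_translation(captured, shape)
print(correction)
```

Extending this to rotations turns it into a rigid registration (e.g. ICP-style) problem, solved jointly over all cameras that observe either shape.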