Patent classifications
G06T2200/32
Method and system for imaging a three-dimensional feature
Methods and systems for milling and imaging a sample based on multiple fiducials at different sample depths include forming a first fiducial on a first sample surface at a first sample depth; milling at least a portion of the first sample surface to expose a second sample surface at a second sample depth; forming a second fiducial on the second sample surface; and milling at least a portion of the second sample surface to expose a third sample surface including a region of interest (ROI) at a third sample depth. The location of the ROI at the third sample depth relative to the first fiducial may be calculated based on an image of the ROI and the second fiducial, as well as the relative position between the first fiducial and the second fiducial.
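The positional bookkeeping in the last sentence amounts to chaining two in-plane offsets: the ROI's offset from the second fiducial (measured in the final image) plus the second fiducial's offset from the first (measured earlier). A minimal sketch, with all coordinates and names assumed for illustration:

```python
import numpy as np

def roi_relative_to_first_fiducial(roi_minus_second, second_minus_first):
    """Chain two in-plane (x, y) offsets measured at different sample depths.

    roi_minus_second:   ROI position relative to the second fiducial,
                        measured in the image of the third surface.
    second_minus_first: second fiducial's position relative to the first,
                        recorded before the second milling step.
    """
    return np.asarray(roi_minus_second, dtype=float) + np.asarray(
        second_minus_first, dtype=float)

# Hypothetical measurements in micrometres:
offset = roi_relative_to_first_fiducial((2.0, -1.0), (0.5, 3.0))
# offset is the ROI's location relative to the first fiducial: (2.5, 2.0)
```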
IMAGE STITCHING METHOD
An image stitching method is proposed to include: A) acquiring a plurality of segment images for a target scene, each of the segment images containing a part of the target scene; B) for two adjacent segment images, which are two of the segment images that have overlapping fields of view, comparing the two adjacent segment images to determine a stitching position for the two adjacent segment images from a common part of the overlapping fields of view; and C) stitching the two adjacent segment images together based on the stitching position thus determined.
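Step B can be approximated by sliding one segment image across its neighbour and scoring each candidate overlap. A minimal sketch using a brute-force sum-of-squared-differences search over horizontal overlap widths (real systems typically use feature- or correlation-based matching; the function name and parameters are assumptions):

```python
import numpy as np

def best_stitch_offset(left, right, min_overlap=2):
    """Return the overlap width (in columns) where two same-height images
    agree best, measured by normalised sum of squared differences."""
    h, w = left.shape
    best_score, best_off = None, None
    for off in range(min_overlap, w + 1):
        a = left[:, w - off:]          # right edge of the left segment image
        b = right[:, :off]             # left edge of the right segment image
        ssd = float(((a - b) ** 2).sum()) / off   # normalise by overlap size
        if best_score is None or ssd < best_score:
            best_score, best_off = ssd, off
    return best_off

# Toy example: the right image repeats the left image's last two columns.
left = np.arange(20.0).reshape(4, 5)
right = np.hstack([left[:, 3:], np.random.default_rng(0).random((4, 3))])
# best_stitch_offset(left, right) finds the 2-column common part
```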
Video stitching method and device
Disclosed are a video stitching method and a video stitching device. The video stitching method is applicable for stitching a first video and a second video, and includes: performing feature extraction, feature matching and screening on a first target frame of the first video and a second target frame of the second video, so as to obtain a first feature point pair set; performing forward tracking on the first target frame and the second target frame, so as to obtain a second feature point pair set; performing backward tracking on the first target frame and the second target frame, so as to obtain a third feature point pair set; and calculating a geometric transformation relationship between the first target frame and the second target frame according to a union of the first feature point pair set, the second feature point pair set and the third feature point pair set.
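The final step pools the three matched point-pair sets and fits a geometric transformation between the frames. A minimal sketch of that pooling and fit, with a least-squares affine model standing in for whatever transformation the method actually estimates (a production implementation would more likely fit a homography with RANSAC); all point data here is an assumed toy example:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine matrix A with dst ~= A @ [x, y, 1]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 3 design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2 solution
    return A.T                                     # 2 x 3 affine matrix

# Feature pairs from the three stages; each pair is ((x, y) in frame 1,
# (x, y) in frame 2). Here frame 2 is frame 1 translated by (+1, +2).
pairs_detect  = [((0, 0), (1, 2)), ((1, 0), (2, 2))]
pairs_forward = [((0, 1), (1, 3))]
pairs_back    = [((1, 1), (2, 3))]

union = set(pairs_detect + pairs_forward + pairs_back)  # de-duplicated union
src = [p[0] for p in union]
dst = [p[1] for p in union]
A = fit_affine(src, dst)   # recovers the pure translation (+1, +2)
```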
Large LED array with reduced data management
An LED controller system includes an LED controller including an image frame buffer able to receive image data. A sensor processing module is used to receive and process sensor data and a decision module is used to determine actions taken in response to processed sensor data. An image creation module is used to create images to be sent to the image frame buffer of the LED controller.
Full-field three-dimensional surface measurement
Embodiments of the present invention may be used to perform measurement of surfaces, such as external and internal surfaces of the human body, in full-field and in 3-D. Embodiments of the present invention may include an electromagnetic radiation source, which may be configured to project electromagnetic radiation onto a surface. The electromagnetic radiation source may be configured to project the electromagnetic radiation in a pattern corresponding to a spatial signal modulation algorithm. The electromagnetic radiation source may also be configured to project the electromagnetic radiation at a frequency suitable for transmission through the media in which the radiation is projected. An image sensor may be configured to capture image data representing the projected pattern. An image-processing module may be configured to receive the captured image data from the image sensor and to calculate a full-field, 3-D representation of the surface using the captured image data and the spatial signal modulation algorithm. A display device may be configured to display the full-field, 3-D representation of the surface.
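As one assumed instance of such pattern-based surface recovery (the abstract does not fix a particular modulation scheme), three-step phase-shifted sinusoidal fringes let the pattern phase, and hence depth after calibration, be recovered per pixel from the captured intensities I_k = A + B*cos(phi + 2*pi*k/3):

```python
import numpy as np

def wrapped_phase(i0, i1, i2):
    """Per-pixel wrapped phase from three fringe images shifted by 2*pi/3.

    Derived from I_k = A + B*cos(phi + 2*pi*k/3):
        sqrt(3) * (I2 - I1) = 3*B*sin(phi)
        2*I0 - I1 - I2      = 3*B*cos(phi)
    """
    return np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)

# Synthetic check: generate the three intensities for a known phase.
phi_true = 0.7
ik = [1.0 + 0.5 * np.cos(phi_true + 2.0 * np.pi * k / 3.0) for k in range(3)]
phi = wrapped_phase(*ik)   # recovers phi_true
```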
Target tracking method for panoramic video, readable storage medium and computer equipment
The present application is applicable to the field of video processing. Provided are a target tracking method for a panoramic video, a readable storage medium, and a computer device. The method comprises: using a tracker to track and detect a target to be tracked to obtain a predicted tracking position of said target in the next panoramic video frame, calculating the reliability of the predicted tracking position, and using an occlusion detector to calculate an occlusion score of the predicted tracking position; determining whether the reliability of the predicted tracking position is greater than a preset reliability threshold value, and determining whether the occlusion score of the predicted tracking position is greater than a preset occlusion score threshold value; and using a corresponding tracking strategy according to the reliability and the occlusion score. By means of the present application, whether a tracking failure is caused by the loss of a target or by occlusion can be determined, such that a corresponding tracking recovery strategy can be used, and tracking can be automatically recovered when tracking fails, thereby achieving the effect of performing tracking continuously for a long time. In addition, the method of the present invention has low computational complexity and good real-time performance.
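The two threshold comparisons reduce to a small decision procedure. A minimal sketch, noting that the abstract only says a "corresponding" strategy is used in each case, so the strategy names and threshold values below are illustrative assumptions:

```python
def choose_strategy(reliability, occlusion_score,
                    rel_threshold=0.5, occ_threshold=0.5):
    """Pick a tracking strategy from the two threshold tests described above."""
    if reliability > rel_threshold:
        return "continue_tracking"        # prediction is trusted as-is
    if occlusion_score > occ_threshold:
        return "wait_for_reappearance"    # failure attributed to occlusion
    return "global_redetection"           # failure attributed to target loss
```

For example, an unreliable prediction with a high occlusion score triggers the occlusion-recovery branch rather than a full re-detection of the target.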
Three-dimensional stabilized 360-degree composite image capture
Many embodiments can comprise a system. The system can comprise one or more processors and one or more storage devices. The one or more storage devices can be configured to store computing instructions that, when executed, cause the one or more processors to receive a plurality of images of an object, the plurality of images comprising different views of the object from around the object; iteratively align one or more images within one or more subsets of the plurality of images until the object is aligned from image to image within the one or more subsets of the plurality of images; and selectively align respective images of the one or more subsets to each other to produce a surround image. Other embodiments are disclosed herein.
Aligning digital images
A digital camera and a method for aligning digital images comprising: receiving images including first and second images depicting a first and a second region of a scene, the regions being overlapping and displaced along a first direction; aligning the images using a transformation; determining disparity values for an overlap between the images; identifying misalignments by identifying blocks of pixels in the first image having a same position along a second direction and having disparity values exhibiting a variability lower than a first threshold and exhibiting an average higher than a second threshold; adjusting the transformation for the identified blocks of pixels in the first image and their matching blocks of pixels in the second image; and realigning the images using the adjusted transformation.
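The misalignment test described above flags pixel blocks whose disparity values vary little (variability below one threshold) while being consistently large (average above another). A minimal sketch over a disparity map, with block size and thresholds assumed for illustration:

```python
import numpy as np

def misaligned_blocks(disparity, block=2, var_thresh=0.1, mean_thresh=1.0):
    """Return (row, col) origins of block-sized tiles whose disparity has
    low variability (std below var_thresh) and a high average (above
    mean_thresh), i.e. candidates for transformation adjustment."""
    h, w = disparity.shape
    flagged = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = disparity[r:r + block, c:c + block]
            if tile.std() < var_thresh and tile.mean() > mean_thresh:
                flagged.append((r, c))
    return flagged

# Toy overlap: one tile shows a uniform disparity of 3 pixels.
disp = np.zeros((4, 4))
disp[0:2, 2:4] = 3.0
# misaligned_blocks(disp) flags only that tile, at origin (0, 2)
```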
ITEM LOCATION TRACKING FOR DISPLAY RACKS USING DIGITAL IMAGE PROCESSING
A device configured to receive a rack identifier for a rack that is configured to hold items. The device is further configured to identify a master template that is associated with the rack. The device is further configured to receive images of the plurality of items on the rack and to combine the images into a composite image of the rack. The device is further configured to identify shelves on the rack within the composite image and to generate bounding boxes that each correspond with an item on the rack. The device is further configured to associate each bounding box with an item identifier and an item location. The device is further configured to generate a rack analysis message based on a comparison of the item locations for each bounding box and the rack positions from the master template and to output the rack analysis message.
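The final comparison step checks each detected item location against the expected position from the master template. A minimal sketch, in which the identifiers, (shelf, slot) positions, and message format are all assumptions for illustration:

```python
def rack_analysis(detected, master_template):
    """Compare detected item locations against a master template.

    detected / master_template: dicts mapping item_id -> (shelf, slot).
    Returns a list of human-readable findings (the 'rack analysis message').
    """
    issues = []
    for item_id, expected in master_template.items():
        actual = detected.get(item_id)
        if actual is None:
            issues.append(f"missing: {item_id}")
        elif actual != expected:
            issues.append(
                f"misplaced: {item_id} at {actual}, expected {expected}")
    return issues or ["rack matches template"]

# Toy rack: the soda moved one slot over and the chips are gone.
msg = rack_analysis({"soda": (0, 1)},
                    {"soda": (0, 0), "chips": (1, 2)})
```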
Methods and apparatus for automatically defining computer-aided design files using machine learning, image analytics, and/or computer vision
A non-transitory processor-readable medium includes code to cause a processor to receive aerial data having a plurality of points arranged in a pattern. An indication associated with each point is provided as an input to a machine learning model to classify each point into a category from a plurality of categories. For each point, a set of points (1) adjacent to that point and (2) having a common category is identified to define a shape from a plurality of shapes. A polyline boundary of each shape is defined by analyzing, with respect to a criterion, a position of each point associated with a border of that shape relative to at least one other point. A layer for each category including each shape associated with that category is defined and a computer-aided design file is generated using the polyline boundary of each shape and the layer for each category.
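The shape-definition step, collecting adjacent points that share a category, is a connected-components problem. A minimal sketch using breadth-first flood fill over a grid of per-point categories (real aerial data would be a classified point cloud; the grid and category labels here are assumed for illustration):

```python
from collections import deque

def shapes_from_grid(categories):
    """Group 4-adjacent grid points sharing a category into shapes.

    categories: 2-D list of category labels, one per point.
    Returns a list of (category, [(row, col), ...]) connected components.
    """
    h, w = len(categories), len(categories[0])
    seen = [[False] * w for _ in range(h)]
    shapes = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            cat = categories[r][c]
            component, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                component.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and categories[ny][nx] == cat):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            shapes.append((cat, component))
    return shapes

# Toy classified grid: two road cells bridge into an L-shape; roof and
# grass form their own shapes, giving three shapes in total.
grid = [["road", "road", "roof"],
        ["grass", "road", "roof"]]
```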