Patent classifications
G06T3/60
AUGMENTED REALITY SYSTEM WITH INTERACTIVE OVERLAY DRAWING
A method allows an estimator or other party to “Walk the Drawings.” An estimator can open one or more sets of drawings on a mobile device and walk those same electronic drawings as they physically traverse the site itself. In other words, as the estimator moves across the construction site in the real world, their electronic icon (avatar) moves across the corresponding electronic drawings on the mobile device. The estimator can then identify present and future challenges, such as features that need to remain accessible being buried under asphalt. While walking the drawings, the estimator can label any challenge or feature simply by tapping their avatar, which posts that geo-stamped location, complete with corresponding notes and photos, straight to the drawings for later review and analysis.
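The avatar tracking described in this abstract amounts to mapping a GPS fix onto drawing pixel coordinates. A minimal sketch of that mapping, assuming a north-aligned drawing, a site small enough for latitude/longitude to scale linearly, and two surveyed reference points (all assumptions; the patent does not specify the projection):

```python
# Hypothetical sketch: map GPS fixes onto drawing pixel coordinates
# via a linear fit between two surveyed reference points.

def make_gps_to_drawing(ref_a, ref_b):
    """ref_a, ref_b: ((lat, lon), (px, py)) pairs for two known corners."""
    (lat_a, lon_a), (px_a, py_a) = ref_a
    (lat_b, lon_b), (px_b, py_b) = ref_b
    sx = (px_b - px_a) / (lon_b - lon_a)   # pixels per degree longitude
    sy = (py_b - py_a) / (lat_b - lat_a)   # pixels per degree latitude

    def to_drawing(lat, lon):
        return (px_a + (lon - lon_a) * sx,
                py_a + (lat - lat_a) * sy)
    return to_drawing

def annotate(lat, lon, note, to_drawing, annotations):
    """Post a geo-stamped annotation (the avatar tap) onto the drawings."""
    x, y = to_drawing(lat, lon)
    annotations.append({"x": x, "y": y, "lat": lat, "lon": lon, "note": note})
```

A production system would use a proper map projection and the drawing's survey control points rather than this two-point linear fit.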
SYSTEMS, METHODS, STORAGE MEDIA, AND COMPUTING PLATFORMS FOR SCANNING ITEMS AT THE POINT OF MANUFACTURING
Systems, methods, storage media, and computing platforms for scanning items at the point of manufacturing are disclosed. Exemplary implementations may: receive a first set of images of an item from a first set of camera sources; detect a code in the first set of images; combine, responsive to detecting the code, the first set of images into a first set of combined images along a second axis perpendicular to a first axis; rotate the combined images parallel to the first axis; and combine the rotated images along the first axis.
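The axis operations in this abstract are left abstract; one plausible reading, sketched below with pure-Python pixel grids, is that "combine along an axis" means concatenation and "rotate parallel to the first axis" means an in-plane 90° rotation (both are assumptions, not the patent's definitions):

```python
# Illustrative sketch: an "image" is a list of pixel rows.

def hconcat(images):
    """Combine images along the second (horizontal) axis."""
    return [sum((img[r] for img in images), []) for r in range(len(images[0]))]

def vconcat(images):
    """Combine images along the first (vertical) axis."""
    return [row for img in images for row in img]

def rot90(img):
    """Rotate an image 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]
```

With these pieces, the claimed sequence would be: `hconcat` the camera images into a strip, `rot90` the strip, then `vconcat` the rotated strips.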
ULTRASOUND DIAGNOSTIC APPARATUS AND CONTROL METHOD FOR ULTRASOUND DIAGNOSTIC APPARATUS
An ultrasound diagnostic apparatus (1) includes an ultrasound probe (2); an image generation unit (22) that generates an ultrasound image including a region of interest of a subject's breast that is also captured in a radiation image; an image adjustment unit (27) that adjusts the radiation image and the ultrasound image so that the region of interest has an identical orientation in both images, on the basis of radiation image orientation information stored in a tag of the radiation image and orientation information of the ultrasound probe (2) at the time the ultrasound image is captured; and a monitor (24) that displays the radiation image and the ultrasound image as adjusted by the image adjustment unit (27).
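The orientation adjustment in this abstract reduces to rotating one image until its stored orientation matches the other's. A minimal sketch, assuming both orientations are expressed as degrees in 90° steps (an assumption; real DICOM tags encode orientation differently):

```python
def align_orientation(ultrasound_img, probe_deg, radiograph_deg):
    """Rotate the ultrasound image (list of pixel rows) in 90-degree steps
    so its orientation matches the radiograph's stored orientation tag.
    Degree-valued tags are a simplifying assumption for illustration."""
    steps = ((radiograph_deg - probe_deg) // 90) % 4
    img = ultrasound_img
    for _ in range(steps):
        img = [list(col) for col in zip(*img)][::-1]  # one 90° CCW rotation
    return img
```

The adjustment unit (27) would then hand both same-orientation images to the monitor (24) for side-by-side display.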
Deep learning for optical coherence tomography segmentation
Systems and methods are presented for providing a machine learning model for segmenting an optical coherence tomography (OCT) image. A first OCT image is obtained, and then labeled with identified boundaries associated with different tissues in the first OCT image using a graph search algorithm. Portions of the labeled first OCT image are extracted to generate a first plurality of image tiles. A second plurality of image tiles is generated by manipulating at least one image tile from the first plurality of image tiles, such as by rotating and/or flipping the at least one image tile. The machine learning model is trained using the first plurality of image tiles and the second plurality of image tiles. The trained machine learning model is used to perform segmentation in a second OCT image.
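The tiling and augmentation steps in this abstract can be sketched concretely. Below, non-overlapping tiles are cut from a labeled image and each tile is augmented by rotation and flipping, as the abstract describes; the grid representation and helper names are illustrative assumptions:

```python
# Sketch of the tile-extraction and augmentation steps (hypothetical helpers).

def extract_tiles(img, size):
    """Cut non-overlapping size-by-size tiles from an image (list of rows)."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def augment(tile):
    """Generate the second set of tiles: a 90° rotation and a horizontal flip."""
    rotated = [list(col) for col in zip(*tile)][::-1]
    flipped = [row[::-1] for row in tile]
    return [rotated, flipped]
```

The model would then train on the union of the original tiles and their augmented copies, with the graph-search labels carried along unchanged (up to the same rotation/flip).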
Automatically labeling capability for training and validation data for machine learning
A method for enabling a labeling capability for training and validation data at an edge device, to support neural network transfer learning, is provided. The method includes: inputting candidate data into a first neural network and filtering it by selecting a subset of the candidate data based on the first network's output; performing a confidence upgrade check on the subset by (1) performing a data consistency check that generates augmented data from each item in the subset, (2) inputting the subset into a second neural network, trained on data from the environment, to determine a second confidence condition, and (3) performing clustering on the subset; and automatically labeling the subset as training data in accordance with a confidence level label.
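The filtering and consistency steps of this pipeline can be sketched with stubbed models. Here `model_a` and `model_b` are hypothetical callables returning `(label, confidence)`, and the clustering step is omitted for brevity; the threshold value is likewise an assumption:

```python
# Hypothetical sketch of the confidence-upgrade auto-labeling pipeline.

def auto_label(samples, model_a, model_b, augmentations, threshold=0.9):
    """model_a filters candidates; a consistency check requires the label to
    survive augmentation; model_b (environment-trained) must agree."""
    labeled = []
    for x in samples:
        label_a, conf_a = model_a(x)
        if conf_a < threshold:
            continue  # first-network filter rejects low-confidence candidates
        # Data consistency check: predictions must match on augmented copies.
        if any(model_a(aug(x))[0] != label_a for aug in augmentations):
            continue
        label_b, conf_b = model_b(x)
        if label_b == label_a and conf_b >= threshold:
            labeled.append((x, label_a, "high-confidence"))
    return labeled
```

Items passing every check are added to the training set with their confidence-level label; everything else stays unlabeled for later review.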
Systems and methods for digitized document image data spillage recovery
Systems and methods for digitized document image data spillage recovery are provided. One or more memories may be coupled to one or more processors, the one or more memories including instructions operable to be executed by the one or more processors. The one or more processors may be configured to capture an image; process the image through at least a first pass to generate a first contour; remove a preprinted bounding region of the first contour to retain text; generate one or more pixel blobs by applying one or more filters to smudge the text; identify the one or more pixel blobs that straddle one or more boundaries of the first contour; resize the first contour to enclose spillage of the one or more pixel blobs; overlay the text from the image within the resized contour; and apply pixel masking to the resized contour.
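The key geometric step in this abstract, resizing a contour to enclose pixel blobs that straddle its boundary, can be sketched with axis-aligned bounding boxes. The `(x0, y0, x1, y1)` box representation is an assumption; real contours would come from an image-processing library:

```python
# Sketch of the spillage-recovery resize, with boxes as (x0, y0, x1, y1).

def intersects(box, blob):
    """True if the blob's bounding box overlaps the contour's box."""
    return not (blob[2] < box[0] or blob[0] > box[2]
                or blob[3] < box[1] or blob[1] > box[3])

def straddles(box, blob):
    """True if the blob overlaps the box but is not fully inside it."""
    inside = (blob[0] >= box[0] and blob[1] >= box[1]
              and blob[2] <= box[2] and blob[3] <= box[3])
    return intersects(box, blob) and not inside

def resize_to_enclose(box, blobs):
    """Grow the contour's box to enclose every straddling blob (the spillage)."""
    x0, y0, x1, y1 = box
    for b in (b for b in blobs if straddles(box, b)):
        x0, y0 = min(x0, b[0]), min(y0, b[1])
        x1, y1 = max(x1, b[2]), max(y1, b[3])
    return (x0, y0, x1, y1)
```

Blobs entirely inside the contour and blobs entirely outside it are left alone; only text that spilled across the preprinted boundary forces the contour to grow before the text overlay and pixel masking are applied.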