Patent classifications
G06V10/24
Image analysis for decoding angled optical patterns
An angled optical pattern is decoded. To decode an optical pattern imaged at an angle, an area of interest of an image is received. A start line and an end line of the optical pattern are estimated, corners of the optical pattern are localized, a homography is calculated based on the corners, and a scanline of the optical pattern is rectified based on the homography.
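The corner-to-scanline pipeline the abstract describes can be sketched in pure NumPy; the direct linear transform below and the function names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Direct Linear Transform: solve for the 3x3 H mapping the four
    src corners to the four dst corners (h is the null vector of A)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def rectify_scanline(H_inv, y, width):
    """Map a horizontal scanline at row y of the rectified pattern back to
    sample coordinates in the original (angled) image via the inverse homography."""
    pts = np.stack([np.arange(width), np.full(width, y), np.ones(width)], axis=0)
    s = H_inv @ pts.astype(float)
    return (s[:2] / s[2]).T  # (x, y) sample location per scanline pixel
```

Sampling the original image at the returned locations (e.g. with bilinear interpolation) yields a straightened scanline that a 1-D barcode decoder can consume.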
Systems and methods for digitized document image data spillage recovery
Systems and methods for digitized document image data spillage recovery are provided. One or more memories may be coupled to one or more processors, the one or more memories including instructions operable to be executed by the one or more processors. The one or more processors may be configured to capture an image; process the image through at least a first pass to generate a first contour; remove a preprinted bounding region of the first contour to retain text; generate one or more pixel blobs by applying one or more filters to smudge the text; identify the one or more pixel blobs that straddle one or more boundaries of the first contour; resize the first contour to enclose spillage of the one or more pixel blobs; overlay the text from the image within the resized contour; and apply pixel masking to the resized contour.
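The contour-resizing step of this pipeline can be illustrated with rectangles standing in for the contour and the pixel blobs; the box representation and helper names below are assumptions for the sketch, not the claimed implementation.

```python
def intersects(a, b):
    """Boxes are (x0, y0, x1, y1); true if the interiors overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def contains(a, b):
    """True if box a fully encloses box b."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

def straddles(box, blob):
    """A blob straddles the contour if it overlaps the box without being contained."""
    return intersects(box, blob) and not contains(box, blob)

def enclose_spillage(box, blobs):
    """Resize the contour box just enough to enclose every straddling blob."""
    x0, y0, x1, y1 = box
    for bx0, by0, bx1, by1 in (b for b in blobs if straddles(box, b)):
        x0, y0 = min(x0, bx0), min(y0, by0)
        x1, y1 = max(x1, bx1), max(y1, by1)
    return (x0, y0, x1, y1)
```

Blobs wholly inside or wholly outside the contour leave it unchanged; only blobs crossing a boundary grow it.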
Text line normalization systems and methods
A method for estimating text heights of text line images includes estimating a text height with a sequence recognizer. The method further includes normalizing a vertical dimension and/or position of text within a text line image based on the text height. The method may also further include calculating a feature of the text line image. In some examples, the sequence recognizer estimates the text height with a machine learning model.
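A crude non-learned baseline for the text-height estimate (an ink-profile heuristic rather than the sequence recognizer the abstract describes) might look like:

```python
import numpy as np

def estimate_text_height(line_img, thresh=0.1):
    """Estimate text height as the vertical extent of rows whose
    ink fraction exceeds thresh (line_img: 2-D array, ink pixels > 0)."""
    ink = (line_img > 0).mean(axis=1)
    rows = np.flatnonzero(ink > thresh)
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def normalization_scale(line_img, target_height=32, thresh=0.1):
    """Vertical scale factor mapping the estimated text height to target_height."""
    h = estimate_text_height(line_img, thresh)
    return target_height / h if h else 1.0
```

A learned estimator would replace `estimate_text_height`; the normalization step itself, rescaling the vertical dimension by the returned factor, stays the same.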
System, method, apparatus, and computer program product for utilizing machine learning to process an image of a mobile device to determine a mobile device integrity status
A system, apparatus, method and computer program product are provided for determining a mobile device integrity status. Images of a mobile device captured by the mobile device and using a reflective surface are processed with various trained models, such as neural networks, to verify authenticity, detect damage, and to detect occlusions. A mask may be generated to enable identification of concave occlusions or blocked corners of an object, such as a mobile device, in an image. Images of the front and/or rear of a mobile device may be processed to determine the mobile device integrity status such as verified, not verified, or inconclusive. A user may be prompted to remove covers, remove occlusions, and/or move the mobile device closer to the reflective surface. A real-time response relating to the mobile device integrity status may be provided. The trained models may be trained to improve the accuracy of the mobile device integrity status.
Vehicle front optical object detection via photoelectric effect of metallic striping
A system and method for reliably determining lanes of a roadway includes an optical sensing arrangement for sensing metallic striping via the photoelectric effect. The location of the striping that defines a border of a traffic lane is determined, and the location of the striping is displayed on a graphical user interface. The location can be used to provide lane control that ensures the vehicle maintains proper position in a traffic lane, as well as lane-warning assistance, collision avoidance, parking control, and guidance for autonomous driving.
Texture extraction
Texture extraction is disclosed.
Annotation Method of Arbitrary-Oriented Rectangular Bounding Box
Disclosed in the present invention is an annotation method for arbitrary-oriented rectangular bounding boxes. The elements of the annotation are: the coordinates of the center point C; a vector CD formed by the center point C and a chosen vertex D; and the ratio ρ of the vector CP to the vector CD, where CP is the projection of the vector CE onto CD, and CE is the vector from the center of the bounding box to a vertex E adjacent to D. It is further required that CP point in the same direction as CD; the vertex E may lie in either the clockwise or counterclockwise direction from D. The notation of this method is (x_c, y_c, u, v, ρ), where x_c and y_c are the two coordinates of the center point C, u and v are the two components of the vector CD, and ρ is the ratio of CP to CD. Additionally, letting a binary value s indicate whether the two components of CD have the same sign, so that CD and −CD are represented at once by (|u|, |v|, s), yields a method for annotating arbitrary-oriented rectangular bounding boxes in which each bounding box has only two representation vectors. Its notation is (x_c, y_c, |u|, |v|, s, ρ), where |u| and |v| are the magnitudes of the two components of CD. This method avoids loss inconsistency between representations of the same bounding box and is beneficial to model regression training.
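The encoding described above can be written out directly; the function below is a sketch of the (x_c, y_c, |u|, |v|, s, ρ) notation, with E assumed to be chosen so that the projection CP points along CD (i.e. ρ > 0).

```python
import numpy as np

def encode_box(C, D, E):
    """Encode an arbitrary-oriented rectangle as (x_c, y_c, |u|, |v|, s, rho).
    C: center; D: chosen vertex; E: a vertex adjacent to D, picked so that
    the projection of CE onto CD points in the same direction as CD."""
    C, D, E = (np.asarray(p, dtype=float) for p in (C, D, E))
    cd = D - C                             # vector CD, components (u, v)
    ce = E - C                             # vector CE
    rho = float(ce @ cd) / float(cd @ cd)  # ratio of the projection CP to CD
    u, v = cd
    s = 1 if u * v >= 0 else 0             # do the components of CD share a sign?
    return (float(C[0]), float(C[1]), float(abs(u)), float(abs(v)), s, rho)
```

For example, an axis-aligned 4x2 rectangle centered at the origin with D = (2, 1) and E = (2, -1) encodes as (0, 0, 2, 1, 1, 0.6).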
FISHEYE COLLAGE TRANSFORMATION FOR ROAD OBJECT DETECTION OR OTHER OBJECT DETECTION
A method includes obtaining a fisheye image of a scene and identifying multiple regions of interest in the fisheye image. The method also includes applying one or more transformations to transform and rotate one or more of the regions of interest in the fisheye image to produce one or more transformed regions. The method further includes generating a collage image having at least one portion based on the fisheye image and one or more portions containing the one or more transformed regions. In addition, the method includes performing object detection to identify one or more objects captured in the collage image.
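The collage step can be illustrated with quarter-turn rotations on grayscale arrays; a real system would also undistort the fisheye regions, and the fixed tiling scheme here is a simplifying assumption.

```python
import numpy as np

def build_collage(image, rois, quarter_turns, tile=32):
    """Crop each region of interest, rotate it by k quarter turns, and paste
    the results side by side into one collage image for a single detector pass.
    rois: (r0, c0, r1, c1) per region; assumes each rotated crop fits a tile cell."""
    collage = np.zeros((tile, tile * len(rois)), dtype=image.dtype)
    for i, ((r0, c0, r1, c1), k) in enumerate(zip(rois, quarter_turns)):
        crop = np.rot90(image[r0:r1, c0:c1], k)  # counterclockwise k * 90 degrees
        h, w = crop.shape
        collage[:h, i * tile:i * tile + w] = crop
    return collage
```

Detections on the collage are then mapped back to the fisheye frame by inverting each region's rotation and tile offset.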