Patent classifications
G06T7/60
GROUND ENGAGING TOOL WEAR AND LOSS DETECTION SYSTEM AND METHOD
An example wear detection system receives a plurality of images from a plurality of sensors associated with a work machine. Individual sensors of the plurality of sensors have respective fields-of-view different from those of the other sensors. The wear detection system identifies a first region of interest and a second region of interest associated with at least one ground engaging tool (GET). The wear detection system determines a first set of image points and a second set of image points for the at least one GET based on geometric parameters associated with the GET, and derives a GET measurement from those image points. The wear detection system then determines a wear level or loss for the at least one GET based on the GET measurement.
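The abstract above leaves the final classification step implicit. As a minimal sketch, assuming a single image-derived tooth-length measurement is compared against a nominal (unworn) length, wear could be bucketed like this; the nominal length, loss threshold, and wear bands are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: estimating GET wear from one measured tooth length.
# All constants below are made-up assumptions for illustration.

NOMINAL_LENGTH_MM = 250.0   # assumed length of an unworn tooth (geometric parameter)
LOSS_THRESHOLD_MM = 40.0    # assumed cutoff below which the tooth is treated as lost

def wear_level(measured_length_mm: float) -> str:
    """Classify wear from a single image-derived length measurement."""
    if measured_length_mm < LOSS_THRESHOLD_MM:
        return "lost"
    # Fraction of the nominal length that has worn away.
    wear_fraction = 1.0 - measured_length_mm / NOMINAL_LENGTH_MM
    if wear_fraction < 0.25:
        return "low"
    if wear_fraction < 0.60:
        return "moderate"
    return "high"

print(wear_level(240.0))  # nearly full length
print(wear_level(90.0))   # heavily worn
print(wear_level(25.0))   # below the loss threshold
```

In practice the two regions of interest from different sensor views would be fused into one robust measurement before a step like this runs.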
TEXT BORDER TOOL AND ENHANCED CORNER OPTIONS FOR BACKGROUND SHADING
Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data obtained from an observation space. The processor detects a subject image of a detection target from the observation data, calculates a plurality of individual indices indicating degrees of reliability, each of which relates to at least identification information or measurement information regarding the detection target, and also calculates an integrated index, which is obtained by integrating a plurality of calculated individual indices. The output interface outputs the integrated index.
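One plausible reading of "integrating a plurality of calculated individual indices" is a weighted combination of per-aspect reliability scores. A minimal sketch, assuming indices in [0, 1] and illustrative weights (the abstract does not specify the integration rule):

```python
# Hypothetical sketch: combining per-aspect reliability indices into one
# integrated index via a weighted mean. Keys and weights are assumptions.

def integrated_index(individual: dict, weights: dict) -> float:
    """Weighted mean of individual reliability indices, each in [0, 1]."""
    total_weight = sum(weights[k] for k in individual)
    weighted_sum = sum(individual[k] * weights[k] for k in individual)
    return weighted_sum / total_weight

# Example: identification confidence weighted more than measurement confidence.
indices = {"identification": 0.9, "distance": 0.7, "velocity": 0.8}
weights = {"identification": 0.5, "distance": 0.3, "velocity": 0.2}
print(round(integrated_index(indices, weights), 2))
```

A downstream consumer (e.g. a mobile object's controller) could then threshold the single integrated value instead of reasoning about each index separately.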
METHOD AND SYSTEM FOR DETERMINING A FITTED POSITION OF AN OPHTHALMIC LENS WITH RESPECT TO A WEARER REFERENTIAL AND METHOD FOR DETERMINING A LENS DESIGN OF AN OPHTHALMIC LENS
A method for determining a fitted position of an ophthalmic lens to be mounted on a spectacle frame equipping a wearer, the fitted position being defined with respect to a wearer referential linked to the head of the wearer. The method includes defining at least one fitting criterion relating to the positioning of the ophthalmic lens with respect to the spectacle frame, determining frame 3D data at least partially representative of the geometry and position of the spectacle frame with respect to the wearer referential, determining lens 3D data at least partially representative of the geometry of at least a peripheral portion of the ophthalmic lens, and determining the fitted position of said ophthalmic lens with respect to the wearer referential using the frame 3D data and said lens 3D data to fit the ophthalmic lens within the spectacle frame while meeting the fitting criterion.
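The abstract does not state what a fitting criterion looks like concretely. As one hypothetical example, a criterion might require that every point of the lens periphery keeps a minimum clearance from the frame rim; the radii and clearance value below are illustrative assumptions only:

```python
# Hypothetical sketch: evaluating one possible fitting criterion, namely that
# the lens periphery keeps a minimum clearance from the frame rim. The geometry
# is simplified to radial distances; values are made-up for illustration.

def meets_clearance(lens_edge_radii_mm, frame_rim_radius_mm, min_clearance_mm=0.2):
    """True if every periphery point fits inside the rim with the required gap."""
    return all(frame_rim_radius_mm - r >= min_clearance_mm
               for r in lens_edge_radii_mm)

print(meets_clearance([24.8, 24.9, 24.7], 25.2))  # every gap at least 0.2 mm
print(meets_clearance([25.1, 24.9, 24.7], 25.2))  # one gap of only ~0.1 mm
```

A real fitting pipeline would evaluate such criteria in 3D against the frame and lens data rather than against scalar radii.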
METHOD AND SYSTEM FOR AUTOMATIC CHARACTERIZATION OF A THREE-DIMENSIONAL (3D) POINT CLOUD
Methods of and systems for characterization of a 3D point cloud are disclosed. The method comprises accessing a 3D point cloud, the 3D point cloud being a set of data points representative of an object; determining, based on the 3D point cloud, a 3D reconstructed object; determining, based on the 3D reconstructed object, a digital framework of the 3D point cloud, the digital framework being a ramified 3D tree structure representative of a base structure of the object; morphing a 3D reference model of the object onto the 3D reconstructed object, the morphing being based on the digital framework; and determining, based on the morphed 3D reference model and the 3D reconstructed object, characteristics of the object.
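The morphing step can be hard to picture. As a deliberately crude stand-in, the sketch below aligns a reference model to a reconstruction by matching centroid and overall spread only; a real pipeline would deform the model along the digital framework (the tree skeleton), which this sketch ignores entirely. All names and data are assumptions:

```python
# Hypothetical sketch: rigid centroid-and-scale alignment of a reference
# point set onto a target point set, standing in for the morphing step.
# Points are (x, y, z) tuples; no skeleton-guided deformation is modeled.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def mean_spread(points, c):
    """Mean Euclidean distance of the points from centroid c."""
    return sum(
        sum((p[i] - c[i]) ** 2 for i in range(3)) ** 0.5 for p in points
    ) / len(points)

def align(reference, target):
    """Translate and uniformly scale `reference` onto `target`."""
    rc, tc = centroid(reference), centroid(target)
    s = mean_spread(target, tc) / mean_spread(reference, rc)
    return [tuple((p[i] - rc[i]) * s + tc[i] for i in range(3)) for p in reference]

reference = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
target = [(10.0, 10.0, 10.0), (14.0, 10.0, 10.0)]
print(align(reference, target))
```

Once the model sits on the reconstruction, per-region differences between the two yield the object characteristics the abstract mentions.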
METHODS AND SYSTEMS FOR OBTAINING A SCALE REFERENCE AND MEASUREMENTS OF 3D OBJECTS FROM 2D PHOTOS
Disclosed are systems and methods for obtaining a scale factor and 3D measurements of objects from a series of 2D images. An object to be measured is selected from a menu of an Augmented Reality (AR) based measurement application being executed by a mobile computing device. Measurement instructions corresponding to the selected object are retrieved and used to generate a series of image capture screens that assist the user in positioning the device relative to the object in a plurality of imaging positions to capture the series of 2D images. The images are used to determine one or more scale factors and to build a complete scaled 3D model of the object in virtual 3D space. The 3D model is used to generate one or more measurements of the object.
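The core arithmetic of the scale step is simple: one real-world distance of known length, matched to its length in unscaled model units, fixes the scale factor for every other model distance. A minimal sketch with made-up example values (the abstract does not specify the reference object used):

```python
# Hypothetical sketch: deriving a scale factor from one known real-world
# distance and applying it to distances measured in unscaled model units.
# The reference length and model values below are illustrative assumptions.

def scale_factor(known_real_mm: float, model_units: float) -> float:
    """Millimetres per model unit, from one known reference distance."""
    return known_real_mm / model_units

def to_real(model_distance: float, factor: float) -> float:
    """Convert an unscaled model distance to millimetres."""
    return model_distance * factor

# e.g. a reference edge known to be 210 mm measures 0.7 units in the model
f = scale_factor(known_real_mm=210.0, model_units=0.7)
print(to_real(1.2, f))  # some other model distance, now in millimetres
```

With several reference distances, the per-reference factors could be averaged to reduce tracking noise, which may be why the abstract speaks of "one or more scale factors".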