Patent classifications
G06V10/14
Motorized Mounting Device for Positioning an Optical Element Within a Field-of-View of an Optical Sensor and Method of Use
A mounting device for selectively positioning an optical element within a field-of-view of an optical sensor of a vehicle includes: a housing defining an opening sized to fit over an aperture of the optical sensor; a holder for the optical element connected to the housing and positioned such that, when the holder is in a first position, the optical element is at least partially within the field-of-view of the optical sensor; and a motorized actuator. The motorized actuator can be configured to move the holder to adjust the position of the optical element relative to the field-of-view of the optical sensor.
TRAINING DATA GENERATION DEVICE, RECORDING METHOD, AND INFERENCE DEVICE
A training data generation device includes a computer and a computer-readable storage medium. The computer is configured to: receive an input of an annotation for second image data obtained by imaging an observation target; reflect a result of the annotation in first image data that relates to the same observation target as the second image data, the first image data differing from the second image data in at least one of imaging mode and display mode; and generate training data for creating an inference model by using the first image data and the result of the annotation reflected in the first image data. The first image data includes image data of a plurality of images, and the second image data is image data of an image obtained by combining the plurality of images included in the first image data.
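The annotation-reflection flow in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the `Annotation` class, file names, and the assumption that the component images are spatially registered with their composite are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Annotation:
    # Hypothetical bounding-box annotation: a label plus (x, y, w, h)
    # in the coordinate frame of the combined (second) image.
    label: str
    box: tuple


def reflect_annotation(annotation, num_component_images):
    """Propagate an annotation made on the combined (second) image to every
    component image of the first image data. Assumes the component images
    are registered with the composite, so the same coordinates apply."""
    return [(i, annotation) for i in range(num_component_images)]


def build_training_data(component_images, annotation):
    """Pair each component image with the reflected annotation result,
    yielding training data for an inference model."""
    reflected = reflect_annotation(annotation, len(component_images))
    return [(component_images[idx], ann) for idx, ann in reflected]


# Usage: three component images captured in different imaging modes,
# one annotation drawn on their composite.
images = ["mode_a.tif", "mode_b.tif", "mode_c.tif"]
ann = Annotation(label="cell", box=(10, 20, 32, 32))
training_data = build_training_data(images, ann)
```

The point of the sketch is the fan-out: one annotation on the composite becomes a label for each underlying image, so the annotator only works in the easiest-to-read display mode.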
DNN Assisted Object Detection and Image Optimization
Systems and methods directed to adjusting an image based on a detected object depicted in the image are described. The method may include receiving an image from an image sensor, receiving statistical information associated with the image, detecting an object depicted in the image using a deep neural network, identifying object-specific statistical information for the detected object, generating a weighted object-specific parameter based on the object-specific statistical information, generating a weighted-image value based on the weighted object-specific parameter, providing the weighted-image value to the image sensor, where the image sensor is configured to update one or more image sensor parameters based on the weighted-image value, and acquiring an image from the image sensor updated with the one or more image sensor parameters.
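The feedback loop this abstract describes can be sketched as follows. The statistic (mean pixel value), the 0.7 object weight, and the exposure-update rule are illustrative assumptions, not details taken from the patent.

```python
def object_region_mean(image, box):
    """Mean pixel value inside the detected object's bounding box
    (a stand-in for the object-specific statistical information)."""
    x, y, w, h = box
    region = [row[x:x + w] for row in image[y:y + h]]
    values = [v for row in region for v in row]
    return sum(values) / len(values)


def weighted_image_value(image_mean, object_mean, object_weight=0.7):
    """Blend the global image statistic with the object-specific one;
    the weighting biases exposure toward the detected object."""
    return object_weight * object_mean + (1 - object_weight) * image_mean


def update_exposure(current_exposure, weighted_value, target=128.0):
    """Toy sensor-parameter update: scale exposure toward a target level."""
    return current_exposure * (target / max(weighted_value, 1e-6))


# Usage: a bright 2x2 object on a darker 4x4 image.
image = [[50] * 4 for _ in range(4)]
for y in range(2):
    for x in range(2):
        image[y][x] = 200
obj_mean = object_region_mean(image, (0, 0, 2, 2))
img_mean = sum(sum(row) for row in image) / 16
wiv = weighted_image_value(img_mean, obj_mean)
new_exposure = update_exposure(1.0, wiv)
```

Because the weighted value exceeds the target here, the next exposure is shortened, which is the intended behavior when the detected object is overexposed relative to the rest of the frame.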
REMOTE SENSING OF TERRAIN STRENGTH FOR MOBILITY MODELING
Methods for characterizing soil stiffness of an area. One example method includes receiving, with an electronic processor, a parameter corresponding to a soil type of the area; receiving, with the electronic processor, a plurality of thermal images of the area; determining, with the electronic processor, an apparent thermal inertia of the area based on the plurality of thermal images; determining, with the electronic processor, a soil gradation of the area based on the parameter; determining, with a machine learning algorithm executed by the electronic processor, an approximate soil stiffness of the area based on the apparent thermal inertia; and outputting, to a display communicatively coupled to the electronic processor, a representation of the approximate soil stiffness.
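The pipeline above can be illustrated with a common remote-sensing approximation of apparent thermal inertia, ATI = (1 - albedo) / (T_day - T_night). The formula, the two representative temperatures standing in for the plurality of thermal images, and the linear stiffness mapping (a placeholder for the patent's machine learning algorithm) are all assumptions for illustration.

```python
def apparent_thermal_inertia(albedo, day_temp_k, night_temp_k):
    """A common remote-sensing approximation of apparent thermal inertia:
    ATI = (1 - albedo) / (T_day - T_night). The patent derives ATI from a
    plurality of thermal images; here two representative surface
    temperatures (in kelvin) stand in for that image series."""
    return (1.0 - albedo) / (day_temp_k - night_temp_k)


def estimate_soil_stiffness(ati, gradation_factor):
    """Placeholder for the learned model of the abstract: a simple monotonic
    mapping from thermal inertia, scaled by a soil-gradation factor derived
    from the soil-type parameter. The real relationship is learned from
    data, not fixed like this."""
    return gradation_factor * ati


# Usage: moist fine-grained soil shows a small day/night temperature swing,
# hence a higher ATI than dry loose soil with the same albedo.
ati = apparent_thermal_inertia(0.2, 305.0, 285.0)
stiffness = estimate_soil_stiffness(ati, 500.0)
```

The useful intuition is the inverse relationship: soils that resist temperature swings (high thermal inertia, often denser or wetter) produce a distinct thermal signature that the learned model maps to stiffness.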
OBJECT PERCEPTION SYSTEM AND METHOD
A system and method for object perception is provided, comprising one or more sensors for sensing environmental data inputs with reference to an object viewing point and delivering them to a processor. The processor uses the data inputs to determine and select light outputs to be emitted from one or more displays. The light outputs are modulated to comprise one or more wavelengths of light that elicit one or more targeted vision responses in order to facilitate object perception from the object viewing point. The light outputs can be modulated according to how the sensed environmental parameters change over time (e.g., light conditions). Light outputs may include the emission of equiluminance-based wavelengths of light and peak-sensitivity wavelengths of light to elicit peripheral and foveal vision responses, respectively.
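One way to picture the modulation step is a function from a history of sensed light readings to a wavelength and intensity. The peak wavelengths below are standard human-vision figures (scotopic/rod sensitivity peaks near 507 nm, photopic/cone near 555 nm), but the 3-lux threshold, the intensity rule, and the output shape are invented for this sketch and are not from the patent.

```python
SCOTOPIC_PEAK_NM = 507   # rod (peripheral) peak sensitivity
PHOTOPIC_PEAK_NM = 555   # cone (foveal) peak sensitivity


def modulate_light_outputs(ambient_lux_history):
    """Choose an emitted wavelength and intensity from how sensed light
    conditions have changed over time. In dim conditions the peripheral
    (rod) response dominates, so emit near the scotopic peak; in bright
    conditions target the foveal (cone) response."""
    current = ambient_lux_history[-1]
    trend = current - ambient_lux_history[0]
    dominant = SCOTOPIC_PEAK_NM if current < 3.0 else PHOTOPIC_PEAK_NM
    # Raise intensity as ambient light falls, so the object stays perceptible.
    intensity = max(0.1, min(1.0, 1.0 - current / 1000.0))
    return {"wavelength_nm": dominant, "intensity": intensity,
            "dimming": trend < 0}
```

A real system would also account for the equiluminance condition the abstract mentions (matching luminance across wavelengths so peripheral vision responds to chromatic contrast), which this scalar sketch omits.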
APPARATUS AND METHODS OF IDENTIFYING TUBE ASSEMBLY TYPE
A method of identifying a tube type. The method includes capturing one or more pixelated images of a cap affixed to a tube; identifying a color of one or more pixels of the pixelated image of the cap; identifying one or more gradients of a dimension of the cap; and identifying the tube type based at least on: the color of the one or more pixels, and the one or more gradients of a dimension of the cap. Apparatus adapted to carry out the method are disclosed, as are other aspects.
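The two features the method combines, pixel color and a dimension gradient, can be sketched with a toy rule-based classifier. The tube labels, the gradient threshold, and the width-profile representation are illustrative assumptions; the patent presumably uses a trained model and real cap geometry.

```python
def dominant_color(cap_pixels):
    """Average RGB of the cap pixels as a crude color descriptor."""
    n = len(cap_pixels)
    return tuple(sum(p[c] for p in cap_pixels) / n for c in range(3))


def dimension_gradients(cap_widths):
    """Gradients of a cap dimension (e.g., width sampled top to bottom);
    a stepped cap profile yields large nonzero gradients."""
    return [b - a for a, b in zip(cap_widths, cap_widths[1:])]


def identify_tube_type(cap_pixels, cap_widths):
    """Toy stand-in for the identification step: combine the cap's
    dominant color with its width-profile gradients."""
    r, g, b = dominant_color(cap_pixels)
    grads = dimension_gradients(cap_widths)
    stepped = any(abs(d) > 2 for d in grads)
    if r > g and r > b:
        # Red caps: distinguish variants by the stepped profile.
        return "serum tube" if stepped else "plain red-top tube"
    if b > r and b > g:
        return "citrate tube"
    return "unknown"
```

Combining the gradient with color is what lets the method separate tube types that share a cap color but differ in cap shape, which color alone cannot do.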