G05D1/249

Pool cleaning system and method to automatically clean surfaces of a pool using images from a camera

A pool cleaning system for cleaning debris from a submerged surface of a swimming pool includes a self-propelled pool cleaner having rotatably-mounted supports for supporting and guiding the cleaner on the pool surface; an electric motor for enabling the rotation of the rotatably-mounted supports on the pool surface; at least one camera to capture imagery of the pool surface; a controller, in electronic communication with the at least one camera, to determine a cleanliness characteristic of the pool surface on which the cleaner has passed based on the camera imagery and generate a control signal to direct movement of the cleaner based on the cleanliness characteristic of the pool surface; and a portable electronic device configured to present a graphic on a display, the graphic depicting the submerged surface of the pool and those portions of the surface that remain uncleaned as the cleaner traverses the pool surface.
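The coverage-tracking idea described above can be sketched by discretizing the pool surface into a grid and marking cells the cleaner has visited; the uncleaned cells are what the portable device would render. Grid dimensions, cell bookkeeping, and function names here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: track which portions of a discretized pool surface
# remain uncleaned as the cleaner traverses it. Grid size and the notion
# of a "visited cell" are assumed for illustration.

def mark_cleaned(grid, path):
    """grid: 2D list of booleans (True = cleaned); path: (row, col) cells visited."""
    for r, c in path:
        grid[r][c] = True
    return grid

def uncleaned_cells(grid):
    """Cells the display would highlight as still needing cleaning."""
    return [(r, c) for r, row in enumerate(grid)
            for c, done in enumerate(row) if not done]

grid = [[False] * 3 for _ in range(2)]       # 2 x 3 surface grid, all dirty
mark_cleaned(grid, [(0, 0), (0, 1), (1, 2)]) # cleaner's path so far
remaining = uncleaned_cells(grid)
```

A real system would map camera-derived pose estimates onto such a grid rather than receiving cell indices directly.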

Auto clean machine and auto clean machine control method
11880205 · 2024-01-23

An auto clean machine, comprising: a light source configured to emit light to illuminate at least one light region outside and in front of the auto clean machine; a first image sensing area, configured to sense a first brightness distribution of the light region; a second image sensing area below the first image sensing area, configured to sense a second brightness distribution of the light region; and a processor, configured to control movement of the auto clean machine according to the first brightness distribution and the second brightness distribution. The processor generates a wall detection result based on the first brightness distribution of the light region, generates a cliff detection result based on the second brightness distribution of the light region, and controls the movement of the auto clean machine according to the wall detection result and the cliff detection result.
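The two-region logic above lends itself to a simple sketch: a bright reflection in the upper sensing area suggests a nearby wall, while an unusually dark lower sensing area suggests the floor has dropped away. The thresholds and decision rule are assumed placeholders; the patent does not specify them.

```python
# Illustrative sketch of wall/cliff detection from two brightness
# distributions. Threshold values are assumptions for demonstration.

def detect_wall(upper_brightness, wall_threshold=200):
    """A strong reflection in the upper region implies a nearby wall."""
    return max(upper_brightness) >= wall_threshold

def detect_cliff(lower_brightness, cliff_threshold=30):
    """A dark lower region implies the illuminated floor is absent (cliff)."""
    return max(lower_brightness) < cliff_threshold

def next_move(upper_brightness, lower_brightness):
    """Combine both detection results into a movement decision."""
    if detect_cliff(lower_brightness):
        return "reverse"   # avoid falling
    if detect_wall(upper_brightness):
        return "turn"      # avoid collision
    return "forward"
```

In practice the processor would evaluate the full brightness distributions, not just their maxima, but the fusion of the two results follows this shape.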

Continuous convolution and fusion in neural networks

Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
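The core of a parametric continuous convolution is that kernel weights come from a function evaluated at real-valued offsets between support points, rather than from a fixed grid of learned weights. The sketch below uses a Gaussian of the offset as a stand-in for the learned parametric kernel (which the disclosure describes as learned, e.g. an MLP); the support points and features are illustrative.

```python
import numpy as np

# Sketch of a continuous convolution over an irregular support domain:
# h_i = sum_j g(x_i - x_j) * f_j, where g is a parametric kernel function
# evaluated at continuous offsets. The Gaussian kernel is an assumed
# stand-in for a learned kernel.

def continuous_conv(points, features, kernel_fn):
    """Weight every support point's feature by kernel_fn(offset) and sum."""
    out = np.zeros_like(features)
    for i, xi in enumerate(points):
        weights = np.array([kernel_fn(xi - xj) for xj in points])
        out[i] = weights @ features
    return out

points = np.array([0.0, 0.4, 1.1])    # irregularly spaced support domain
features = np.array([1.0, 2.0, 3.0])
gauss = lambda d: np.exp(-d * d)      # placeholder parametric kernel
h = continuous_conv(points, features, gauss)
```

Because the kernel is a function of the offset, the same layer applies to any point arrangement, which is what makes the technique useful for sparse sensor data such as lidar.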

Image selection method, self-propelled apparatus, and computer storage medium

An image selection method, applied to a self-propelled apparatus, includes: collecting an image from a surrounding environment through an image collection device while the self-propelled apparatus travels; scoring the image according to a scoring rule when there is a recognizable obstacle in the collected image, wherein a value of the scoring is used to indicate an imaging quality of the recognizable obstacle in the image; and selecting, in response to receiving a request to view the image of the recognizable obstacle, an image that comprises the recognizable obstacle and that has the highest score as a to-be-displayed image. A computer-readable storage medium and a self-propelled apparatus are further provided.
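The selection rule reduces to scoring each frame that contains a recognizable obstacle and returning the highest-scoring one on request. The concrete scoring rule below (favoring a larger, more centered obstacle) is an assumption for illustration; the abstract leaves the rule open.

```python
# Minimal sketch of score-and-select. The scoring heuristic is assumed.

def score_image(obstacle_area, center_offset):
    """Assumed rule: larger, more centered obstacles image better."""
    return obstacle_area - 0.5 * center_offset

def best_image(frames):
    """frames: list of (frame_id, obstacle_area_px, center_offset_px).
    Returns the frame_id with the highest score."""
    return max(frames, key=lambda f: score_image(f[1], f[2]))[0]

frames = [("f1", 40, 10), ("f2", 55, 30), ("f3", 50, 2)]
chosen = best_image(frames)
```

Keeping only the running best score per obstacle, rather than all frames, would bound storage on the apparatus.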

Apparatus for acquiring surrounding information of vehicle and method for controlling thereof

An apparatus for acquiring surrounding information of a vehicle includes: a camera configured to acquire an entire image of at least one surrounding vehicle; and a controller configured to derive at least one of coordinates of a wheel image area or coordinates of a front-rear image area included in an entire image area, and determine distance information from the vehicle to the at least one surrounding vehicle based on a relative positional relationship between the entire image area and the at least one of the wheel image area coordinates or the front-rear image area coordinates.
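One way the wheel-area geometry can constrain distance is through a pinhole camera model: the apparent size of a feature with known real-world size determines range. The focal length and nominal wheel diameter below are assumed values, and this single-feature model is only a simplified stand-in for the relative-position reasoning the abstract describes.

```python
# Illustrative pinhole-model sketch: distance = f * real_size / image_size.
# focal_px and wheel_m are assumed calibration values.

def distance_from_wheel(wheel_px_height, focal_px=1000.0, wheel_m=0.65):
    """Estimate range to a surrounding vehicle from its wheel's pixel height."""
    return focal_px * wheel_m / wheel_px_height

range_m = distance_from_wheel(65)  # a 65-px wheel at f=1000 px
```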

Auto-locating and positioning relative to an aircraft
11880995 · 2024-01-23

Techniques for auto-locating and positioning relative to an aircraft are disclosed. An example method can include a robot receiving a multi-dimensional representation of an enclosure that includes a candidate target aircraft. The robot can extract a geometric feature from the multi-dimensional representation associated with the candidate target aircraft. The robot can compare the geometric feature of the candidate target aircraft with a second geometric feature from a reference model of a target aircraft. The robot can determine whether the candidate target aircraft is the target aircraft based on the comparison. The robot can calculate a path from a location of the robot to the target aircraft based on the determination. The robot can traverse the path from the location to the target aircraft based on the calculation.
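The identification step, comparing an extracted geometric feature against the same feature in a reference model, can be sketched as a tolerance check. The choice of wingspan as the feature and the tolerance value are assumptions; the method would typically compare several features extracted from the multi-dimensional representation.

```python
# Sketch of matching a candidate aircraft to a reference model by one
# geometric feature. Feature choice and tolerance are assumed.

def is_target_aircraft(measured_wingspan_m, reference_wingspan_m, tol_m=0.5):
    """The candidate matches if the feature agrees within tolerance."""
    return abs(measured_wingspan_m - reference_wingspan_m) <= tol_m
```

Only after this determination succeeds would the robot plan and traverse a path to the confirmed target.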

Method and apparatus with pose estimation
11880997 · 2024-01-23

A method and apparatus with pose estimation, where the method may include obtaining, using a depth network, a respective depth image for each of a plurality of successive input images; obtaining, using a pose network, respective image pose transformation matrices between images, of the successive input images, at adjacent time points; obtaining, based on initial pose information and the respective image pose transformation matrices, image pose information for each of the adjacent time points; estimating final pose information dependent on the obtained image pose information; accumulating the image pose transformation matrices; and calculating a pose loss value by comparing image position information, obtained from a result of the accumulating, with sensor position information obtained from a sensor. The pose and depth networks may be updated based on the pose loss value and a composite loss value dependent on the image pose transformation matrices and the input images.
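The accumulation and loss steps can be sketched concretely: chain the per-frame 4x4 homogeneous transforms from an initial pose, then compare the accumulated translation against a sensor position. The transforms and sensor reading below are illustrative, and the L2 position loss is an assumed form of the comparison.

```python
import numpy as np

# Sketch of accumulating pose transformation matrices and forming a
# pose loss against a sensor position. Values are illustrative.

def accumulate_poses(initial_pose, transforms):
    """Chain 4x4 homogeneous transforms: pose_k = pose_init @ T_1 @ ... @ T_k."""
    pose = initial_pose.copy()
    trajectory = [pose]
    for T in transforms:
        pose = pose @ T
        trajectory.append(pose)
    return trajectory

def pose_loss(accumulated_pose, sensor_position):
    """L2 distance between the accumulated translation and the sensor position."""
    return float(np.linalg.norm(accumulated_pose[:3, 3] - sensor_position))

def translation(dx, dy, dz):
    """Build a pure-translation homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

traj = accumulate_poses(np.eye(4), [translation(1, 0, 0), translation(1, 0, 0)])
loss = pose_loss(traj[-1], np.array([2.0, 0.1, 0.0]))
```

Backpropagating such a loss through the pose network is what lets the sensor position supervise the per-frame transformation estimates.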

MULTIMODAL MULTI-TECHNIQUE SIGNAL FUSION SYSTEM FOR AUTONOMOUS VEHICLE
20200081450 · 2020-03-12

An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
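The fusion step can be sketched as combining the independent directives under a conservative rule. Representing each directive as a maximum safe speed and taking the most restrictive one is an assumption for illustration; the actual fused directive and fusion rule are not specified at this level.

```python
# Hedged sketch of fusing independent detector directives. The directive
# representation (max safe speed, m/s) and min-rule are assumptions.

def fuse_directives(directives):
    """Adopt the most restrictive (lowest) speed directive."""
    return min(directives.values())

directives = {
    "absolute_detector": 12.0,     # from a module detecting objects outright
    "relative_activation": 8.0,    # from a module analyzing activation patterns
}
fused = fuse_directives(directives)
```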

ADAPTIVE ILLUMINATION FOR A TIME-OF-FLIGHT CAMERA ON A VEHICLE
20200084361 · 2020-03-12

Disclosed are devices, systems and methods for capturing an image. In one aspect an electronic camera apparatus includes an image sensor with a plurality of pixel regions. The apparatus further includes an exposure controller. The exposure controller determines, for each of the plurality of pixel regions, a corresponding exposure duration and a corresponding exposure start time. Each pixel region begins to integrate incident light starting at the corresponding exposure start time and continues to integrate light for the corresponding exposure duration. In some example embodiments, at least two of the corresponding exposure durations or at least two of the corresponding exposure start times are different in the image.
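A simple way a per-region exposure controller could choose durations is to scale each region's exposure inversely with the brightness it measured in the previous frame, aiming for a common target level. The linear model, target value, and microsecond units are assumptions for illustration.

```python
# Illustrative per-region exposure control: regions that measured dark get
# longer exposures, bright regions get shorter ones. Linear model assumed.

def region_exposures(prev_brightness, prev_exposure_us, target=128.0):
    """Scale each region's exposure toward a target mean brightness."""
    return [prev_exposure_us * target / max(b, 1.0) for b in prev_brightness]

# Three pixel regions that previously measured dark, mid, and bright:
exposures = region_exposures([64.0, 128.0, 255.0], prev_exposure_us=1000.0)
```

Giving regions distinct start times as well as durations (as the abstract describes) additionally lets a time-of-flight system gate each region to its expected return window.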

Operator assistance for autonomous vehicles

Disclosed are autonomous vehicles that may autonomously navigate at least a portion of a route defined by a service request allocator. The autonomous vehicle may, at a certain portion of the route, request remote assistance. In response to the request, an operator may provide input to a console that indicates control positions for one or more vehicle controls such as steering position, brake position, and/or accelerator position. A command is sent to the autonomous vehicle indicating how the vehicle should proceed along the route. When the vehicle reaches a location where remote assistance is no longer required, the autonomous vehicle is released from manual control and may then continue executing the route under autonomous control.