G05D2101/20

Mobile correctional facility robots
12264902 · 2025-04-01

The present disclosure is directed to mobile correctional facility robots, and to systems and methods for coordinating such robots to perform various tasks in a correctional facility. The robots can take over many of the tasks traditionally assigned to correctional facility guards, reducing the number of guards needed in any given facility. When multiple robots cooperate to execute tasks, a central controller can coordinate their efforts, improving the performance of the overall system of robots compared to the robots working in an uncoordinated fashion.
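The abstract's central coordination step could be sketched as a greedy nearest-robot task assignment. This is purely an illustrative policy; the patent does not specify the coordination algorithm, and the function and data shapes below are assumptions.

```python
import math

def assign_tasks(robots, tasks):
    """Greedily assign each task to the nearest still-free robot.

    robots: dict of robot_id -> (x, y) position
    tasks:  list of (task_id, (x, y)) target locations, in priority order
    Returns a dict of task_id -> robot_id.
    """
    free = dict(robots)
    assignment = {}
    for task_id, target in tasks:
        if not free:
            break  # more tasks than robots; remaining tasks wait
        best = min(free, key=lambda r: math.dist(free[r], target))
        assignment[task_id] = best
        del free[best]  # each robot handles one task at a time
    return assignment
```

A real coordinator would also handle re-assignment as robots finish or fail, which a greedy one-shot pass does not capture.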

Machine-learned architecture for efficient object attribute and/or intention classification

A system for faster object attribute and/or intent classification may include a machine-learned (ML) architecture that processes temporal sensor data (e.g., multiple instances of sensor data received at different times) and includes a cache in an intermediate layer of the ML architecture. The ML architecture may be capable of classifying an object's intent to enter a roadway, idle near a roadway, or actively cross a roadway. It may additionally or alternatively classify indicator states, such as indications to turn, stop, or the like. Other attributes and/or intentions are discussed herein.
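The efficiency idea, caching intermediate-layer outputs so that sensor frames already seen at earlier times are not re-processed, can be sketched as follows. The class, the mean-pooling step, and the `extractor`/`head` split are illustrative assumptions, not the patent's actual architecture.

```python
class CachedTemporalClassifier:
    """Classify over temporal sensor data, caching per-frame features.

    extractor: the heavy per-frame network (hypothetical stand-in)
    head:      a light classifier over time-pooled features
    """

    def __init__(self, extractor, head):
        self.extractor = extractor
        self.head = head
        self._cache = {}  # timestamp -> intermediate features

    def classify(self, frames):
        """frames: list of (timestamp, sensor_frame) pairs."""
        feats = []
        for ts, frame in frames:
            if ts not in self._cache:          # only new timestamps hit
                self._cache[ts] = self.extractor(frame)  # the heavy network
            feats.append(self._cache[ts])
        # mean-pool features over time, then classify
        pooled = [sum(col) / len(col) for col in zip(*feats)]
        return self.head(pooled)
```

On overlapping sliding windows (the common case for temporal data), only the newest frame costs an extractor pass; the rest are cache hits.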

Calculation device, calculation system, information processing device, and factory

A calculation device includes a sensor information acquisition unit and a calculation unit. The acquisition unit acquires first sensor information from a first sensor that detects, from outside the moving object, a moving object classified into a first state, and acquires second sensor information from a second sensor that detects the moving object classified into a second state, also from the outside. The calculation unit calculates at least one of a position and an orientation of the moving object using the first sensor information without executing preprocessing, and executes the preprocessing when calculating at least one of the position and the orientation using the second sensor information.
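The state-dependent branch, using first-state readings as-is and preprocessing second-state readings before the pose calculation, might look like the sketch below. The median filter is an assumed example of "preprocessing"; the patent does not say what the preprocessing is.

```python
from statistics import median

def preprocess(samples):
    """Illustrative preprocessing: collapse noisy readings to their medians.

    samples: list of (x, y, theta) readings.
    """
    xs, ys, thetas = zip(*samples)
    return (median(xs), median(ys), median(thetas))

def estimate_pose(sensor_info, state):
    """Return (position, orientation) of the moving object.

    state 1: sensor_info is a single clean (x, y, theta) reading, used as-is.
    state 2: sensor_info is a list of noisy readings, preprocessed first.
    """
    if state == 2:
        sensor_info = preprocess(sensor_info)
    x, y, theta = sensor_info
    return (x, y), theta
```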

In-port object occupied space recognition apparatus
12271205 · 2025-04-08

An occupied space recognition apparatus for recognizing the occupied space of an object in a port includes an input unit that receives an image captured by an external imaging unit, an object recognition unit that recognizes the object in the image, a position acquisition unit that acquires position coordinates of the object for each of a plurality of frames of the image, and an occupied space recognition unit that recognizes the occupied space of the object based on those position coordinates.
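One simple way to turn per-frame position coordinates into an occupied space is the axis-aligned bounding box over the track. This is a minimal sketch of that last step only; the patent does not commit to a particular space representation.

```python
def occupied_space(track):
    """Axis-aligned bounding box covering the object's per-frame positions.

    track: list of (x, y) position coordinates, one per video frame.
    Returns ((min_x, min_y), (max_x, max_y)).
    """
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys)), (max(xs), max(ys))
```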

Device and method for generating object image, recognizing object, and learning environment of mobile robot

According to the present invention, disclosed are a device and a method for generating an object image, recognizing an object, and learning the environment of a mobile robot. A deep learning algorithm allows the robot to create a map during autonomous movement and to process the environment information acquired during that movement while the robot is being charged. The result can be used by applications that determine a location by recognizing objects such as furniture, checking the locations of the recognized objects, and marking those locations on the map.

Search system, search method, and storage medium

A search system according to an embodiment includes one or more boarding-type mobile objects moving within a predetermined area and a mobile object management server managing them. The system includes a receiver that receives registration of information on a search target; an imaging controller that activates an imager mounted on the one or more boarding-type mobile objects when that registration is received; an acquirer that acquires images of the surroundings of each boarding-type mobile object captured by the imager; and a determiner that determines whether the search target appears in the acquired images.
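The register-activate-acquire-determine workflow can be sketched as a small class. The `matcher` callable stands in for the patent's determiner, and all names here are illustrative assumptions.

```python
class SearchSystem:
    """Minimal sketch of the search workflow.

    mobiles: dict of mobile_id -> imager callable returning an image
    matcher: (image, target) -> bool, a hypothetical stand-in for the
             determiner (in practice, an object detector or re-ID model)
    """

    def __init__(self, mobiles, matcher):
        self.mobiles = mobiles
        self.matcher = matcher
        self.target = None  # imagers are idle until a target is registered

    def register(self, target):
        """Receiver step: registering a target activates imaging."""
        self.target = target

    def sweep(self):
        """Acquire an image from every mobile and check for the target."""
        if self.target is None:
            return []
        hits = []
        for mobile_id, imager in self.mobiles.items():
            image = imager()                      # acquirer step
            if self.matcher(image, self.target):  # determiner step
                hits.append(mobile_id)
        return hits
```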

Systems and methods to determine object position using images captured from mobile image collection vehicle

An object identification method is disclosed. The method includes obtaining images of a target geographical area together with telemetry information of an image-collection vehicle at the time of capture, analyzing each image to identify objects, and determining the positions of those objects. The method further includes determining an image capture height; determining the position of the image from the capture height and the telemetry information; performing a transform on the image based on the capture height and the telemetry information; identifying the objects in the transformed image; determining first pixel locations of the objects within the transformed image; performing a reverse transform on the first pixel locations to obtain second pixel locations in the original image; and determining the positions of the objects within the area based on the second pixel locations and the determined image position.
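The core geometry, mapping pixels to ground coordinates using capture height and vehicle telemetry, and back again for the reverse transform, can be sketched for the simplest case: a nadir-looking camera over flat ground. The pinhole ground-sampling-distance model below is an assumption; the patent's actual transform is not specified in the abstract.

```python
def pixel_to_ground(pixel, image_center, cam_ground_pos, height_m, focal_px):
    """Project an image pixel to ground coordinates (nadir view, flat ground).

    The ground sampling distance (metres per pixel) follows from the
    capture height and the focal length in pixels: gsd = height / focal.
    cam_ground_pos comes from the vehicle's telemetry at capture time.
    """
    gsd = height_m / focal_px
    dx = (pixel[0] - image_center[0]) * gsd
    dy = (pixel[1] - image_center[1]) * gsd
    return (cam_ground_pos[0] + dx, cam_ground_pos[1] + dy)

def ground_to_pixel(point, image_center, cam_ground_pos, height_m, focal_px):
    """Inverse mapping (the 'reverse transform'): ground point -> pixel."""
    gsd = height_m / focal_px
    px = image_center[0] + (point[0] - cam_ground_pos[0]) / gsd
    py = image_center[1] + (point[1] - cam_ground_pos[1]) / gsd
    return (px, py)
```

An oblique camera would need a full homography instead of a single scale factor, but the round-trip structure (transform, locate, reverse transform) stays the same.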

Object collection system and method

An object-collection system is disclosed. The system includes a vehicle connected to a bucket, a camera connected to the vehicle, and an object-picking assembly configured to pick up objects off the ground. The system further includes a processor that obtains object information for identified objects, guides the object-collection system over a target geographical area toward the identified objects based on the object information, captures images of the ground relative to the object picker as the system is guided toward the identified objects, identifies a target object in the images, tracks the movement of the target object across the images, and uses the tracked movement to instruct the object picker to pick up the target object.
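The frame-to-frame tracking step could be as simple as nearest-neighbour association with a plausibility gate. The patent does not specify the tracker; this is an illustrative choice, and `max_jump` is an assumed tuning parameter.

```python
import math

def track_nearest(prev_pos, detections, max_jump=50.0):
    """Associate the tracked object with the nearest detection in a new frame.

    prev_pos:   (x, y) pixel position of the object in the previous frame
    detections: list of (x, y) detections in the current frame
    Returns the matched detection, or None if every candidate is farther
    than max_jump pixels (the track is considered lost).
    """
    if not detections:
        return None
    best = min(detections, key=lambda d: math.dist(prev_pos, d))
    return best if math.dist(prev_pos, best) <= max_jump else None
```

In a full system the matched position, updated every frame, is what the processor feeds to the picker to time the pick-up.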

Sensor integration for large autonomous vehicles
12248323 · 2025-03-11

The technology relates to autonomous vehicles for transporting cargo and/or people between locations. Distributed sensor arrangements may not be suitable for vehicles such as large trucks, buses, or construction vehicles. Side view mirror assemblies are therefore provided that house a suite of different sensor types, including LIDAR, radar, cameras, etc. Each side assembly is rigidly secured to the vehicle by a mounting element. The sensors within the assembly may be aligned or arranged relative to a common axis or physical point of the housing, enabling self-referenced calibration of all sensors in the housing. Vehicle-level calibration can also be performed between the sensors on the left and right sides of the vehicle. Each side view mirror assembly may include a conduit that provides one or more of power, data, and cooling to the sensors in the housing.

Mobile robot and a method for controlling the mobile robot

A mobile robot and a method for controlling the mobile robot are provided. The method includes: accessing a 3D map representation of an environment, including respective indications of a crosswalk, a waiting area, and a visual traffic indicator; detecting an object located in the waiting area and generating a bounding box for the object; mapping the position of the bounding box to the 3D map representation; determining the visual occlusion zone of the object in the waiting area; determining a reduced waiting area as the difference between the waiting area and the visual occlusion zone; and triggering control of the propulsion system to move the mobile robot into the reduced waiting area.
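The occlusion-zone and set-difference steps can be sketched on a 2D grid: a waiting-area cell is occluded when the sight line from that cell to the traffic indicator passes through the detected object's bounding-box footprint. The grid discretisation and the coarse line sampling are illustrative assumptions; the patent works on a 3D map representation.

```python
def occlusion_zone(cells, box, viewer):
    """Cells from which 'viewer' (e.g. a traffic light) is hidden by 'box'.

    cells:  set of (x, y) grid-cell centres in the waiting area
    box:    ((min_x, min_y), (max_x, max_y)) footprint of the detected object
    viewer: (x, y) position of the visual traffic indicator
    A cell is occluded when the segment from the cell to the viewer
    intersects the box (tested by coarse sampling along the segment).
    """
    (bx0, by0), (bx1, by1) = box

    def blocked(cell):
        for t in (i / 20 for i in range(21)):  # sample the sight line
            x = cell[0] + t * (viewer[0] - cell[0])
            y = cell[1] + t * (viewer[1] - cell[1])
            if bx0 <= x <= bx1 and by0 <= y <= by1:
                return True
        return False

    return {c for c in cells if blocked(c)}

def reduced_waiting_area(waiting_cells, box, viewer):
    """Waiting area minus the object's visual occlusion zone."""
    return waiting_cells - occlusion_zone(waiting_cells, box, viewer)
```

The robot would then pick a goal inside the returned set, guaranteeing it still has line of sight to the indicator while waiting.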