Patent classifications
G05D1/0255
Method for providing an assistance signal and/or a control signal for an at least partially automated vehicle
A method for providing an assistance signal and/or a control signal for an at least partially automated vehicle includes receiving surroundings data, in particular an acoustic signal; recognizing a warning signal emitted by a further road user based on the received surroundings data; determining whether a hazardous situation relating to the vehicle is or was indicated by the warning signal; and providing the assistance signal and/or the control signal based on the result of the determination.
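A minimal sketch of the described flow, assuming a hypothetical siren-band energy check in place of the patent's warning-signal recognizer; the band limits, threshold, and signal names are illustrative only.

    import numpy as np

    def recognize_warning_signal(frame: np.ndarray, rate: int) -> bool:
        # Placeholder detector: flags strong energy in a band where sirens
        # typically sweep (~500-1800 Hz). A real system would use a trained classifier.
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
        band_energy = spectrum[(freqs > 500) & (freqs < 1800)].sum()
        return band_energy > 0.5 * spectrum.sum()

    def provide_signals(frame: np.ndarray, rate: int, ego_lane_blocked: bool) -> dict:
        # Recognize a warning signal emitted by another road user ...
        if not recognize_warning_signal(frame, rate):
            return {"assist": None, "control": None}
        # ... determine whether a hazardous situation relating to the vehicle is indicated ...
        if ego_lane_blocked:
            # ... and provide an assistance and/or control signal accordingly.
            return {"assist": "warn_driver", "control": "pull_over"}
        return {"assist": "warn_driver", "control": None}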
Vehicle using spatial information acquired using sensor, sensing device using spatial information acquired using sensor, and server
Provided is a method of sensing a three-dimensional (3D) space using at least one sensor. The method can include acquiring spatial information over time for the sensed 3D space and applying a neural-network-based object classification model to the acquired spatial information to identify at least one object in the sensed 3D space. The method can also include tracking the sensed 3D space including the identified at least one object, and using information related to the tracked 3D space.
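The following sketch illustrates the acquire-classify-track loop under simplifying assumptions: classify() is a stand-in for the neural-network object classification model, and tracking is reduced to nearest-centroid association; all names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class TrackedObject:
        label: str
        centroid: tuple                      # (x, y, z)
        history: list = field(default_factory=list)

    def classify(points) -> str:
        # Stand-in for the neural-network object classification model.
        return "vehicle" if len(points) > 100 else "pedestrian"

    def track(space_over_time):
        # space_over_time: iterable of per-time-step lists of point clusters.
        tracks = []
        for clusters in space_over_time:
            for points in clusters:
                centroid = tuple(sum(c) / len(c) for c in zip(*points))
                label = classify(points)
                # Naive association: reuse the nearest existing track of the same label.
                match = min(
                    (t for t in tracks if t.label == label),
                    key=lambda t: sum((a - b) ** 2 for a, b in zip(t.centroid, centroid)),
                    default=None,
                )
                if match is None:
                    match = TrackedObject(label, centroid)
                    tracks.append(match)
                match.centroid = centroid
                match.history.append(centroid)
        return tracks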
ROAD CONDITION DEEP LEARNING MODEL
The technology relates to using on-board sensor data, off-board information and a deep learning model to classify road wetness and/or to perform a regression analysis on road wetness based on a set of input information. Such information includes on-board and/or off-board signals obtained from one or more sources including on-board perception sensors, other on-board modules, external weather measurements, external weather services, etc. The ground truth includes measurements of water film thickness and/or ice coverage on road surfaces. The ground truth, on-board and off-board signals are used to build the model. The constructed model can be deployed in autonomous vehicles for classifying/regressing the road wetness with on-board and/or off-board signals as the input, without referring to the ground truth. The model can be applied in a variety of ways to enhance autonomous vehicle operation, for instance by altering current driving actions, modifying planned routes or trajectories, activating on-board cleaning systems, etc.
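A rough sketch of the build/deploy split, using scikit-learn's RandomForestRegressor and synthetic data purely for illustration; the feature columns, ground-truth relation, and wetness threshold are assumptions, not details from the patent.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Offline model building: rows are drives, columns are illustrative signals
    # (wiper state, tire-spray detections, external humidity, reported rain rate).
    signals = rng.random((500, 4))
    # Ground truth: measured water film thickness (mm) for those drives (synthetic here).
    film_thickness = 2.0 * signals[:, 3] + 0.5 * signals[:, 1] + 0.1 * rng.random(500)

    model = RandomForestRegressor(n_estimators=50).fit(signals, film_thickness)

    # On-board deployment: regress wetness from live signals only, without ground truth.
    live_signals = rng.random((1, 4))
    wetness = model.predict(live_signals)[0]
    if wetness > 1.0:          # illustrative threshold
        print("reduce speed / adjust planned trajectory / activate cleaning system")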
AI mobile robot for learning obstacle and method of controlling the same
An artificial intelligence (AI) mobile robot, and a method of controlling the same for learning obstacles, are configured to capture images through an image acquirer while traveling, store a plurality of captured image data, determine an obstacle from the image data, set a response motion corresponding to the obstacle, and perform the set response motion depending on the obstacle. The obstacle is recognized through the captured image data and is readily determined by repeatedly learning from images, either before the obstacle is detected or from the time point at which it is detected, so that the corresponding response motion can be performed. Even if the same detection signal is input when a plurality of different obstacles is detected, the obstacle is determined through the image and different operations are performed depending on the obstacle, so that various obstacles can be handled; accordingly, the obstacle is effectively avoided and an operation is performed according to the type of the obstacle.
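A compact sketch of the image-driven response selection, assuming a hypothetical classify_obstacle() model and an illustrative obstacle-to-motion table; the point is that the same detection signal can map to different motions once the image is classified.

    RESPONSE_MOTIONS = {
        # Illustrative mapping from recognized obstacle type to response motion.
        "cable": "avoid_wide",
        "door_threshold": "climb",
        "pet": "stop_and_wait",
        "unknown": "slow_and_detour",
    }

    def classify_obstacle(image) -> str:
        # Stand-in for the repeatedly trained image model described in the abstract.
        return "unknown"

    def on_detection(image, detection_signal: bool) -> str:
        # Even when different obstacles produce the same detection signal,
        # the response motion is chosen from the image-based classification.
        obstacle = classify_obstacle(image) if detection_signal else "unknown"
        return RESPONSE_MOTIONS.get(obstacle, RESPONSE_MOTIONS["unknown"])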
System of configuring active lighting to indicate directionality of an autonomous vehicle
Systems, apparatus and methods may be configured to implement actively-controlled light emission from a robotic vehicle. One or more light emitters of the robotic vehicle may be configurable to indicate a direction of travel of the robotic vehicle and/or display information (e.g., a greeting, a notice, a message, a graphic, passenger/customer/client content, vehicle livery, customized livery) using one or more colors of emitted light (e.g., orange for a first direction and purple for a second direction), one or more sequences of emitted light (e.g., a moving image/graphic), or positions of the light emitters on the robotic vehicle (e.g., symmetrically positioned light emitters). The robotic vehicle may not have a front or a back (e.g., a trunk/a hood) and may be configured to travel bi-directionally, in a first direction or a second direction (e.g., opposite the first direction), with the direction of travel being indicated by one or more of the light emitters.
Collision avoidance perception system
A collision avoidance system may validate, reject, or replace a trajectory generated to control a vehicle. The collision avoidance system may comprise a secondary perception component that may receive sensor data, receive and/or determine a corridor associated with operation of a vehicle, classify a portion of the sensor data associated with the corridor as either ground or an object, determine a position and/or velocity of at least the nearest object, determine a threshold distance associated with the vehicle, and control the vehicle based at least in part on the position and/or velocity of the nearest object and the threshold distance.
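A simplified sketch of the secondary perception check, with a crude height-based ground/object split and a reaction-plus-braking threshold distance standing in for the system's actual classifiers and thresholds; all values and names are illustrative.

    def stopping_distance(speed: float, decel: float = 4.0, latency: float = 0.3) -> float:
        # Illustrative threshold distance: reaction distance plus braking distance.
        return speed * latency + speed * speed / (2.0 * decel)

    def validate_trajectory(corridor_returns, ego_speed: float) -> str:
        # corridor_returns: (range_m, height_m, closing_speed_mps) tuples already
        # limited to the corridor associated with the planned trajectory.
        objects = [r for r in corridor_returns if r[1] > 0.3]   # crude ground/object split
        if not objects:
            return "accept"
        nearest = min(objects, key=lambda r: r[0])
        threshold = stopping_distance(ego_speed)
        # Reject (or replace with a stopping trajectory) if the nearest object
        # is inside the threshold distance and closing on the vehicle.
        if nearest[0] < threshold and nearest[2] > 0.0:
            return "reject"
        return "accept"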
MOVING ROBOT AND METHOD OF CONTROLLING THE SAME
According to a moving robot and a method of controlling the same of the present disclosure, the moving robot detects a sound generated in the area, moves to the sound generation point according to the type of the sound and the operation mode, analyzes an image of the sound generation point, and determines the indoor situation in order to perform a corresponding operation. By detecting the sound, the moving robot can determine that an accident has occurred at the location at which the sound was generated and can automatically perform a specified operation corresponding to the accident even without a control command from a user, making it possible to respond rapidly to the accident. The moving robot can classify the object generating the sound as a person, a companion animal, or another subject, and can perform different operations according to the object.
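A small sketch of the sound-type/source dispatch, assuming hypothetical capture_image and classify_source callables and an illustrative action table; none of these names come from the patent.

    SOUND_ACTIONS = {
        # Illustrative (sound type, source) -> operation table.
        ("glass_break", "object"): "notify_user_and_record",
        ("cry", "person"): "stream_video_to_user",
        ("bark", "companion_animal"): "play_calming_audio",
    }

    def handle_sound(sound_type: str, mode: str, capture_image, classify_source) -> str:
        # Move to the sound generation point only in the monitoring operation mode.
        if mode != "monitoring":
            return "ignore"
        image = capture_image()                  # image of the sound generation point
        source = classify_source(image)          # "person" / "companion_animal" / "object"
        return SOUND_ACTIONS.get((sound_type, source), "log_event")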
AUTOMATIC GUIDANCE ASSIST SYSTEM USING GROUND PATTERN SENSORS
An automatic guidance system is adapted to be mounted on a work vehicle such as a farm tractor for assisting an operator in steering the vehicle on a desired track relative to a furrow. The system includes sensors for transmitting and receiving ultrasonic ranging signals. The sensors are ultrasound transducers mountable on the ends of a planter drawn by the vehicle for directing ranging signals downwardly toward the field adjacent to a furrow, such that the ranging signals strike the field or furrow and are reflected back into the respective sensor. Guidance logic stored in a memory of a controller is executed by a processor to determine signals representative of desired tractor headway and headland turning directions, and a human interface device generates guidance images viewable by an operator for steering the tractor relative to furrows in the field and in the headland.
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR CONTROLLING A ROBOT
A method for controlling a robot is provided. The method includes the steps of: acquiring at least one of sound information and action information for a robot from a user in a serving place; determining identification information on the user on the basis of at least one of the sound information and the action information; and determining an operation to be performed by the robot on the basis of the identification information.
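A minimal sketch of identification-driven operation selection, with placeholder speaker/gesture identification and an illustrative role-to-operation table; none of the names come from the patent.

    def identify_user(sound_info=None, action_info=None) -> str:
        # Stand-in for identifying the user from sound and/or action information.
        if sound_info:
            return f"voice:{hash(sound_info) % 1000}"
        if action_info:
            return f"gesture:{hash(action_info) % 1000}"
        return "unknown"

    OPERATIONS = {
        # Illustrative mapping from identification result to robot operation.
        "staff": "enter_task_mode",
        "guest": "guide_to_table",
        "unknown": "ask_for_clarification",
    }

    def choose_operation(sound_info=None, action_info=None, registry=None) -> str:
        user_id = identify_user(sound_info, action_info)
        role = (registry or {}).get(user_id, "unknown")
        return OPERATIONS[role]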
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR CONTROLLING A ROBOT
A method for controlling a robot is provided. The method includes the steps of: acquiring information on a sound associated with a robot call in a serving place; determining a call target robot associated with the sound, among a plurality of robots in the serving place, on the basis of the acquired information; and providing feedback associated with the sound by the call target robot.
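A toy sketch of call-target selection, assuming each robot in the serving place reports how loudly it heard the call and the loudest listener above a noise floor responds; the rule, threshold, and robot names are illustrative only.

    def pick_call_target(loudness_by_robot: dict, min_loudness: float = 0.2):
        # Illustrative rule: the robot that heard the call loudest answers,
        # provided the call is clearly above the noise floor.
        robot, loudness = max(loudness_by_robot.items(), key=lambda kv: kv[1])
        return robot if loudness >= min_loudness else None

    def respond(loudness_by_robot: dict) -> str:
        target = pick_call_target(loudness_by_robot)
        if target is None:
            return "no robot responds"
        # Feedback associated with the sound, e.g. a chime and approaching the caller.
        return f"{target}: chime and approach the caller"

    print(respond({"robot_a": 0.7, "robot_b": 0.4, "robot_c": 0.1}))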