G06V20/00

Computerized systems and methods for detecting product title inaccuracies
11568425 · 2023-01-31

Systems and methods are provided for detecting inaccuracy in a product title, comprising: identifying, by running a string algorithm on a title associated with a product, at least one product type associated with the product; predicting, using a machine learning algorithm, at least one product type associated with the product based on the title; detecting an inaccuracy in the title based on at least one of the identification or the prediction; and outputting, to a remote device, a message indicating that the title comprises the inaccuracy. Running the string algorithm may comprise receiving a set of strings, generating a trie based on the received set of strings, receiving the title, and traversing the generated trie using the title to find a match. Using the machine learning algorithm may comprise identifying words in the title, learning a vector representation for each character n-gram of each word, and summing the character n-gram representations.
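The trie-based matching step described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation; the names (`TrieNode`, `find_product_types`) and the lowercasing choice are assumptions.

```python
# Hypothetical sketch of the "string algorithm" step: build a trie from a
# set of known product-type strings, then traverse it from every position
# in the title to collect matches.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

def build_trie(strings):
    """Insert each string into a character trie."""
    root = TrieNode()
    for s in strings:
        node = root
        for ch in s:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True
    return root

def find_product_types(root, title):
    """Walk the trie from each starting offset of the title, collecting
    every stored string that appears as a substring."""
    title = title.lower()
    matches = set()
    for start in range(len(title)):
        node = root
        for i in range(start, len(title)):
            node = node.children.get(title[i])
            if node is None:
                break
            if node.is_end:
                matches.add(title[start:i + 1])
    return matches

trie = build_trie(["laptop", "laptop sleeve", "sleeve"])
print(find_product_types(trie, "13-inch Laptop Sleeve, Gray"))
# finds "laptop", "laptop sleeve", and "sleeve"
```

The n-gram step in the last sentence of the abstract resembles subword embedding schemes in which each word vector is the sum of learned vectors for its character n-grams; the abstract does not specify the model further.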

Augmented reality display device and program recording medium
11568608 · 2023-01-31

Provided is an augmented reality display technology capable of better entertaining a user. An augmented reality display device 10 includes an imaging unit 13, a special effect execution unit 11b, and a display unit 14. The imaging unit 13 acquires a background image of the real world. When a plurality of models forming a specific combination are present in a virtual space, the special effect execution unit 11b executes a special effect corresponding to the combination of the models. The display unit 14 displays the models together with the background image based on the special effect.
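The combination check at the core of this entry can be illustrated with a small sketch. The model names and effect table are invented for illustration; the patent does not disclose this data structure.

```python
# Illustrative sketch: when the set of models currently in the virtual
# space contains a registered combination, trigger that combination's
# special effect. Registered combinations are stored as frozensets so
# that order of the models does not matter.

SPECIAL_EFFECTS = {
    frozenset({"knight", "dragon"}): "battle_flames",
    frozenset({"cat", "yarn"}): "playful_sparkles",
}

def effect_for(models_in_space):
    """Return the special effect for the first registered combination
    fully present in the virtual space, or None."""
    present = set(models_in_space)
    for combo, effect in SPECIAL_EFFECTS.items():
        if combo <= present:  # combination is a subset of present models
            return effect
    return None

print(effect_for(["knight", "dragon", "tree"]))  # battle_flames
```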

DEEP LEARNING-BASED MARINE OBJECT CLASSIFICATION USING 360-DEGREE IMAGES
20230023434 · 2023-01-26

Marine object detection, localization, and classification systems and related techniques include an imaging system configured to capture a stream of panoramic images of the water surrounding a mobile structure, including a view of the horizon. The images may include a 360-degree view from the mobile structure. The system is configured to analyze the stream of images using a marine video analytics system and/or a convolutional neural network to detect a region of interest comprising an object on the surface of the water, classify the detected object, and relay the results to the user and/or a processing system. The analysis may include determining a horizon in a captured image, defining tiles across the horizon, and detecting objects in each tile.
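The tiling step in the last sentence can be sketched as follows. The tile size, overlap, and the choice to center tiles vertically on the horizon are assumed parameters for illustration, not values from the filing.

```python
# Illustrative sketch of "defining tiles across the horizon": given the
# horizon row in a panoramic image, cut a band of fixed-size, overlapping
# tiles along it, each of which would then be fed to a detector.

def tiles_along_horizon(image_width, horizon_y, tile_size=224, overlap=32):
    """Return (x0, y0, x1, y1) tile boxes centered vertically on the horizon."""
    y0 = max(0, horizon_y - tile_size // 2)
    step = tile_size - overlap
    boxes = []
    x = 0
    while x < image_width:
        boxes.append((x, y0, min(x + tile_size, image_width), y0 + tile_size))
        x += step
    return boxes

boxes = tiles_along_horizon(image_width=1920, horizon_y=540)
print(len(boxes))  # 10 overlapping tiles spanning the horizon band
```

Overlapping the tiles avoids missing an object that straddles a tile boundary, at the cost of running the detector on slightly more pixels.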

USER-IN-THE-LOOP OBJECT DETECTION AND CLASSIFICATION SYSTEMS AND METHODS
20230028196 · 2023-01-26

A detection device is adapted to traverse a search area and generate sensor data associated with an object that may be present in the search area. The detection device comprises a first logic device configured to detect and classify the object in the sensor data, communicate object detection information to a control system when the detection device is within communication range of the control system, and generate and store object analysis information for a user of the control system when the detection device is not in communication with the control system. A control system facilitates user monitoring and/or control of the detection device during operation and provides access to the stored object analysis information. The object analysis information is presented in an interactive display so that the user can detect and classify the detected object and thereby update the detection information, the trained object classifier, and the training dataset.
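The store-and-forward behavior described above (report when in range, buffer when out of range) can be sketched minimally. Class and method names (`DetectionDevice`, `report`, `flush`) are hypothetical.

```python
# Hedged sketch: detections made while out of contact with the control
# system are buffered as "object analysis information", then flushed in
# order when the communication link returns.

class DetectionDevice:
    def __init__(self, control_link):
        self.control_link = control_link   # object with .connected and .send()
        self.pending = []                  # buffered detections while offline

    def report(self, detection):
        if self.control_link.connected:
            self.flush()                   # deliver backlog first, in order
            self.control_link.send(detection)
        else:
            self.pending.append(detection)

    def flush(self):
        while self.pending:
            self.control_link.send(self.pending.pop(0))

class FakeLink:
    """Stand-in control-system link for demonstration."""
    def __init__(self):
        self.connected = False
        self.received = []
    def send(self, msg):
        self.received.append(msg)

link = FakeLink()
dev = DetectionDevice(link)
dev.report({"object": "mine-like", "confidence": 0.7})   # buffered offline
link.connected = True
dev.report({"object": "debris", "confidence": 0.9})      # flush, then send
print(link.received)
```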

Vehicular vision system
11560092 · 2023-01-24

A vehicular vision system includes a camera disposed at a vehicle, at least one non-vision sensor disposed at the vehicle, and a display system of the vehicle that displays video images for viewing by the driver of the vehicle. Image data captured by the camera and sensor data sensed by the non-vision sensor are provided to a control of the vehicle. Responsive at least in part to processing at the control of image data captured by the camera, video images are displayed by a video display screen of the display system. The vehicular vision system determines an augmented reality overlay and the video display screen also displays the augmented reality overlay. The displayed augmented reality overlay pertains to at least one accessory of the equipped vehicle and/or is responsive at least in part to a driving condition of the equipped vehicle.

Object type identifying apparatus, object type identifying method, and recording medium
11562559 · 2023-01-24

Provided is an object type identifying apparatus capable of correctly identifying the types of objects held in a person's hand. The apparatus is provided with: a memory storing instructions; a storage device storing information indicating the type of the object at each object's position; and one or more processors configured to execute the instructions to: acquire a position of an object; determine, based on sensor information, whether an object is picked up or placed; when it is determined that an object is picked up, identify the type of the picked-up object based on the acquired position and the information stored in the storage device; and when it is determined that an object is placed, update the information stored in the storage device using an image captured by a camera that captures the arrangement of each object from the front side.
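The position-to-type lookup and the pick/place update cycle can be sketched as follows. Treating the storage device as a mapping from a discretized shelf position to a type, and passing the recognizer result into `on_place` directly, are simplifying assumptions.

```python
# Minimal sketch: the "storage device" is modeled as a dict keyed by a
# discretized (row, column) shelf position. A pick-up reads the map; a
# placement overwrites it with the type recognized from the front-view
# camera image (here the recognition result is supplied by the caller).

shelf = {(0, 0): "soda", (0, 1): "water", (1, 0): "juice"}

def on_pick(position):
    """When a pick-up is detected, identify the type from the stored map."""
    return shelf.get(position, "unknown")

def on_place(position, recognized_type):
    """When a placement is detected, update the stored map."""
    shelf[position] = recognized_type

print(on_pick((0, 1)))      # water
on_place((0, 1), "tea")     # a different item was put back at that spot
print(on_pick((0, 1)))      # tea
```

The update on placement is what keeps the map correct even when a shopper returns an item to the wrong slot.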

WATER AREA OBJECT DETECTION SYSTEM AND MARINE VESSEL
20230228576 · 2023-07-20

A water area object detection system includes a first imager to image an object around a hull, a second imager provided on the hull such that an imaging direction of the second imager is the same or substantially the same as an imaging direction of the first imager and operable to image the object around the hull, and a controller configured or programmed to perform a control to create a water area map around the hull based on images captured by the first imager and the second imager. The second imager is spaced apart in an upward-downward direction of the hull from the first imager, and the first imager is spaced apart in the imaging direction from the second imager so as not to overlap the second imager in the upward-downward direction perpendicular to the imaging direction.
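One plausible reason for the vertical separation of the two imagers is range estimation from vertical disparity; the abstract does not state this computation, so the following pinhole-model sketch and its parameters are purely an assumption.

```python
# Illustrative vertical-stereo sketch: with two imagers separated by a
# baseline B along the hull's up-down axis, the vertical disparity (in
# pixels) of an object between the two images gives its range under a
# simple pinhole model: range = focal_px * B / disparity_px.

def range_from_vertical_disparity(focal_px, baseline_m, disparity_px):
    """Estimate range to an object from its vertical pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("object must show positive vertical disparity")
    return focal_px * baseline_m / disparity_px

# Assumed example values: 1400 px focal length, 0.5 m vertical baseline,
# 7 px disparity.
r = range_from_vertical_disparity(focal_px=1400.0, baseline_m=0.5, disparity_px=7.0)
print(r)  # 100.0 (meters)
```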

Control method, terminal, and system using environmental feature data and biological feature data to display a current movement picture

A control method includes obtaining, by a terminal, feature data using at least one sensor; generating an action instruction based on the feature data and a decision-making mechanism of the terminal; and executing the action instruction. In this application, various aspects of feature data are acquired using a plurality of sensors, data analysis is performed on the feature data, and a corresponding action instruction is then generated based on a corresponding decision-making mechanism to implement interactive control.
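The sense-analyze-decide-act loop described above can be sketched as a toy rule table. The feature names, thresholds, and action strings are invented for illustration and are not from the filing.

```python
# Hedged sketch of the loop: each sensor reading (a dict of features such
# as biological and environmental data) is mapped by a decision-making
# mechanism to an action instruction, which the terminal then executes.

def decide(features):
    """A toy decision-making mechanism mapping feature data to an action."""
    if features.get("heart_rate", 0) > 150:
        return "slow_down_animation"
    if features.get("ambient_light", 1.0) < 0.2:
        return "increase_brightness"
    return "no_op"

def control_loop(sensor_readings):
    """Analyze each reading and produce the corresponding instruction."""
    return [decide(f) for f in sensor_readings]

acts = control_loop([
    {"heart_rate": 160, "ambient_light": 0.8},  # biological trigger
    {"heart_rate": 120, "ambient_light": 0.1},  # environmental trigger
])
print(acts)  # ['slow_down_animation', 'increase_brightness']
```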