Patent classifications
G06V20/50
INFORMATION PUSHING METHOD IN VEHICLE DRIVING SCENE AND RELATED APPARATUS
This disclosure relates to an information pushing method in a vehicle driving scene. The method may include receiving push information in the vehicle driving scene and obtaining driving scene image information collected by an in-vehicle image collection device. The method may further include identifying scene category identification information based on the driving scene image information. The scene category identification information indicates a category of the environment depicted in the driving scene image information. The method may further include pushing, in response to the scene category identification information satisfying a push condition, the push information in the vehicle driving scene.
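The gating step described above can be sketched minimally as follows. The category labels and the set of categories that satisfy the push condition are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the push-condition check from the abstract.
# PUSH_CATEGORIES and the label strings are assumptions for illustration.
PUSH_CATEGORIES = {"highway", "parking_lot"}

def should_push(scene_category: str) -> bool:
    """Return True when the identified scene category satisfies the push condition."""
    return scene_category in PUSH_CATEGORIES

def push_if_allowed(scene_category: str, push_information: str):
    """Push (here: return) the information only for qualifying scene categories."""
    if should_push(scene_category):
        return push_information
    return None
```

In practice the category would come from an image classifier running on the in-vehicle camera feed; here it is simply a string input.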
GARBAGE DISPOSAL SUPPORT APPARATUS AND GARBAGE DISPOSAL SUPPORT METHOD
A garbage disposal support apparatus that reduces the load on a person disposing of garbage is provided. The garbage disposal support apparatus includes: a position obtaining unit configured to obtain position information about a person having an object; an object image obtaining unit configured to obtain image information about the object that the person has; a garbage determination unit configured to check the image information about the object against garbage image information stored in a database and determine whether or not the object is garbage; a route generation unit configured to generate route information to a disposal place of the garbage, based on the position information about the person and on disposal place information stored in the database, if the object is garbage; and a transmitter unit configured to transmit the route information to a display apparatus.
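The determination and route-generation units might be sketched as below. The string "fingerprints" standing in for stored garbage image information, and the nearest-place heuristic for route generation, are assumptions for the example, not the patented method.

```python
import math

# Illustrative stand-ins for the database contents.
GARBAGE_DB = {"crushed_can", "plastic_bottle"}                # garbage image info
DISPOSAL_PLACES = {"bin_a": (0.0, 5.0), "bin_b": (3.0, 1.0)}  # disposal place info

def is_garbage(object_fingerprint: str) -> bool:
    """Garbage determination unit: check the object against the database."""
    return object_fingerprint in GARBAGE_DB

def route_to_disposal(position, object_fingerprint):
    """Route generation unit: nearest disposal place, or None if not garbage."""
    if not is_garbage(object_fingerprint):
        return None
    return min(DISPOSAL_PLACES,
               key=lambda name: math.dist(position, DISPOSAL_PLACES[name]))
```

A real implementation would match actual image features rather than string keys, and the route would be a path, not just a destination name.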
COMPUTER VISION-BASED SURGICAL WORKFLOW RECOGNITION SYSTEM USING NATURAL LANGUAGE PROCESSING TECHNIQUES
Systems, methods, and instrumentalities are disclosed for computer vision-based surgical workflow recognition using natural language processing (NLP) techniques. Surgical video of surgical procedures may be processed and analyzed, for example, to achieve workflow recognition. Surgical phases may be determined based on the surgical video and segmented to generate an annotated video representation. The annotated video representation of the surgical video may provide information associated with the surgical procedure. For example, the annotated video representation may provide information on surgical phases, surgical events, surgical tool usage, and/or the like.
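The segmentation step, in which per-frame phase predictions become an annotated representation, can be sketched with a simple run-length grouping. The per-frame labels are mocked as a list and the phase names are illustrative assumptions.

```python
from itertools import groupby

def segment_phases(frame_labels):
    """Collapse per-frame phase labels into (phase, start_frame, end_frame) segments."""
    segments, index = [], 0
    for phase, run in groupby(frame_labels):
        length = len(list(run))
        segments.append((phase, index, index + length - 1))
        index += length
    return segments
```

In the disclosed system the labels would come from the computer-vision/NLP recognition pipeline; here the grouping logic alone is shown.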
TECHNIQUES FOR DETECTION/NOTIFICATION OF PACKAGE DELIVERY AND PICKUP
Systems, computer-readable media, methods, and approaches described herein may identify delivery and/or pickup of packages. For example, packages may be identified within the areas captured by images and/or video. Based on the identification of the packages, it may be determined whether the package was delivered or picked up. A notification may be initiated that indicates that a package has been delivered and/or picked up.
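One minimal reading of the delivery/pickup determination is a comparison of package presence before and after an observed event; the function below sketches that reading. The event names and the two-observation model are assumptions for illustration.

```python
def classify_event(present_before: bool, present_after: bool):
    """Infer delivery vs. pickup from package presence across two observations."""
    if not present_before and present_after:
        return "delivered"   # package newly appears -> notify delivery
    if present_before and not present_after:
        return "picked_up"   # package disappears -> notify pickup
    return None              # no change detected; no notification
```

The returned label would feed the notification step described in the abstract.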
MATERIAL DETERMINING DEVICE, MATERIAL DETERMINING METHOD, AUTONOMOUS CLEANING DEVICE
A material determining device comprising a first image sensor, a second image sensor, and a light source is provided. The material determining method comprises: (a) sensing a first image by the first image sensor according to light from the light source; (b) sensing a second image by the second image sensor according to the light; and (c) determining whether the material corresponding to the material images in the first image and the second image is a first type of material or a second type of material, according to the locations of the material images in the first image and the second image and according to the shapes of the material images in the first image and the second image. In this way, an electronic device using the material determining device can operate properly according to the type of material.
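Step (c) could be sketched as a comparison of where the material image lands in each sensor's frame. The disparity threshold and the mapping to "type 1"/"type 2" below are purely illustrative assumptions; the abstract does not disclose the actual criterion.

```python
def determine_material(loc_first, loc_second, disparity_threshold: int = 10) -> str:
    """Classify material by comparing material-image locations across two sensors.

    loc_first / loc_second: (x, y) pixel locations of the material image in
    each sensor's frame. The threshold is a hypothetical tuning parameter.
    """
    disparity = abs(loc_first[0] - loc_second[0])
    return "type_1" if disparity <= disparity_threshold else "type_2"
```

A fuller version would also compare the shapes of the material images, as the method recites; only the location comparison is shown here.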
Real Time Object Detection and Tracking
Methods, systems, and apparatus for recognizing objects and providing content related to the recognized objects are described. In one aspect, a method includes detecting the presence of one or more objects depicted in a viewfinder of a camera of a mobile device. In response to detecting the presence of the one or more objects, image data representing the one or more objects is sent to a content distribution system that selects content related to objects depicted in images. A location of each of the one or more objects in the viewfinder of the camera is tracked while waiting to receive content from the content distribution system. Content related to the one or more objects is received from the content distribution system. A current location of each object in the viewfinder is determined, and the content related to the object is presented within the viewfinder at the current location of the object.
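The track-while-waiting flow can be sketched as below: the object's viewfinder location is updated each frame, and content received later is anchored at the latest tracked location. The dict-based tracker is an assumption for illustration, not the disclosed tracker.

```python
class ViewfinderTracker:
    """Tracks object locations in the viewfinder while content is fetched."""

    def __init__(self):
        self.locations = {}  # object_id -> (x, y) viewfinder coordinates

    def update(self, object_id, location):
        """Record the object's current location for the latest frame."""
        self.locations[object_id] = location

    def present(self, object_id, content):
        """Anchor received content at the object's current (latest) location."""
        return (content, self.locations[object_id])
```

Because the location keeps updating between the request and the response, the content lands where the object is now, not where it was when the request was sent.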