G06V10/235

CONTROLLING LIGHTING LOADS TO ACHIEVE A DESIRED LIGHTING PATTERN

A visible light sensor may be configured to sense environmental characteristics of a space using an image of the space. The visible light sensor may be controlled in one or more modes, including a daylight glare sensor mode, a daylighting sensor mode, a color sensor mode, and/or an occupancy/vacancy sensor mode. In the daylight glare sensor mode, the visible light sensor may be configured to decrease or eliminate glare within a space. In the daylighting sensor mode and the color sensor mode, the visible light sensor may be configured to provide a preferred amount of light and color temperature, respectively, within the space. In the occupancy/vacancy sensor mode, the visible light sensor may be configured to detect an occupancy/vacancy condition within the space and adjust one or more control devices according to the occupation or vacancy of the space. The visible light sensor may be configured to protect the privacy of users within the space via software, a removable module, and/or a special sensor.
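
The mode-based control described above can be pictured as a simple dispatcher. This is a hypothetical sketch, not the patented implementation; the mode names, reading keys, and control actions are all illustrative assumptions:

```python
from enum import Enum, auto

class SensorMode(Enum):
    DAYLIGHT_GLARE = auto()
    DAYLIGHTING = auto()
    COLOR = auto()
    OCCUPANCY_VACANCY = auto()

def control_action(mode, reading):
    """Map a sensor mode and image-derived measurements to a control decision.

    `reading` is a dict of illustrative values extracted from an image of
    the space (glare level, illuminance, color temperature, occupancy).
    """
    if mode is SensorMode.DAYLIGHT_GLARE:
        # Lower motorized shades when measured glare exceeds a threshold.
        return "lower_shades" if reading["glare"] > 0.7 else "hold"
    if mode is SensorMode.DAYLIGHTING:
        # Dim or raise electric light to reach a target illuminance.
        return "dim" if reading["lux"] > reading["target_lux"] else "raise"
    if mode is SensorMode.COLOR:
        # Shift fixture color temperature toward the preferred value.
        return "warm" if reading["cct"] > reading["target_cct"] else "cool"
    if mode is SensorMode.OCCUPANCY_VACANCY:
        return "lights_on" if reading["occupied"] else "lights_off"
    raise ValueError(mode)
```

In a real controller each action would drive lighting loads or shades; here the strings merely stand in for those commands.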

METHODS AND APPARATUS TO OPERATE A MOBILE CAMERA FOR LOW-POWER USAGE
20230237791 · 2023-07-27

Disclosed examples include accessing sensor data; recognizing, by executing an instruction with programmable circuitry, a feature in the sensor data based on a convolutional neural network; and transitioning, by executing an instruction with the programmable circuitry, a mobile device between at least two of motion feature detection, audio feature detection, or camera feature detection after the feature is recognized in the sensor data, the mobile device to operate at a different level of power consumption after the transition than before the transition.
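
The staged power behavior can be sketched as a small state machine: the device idles in a cheap motion-detection state and escalates to costlier detectors (audio, then camera) only after a feature is recognized. The state names and power figures below are assumptions for illustration:

```python
# Assumed per-detector power budgets in milliwatts (illustrative only).
POWER_MW = {"motion": 2, "audio": 15, "camera": 120}
# Escalation order when a feature is recognized.
NEXT_STATE = {"motion": "audio", "audio": "camera", "camera": "camera"}

def step(state, feature_recognized):
    """Return (new_state, power_mw) after one sensing cycle."""
    if feature_recognized:
        state = NEXT_STATE[state]   # escalate to a higher-power detector
    elif state != "motion":
        state = "motion"            # nothing detected: fall back to low power
    return state, POWER_MW[state]
```

The key property of the claim, operating at a different power level after the transition than before it, falls out of the table lookup.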

Fool-Proofing Product Identification
20230237091 · 2023-07-27

A method includes receiving, from an image capture device in communication with data processing hardware, image data for an area of interest of a user. The method also includes receiving a query from the user referring to one or more objects detected within the image data and requesting a digital assistant to discern insights associated with the one or more objects referred to by the query. The method also includes processing the query and the image data to: identify, based on context data extracted from the image data, the one or more objects referred to by the query; and determine the insights associated with the identified one or more objects for the digital assistant to discern. The method also includes generating, for output from a user device associated with the user, content indicating the discerned insights associated with the identified one or more objects.

MEDICATION CHANGE SYSTEM AND METHODS

A method for generating a user interface that indicates medication changes starts with a processor detecting a medication change event. The processor retrieves medication information based on the medication change event, including images of two medications. The processor generates a color difference output using a color neural network and the images of the first and second medications; the color difference output comprises information on a difference in hue, saturation, or color distribution. The processor generates a medication appearance difference output using a medication appearance neural network and the images of the first and second medications; the appearance difference output comprises information on a difference in shape, segmentation, or form. The processor generates a differential record using the color difference output and the medication appearance difference output. The processor then causes a medication change user interface to be displayed that comprises the medication images along with color and appearance descriptions of the medications, displayed to emphasize the differences identified in the differential record. Other embodiments are disclosed herein.
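
The assembly of the differential record can be sketched as follows. In the patent, the color and appearance differences come from neural networks; here simple stand-in comparisons over hypothetical medication attributes are used, so every name below is illustrative:

```python
# Stand-in for the color neural network: flag which color attributes differ.
def color_difference(med_a, med_b):
    return {k: med_a[k] != med_b[k] for k in ("hue", "saturation")}

# Stand-in for the appearance neural network: flag shape/form differences.
def appearance_difference(med_a, med_b):
    return {k: med_a[k] != med_b[k] for k in ("shape", "form")}

def differential_record(med_a, med_b):
    """Combine both outputs into one record of changed attributes."""
    diff = {**color_difference(med_a, med_b), **appearance_difference(med_a, med_b)}
    # The user interface would emphasize only the attributes that changed.
    return {"changed": sorted(k for k, v in diff.items() if v)}

old = {"hue": "white", "saturation": "low", "shape": "round", "form": "tablet"}
new = {"hue": "blue", "saturation": "low", "shape": "round", "form": "capsule"}
record = differential_record(old, new)
```

A rendering layer could then highlight exactly the attributes listed under `"changed"` when displaying the two medication images side by side.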

Electronic device for performing payment and operation method therefor

Disclosed is an electronic device for processing a touch input. The electronic device may comprise: a touch screen; a biometric sensor disposed so as to overlap at least a part of the touch screen; and a processor configured to acquire, by using the biometric sensor, biometric information of a user from an input relating to an object displayed through the touch screen, receive a payment command associated with a payment function for the object, and perform the payment function for a product corresponding to the object by using the biometric information according to the payment command. Various other embodiments may be provided.

HUMAN ABNORMAL BEHAVIOR RESPONSE METHOD AND MOBILITY AID ROBOT USING THE SAME

Response methods to human abnormal behaviors for a mobility aid robot having a user-facing camera are disclosed. The mobility aid robot responds to human abnormal behaviors by detecting, through the camera, a face of a human while the robot aids the human to move; comparing an initial size of the face with a current size of the face in response to the face having been detected; determining that the human is exhibiting one or more abnormal behaviors in response to the current size of the face being smaller than the initial size of the face; and performing one or more responses corresponding to the abnormal behavior(s), where the response(s) include slowing down the robot.
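
The face-size heuristic above is simple enough to sketch directly: if the user's face appears markedly smaller than when aiding began (suggesting the user is falling or lagging behind), treat it as abnormal and slow the robot. The shrink margin and the halving of speed are assumptions, not values from the patent:

```python
def detect_abnormal(initial_area, current_area, margin=0.8):
    """Abnormal when the current face area shrinks below margin * initial area.

    Areas are in pixels^2 from the user-facing camera's face detector.
    """
    return current_area < margin * initial_area

def respond(initial_area, current_area, speed):
    """Apply the response corresponding to the abnormal behavior: slow down."""
    if detect_abnormal(initial_area, current_area):
        return max(0.0, speed * 0.5)   # illustrative: halve the robot's speed
    return speed
```

A real implementation would smooth the face-size signal over several frames before deciding, to avoid reacting to a single missed detection.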

System and method for displaying objects of interest at an incident scene

A system and method for displaying an image of an object of interest located at an incident scene. The method includes receiving, from an image capture device, a first video stream of the incident scene, and displaying the first video stream. The method includes receiving an input indicating a pixel location in the first video stream, and detecting the object of interest in the first video stream based on the pixel location. The method includes determining an object class, an object identifier, and metadata for the object of interest. The metadata includes the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. The method includes receiving an annotation input for the object of interest, and associating the annotation input and the metadata with the object identifier. The method includes storing, in a memory, the object of interest, the annotation input, and the metadata.
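
One plausible shape for the metadata record and its storage is sketched below. The abstract only lists what the metadata includes, so the field names, the use of an in-memory dict, and the helper function are all illustrative assumptions:

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class ObjectOfInterest:
    object_class: str       # e.g. "vehicle", "bag"
    object_location: tuple  # pixel or geographic coordinates
    incident_identifier: str  # ties the object to the incident scene
    time_stamp: float = field(default_factory=time.time)
    object_identifier: str = field(default_factory=lambda: uuid.uuid4().hex)
    annotations: list = field(default_factory=list)

store = {}  # in-memory stand-in for the method's persistent memory

def annotate_and_store(obj, annotation):
    """Associate an annotation with the object identifier and store the record."""
    obj.annotations.append(annotation)
    store[obj.object_identifier] = obj
    return obj.object_identifier
```

Keying the store by `object_identifier` lets later queries retrieve the object, its annotations, and its metadata together, matching the association described in the method.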

SYSTEMS AND METHODS OF IMAGE SEARCHING
20230229690 · 2023-07-20

Systems and methods of image searching include receiving content, receiving a request to select an image from the content, selecting a plurality of items in the image, retrieving information about the selected items, and providing display data based on the retrieved information.

Mammography apparatus and program

A mammography apparatus includes a diagnostic image acquisition unit that acquires a diagnostic image in which a calcification as a biopsy target is marked; a scout image acquisition unit that acquires a scout image obtained by imaging a mamma undergoing the biopsy from a specific direction; and a display unit that highlights a calcification (candidate for biological tissue examination) in the scout image which matches at least the marked calcification in the diagnostic image.
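
The highlighting step depends on matching the marked calcification in the diagnostic image to its counterpart in the scout image. A minimal sketch is a nearest-candidate search within a tolerance; this simplification ignores the geometric mapping between the two imaging directions that a real apparatus would apply, and the tolerance value is an assumption:

```python
import math

def match_calcification(marked_xy, scout_candidates, tolerance=10.0):
    """Return the index of the scout-image candidate closest to the marked
    calcification, or None if no candidate lies within `tolerance` pixels."""
    best_i, best_d = None, tolerance
    for i, (x, y) in enumerate(scout_candidates):
        d = math.dist(marked_xy, (x, y))
        if d <= best_d:
            best_i, best_d = i, d
    return best_i
```

The display unit would then highlight only the candidate at the returned index as the biopsy target.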

MATCHING CONTENT TO A SPATIAL 3D ENVIRONMENT
20230229381 · 2023-07-20

Systems, methods, and computer program products for displaying virtual content with a wearable display device. In response to identifying a first change from a first field of view to a second field of view, the device determines, based at least in part upon one or more attributes or criteria, first virtual content element(s) and second virtual content element(s) from a set of virtual content elements that match first surface(s) within the first field of view; determines, based at least in part upon the attribute(s) or criteria, second surface(s) within the second field of view that match the second virtual content element(s); moves the first virtual content element(s) from the first surface(s) to the second surface(s); and maintains the second virtual content element(s) with respect to the first surface(s) while the first field of view has been changed into the second field of view.
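
The field-of-view handoff can be sketched as a placement function: elements flagged to follow the user are re-matched against surfaces in the new field of view, while the rest stay anchored where they are. The `follow` flag and the area-based matching criterion below are assumptions standing in for the patent's unspecified attributes and criteria:

```python
def match_surface(element, surfaces):
    """Pick the first surface whose area satisfies the element's requirement."""
    for s in surfaces:
        if s["area"] >= element["min_area"]:
            return s["name"]
    return None

def on_fov_change(elements, new_surfaces):
    """Return {element name: surface name} after a field-of-view change.

    Elements flagged `follow` are re-matched against the new surfaces;
    all other elements are maintained on their current surface.
    """
    placement = {}
    for e in elements:
        if e["follow"]:
            placement[e["name"]] = match_surface(e, new_surfaces) or e["surface"]
        else:
            placement[e["name"]] = e["surface"]
    return placement
```

Falling back to the current surface when no new surface qualifies keeps every element displayed even when the new field of view offers no suitable match.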