Patent classifications
H04N23/611
METHOD AND APPARATUS FOR VERIFICATION OF MEDICATION ADMINISTRATION ADHERENCE
A system and method of confirming administration of medication are provided. The method comprises the steps of receiving information identifying a particular medication prescription regimen, determining one or more procedures for administering such prescription regimen, and identifying one or more activity sequences associated with such procedures. Activity sequences of actual administration of such prescription regimen are captured and then compared to the identified activity sequences to determine differences therebetween. A notice is provided if differences are determined.
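As an illustrative sketch (not the patented implementation), the comparison step might align the captured activity sequence against the expected one and emit a notice for each deviation. All names and the step vocabulary here are hypothetical:

```python
# Hypothetical sketch: compare an expected administration activity
# sequence against a captured one; return notices for any differences.

def verify_administration(expected: list[str], captured: list[str]) -> list[str]:
    """Return notices describing deviations between the expected
    activity sequence and the captured one (empty list = adherent)."""
    notices = []
    # Walk the expected steps in order; flag missing or mismatched steps.
    for i, step in enumerate(expected):
        if i >= len(captured):
            notices.append(f"missing step: {step}")
        elif captured[i] != step:
            notices.append(f"expected '{step}' but observed '{captured[i]}'")
    # Any trailing captured activity is unexpected.
    for extra in captured[len(expected):]:
        notices.append(f"unexpected step: {extra}")
    return notices

expected = ["open bottle", "remove pill", "place pill in mouth", "swallow"]
captured = ["open bottle", "remove pill", "swallow"]
print(verify_administration(expected, captured))
```

A production system would use a proper sequence-alignment method (e.g. edit distance) rather than this positional comparison, so that one skipped step does not cascade into spurious mismatches.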
METHODS AND APPARATUS TO OPERATE A MOBILE CAMERA FOR LOW-POWER USAGE
Disclosed examples include accessing sensor data; recognizing, by executing an instruction with programmable circuitry, a feature in the sensor data based on a convolutional neural network; and transitioning, by executing an instruction with the programmable circuitry, a mobile device between at least two of motion feature detection, audio feature detection, or camera feature detection after the feature is recognized in the sensor data, the mobile device to operate at a different level of power consumption after the transition than before the transition.
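The transition logic described can be pictured as a small state machine over the three detection modes, each at a different power level. The events, transition table, and power figures below are assumptions for illustration only:

```python
# Illustrative sketch: transition a device among motion, audio, and
# camera feature detection when a feature is recognized. Power levels
# (milliwatts) are invented for illustration.

POWER_MW = {"motion": 1, "audio": 10, "camera": 100}

# (current mode, recognized feature) -> next mode; assumed escalation
# from cheap motion sensing up to full camera capture, and back down.
TRANSITIONS = {
    ("motion", "motion_feature"): "audio",
    ("audio", "audio_feature"): "camera",
    ("camera", "no_feature"): "motion",
}

def next_mode(mode: str, event: str) -> str:
    """Return the detection mode after an event; stay put otherwise."""
    return TRANSITIONS.get((mode, event), mode)

mode = "motion"
for event in ["motion_feature", "audio_feature"]:
    mode = next_mode(mode, event)
print(mode, POWER_MW[mode])  # camera 100
```

The point of the escalation is that the device spends most of its time in the cheapest mode and only pays for the camera once cheaper sensors have recognized something worth imaging.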
IMAGING SYSTEM AND ROBOT SYSTEM
An imaging system includes: an unmanned flight vehicle; an imager that is mounted on the unmanned flight vehicle and takes an image of a robot which performs work with respect to a target object; a display structure which is located away from the unmanned flight vehicle and displays the image taken by the imager to a user who manipulates the robot; and circuitry which controls operations of the imager and the unmanned flight vehicle. The circuitry acquires operation related information that is information related to an operation of the robot. The circuitry moves the unmanned flight vehicle such that a position and direction of the imager are changed so as to correspond to the operation related information.
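One way to sketch "moving the unmanned flight vehicle so the imager's position and direction correspond to the operation" is to place the imager at a fixed standoff from the robot's work point, aimed along the operating direction. The geometry and standoff value here are assumptions, not taken from the patent:

```python
# Hypothetical geometry sketch: position the drone-mounted imager a
# fixed standoff behind the robot's work point, looking along the
# robot's operating direction toward that point.
import math

def imager_pose(work_point, op_dir, standoff=1.5):
    """Return (position, look_dir): a position `standoff` metres from
    the work point opposite the operation direction, aimed at it."""
    norm = math.sqrt(sum(c * c for c in op_dir)) or 1.0  # guard zero vector
    unit = tuple(c / norm for c in op_dir)
    pos = tuple(w - standoff * u for w, u in zip(work_point, unit))
    return pos, unit

pos, look = imager_pose((2.0, 0.0, 1.0), (1.0, 0.0, 0.0))
print(pos)  # (0.5, 0.0, 1.0)
```

Recomputing this pose whenever the operation-related information changes gives the circuitry a target toward which to fly the vehicle.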
ACTIVATING LIGHT SOURCES FOR OUTPUT IMAGE
In some examples, a computing device can include a processor resource and a non-transitory memory resource storing machine-readable instructions thereon that, when executed, cause the processor resource to: instruct an imaging device to capture an input image, determine image properties of the input image, activate a portion of a plurality of light sources based on a physical location of the plurality of light sources and the determined image properties of the input image, and instruct the imaging device to capture an output image when the portion of the plurality of light sources is activated.
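A minimal sketch of the selection step, under the assumption of one light source per image quadrant: measure each quadrant's mean brightness in the input image and activate the lights covering underexposed quadrants before recapturing. The layout and threshold are invented for illustration:

```python
# Hypothetical sketch: pick which of four light sources (one per image
# quadrant, row-major order) to activate, based on quadrant brightness
# in the input image.

def select_lights(image, threshold=80):
    """image: 2D list of grayscale values. Return indices of light
    sources whose quadrant's mean brightness is below `threshold`."""
    h, w = len(image), len(image[0])
    active = []
    quadrant_origins = [(0, 0), (0, w // 2), (h // 2, 0), (h // 2, w // 2)]
    for idx, (r0, c0) in enumerate(quadrant_origins):
        rows = image[r0:r0 + h // 2]
        vals = [v for row in rows for v in row[c0:c0 + w // 2]]
        if sum(vals) / len(vals) < threshold:  # underexposed quadrant
            active.append(idx)
    return active

# A 4x4 frame whose left half is dark: lights 0 and 2 should fire.
frame = [[10, 10, 200, 200]] * 4
print(select_lights(frame))  # [0, 2]
```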
METHODS AND SYSTEMS OF LOW POWER FACIAL RECOGNITION
An image sensor comprises a plurality of pixels. A first one or more of the pixels are capable of detecting a change in an amount of light intensity, and a second one or more of the pixels are capable of detecting an amount of light intensity. In a first mode the sensor outputs data from the first one or more of the pixels; in a second mode the sensor outputs data from the second one or more of the pixels. The first mode may be a lower power operation mode and the second mode may be a higher power operation mode. At least one of the first mode and the second mode is selected by a processor based on at least one of a result of processing data output in the first mode and a result of processing data output in the second mode.
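The mode-selection feedback loop can be sketched as below: stay in the low-power change-detection mode until enough pixels report change, switch to full intensity readout for recognition, and drop back once recognition completes. The threshold and inputs are assumptions:

```python
# Illustrative sketch of the processor's mode selection: mode 1 is the
# low-power change-detection readout, mode 2 the full intensity readout.

def choose_mode(change_pixel_count, face_found, change_threshold=50):
    """Select the sensor mode from the results of processing the last
    readout: escalate to mode 2 when enough pixels report change,
    return to mode 1 once a recognition result has been produced."""
    if face_found:
        return 1  # recognition done; drop back to low power
    return 2 if change_pixel_count >= change_threshold else 1

print(choose_mode(10, False))   # 1  (scene quiet, stay low power)
print(choose_mode(120, False))  # 2  (motion seen, read full frame)
print(choose_mode(120, True))   # 1  (face recognized, power down)
```

This matches the abstract's structure: the selection depends on results of processing the data output in one or both modes, not on a fixed schedule.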
IMAGE CAPTURING METHOD AND DEVICE, APPARATUS, AND STORAGE MEDIUM
Provided are an image capturing method and apparatus, a device, and a storage medium. The method includes: at a new acquisition moment, predicting a predicted projection area position of a target object in a current captured image on an image sensor and estimated exposure brightness information of the target object in the predicted projection area position; adjusting, according to a type of the target object and the estimated exposure brightness information, an exposure parameter of the target object in the predicted projection area position when the new acquisition moment arrives; and acquiring a new captured image at the new acquisition moment according to the adjusted exposure parameter, where both the new captured image and the current captured image include the target object.
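A toy model of the two steps, under assumed dynamics: predict where the target will project on the sensor from its image-plane velocity, then scale a per-type base exposure by the brightness estimated at that position. Types, base values, and the linear brightness model are all invented for illustration:

```python
# Hypothetical sketch: predict the target's projection position at the
# next acquisition moment and adjust exposure by type and brightness.

BASE_EXPOSURE_MS = {"face": 10.0, "license_plate": 2.0}  # assumed per type

def predict_and_adjust(pos, velocity, dt, est_brightness, target_type,
                       target_brightness=128.0):
    """Return (predicted sensor position, adjusted exposure in ms).
    Brighter-than-target regions get shorter exposure, darker longer."""
    predicted = (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)
    exposure = BASE_EXPOSURE_MS[target_type] * (target_brightness / est_brightness)
    return predicted, exposure

print(predict_and_adjust((100, 50), (20, 0), 0.5, 64.0, "face"))
# ((110.0, 50.0), 20.0)  -- dim region, so exposure is doubled
```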
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD
Provided are a device and method that calculate a predicted motion vector corresponding to a type and posture of a tracked subject and generate a camera control signal necessary for capturing an image of the tracked subject. Included are a predicted subject motion vector calculation unit that detects a tracked subject of a previously designated type from a captured image input from an imaging unit and calculates a predicted motion vector corresponding to the type and posture of the detected tracked subject, and a camera control signal generation unit that generates, on the basis of the predicted motion vector calculated by the predicted subject motion vector calculation unit, a camera control signal for capturing an image of the tracked subject. Using a neural network or the like, the predicted subject motion vector calculation unit executes processing of detecting, from the captured image, a tracked subject of a type designated by a user, and predicted motion vector calculation processing.
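The control-signal generation step can be sketched as a mapping from a predicted image-plane motion vector to pan/tilt rates that keep the subject framed. The field of view, resolution, and gain below are assumptions, not values from the patent:

```python
# Hypothetical sketch: turn a predicted subject motion vector (pixels
# per second on the image plane) into camera pan/tilt rates (deg/s).

def camera_control(pred_vector, fov_deg=60.0, image_width=1920, gain=1.0):
    """Map a predicted image-plane motion vector to pan and tilt rates
    using a simple pixels-to-degrees conversion."""
    deg_per_px = fov_deg / image_width
    pan = gain * pred_vector[0] * deg_per_px
    tilt = gain * pred_vector[1] * deg_per_px
    return pan, tilt

print(camera_control((320.0, -64.0)))  # (10.0, -2.0)
```

Driving the camera from the *predicted* vector rather than the last observed position is what lets it lead a fast-moving subject instead of lagging behind it.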
EYE TRACKING USING EFFICIENT IMAGE CAPTURE AND VERGENCE AND INTER-PUPILLARY DISTANCE HISTORY
Tracking an eye characteristic (e.g., gaze direction or pupil position) of a user's eyes by staggering image capture and using a predicted relationship between the user's eyes to predict an eye's characteristic between that eye's captures. Images of the user's eyes are captured in a staggered manner, in the sense that images of the second eye are captured between the capture times of the images of the first eye, and vice versa. An eye characteristic of the first eye at its capture times is determined based on the images of the first eye at those times. In addition, the eye characteristic of the first eye is predicted at additional times between captures based on a predicted relationship between the eyes.
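The staggered scheme can be sketched as follows, using horizontal pupil position as the tracked characteristic and a running inter-pupillary offset as the predicted relationship between the eyes. The sample format, units, and starting offset are assumptions for illustration:

```python
# Hypothetical sketch: eyes are imaged alternately; at each capture the
# stale (uncaptured) eye's pupil position is predicted from the fresh
# measurement plus a running inter-pupillary offset.

def track(samples):
    """samples: list of (eye, pupil_x) alternating 'L'/'R' captures.
    Return a (left_x, right_x) estimate per capture, predicting the
    eye that was not imaged at that time."""
    last = {"L": None, "R": None}
    offset = 62.0  # assumed starting inter-pupillary offset (mm)
    out = []
    for eye, x in samples:
        last[eye] = x
        if last["L"] is not None and last["R"] is not None:
            offset = last["R"] - last["L"]  # refresh relation from history
        if eye == "L":
            out.append((x, x + offset))   # predict right from left + offset
        else:
            out.append((x - offset, x))   # predict left from right - offset
    return out

print(track([("L", 0.0), ("R", 62.5), ("L", 1.0)]))
```

Because each camera only fires half as often, this halves per-eye capture (and illumination) cost while the predicted relationship fills in the gaps.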