Patent classifications
H04N23/90
SYSTEMS AND METHODS FOR BIOMECHANICALLY-BASED EYE SIGNALS FOR INTERACTING WITH REAL AND VIRTUAL OBJECTS
Systems and methods are provided for discerning the intent of a device wearer primarily based on movements of the eyes. The system may be included within unobtrusive headwear that performs eye tracking and controls screen display. The system may also utilize remote eye tracking camera(s), remote displays and/or other ancillary inputs. Screen layout is optimized to facilitate the formation and reliable detection of rapid eye signals. The detection of eye signals is based on tracking physiological movements of the eye that are under voluntary control by the device wearer. The detection of eye signals results in actions that are compatible with wearable computing and a wide range of display devices.
IMAGE CAPTURE WITH A CAMERA INTEGRATED DISPLAY
Certain aspects of the technology disclosed herein integrate a camera with an electronic display. An electronic display can include several layers, such as a cover layer, a color filter layer, a display layer including light emitting diodes or organic light emitting diodes, a thin film transistor layer, etc. A processor initiates light emission from a plurality of display elements. The processor can suspend the light emission from the plurality of display elements for a period of time imperceptible to a human observer. The processor initiates a camera to capture an image during the period in which light emission from the plurality of display elements is suspended. The processor can capture a plurality of images corresponding to a plurality of pixels and produce an image comprising depth information.
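The blank-then-capture sequence described above can be sketched as a short simulation. This is an illustrative assumption, not the patented implementation: the class, the `blank_and_capture` method, and the 8 ms window are all invented names and values chosen so the blank stays well under one 60 Hz frame.

```python
import time

# Hypothetical sketch of the display-blanking capture sequence:
# suspend emission, expose the camera during the blank, then resume.
BLANK_MS = 8  # assumed blanking window; under ~16 ms fits inside one 60 Hz frame

class CameraBehindDisplay:
    def __init__(self):
        self.emitting = True
        self.frames = []

    def capture_frame(self):
        # Capture is only valid while the display elements are dark,
        # so panel light does not contaminate the sensor behind it.
        assert not self.emitting, "display must be blanked during capture"
        self.frames.append("raw_frame")

    def blank_and_capture(self):
        self.emitting = False          # suspend light emission
        self.capture_frame()           # camera exposes during the blank
        time.sleep(BLANK_MS / 1000.0)  # period imperceptible to a human observer
        self.emitting = True           # resume light emission
        return self.frames[-1]

panel = CameraBehindDisplay()
frame = panel.blank_and_capture()
```

Repeating the sequence per pixel group would yield the plurality of images from which the abstract's depth information is produced.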
Computer Vision Based Driver Assistance Devices, Systems, Methods and Associated Computer Executable Code
The present invention includes computer vision based driver assistance devices, systems, methods and associated computer executable code (hereinafter collectively referred to as: “ADAS”). According to some embodiments, an ADAS may include one or more fixed image/video sensors and one or more adjustable or otherwise movable image/video sensors, characterized by different dimensions of fields of view. According to some embodiments of the present invention, an ADAS may include improved image processing. According to some embodiments, an ADAS may also include one or more sensors adapted to monitor/sense an interior of the vehicle and/or the persons within. An ADAS may include one or more sensors adapted to detect parameters relating to the driver of the vehicle and processing circuitry adapted to assess mental conditions/alertness of the driver and directions of driver gaze. These may be used to modify ADAS operation/thresholds.
Workpiece inspection device and workpiece inspection method
A workpiece inspection device (1) includes a table (3), an image capturing unit fixing part (7), a first light projection unit (4), a second light projection unit (5), a linear movement mechanism (8), a turning mechanism (9), a quality determination unit (10), and a control unit (11). The control unit (11) performs a first image capturing step of causing the first light projection unit (4) to project light and causing an image capturing unit (6) to capture an image; a detailed-inspection-portion determination step of setting a portion of the workpiece (2) determined to require detailed inspection based on the image captured in the first image capturing step; a second image capturing step of causing the second light projection unit (5) to project light onto the workpiece (2) and causing the image capturing unit (6) to capture an image of the detailed-inspection-requiring portion; and a quality determination step of determining the quality of the detailed-inspection-requiring portion based on the image captured in the second image capturing step.
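The two-stage flow above — coarse capture, flagging, detailed re-capture, quality judgment — can be sketched as plain control flow. Every function name and the toy data below are assumptions for illustration; the patent's actual flagging and quality criteria are image-based and not specified here.

```python
# Illustrative sketch of the two-stage inspection sequence; the flagging
# rule and quality rule are stand-in callables, not the patented logic.
def inspect_workpiece(first_image_portions, detailed_capture, needs_detail, is_good):
    """First capture -> flag portions needing detail -> second capture
    of each flagged portion -> per-portion quality determination."""
    flagged = [p for p in first_image_portions if needs_detail(p)]  # determination step
    results = {}
    for portion in flagged:
        detail_img = detailed_capture(portion)   # second image capturing step
        results[portion] = is_good(detail_img)   # quality determination step
    return results

# Toy run: portions are labels; ones ending in '?' get a detailed pass.
out = inspect_workpiece(
    first_image_portions=["a", "b?", "c?"],
    detailed_capture=lambda p: p.rstrip("?"),
    needs_detail=lambda p: p.endswith("?"),
    is_good=lambda img: img != "c",
)
```

The point of the structure is that the expensive second light projection is spent only on the portions the cheap first pass flags.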
Camera triggering and multi-camera photogrammetry
A photogrammetry system includes a memory, a processor, and a geo-positioning device. The geo-positioning device outputs telemetry regarding a vehicle on which one or more cameras are mounted. The processor can receive first telemetry from the geo-positioning device characterizing the vehicle telemetry at a first time, camera specification(s) regarding the cameras, photogrammetric requirement(s) for captured images, and a last camera trigger time. The processor can determine a next trigger time for the cameras based upon the received telemetry, camera specification(s), photogrammetric requirement(s), and last trigger time. The processor can transmit a trigger signal to the camera(s) and the geo-positioning device to cause the camera(s) to acquire images of a target and the geo-positioning device to store second vehicle telemetry data characterizing the vehicle telemetry at a second time that is after the first time and during acquisition of the images. The processor can receive the acquired images from the cameras.
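One common way a next trigger time can follow from telemetry, camera specifications, and a photogrammetric requirement is a forward-overlap constraint. The sketch below is an assumed model, not the claimed method: the function name, the 60% overlap default, and the rectilinear footprint formula are all illustrative.

```python
# Hypothetical sketch: fire the camera again once the vehicle has advanced
# far enough that forward overlap would otherwise drop below the requirement.
def next_trigger_time(last_trigger_s, ground_speed_mps,
                      altitude_m, focal_length_mm, sensor_along_track_mm,
                      forward_overlap=0.6):
    """Return the time at which to send the next trigger signal."""
    # Ground footprint of one image along the flight direction (pinhole model).
    footprint_m = altitude_m * sensor_along_track_mm / focal_length_mm
    # Distance the vehicle may advance before overlap falls below target.
    advance_m = footprint_m * (1.0 - forward_overlap)
    return last_trigger_s + advance_m / ground_speed_mps

# 100 m altitude, 24 mm lens, 15.6 mm sensor -> 65 m footprint; at 60 %
# overlap the vehicle may advance 26 m between shots (2 s at 13 m/s).
t = next_trigger_time(last_trigger_s=0.0, ground_speed_mps=13.0,
                      altitude_m=100.0, focal_length_mm=24.0,
                      sensor_along_track_mm=15.6)
```

Storing the second telemetry sample during acquisition, as the abstract describes, is what lets each image later be geo-referenced against the vehicle state at exposure time.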
LED And/Or Laser Light Device Has Projection
An LED and/or laser light source for a bulb or light device, such as a garden light, that has one or more optic lenses, and a top cover shaped flat or as a ½ ball, ⅔ ball, sphere, or dome. For a laser light source, a flat top protective lens and a laser film or grating film enlarge or create a plurality of images or lighted patterns. An LED light source can have a projection assembly built in, added on, or assembled inside the light device. The device can further incorporate flexible, bendable arms to change the position, direction, and orientation of the LED and/or laser light beam. The light device can also offer nearby and far-away illumination and/or lighted image and pattern projection with desired light effects by rotating the optic lens or LED(s), and can achieve desired effects when the LED(s) are controlled by an IC or circuitry that changes color or switches on and off at desired times, durations, and cycles. The light device can further have one or more functions selected from a USB charger, power-failure backup, RF remote control, infrared controller, Bluetooth, Wi-Fi, internet, app software, motion sensor, and wireless multi-way communication. The light device may also have a recharging circuit, batteries or rechargeable batteries, and USB ports through which the LED and/or laser bulb can be charged or supply current to other devices.
Switching method and system of interactive modes of head-mounted device
A switching method and system of interactive modes of a head-mounted device is provided. The interactive modes include gamepad tracking interactive mode and bare hand tracking interactive mode. A standard deviation of position data, a standard deviation of attitude data and a standard deviation of accelerometer data among IMU data are acquired, respectively. Whether the standard deviation of the position data, the standard deviation of the attitude data and the standard deviation of the accelerometer data meet a first preset condition is determined. Moreover, the standard deviation of the accelerometer data within a second preset duration is acquired in real time, and whether the standard deviation of the accelerometer data meets a second preset condition is determined. In cases where the standard deviation of the accelerometer data meets the second preset condition, the bare hand tracking interactive mode is paused, and the gamepad tracking interactive mode is started.
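The second-stage check described above can be sketched as a windowed standard-deviation test. The threshold value, window contents, and function name below are illustrative assumptions; the patent does not disclose specific numbers.

```python
import statistics

# Illustrative sketch: compute the standard deviation of accelerometer
# samples over one window and, when it meets the (assumed) threshold,
# pause bare hand tracking and start gamepad tracking.
ACCEL_STD_THRESHOLD = 0.5  # assumed units: m/s^2

def select_mode(accel_window, current_mode="bare_hand"):
    """Return the interactive mode after evaluating one sample window."""
    accel_std = statistics.stdev(accel_window)
    if current_mode == "bare_hand" and accel_std > ACCEL_STD_THRESHOLD:
        # High motion variance suggests the gamepad was picked up.
        return "gamepad"
    return current_mode

# A near-still IMU keeps bare hand mode; a shaken one triggers the switch.
still = select_mode([9.80, 9.81, 9.80, 9.81])
moving = select_mode([9.8, 12.4, 7.1, 11.9])
```

In the full method, this accelerometer check is only reached after the first preset condition on position, attitude, and accelerometer standard deviations has already been evaluated.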
System and method for object tracking and metric generation
Disclosed herein is a system and method directed to object tracking and metric generation using a plurality of cameras. The system includes the plurality of cameras disposed around a playing surface in a mirrored configuration, where the plurality of cameras are time-synchronized. The system further includes logic that, when executed by a processor, causes performance of operations including: obtaining a sequence of images from the plurality of cameras, continuously detecting an object in image pairs at successive points in time, wherein each image pair corresponds to a single point in time, continuously determining a location of the object within the playing space through triangulation of the object within each image pair, detecting a player and the object within each image of a subset of image pairs of the sequence of images, identifying a sequence of interactions between the object and the player, and storing the sequence of interactions.
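Triangulating the object from one time-synchronized image pair can be sketched for the simplest case of an idealized, rectified two-camera rig. The focal length, baseline, and pixel coordinates are assumed example values, and a real mirrored multi-camera setup would use full calibrated projection matrices rather than this disparity shortcut.

```python
# Simplified sketch of locating the object from one synchronized image
# pair, assuming a rectified stereo pair with known focal length (pixels)
# and baseline (metres); all numbers below are illustrative assumptions.
def triangulate(x_left_px, x_right_px, y_px,
                focal_px=1000.0, baseline_m=2.0):
    """Recover (X, Y, Z) in the left camera frame from one image pair."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("object must appear shifted between the two views")
    z = focal_px * baseline_m / disparity  # depth from disparity
    x = x_left_px * z / focal_px           # back-project pixels to metres
    y = y_px * z / focal_px
    return x, y, z

# Object detected 50 px apart between the two views of one image pair.
x, y, z = triangulate(x_left_px=120.0, x_right_px=70.0, y_px=-40.0)
```

Running this per synchronized pair, at successive points in time, yields the continuous object track against which player interactions are then identified.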