G06V10/245

Apparatus and method for measuring rotational speed of rotary shaft based on variable density sinusoidal fringe

The present invention provides a shaft rotational speed measurement device and method based on a variable density sinusoidal fringe pattern. The device comprises a variable density sinusoidal fringe pattern sensor, a high speed image acquisition and transmission module, a computer, and an image processing software module. The method comprises the following steps: the variable density sinusoidal fringe pattern sensor is attached to the circumferential surface of the measured shaft; the sensor is continuously imaged and recorded by the high speed image acquisition module; the image transmission module transfers the fringe pattern signal to the computer; the image processing software module carries out a Fourier transform on the fringe pattern signal at the same position in each frame and accurately corrects the peak frequency using a peak frequency correction method to obtain the accurate fringe pattern density of each frame; the time domain curve of the rotational angular velocity of the measured shaft is obtained; and the rotational speed of the measured shaft is then calculated from the rotational angular velocity and the sampling frequency. The present invention realizes non-contact measurement of the rotational speed of the measured shaft within a certain speed range; the measuring device is simple, and the measuring method is fast and accurate.
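The FFT-plus-peak-correction step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the fringe profile is synthetic, and parabolic interpolation is assumed as the peak frequency correction method (the abstract does not specify which correction is used).

```python
import numpy as np

def fringe_density(line):
    """Estimate the fringe density (cycles per pixel) of a 1-D fringe
    profile: FFT of the Hann-windowed, mean-removed line, then parabolic
    interpolation around the spectral peak as a peak frequency correction."""
    n = len(line)
    spec = np.abs(np.fft.rfft((line - line.mean()) * np.hanning(n)))
    k = int(np.argmax(spec))
    if 0 < k < len(spec) - 1:                 # sub-bin peak correction
        a, b, c = spec[k - 1], spec[k], spec[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)
    return k / n                              # cycles per pixel

# One synthetic frame line: 12.3 fringe cycles across 512 pixels
x = np.arange(512)
line = np.sin(2 * np.pi * 12.3 * x / 512)
density = fringe_density(line)                # close to 12.3 / 512
```

Tracking `density` frame by frame gives the angular position of the variable density pattern over time, from which, together with the sampling frequency, the rotational speed follows.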

Information processing device and recognition support method
11580720 · 2023-02-14

In order to acquire recognition environment information that affects the recognition accuracy of a recognition engine, an information processing device 100 comprises a detection unit 101 and an environment acquisition unit 102. The detection unit 101 detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by an imaging device that captures images of objects located within the recognition target zone. The environment acquisition unit 102 acquires the recognition environment information based on image information of the detected marker. The recognition environment information represents the way in which a recognition target object is reproduced in an image captured by the imaging device when that imaging device captures an image of the recognition target object located within the recognition target zone.
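One concrete piece of recognition environment information is the image scale at the marker. As an illustrative sketch only (the abstract specifies neither the marker design nor the detection method), the following assumes the marker appears as a block of saturated pixels of known physical width:

```python
import numpy as np

def environment_from_marker(image, marker_width_cm):
    """Derive one item of recognition environment information, the image
    scale (pixels per cm), from the apparent width of a detected marker.
    The marker is assumed, for illustration, to be the saturated pixels."""
    xs = np.nonzero(image >= 250)[1]          # columns of marker pixels
    width_px = xs.max() - xs.min() + 1
    return width_px / marker_width_cm         # pixels per cm at the marker

img = np.zeros((120, 160), dtype=np.uint8)
img[40:60, 50:90] = 255                       # marker: 40 px wide
scale = environment_from_marker(img, marker_width_cm=10.0)   # → 4.0 px/cm
```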

Optical encoder capable of identifying absolute positions
11557113 · 2023-01-17

The present disclosure relates to an optical encoder configured to provide precise coding reference data through feature recognition technology. To apply the present disclosure, there is no need to provide particular dense patterns on a working surface: the precise coding reference data can be generated by detecting surface features of the working surface itself.

Thermopile array fusion tracking

A simultaneous location and mapping (SLAM)-enabled video game system, a user device of the video game system, and a computer-readable storage medium of the user device are disclosed. Generally, the video game system includes a video game console, a plurality of thermal beacons, and a user device communicatively coupled with the video game console. The user device includes a thermopile array, a processor, and a memory. The user device may receive thermal data from the thermopile array, the thermal data corresponding to a thermal signal emitted from a thermal beacon of the plurality of thermal beacons and detected by the thermopile array. The user device may determine, based on the thermal data, its location in 3D space, and then transmit that location to the video game system.
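How the device's 3D location follows from beacon detections can be illustrated with a standard trilateration step. This is a hedged sketch under assumptions the abstract does not state: beacon positions are taken as known, and distances to them are taken as already estimated from the thermopile readings (that estimation step is outside the sketch).

```python
import numpy as np

def locate(beacons, distances):
    """Linear least-squares trilateration: recover a position from known
    beacon positions and estimated distances to each beacon."""
    b = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first sphere equation linearises the system:
    # 2 (b_i - b_0) . p = d_0^2 - d_i^2 + |b_i|^2 - |b_0|^2
    A = 2 * (b[1:] - b[0])
    rhs = d[0] ** 2 - d[1:] ** 2 + (b[1:] ** 2).sum(1) - (b[0] ** 2).sum()
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

beacons = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
true = np.array([1.0, 2.0, 0.5])
dists = [np.linalg.norm(true - np.array(b)) for b in beacons]
est = locate(beacons, dists)                  # recovers [1.0, 2.0, 0.5]
```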

GENERATION OF IMAGE FOR ROBOT OPERATION
20230010302 · 2023-01-12

A robot control system includes circuitry configured to: generate a command to a robot; receive a frame image in which a capture position changes according to a motion of the robot based on the command; extract a partial region from the frame image according to the command; superimpose a delay mark on the partial region to generate an operation image; and display the operation image on a display device, so as to represent a delay of the motion of the robot with respect to the command.
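The extract-and-superimpose steps can be sketched minimally as follows. The region geometry and the mark's appearance are assumptions for illustration (the abstract does not define them); here the delay mark is a corner square whose size grows with the current delay.

```python
import numpy as np

def make_operation_image(frame, top_left, size, delay_frames):
    """Cut a partial region out of the frame at the commanded position and
    superimpose a square delay mark whose side length scales with the
    current delay of the robot's motion behind the command."""
    y, x = top_left
    region = frame[y:y + size, x:x + size].copy()
    m = min(size, 2 + 2 * delay_frames)       # mark grows with delay
    region[:m, :m] = 255                      # white square, top-left corner
    return region

frame = np.zeros((64, 64), dtype=np.uint8)    # stand-in captured frame
op_img = make_operation_image(frame, (8, 8), 32, delay_frames=3)
```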

Systems and methods for digitally representing a scene with multi-faceted primitives
11593959 · 2023-02-28

Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent different non-positional characteristics associated with a particular point in the scene from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position. Generating the multi-faceted primitive includes defining a first facet with a first surface normal oriented towards the first position and first non-positional values based on descriptive characteristics of the particular point in the first capture, and defining a second facet with a second surface normal oriented towards the second position and second non-positional values based on different descriptive characteristics of the particular point in the second capture.
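The data structure described above can be sketched like this. The class and field names are hypothetical, the non-positional values are reduced to an RGB tuple, and facet selection at render time (pick the facet whose normal best faces the viewer) is an assumed, natural reading of the abstract rather than its stated method:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Facet:
    normal: np.ndarray    # unit normal oriented toward the capture position
    color: tuple          # non-positional values (here just RGB)

@dataclass
class MultiFacetedPoint:
    position: np.ndarray
    facets: list = field(default_factory=list)

    def add_facet(self, capture_pos, color):
        n = np.asarray(capture_pos, dtype=float) - self.position
        self.facets.append(Facet(n / np.linalg.norm(n), color))

    def shade(self, view_pos):
        """Return the color of the facet whose normal best faces the viewer."""
        v = np.asarray(view_pos, dtype=float) - self.position
        v /= np.linalg.norm(v)
        return max(self.facets, key=lambda f: float(f.normal @ v)).color

p = MultiFacetedPoint(np.zeros(3))
p.add_facet((0, 0, 5), (200, 10, 10))   # first capture, from +z
p.add_facet((5, 0, 0), (10, 200, 10))   # second capture, from +x
```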

Entity identification using machine learning

Methods, systems, and apparatus, including computer programs encoded on computer storage media for identification and re-identification of fish. In some implementations, first media representative of aquatic cargo is received. Second media based on the first media is generated, wherein a resolution of the second media is higher than a resolution of the first media. A cropped representation of the second media is generated. The cropped representation is provided to the machine learning model. In response to providing the cropped representation to the machine learning model, an embedding representing the cropped representation is generated using the machine learning model. The embedding is mapped to a high dimensional space. Data identifying the aquatic cargo is provided to a database, wherein the data identifying the aquatic cargo comprises an identifier of the aquatic cargo, the embedding, and a mapped region of the high dimensional space.
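The re-identification step, matching a new embedding against the database, can be sketched as a nearest-neighbour lookup. This is an illustrative stand-in: the machine learning model is omitted (embeddings are hand-made vectors), and cosine similarity with a fixed threshold is an assumption, not the patent's stated matching rule.

```python
import numpy as np

def identify(embedding, database, threshold=0.8):
    """Match a query embedding against stored (id, embedding) pairs by
    cosine similarity; return the best id, or None for a new individual."""
    q = embedding / np.linalg.norm(embedding)
    best_id, best_sim = None, threshold
    for fish_id, emb in database.items():
        sim = float(q @ (emb / np.linalg.norm(emb)))
        if sim > best_sim:
            best_id, best_sim = fish_id, sim
    return best_id

db = {"fish-001": np.array([0.9, 0.1, 0.0]),
      "fish-002": np.array([0.0, 1.0, 0.2])}
match = identify(np.array([0.88, 0.15, 0.01]), db)   # → "fish-001"
```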

CONTROL SYSTEM FOR NAVIGATING A PRINCIPAL DIMENSION OF A DATA SPACE
20180011541 · 2018-01-11

Systems and methods are described for navigating through a data space. The navigating comprises detecting a gesture of a body from gesture data received via a detector. The gesture data is absolute three-space location data of an instantaneous state of the body at a point in time and physical space. The detecting comprises identifying the gesture using the gesture data. The navigating comprises translating the gesture to a gesture signal, and navigating through the data space in response to the gesture signal. The data space is a data-representational space comprising a dataset represented in the physical space.

IMAGE CAPTURE DEVICE AND IMAGE PROCESSING METHOD
20180013984 · 2018-01-11

A mark irradiation unit (130) irradiates an object with a mark. An image capture unit (140) captures an image of the object and generates image data. An image capture area data generation unit then recognizes the position of the mark on the object and cuts out image capture area data, which is a part of the image data, on the basis of the mark. In this way, because the mark irradiation unit (130) irradiates the object with the mark, only the necessary portion of the image data is cut out even when no positioning symbol is printed on the object to be stored as image data.
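The recognize-and-cut step can be sketched as below. Taking the projected mark to be the brightest pixel, and cutting a fixed-size area whose corner sits at the mark, are assumptions for illustration; the abstract does not specify the detection method or the area geometry.

```python
import numpy as np

def cut_by_mark(image, area_h, area_w):
    """Find the irradiated mark (assumed here to be the brightest pixel)
    and cut out the image capture area anchored at the mark position."""
    y, x = np.unravel_index(np.argmax(image), image.shape)
    return image[y:y + area_h, x:x + area_w]

img = np.zeros((100, 100), dtype=np.uint8)
img[20, 30] = 255                                      # irradiated mark
img[20:40, 30:60] = np.maximum(img[20:40, 30:60], 7)   # document content
area = cut_by_mark(img, 20, 30)       # the 20x30 region starting at the mark
```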

REMOTE STATE FOLLOWING DEVICE
20180014382 · 2018-01-11

A system and method for a remote state following device that includes an electronic device with a controllable operating state, an imaging device, and a control system that, when targeted at a control interface, interprets a visual state from the control interface and modifies the operating state in coordination with the visual state.