Patent classifications
G01B11/22
Mesh updates via mesh frustum cutting
Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
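The frustum test described above can be sketched briefly. This is a minimal illustration, not the patent's implementation: it assumes triangular faces, a combined view-projection matrix, and the common clip-space convention that a vertex is inside the frustum when each clip coordinate lies within ±w. All names are illustrative.

```python
import numpy as np

def faces_in_frustum(vertices, faces, view_proj):
    """Return the faces whose vertices all fall inside the view frustum.

    vertices:  (N, 3) array of mesh vertex positions
    faces:     (M, 3) array of vertex indices per triangular face
    view_proj: (4, 4) combined view-projection matrix for the current frame
    """
    # Transform vertices into homogeneous clip space.
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    clip = homo @ view_proj.T
    w = clip[:, 3:4]
    # A vertex is inside when w > 0 and |x|, |y|, |z| <= w.
    inside = (w[:, 0] > 0) & np.all(np.abs(clip[:, :3]) <= w, axis=1)
    # Keep a face only if every one of its vertices is inside.
    return faces[np.all(inside[faces], axis=1)]
```

The faces this returns are the portion of the mesh that can then be projected into the current frame and textured with the pixels it lands on.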
Image-based kitchen tracking system with anticipatory preparation management
The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data including one or more image frames indicative of a current state of a meal preparation area. The processing device determines a first quantity of a first ingredient disposed within a first container based on the image data. The processing device determines a meal preparation procedure associated with the first ingredient based on the first quantity. The processing device causes a notification indicative of the meal preparation procedure to be displayed on a graphical user interface (GUI).
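The quantity-to-procedure step can be sketched as a simple rule: when the fill level estimated from the image frames drops below a restock threshold, a preparation notification is produced for the GUI. The thresholds, ingredient names, and data shapes below are hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class PrepNotification:
    container: str
    ingredient: str
    procedure: str

# Hypothetical restock thresholds per ingredient (fill fraction).
RESTOCK_LEVELS = {"diced onions": 0.25, "shredded cheese": 0.20}

def check_containers(observations):
    """observations: list of (container_id, ingredient, fill_fraction)
    tuples estimated from image frames. Returns GUI notifications for
    containers whose fill has dropped below the restock level."""
    notes = []
    for container, ingredient, fill in observations:
        level = RESTOCK_LEVELS.get(ingredient)
        if level is not None and fill < level:
            notes.append(PrepNotification(
                container, ingredient,
                f"Prepare more {ingredient} (fill at {fill:.0%})"))
    return notes
```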
Image-based kitchen tracking system with dynamic labeling management
The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data having one or more image frames indicative of a state of a meal preparation area. The method may further include determining, based on the image data, a first feature characterization of a first meal preparation item associated with the state of the meal preparation area. The method may further include determining that the first feature characterization does not meet object classification criteria for a set of object classifications. The method may further include causing display, on a graphical user interface (GUI), of a notification indicating the first meal preparation item and one of an object classification or a classification status corresponding to the first meal preparation item.
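One common way to decide that a feature characterization "does not meet object classification criteria" is a confidence threshold on the best-scoring class. The sketch below assumes per-class confidence scores and an illustrative threshold; neither is specified in the patent:

```python
def classify_or_flag(feature_scores, threshold=0.6):
    """feature_scores: dict mapping object classification -> confidence
    for one detected meal preparation item. Returns (label, status):
    either a confident classification, or a status that prompts a
    labeling notification on the GUI."""
    if not feature_scores:
        return None, "unclassified"
    label, score = max(feature_scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return label, "classified"
    # Best guess shown to staff alongside a request for a manual label.
    return label, "needs-label"
```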
Method and apparatus for estimating depth of molten pool during printing process, and 3D printing system
Disclosed are a method and apparatus for estimating the depth of a molten pool formed during a 3D printing process, and a 3D printing system. A surface temperature of the molten pool is measured by capturing a thermal image of a laminated printing object with a thermal imaging camera during the 3D printing process. The measured surface temperature is compared with the melting point of the base material to determine a surface boundary of the molten pool. The maximum lengths in the x-axis and y-axis directions of the surface region of the molten pool defined by that boundary are taken as the length and width of the molten pool surface, respectively. A maximum depth of the molten pool in the z-axis direction is determined in real time based on the length and width of the surface region of the molten pool.
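The boundary and dimension steps reduce to thresholding the thermal image at the melting point and taking the extents of the resulting region. The sketch below works in pixel units and uses a deliberately simplified, hypothetical depth correlation; the real length/width-to-depth mapping would be calibrated for the process and material:

```python
import numpy as np

def molten_pool_dimensions(thermal_image, melting_point):
    """thermal_image: 2D array of surface temperatures, one per pixel.
    Pixels at or above the base material's melting point form the
    molten pool's surface region. Returns (length, width) in pixels:
    the maximum extents along the x and y axes, respectively."""
    molten = thermal_image >= melting_point
    if not molten.any():
        return 0, 0
    ys, xs = np.nonzero(molten)
    length = xs.max() - xs.min() + 1  # max extent along x
    width = ys.max() - ys.min() + 1   # max extent along y
    return int(length), int(width)

def estimate_depth(length, width, c=0.5):
    # Hypothetical correlation for illustration only: depth taken as
    # proportional to the mean of the surface length and width.
    return c * (length + width) / 2.0
```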
Image-based drive-thru management system
The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data including one or more image frames indicative of a current state of a drive-thru area. The processing device detects a vehicle disposed within the drive-thru area based on the image data. The processing device receives order data including a pending meal order. The processing device determines a first association between the vehicle and the pending meal order based on the image data. The processing device determines a meal delivery procedure based on the association between the vehicle and the pending meal order. The processing device may perform the meal delivery procedure and may provide the meal delivery procedure for display on a graphical user interface (GUI).
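One simple way to associate vehicles with pending orders is first-in-first-out pairing: the vehicle at the front of the lane, as observed in the image data, is matched with the oldest pending order. This is only an illustrative association strategy, not the one claimed in the patent:

```python
from collections import deque

def associate_orders(vehicles, pending_orders):
    """vehicles: vehicle IDs in drive-thru lane order (front first),
    detected from image data. pending_orders: order IDs, oldest first.
    Returns (vehicle, order) pairs under a FIFO association."""
    pairs = []
    orders = deque(pending_orders)
    for vehicle in vehicles:
        if not orders:
            break  # more vehicles than pending orders
        pairs.append((vehicle, orders.popleft()))
    return pairs
```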
DETECTOR FOR AN OPTICAL DETECTION OF AT LEAST ONE OBJECT
A detector (110) for an optical detection of at least one object (112) is proposed. The detector (110) comprises: —at least one transfer device (120), wherein the transfer device (120) comprises at least two different focal lengths (140) in response to at least one incident light beam (136); —at least two longitudinal optical sensors (132), wherein each longitudinal optical sensor (132) has at least one sensor region (146), wherein each longitudinal optical sensor (132) is designed to generate at least one longitudinal sensor signal in a manner dependent on an illumination of the sensor region (146) by the light beam (136), wherein the longitudinal sensor signal, given the same total power of the illumination, is dependent on a beam cross-section of the light beam (136) in the sensor region (146), wherein each longitudinal optical sensor (132) exhibits a spectral sensitivity in response to the light beam (136) in a manner such that two different longitudinal optical sensors (132) differ with regard to their spectral sensitivity; wherein each longitudinal optical sensor (132) is located at a focal point (138) of the transfer device (120) related to the spectral sensitivity of the respective longitudinal optical sensor (132); and —at least one evaluation device (150), wherein the evaluation device (150) is designed to generate at least one item of information on a longitudinal position and/or at least one item of information on a color of the object (112) by evaluating the longitudinal sensor signal of each longitudinal optical sensor (132). Thereby, a simple and, still, efficient detector for accurately determining a position and/or a color of at least one object in space is provided.