Patent classifications
G05B2219/23021
TOUCHLESS FOOD DISPENSER
A touchless food dispenser. The touchless food dispenser includes one or more types of food stored therein, one or more removable nozzles, and one or more nozzle replacement mechanisms. The touchless food dispenser is configured to dispense the food through the one or more removable nozzles. The one or more nozzle replacement mechanisms are configured to remove the one or more removable nozzles and replace them with a different removable nozzle.
Processing unit, method for operating a processing unit and use of a processing unit
A processing facility having at least one processing station and at least one covering at least partly surrounding the processing station. The covering has at least one viewing window. Improved error correction is made possible by a display unit that displays at least one item of information about the processing station on the viewing window.
Movable robot capable of providing a projected interactive user interface
The present invention discloses a movable robot that includes a processing control system, a rotation mechanism, a projection system that can be tilted by the rotation mechanism to a first position to project a first image on a horizontal surface outside the body of the movable robot, wherein the projection system is configured to be tilted by the rotation mechanism to a second position to project a second image on a wall surface, and an optical sensing system configured to detect the user's movement, location, facial expression, or gesture over the first image. The processing control system can interpret the user's inputs based on the user's movement, location, facial expression, or gesture over the first image projected on the horizontal surface outside the body of the movable robot. The processing control system can control one or more outputs of the movable robot based on the interpreted user inputs.
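The two projection positions and the gesture-over-image input described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the position labels, event names, and mapped outputs are all hypothetical.

```python
# Hypothetical sketch: the rotation mechanism tilts the projector to one of
# two positions depending on the target surface, and events sensed over the
# floor-projected image are interpreted as user inputs.

def tilt_for(surface):
    # First position projects onto the horizontal surface; second onto a wall.
    return {"floor": "position_1", "wall": "position_2"}[surface]

def interpret_input(event):
    # The optical sensing system reports movement or gestures over the
    # projected image; the control system maps them to outputs.
    if event == "tap":
        return "select"
    if event == "swipe":
        return "next_page"
    return "ignore"
```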
SYSTEM AND METHOD FOR CONTROLLING OPERATION OF A RIDE SYSTEM BASED ON GESTURES
A ride control system for controlling operation of an amusement park ride having a ride station area, wherein patrons embark and disembark from a ride vehicle under supervision of one or more ride operators, includes a vision system and a ride control processor. The vision system captures images of one or more of the one or more ride operators at one or more locations within the ride station area. The ride control processor receives one or more images from the vision system and includes a machine-learned module configured to recognize one or more valid gestures within the one or more images, where a valid gesture corresponds to a gesture from at least one of the one or more ride operators, and program logic configured to process the one or more valid gestures within the one or more images to enable a ride operation.
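The control flow in this abstract can be sketched in a few lines: a ride operation is enabled only when a recognized gesture both matches a known command and originates from a supervising operator. The gesture labels, roles, and operation names below are invented for illustration.

```python
# Hypothetical gesture-to-operation table for the program logic.
VALID_GESTURES = {"thumbs_up": "dispatch", "arm_raised": "hold"}

def process_frame(detections):
    """detections: list of (person_role, gesture_label) pairs produced by the
    machine-learned recognizer. Returns the enabled ride operation, or None."""
    for role, gesture in detections:
        # A gesture is valid only when it comes from a ride operator.
        if role == "operator" and gesture in VALID_GESTURES:
            return VALID_GESTURES[gesture]
    return None
```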
GESTURE RECOGNITION METHOD, CORRESPONDING CIRCUIT, DEVICE AND COMPUTER PROGRAM PRODUCT
A programmable data processing circuit is configured for receiving sensor signals indicative of gestures for identification by the processing circuit. The processing circuit applies finite state machine processing resources to the sensor signals to provide identification output signals indicative of gestures identified as a function of the sensor signals. A plurality of finite state machine processing programs loaded into the processing circuit include a data section and an instruction section. The data section includes a fixed-size part specifying the respective processing resources used by the programs in the plurality of finite state machine processing programs and a variable-size part with respective sizes for allocating those processing resources. The instruction section includes conditions and commands for execution by the respective processing resources by operating on data located in the respective data sections.
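The program layout described above can be mocked up as a small data structure: a fixed-size header naming the resources a program uses, a variable-size buffer allocated per program, and an instruction section of (condition, command) pairs evaluated against sensor samples. The class, fields, and the example "shake" detector are illustrative assumptions, not the patent's format.

```python
class FsmProgram:
    """Hypothetical sketch of one finite state machine processing program."""

    def __init__(self, resources, buffer_size, instructions):
        self.resources = resources        # fixed-size part: resources used
        self.buffer = [0] * buffer_size   # variable-size part, sized per program
        self.instructions = instructions  # instruction section: (condition, command)
        self.state = "idle"

    def step(self, sample):
        # Evaluate each condition against the current state and sensor sample;
        # a matching command updates the state (the identification output).
        for condition, command in self.instructions:
            if condition(self.state, sample):
                self.state = command(self.state, sample)
        return self.state

# Example: a two-state "shake" detector over one accelerometer axis.
shake = FsmProgram(
    resources=["accel_x"],
    buffer_size=4,
    instructions=[
        (lambda s, x: s == "idle" and abs(x) > 1.5, lambda s, x: "shaking"),
        (lambda s, x: s == "shaking" and abs(x) < 0.2, lambda s, x: "idle"),
    ],
)
```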
CONTROL OVER PROCESSES AND MACHINERY
A method for operating an industrial process is described. An operator of the industrial process is observed by a camera. A gesture of the observed operator is recognized. A command based on the recognized gesture is generated by comparison of the recognized gesture to a database. The generated command is input to control logic operable for controlling the industrial process. A control function is exerted over the industrial process according to the command input. The industrial process is operated automatically based on the exerted control function without a touch-based operator input to a manual control device.
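The gesture-to-command step of this method reduces to a database lookup followed by handing the command to the control logic. A minimal sketch, assuming invented gesture labels and commands:

```python
# Hypothetical gesture database mapping recognized gestures to commands.
GESTURE_DATABASE = {
    "palm_out": "STOP_CONVEYOR",
    "circle": "RESTART_CYCLE",
}

def control_step(recognized_gesture, control_logic):
    """Compare the recognized gesture to the database and, on a match, input
    the generated command to the control logic (no touch-based input)."""
    command = GESTURE_DATABASE.get(recognized_gesture)
    if command is not None:
        control_logic(command)
    return command
```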
Multimodal object identification
Methods, systems, and apparatus for receiving a command for controlling a robot, the command referencing an object, receiving sensor data for a portion of an environment of the robot, identifying, from the sensor data, a gesture of a human that indicates a spatial region located outside of the portion of the environment described by the sensor data, searching map data for the object, determining, based at least on searching the map data for the object referenced in the command, that the object referenced in the command is present in the spatial region, and in response to determining that the object referenced in the command is present in the spatial region, controlling the robot to perform an action with respect to the object referenced in the command.
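The claimed flow can be sketched end to end: the command references an object, the gesture designates a spatial region outside the sensed portion of the environment, the map data is searched for the object, and the robot acts only if the object lies in that region. The map format (name to coordinates) and the region as an axis-aligned box are hypothetical simplifications.

```python
def resolve_and_act(command_object, map_data, gestured_region, act):
    """command_object: object name referenced in the command.
    map_data: dict of object name -> (x, y) position.
    gestured_region: (xmin, ymin, xmax, ymax), outside the sensed portion.
    act: callback that performs the robot action on the object."""
    position = map_data.get(command_object)         # search map data
    if position is None:
        return False
    xmin, ymin, xmax, ymax = gestured_region
    x, y = position
    if xmin <= x <= xmax and ymin <= y <= ymax:     # object in gestured region?
        act(command_object)                         # control robot to act
        return True
    return False
```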
Method for linking information with a workpiece data record, and flatbed machine tool
A method for linking information with a workpiece data record of a workpiece includes registering a selection time relating to a first action of a user that has a spatial relationship with a workpiece, determining a position of a hand of the user in a space at the selection time using a locating system, deriving a selection position in a support plane from the determined position of the hand, selecting a workpiece data record from the workpiece data records by comparing the selection position with relative positions and contours of the workpieces, registering a second action in which the user carries out a gesture movement, wherein an information item to be logged in a database is assigned to the gesture movement, reading the information item from the database, and linking the information item with the selected workpiece data record.
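The selection and linking steps above can be sketched as two small functions: the hand position projected into the support plane is compared against each workpiece's position and contour (simplified here to bounding boxes) to pick a record, and the information item read from the gesture database is then linked to it. All names, shapes, and the gesture database contents are illustrative assumptions.

```python
def select_workpiece(selection_position, workpieces):
    """workpieces: dict of record_id -> (xmin, ymin, xmax, ymax) in the
    support plane; returns the record whose contour contains the position."""
    sx, sy = selection_position
    for record_id, (xmin, ymin, xmax, ymax) in workpieces.items():
        if xmin <= sx <= xmax and ymin <= sy <= ymax:
            return record_id
    return None

def link_information(record_id, gesture, gesture_db, links):
    """Read the information item assigned to the gesture movement and link
    it with the selected workpiece data record."""
    info = gesture_db[gesture]
    links.setdefault(record_id, []).append(info)
```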
A METHOD AND A DEVICE FOR EXCHANGING DATA BETWEEN A SMART DISPLAY TERMINAL AND MOTION-SENSING EQUIPMENT
Exchanging information between a smart display terminal and motion-sensing equipment includes: reading, by the smart display terminal, equipment data uploaded by the motion-sensing equipment; converting the equipment data to standardized motion-sensing data; and reading the standardized motion-sensing data.
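The three steps above amount to a schema conversion. A minimal sketch, assuming invented vendor field names and an invented "standardized" schema:

```python
def to_standardized(equipment_data):
    """Convert vendor-specific equipment data to a standardized
    motion-sensing record (field names here are hypothetical)."""
    return {
        "timestamp": equipment_data["ts"],
        "joints": equipment_data.get("skeleton", []),
        "gesture": equipment_data.get("g", "none"),
    }

def exchange(raw_records):
    """Read uploaded equipment data, convert each record, and return the
    standardized motion-sensing data for the terminal to read."""
    return [to_standardized(r) for r in raw_records]
```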