Patent classifications
Y10S901/46
Apparatus, system, and methods for weighing and positioning wafers
An apparatus for characterizing a wafer comprising an aligner comprising a chuck for receiving and rotating the wafer, a sensor for detecting the position of the wafer as it is rotated, a first actuator for lowering and raising the wafer vertically, and a second actuator for moving the chuck horizontally; and a weighing scale comprising a weight sensor disposed proximate to the aligner, and a cantilevered arm extending laterally from the weight sensor over the chuck of the aligner, the cantilevered arm having a through hole surrounding the chuck. The chuck is vertically movable relative to the weighing scale from a first position in which the wafer is supported by the chuck to a second position in which the wafer is supported by the cantilevered arm of the weighing scale. A method for characterizing a wafer using the instant apparatus is also disclosed.
Apparatus and methods for control of robot actions based on corrective user inputs
Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring, and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can derive control commands directly from the sensory data, and the robot can behave autonomously using the learned control policy.
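The refinement loop described above can be sketched as a policy whose output is nudged toward each corrective command the user issues. The linear policy, update rule, and all names below are illustrative assumptions, not the patent's actual method:

```python
# Minimal sketch of policy refinement from corrective user inputs.
# A linear policy maps a sensor feature vector to an action; each time the
# user issues a corrective command, the weights are nudged toward producing
# that corrected action for the observed sensor input.

class CorrectivePolicy:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # policy weights
        self.lr = lr                 # learning rate for corrections

    def act(self, sensors):
        # Derive a control command directly from sensor data.
        return sum(wi * si for wi, si in zip(self.w, sensors))

    def correct(self, sensors, corrected_action):
        # Move the policy's output toward the user's corrective command.
        error = corrected_action - self.act(sensors)
        self.w = [wi + self.lr * error * si for wi, si in zip(self.w, sensors)]

policy = CorrectivePolicy(n_features=2)
for _ in range(200):                 # repeated corrections refine the policy
    policy.correct([1.0, 0.5], 2.0)  # user demonstrates the desired action
print(round(policy.act([1.0, 0.5]), 3))
```

After repeated corrections the learned policy reproduces the user's desired action for that sensor input, after which the robot can act autonomously.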
Handling gait disturbances with asynchronous timing
An example method may include i) detecting a disturbance to a gait of a robot, where the gait includes a swing state and a step down state, the swing state including a target swing trajectory for a foot of the robot, and where the target swing trajectory includes a beginning and an end; and ii) based on the detected disturbance, causing the foot of the robot to enter the step down state before the foot reaches the end of the target swing trajectory.
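The two-state transition described above can be sketched as a small state machine in which a disturbance forces an early, asynchronous transition from swing to step down. The phase representation and state names are illustrative assumptions:

```python
# Sketch of the asynchronous step-down logic: a foot in the swing state
# normally follows its target trajectory to the end, but a detected
# disturbance forces an early transition to the step-down state.

SWING, STEP_DOWN = "swing", "step_down"

def advance_foot(phase, disturbance_detected):
    """Return the foot's state given swing phase in [0, 1] and a disturbance flag."""
    if disturbance_detected:
        return STEP_DOWN          # enter step down before the trajectory's end
    if phase >= 1.0:
        return STEP_DOWN          # normal completion of the swing trajectory
    return SWING

print(advance_foot(0.4, False))   # mid-swing, undisturbed
print(advance_foot(0.4, True))    # disturbance: step down early
```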
Mechanism-parameter-calibration method for robotic arm system
A mechanism-parameter-calibration method for a robotic arm system is provided. The method includes controlling the robotic arm to perform a plurality of actions so that one end of the robotic arm moves toward corresponding predictive positioning-points; determining a predictive relative-displacement between each two of the predictive positioning-points; after the robotic arm performs each of the actions, sensing three-dimensional positioning information of the end of the robotic arm; determining, according to the three-dimensional positioning information, a measured relative-displacement moved by the end of the robotic arm when the robotic arm performs each two of the actions; deriving an equation corresponding to the robotic arm from the predictive relative-displacements and the measured relative-displacements; and utilizing a suitable algorithm to solve the equation for the mechanism parameters.
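The comparison of predicted and measured relative displacements can be sketched with a one-parameter planar example: the nominal model predicts displacements, measurements reveal the true ones, and least squares recovers the parameter correction. The real method calibrates full mechanism parameters; the one-link model and all numbers below are illustrative assumptions:

```python
# Sketch of the calibration idea: predicted relative displacements (from the
# nominal kinematic model) are compared with measured relative displacements,
# and the model parameter is solved for by least squares.
import math

true_link = 1.02        # actual link length (unknown to the model)
nominal_link = 1.00     # nominal link length used for prediction
angles = [0.1, 0.5, 1.0, 1.6, 2.2]   # commanded joint angles (radians)

def tip(link, theta):   # planar one-link forward kinematics
    return (link * math.cos(theta), link * math.sin(theta))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Relative displacements between consecutive poses.
predicted = [dist(tip(nominal_link, a), tip(nominal_link, b))
             for a, b in zip(angles, angles[1:])]
measured = [dist(tip(true_link, a), tip(true_link, b))
            for a, b in zip(angles, angles[1:])]

# measured = scale * predicted, so least squares gives the length correction.
scale = sum(m * p for m, p in zip(measured, predicted)) / sum(p * p for p in predicted)
calibrated_link = nominal_link * scale
print(round(calibrated_link, 4))
```

The recovered link length matches the true value because every chord length scales linearly with the link parameter in this simplified model.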
Robotic surgery system including position sensors using fiber Bragg gratings
A method for determining a shape of a lumen in an anatomical structure comprises reading information from a plurality of strain sensors disposed substantially along a length of a flexible medical device when the flexible medical device is positioned in the lumen. When the flexible medical device is positioned in the lumen, the flexible medical device conforms to the shape of the lumen. The method further comprises computationally determining, by a processing system, the shape of the lumen based on the information from the plurality of strain sensors.
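The computational step can be sketched by the standard curvature-integration idea: each strain sensor yields a local curvature, and integrating curvature along the device length gives heading and then position. This planar simplification (real fiber Bragg grating systems recover 3-D shape) and the sample values are illustrative assumptions:

```python
# Sketch of shape reconstruction from distributed strain sensors: per-segment
# curvature integrates to heading angle, which integrates to 2-D position.
import math

curvatures = [0.0, 0.5, 0.5, 0.5, 0.0]   # per-segment curvature (1/m)
ds = 0.1                                  # segment length (m)

def reconstruct(curvatures, ds):
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for k in curvatures:
        heading += k * ds                 # curvature integrates to heading
        x += ds * math.cos(heading)       # heading integrates to position
        y += ds * math.sin(heading)
        points.append((x, y))
    return points

shape = reconstruct(curvatures, ds)
print(len(shape), round(shape[-1][1], 3))
```

The reconstructed polyline bends where the sensed curvature is nonzero, mirroring how the flexible device conforms to the lumen.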
Smart robot part
Example implementations may relate to a robot part including a processor, at least one sensor, and an interface providing wireless connectivity. The processor may determine that the robot part is removably connected to a particular robotic system and may responsively obtain identification information to identify the particular robotic system. While the robot part is removably connected to the particular robotic system, the processor may (i) transmit, to an external computing system, sensor data that the processor received from the at least one sensor and (ii) receive, from the external computing system, environment information (e.g., representing characteristics of an environment in which the particular robotic system is operating) based on interpretation of the sensor data. Based on the identification information and the environment information, the processor may generate a command that causes the particular robotic system to carry out a task in the environment.
Method and system for hand presence detection in a minimally invasive surgical system
In a minimally invasive surgical system, a hand tracking system tracks a location of a sensor element mounted on part of a human hand. A system control parameter is generated based on the location of the part of the human hand, and operation of the minimally invasive surgical system is controlled using the system control parameter. A controller coupled to the hand tracking system converts the location to the system control parameter and injects into the minimally invasive surgical system a command based on that parameter.
Disparity value deriving device, equipment control system, movable apparatus, and robot
A disparity value deriving device includes an acquisition unit configured to acquire a degree of matching between a reference region in a reference image captured from a first imaging position and each of a plurality of regions in a designated range including a region corresponding to the reference region in a comparison image captured from a second imaging position; a synthesizer configured to synthesize the degree of matching of a reference region in the neighborhood of a predetermined reference region in the reference image with the degree of matching of the predetermined reference region in the reference image; and a deriving unit configured to derive a disparity value of an object whose image is captured in the predetermined reference region and in a region corresponding to the predetermined reference region, based on a synthesized degree of matching obtained by the synthesizer.
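The acquire-synthesize-derive pipeline can be sketched in one dimension: a matching cost is computed per candidate disparity, neighboring pixels' costs are synthesized with the pixel's own cost, and the disparity minimizing the synthesized cost is selected. The 1-D rows, absolute-difference cost, and window size are simplifying assumptions:

```python
# Sketch of disparity derivation with neighborhood cost synthesis: the
# matching degree of neighboring reference regions is combined with that of
# the region of interest before the best disparity is chosen.

reference  = [10, 10, 80, 80, 80, 10, 10, 10]  # reference image row
comparison = [10, 80, 80, 80, 10, 10, 10, 10]  # same scene shifted by 1 pixel
MAX_D = 3                                       # designated disparity range

def synthesized_cost(x, d):
    # Sum the matching cost at x with the costs of its immediate neighbors.
    total = 0
    for nx in (x - 1, x, x + 1):
        if 0 <= nx < len(reference) and nx - d >= 0:
            total += abs(reference[nx] - comparison[nx - d])
    return total

def disparity(x):
    # Derive the disparity value minimizing the synthesized degree of matching.
    return min(range(MAX_D + 1), key=lambda d: synthesized_cost(x, d))

print(disparity(3))
```

Synthesizing neighborhood costs suppresses spurious matches that a single region's cost alone would accept, which is the point of the synthesizer unit.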
Optimization of observer robot locations
Example implementations may relate to optimization of observer robot locations. In particular, a control system may detect an event that indicates desired relocation of observer robots within a worksite. Each such observer robot may have respective sensor(s) configured to provide information related to respective positions of a plurality of target objects within the worksite. Responsively, the control system may (i) determine observer robot locations within the worksite at which one or more of the respective sensors are each capable of providing information related to respective positions of one or more of the plurality of target objects and (ii) determine a respectively intended level of positional accuracy for at least two respective target objects. Based on the respectively intended levels of positional accuracy, the control system may select one or more of the observer robot locations and may direct one or more observer robots to relocate to the selected locations.
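The selection step described above can be sketched as a greedy cover: each candidate location offers some positional accuracy for some targets, and locations are chosen until every target's intended accuracy level is met. The candidate locations, coverage map, and accuracy numbers are illustrative assumptions:

```python
# Sketch of observer-location selection driven by intended accuracy levels.
# location -> {target: achievable positional accuracy (meters, lower is better)}
candidates = {
    "loc_a": {"t1": 0.02, "t2": 0.10},
    "loc_b": {"t2": 0.03, "t3": 0.05},
    "loc_c": {"t1": 0.08, "t3": 0.02},
}
intended = {"t1": 0.05, "t2": 0.05, "t3": 0.05}  # per-target accuracy goals

def select_locations(candidates, intended):
    unmet = set(intended)
    chosen = []
    while unmet:
        # Pick the location satisfying the most still-unmet accuracy goals.
        best = max(candidates,
                   key=lambda loc: sum(1 for t in unmet
                                       if candidates[loc].get(t, 1e9) <= intended[t]))
        newly = {t for t in unmet if candidates[best].get(t, 1e9) <= intended[t]}
        if not newly:
            break                  # remaining goals cannot be met from here
        chosen.append(best)
        unmet -= newly
    return chosen

print(select_locations(candidates, intended))
```

The observer robots would then be directed to relocate to the returned locations.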
Configurable robotic surgical system with virtual rail and flexible endoscope
Systems and methods for moving or manipulating robotic arms are provided. A group of robotic arms is configured to form a virtual rail or line between the end effectors of the robotic arms. The robotic arms are responsive to outside forces such as those applied by a user. When a user moves a single one of the robotic arms, the other robotic arms will automatically move to maintain the virtual rail alignment. The virtual rail of the robotic arm end effectors may be translated in one or more of three dimensions. The virtual rail may be rotated about a point on the virtual rail line. The robotic arms can detect the nature of the contact from the user and move accordingly. Holding, shaking, tapping, pushing, pulling, and rotating different parts of the robotic arm elicit different movement responses from different parts of the robotic arm.
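The translation behavior can be sketched with simple vector math: when the user displaces one arm's end effector, the same displacement is applied to the others, so the end effectors remain on one line. Only pure translation is shown (the rotation-about-a-point behavior is omitted), and all coordinates are illustrative assumptions:

```python
# Sketch of virtual-rail maintenance under user-applied translation.

def translate_rail(effectors, moved_index, new_position):
    """Apply the moved arm's displacement to every end effector."""
    old = effectors[moved_index]
    delta = tuple(n - o for n, o in zip(new_position, old))
    return [tuple(p + d for p, d in zip(e, delta)) for e in effectors]

def collinear(points, tol=1e-9):
    # All cross products with the first segment must vanish on a line.
    (x0, y0, z0), (x1, y1, z1) = points[0], points[1]
    v = (x1 - x0, y1 - y0, z1 - z0)
    for (x, y, z) in points[2:]:
        w = (x - x0, y - y0, z - z0)
        cross = (v[1]*w[2] - v[2]*w[1],
                 v[2]*w[0] - v[0]*w[2],
                 v[0]*w[1] - v[1]*w[0])
        if any(abs(c) > tol for c in cross):
            return False
    return True

rail = [(0.0, 0.0, 0.5), (0.1, 0.0, 0.5), (0.2, 0.0, 0.5)]
moved = translate_rail(rail, 0, (0.0, 0.05, 0.6))   # user nudges the first arm
print(collinear(moved))
```

Because every end effector receives the identical displacement, the virtual rail alignment is preserved by construction.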