Patent classifications
G05B2219/40393
VARYING EMBEDDING(S) AND/OR ACTION MODEL(S) UTILIZED IN AUTOMATIC GENERATION OF ACTION SET RESPONSIVE TO NATURAL LANGUAGE REQUEST
As opposed to a rigid approach, implementations disclosed herein utilize a flexible approach in automatically determining an action set to utilize in attempting performance of a task that is requested by natural language (NL) input of a user. The approach is flexible at least in that the embedding technique(s) and/or action model(s) that are utilized in generating the action set(s), from which the action set to utilize is determined, are at least selectively varied. Put another way, implementations leverage a framework via which different embedding technique(s) and/or different action model(s) can at least selectively be utilized in generating different candidate action sets for given NL input of a user. Further, one of those action sets can be selected for actual use in attempting real-world performance of a given task reflected by the given NL input. The selection can be based on a suitability metric for the selected action set and/or other considerations.
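The flexible generate-and-select flow described above can be sketched as follows. This is an illustrative assumption of how such a framework might be structured; the names (`ActionSet`, `generate_candidates`, `select_action_set`, `suitability`) are not the patent's actual API.

```python
# Hypothetical sketch: vary embedding techniques and action models to
# produce candidate action sets, then select one by a suitability metric.
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple


@dataclass
class ActionSet:
    actions: Tuple[str, ...]
    suitability: float  # higher = better predicted fit for the requested task


def generate_candidates(
    nl_request: str,
    embedders: List[Callable[[str], Sequence[float]]],
    action_models: List[Callable[[Sequence[float]], ActionSet]],
) -> List[ActionSet]:
    """Cross every embedding technique with every action model."""
    candidates = []
    for embed in embedders:
        embedding = embed(nl_request)
        for model in action_models:
            candidates.append(model(embedding))
    return candidates


def select_action_set(candidates: List[ActionSet]) -> ActionSet:
    """Pick the candidate with the highest suitability metric."""
    return max(candidates, key=lambda c: c.suitability)
```

Because generation is parameterized by lists of embedders and models, swapping techniques in or out requires no change to the selection logic.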
Systems And Methods For Robotic Process Automation Of Mobile Platforms
In some embodiments, a robotic process automation (RPA) design application provides a user-friendly graphical user interface that unifies the design of automation activities performed on desktop computers with the design of automation activities performed on mobile computing devices such as smartphones and wearable computers. Some embodiments connect to a model device acting as a substitute for an actual automation target device (e.g., smartphone of specific make and model) and display a model GUI mirroring the output of the respective model device. Some embodiments further enable the user to design an automation workflow by directly interacting with the model GUI.
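The model-device idea described above can be sketched minimally: a stand-in for the real target device exposes a mirrorable GUI state that the designer interacts with directly. All class and method names here are assumptions for exposition, not the disclosure's actual interfaces.

```python
# Illustrative sketch: a model device substituting for a real automation
# target; the design application displays a mirror of its GUI state.
from typing import Dict


class ModelDevice:
    """Substitute for an actual target device (e.g., a specific phone model)."""

    def __init__(self, screen: Dict[str, str]) -> None:
        self._screen = screen  # element id -> current element state

    def snapshot(self) -> Dict[str, str]:
        # The design app renders this mirrored GUI for the user.
        return dict(self._screen)

    def tap(self, element_id: str) -> None:
        # Interacting with the mirrored GUI forwards the gesture to the
        # model device, whose updated state is mirrored back.
        self._screen[element_id] = "tapped"
```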
Deep Compositional Robotic Planners That Follow Natural Language Commands
The present approach similarly combines task and motion planning, but does so without symbolic representations, beginning with simpler tasks than other models in such domains can handle. Unlike prior approaches, the present approach operates in continuous action and state spaces, which require many precise steps in the configuration space to execute what would otherwise be a single output token, such as "pick up", in discrete problems.
METHOD AND DEVICE FOR ROBOT INTERACTIONS
Embodiments of the disclosure provide a method and device for robot interactions. In one embodiment, a method comprises: collecting to-be-processed data reflecting an interaction output behavior; determining robot interaction output information corresponding to the to-be-processed data; controlling a robot to execute the robot interaction output information to imitate the interaction output behavior; collecting, in response to an imitation termination instruction triggered when the imitation succeeds, interaction trigger information corresponding to the robot interaction output information; and storing the interaction trigger information in relation to the robot interaction output information to generate an interaction rule.
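The final steps above, collecting trigger information once imitation succeeds and storing it in relation to the output to form an interaction rule, can be sketched as a simple trigger-to-output mapping. The class and method names are illustrative assumptions, not the disclosure's interfaces.

```python
# Illustrative sketch: after a successful imitation, associate the
# interaction trigger with the robot interaction output it should elicit.
from typing import Dict, Optional


class InteractionRuleStore:
    """Stores interaction rules: trigger information -> robot output."""

    def __init__(self) -> None:
        self._rules: Dict[str, str] = {}

    def record_rule(self, trigger: str, output: str) -> None:
        # Called in response to the imitation termination instruction:
        # store the trigger alongside the output, generating a rule.
        self._rules[trigger] = output

    def respond(self, trigger: str) -> Optional[str]:
        # In later interactions, a known trigger replays the stored output.
        return self._rules.get(trigger)
```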
GENERAL PURPOSE ROBOTICS OPERATING SYSTEM WITH UNMANNED AND AUTONOMOUS VEHICLE EXTENSIONS
The present disclosure provides a general purpose operating system (GPROS) that shows particular usefulness in the robotics and automation fields. The operating system provides individual services, as well as the combination and interconnection of such services, using built-in service extensions, built-in completely configurable generic services, and ways to plug in additional service extensions to yield a comprehensive and cohesive framework for developing, configuring, assembling, constructing, deploying, and managing robotics and/or automation applications. The disclosure includes GPROS extensions and features directed to use as an autonomous vehicle operating system. The vehicle controlled by appropriate versions of the GPROS can include unmanned ground vehicle (UGV) applications such as a driverless or self-driving car. The vehicle can likewise or instead include an unmanned aerial vehicle (UAV) such as a helicopter or drone. In some cases, the vehicle can include an unmanned underwater vehicle (UUV), such as a submarine or other submersible.
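The pluggable-service idea described above can be sketched as a registry into which built-in and plug-in service factories are registered and from which configurable services are obtained. This is a minimal sketch in the spirit of the framework; the names are illustrative assumptions, not the GPROS API.

```python
# Illustrative sketch: a registry of service factories supporting built-in
# services and plugged-in extensions, configurable at lookup time.
from typing import Callable, Dict


class ServiceRegistry:
    """Holds named service factories; extensions plug in via register()."""

    def __init__(self) -> None:
        self._factories: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, factory: Callable[..., object]) -> None:
        # Built-in services and plug-in extensions register the same way.
        self._factories[name] = factory

    def get(self, name: str, **config: object) -> object:
        # Generic services are configured via keyword arguments at lookup.
        return self._factories[name](**config)
```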
Robot systems, methods, control modules, and computer program products that leverage large language models
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via a recursive sequence of NL prompts or queries. Corresponding NL responses from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
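The recursive sequence of NL prompts described above can be sketched as a loop that prompts an LLM, converts each NL reply into instruction tuples, and re-prompts with the accumulated context. Here `llm` is a stand-in for any large language model callable, and the parsing convention (one command per line, a `DONE` sentinel) is an assumption for exposition.

```python
# Hedged sketch: recursively prompt an LLM in natural language and convert
# its NL responses into robot instruction tuples.
from typing import Callable, List, Tuple


def plan_with_llm(
    task: str,
    llm: Callable[[str], str],
    max_rounds: int = 3,
) -> List[Tuple[str, ...]]:
    """Build a task plan via a recursive sequence of NL prompts."""
    prompt = f"Task: {task}\nList the robot steps, one per line."
    instructions: List[Tuple[str, ...]] = []
    for _ in range(max_rounds):
        reply = llm(prompt)
        for line in reply.splitlines():
            line = line.strip()
            if line and line != "DONE":
                # Convert each NL step into a (verb, *args) instruction.
                instructions.append(tuple(line.split()))
        if "DONE" in reply:
            break
        # Recurse: feed the steps so far back to the LLM for refinement.
        prompt = f"Previous steps:\n{reply}\nContinue the plan or say DONE."
    return instructions
```

The same loop structure could serve motion planning or environment reasoning by changing only the prompt templates.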