Patent classifications
G05B2219/40131
STREAMING MEDIA TRANSMISSION METHOD AND CLIENT APPLIED TO VIRTUAL REALITY TECHNOLOGY
Embodiments of the present invention describe streaming media transmission methods and apparatus applied to virtual reality technology. A method for streaming media transmission may include sending a media information obtaining request to a server, where the media information obtaining request includes client capability information and auxiliary information, the client capability information indicates that the client supports reception of data pushed by the server, and the auxiliary information indicates an attribute of the client's support for virtual reality presentation. The method may also include receiving a media presentation description and media data, both sent by the server in response to the media information obtaining request. According to the streaming media transmission methods and apparatus of the embodiments of the present invention, transmission delay can be reduced and transmission efficiency can be improved.
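The request/response exchange described in this abstract can be sketched as follows. The class names, message fields, and the `projection` attribute are illustrative assumptions, not the patent's actual protocol; the point is that one request carries both capability and auxiliary information, so the server can return the media presentation description and initial media data together.

```python
from dataclasses import dataclass, field

@dataclass
class MediaInfoRequest:
    # Client capability information: the client can accept server-pushed data.
    supports_server_push: bool
    # Auxiliary information: attributes of the client's VR presentation support.
    vr_attributes: dict = field(default_factory=dict)

@dataclass
class MediaInfoResponse:
    media_presentation_description: str
    media_data: bytes

def handle_request(req: MediaInfoRequest) -> MediaInfoResponse:
    """Server side: respond with the MPD and, because the client advertises
    push support, the initial media data in the same exchange, saving a
    round trip and hence reducing transmission delay."""
    mpd = "<MPD projection='%s'/>" % req.vr_attributes.get("projection", "none")
    data = b"\x00\x01" if req.supports_server_push else b""
    return MediaInfoResponse(mpd, data)

# Client side: a single request advertises both capability and VR attributes.
resp = handle_request(MediaInfoRequest(True, {"projection": "equirectangular"}))
print(resp.media_presentation_description)
```

Bundling the media data with the MPD response is what removes the extra fetch round trip that a conventional pull-based client would need.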
METHODS AND SYSTEMS FOR PERFORMING NAVIGATION-ASSISTED MEDICAL PROCEDURES
Systems and methods are described for performing navigation-assisted medical procedures such as biopsies, surgeries and pathology procedures by obtaining location information of an item of interest located within at least a portion of a subject; sensing position information of a moveable device; determining a relative position of the moveable device to the item of interest using the location information of the item of interest and the position information of the moveable device; and providing feedback based on the relative position of the moveable device to the item of interest that can be used to change the relative position of the moveable device to the item of interest.
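The core computation in this abstract, determining the relative position of the moveable device to the item of interest and turning it into feedback, can be sketched as below. The tolerance value and the message format are illustrative assumptions.

```python
import math

def relative_position(item_xyz, device_xyz):
    """Vector from the moveable device to the item of interest."""
    return tuple(i - d for i, d in zip(item_xyz, device_xyz))

def feedback(item_xyz, device_xyz, tolerance_mm=1.0):
    """Distance-based guidance that an operator (or controller) can use
    to change the relative position of the device to the item."""
    rel = relative_position(item_xyz, device_xyz)
    dist = math.sqrt(sum(c * c for c in rel))
    if dist <= tolerance_mm:
        return "on target"
    return f"move {rel} mm (distance {dist:.1f} mm)"

# Device 0.54 mm from a sensed lesion location: within tolerance.
print(feedback((10.0, 0.0, 5.0), (9.5, 0.2, 5.0)))  # on target
```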
Methods and systems for providing robotic operation constraints for remote controllable robots
A method of constraining operation of a robot includes receiving, using network interface hardware of a robot, one or more robotic control instructions from a user, where the robotic control instructions instruct the robot to perform a robotic movement, determining whether the user providing the one or more robotic control instructions to the robot is a primary user or a secondary user, and comparing the robotic movement instructed by the one or more robotic control instructions with one or more robotic operation constraints. The method further includes determining whether the robotic movement instructed by the one or more robotic control instructions conflicts with the one or more robotic operation constraints and constraining operation of the robot when the user is the secondary user and the robotic movement instructed by the one or more robotic control instructions conflicts with the one or more robotic operation constraints.
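The decision logic in this abstract, constraining the robot only when a secondary user's instructed movement conflicts with an operation constraint, can be sketched as follows. Joint-angle limits stand in for the patent's unspecified constraints, and all names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Movement:
    joint: str
    target_deg: float

@dataclass
class Constraint:
    joint: str
    min_deg: float
    max_deg: float

    def conflicts_with(self, m: Movement) -> bool:
        # A movement conflicts if it drives this joint outside its limits.
        return m.joint == self.joint and not (self.min_deg <= m.target_deg <= self.max_deg)

def should_constrain(user_role: str, movement: Movement, constraints: list) -> bool:
    """Block the movement only when BOTH conditions hold: the user is a
    secondary user AND the instructed movement conflicts with a constraint."""
    conflict = any(c.conflicts_with(movement) for c in constraints)
    return user_role == "secondary" and conflict

limits = [Constraint("elbow", -90.0, 90.0)]
assert should_constrain("secondary", Movement("elbow", 120.0), limits)   # blocked
assert not should_constrain("primary", Movement("elbow", 120.0), limits)  # primary is not constrained
```

Note the asymmetry the abstract describes: the same conflicting instruction is constrained for a secondary user but not for a primary user.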
ROBOT CONTROL, TRAINING AND COLLABORATION IN AN IMMERSIVE VIRTUAL REALITY ENVIRONMENT
Systems and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
SYSTEM FOR THE VIRTUAL ASSISTANCE OF AN OPERATOR FOR WOOD PROCESSING MACHINES
A system (1) for the virtual assistance of an operator of a processing apparatus (B1, B2, B3) for processing workpieces (W), which preferably consist, at least in sections, of wood, wood-based materials, synthetic material or the like, comprising: a mobile terminal device (10) configured to be worn by the operator and having: position detection means (11) configured to detect data for determining the position of the mobile terminal device (10) in relation to the processing apparatus (B1, B2, B3); information output means (12) configured to output information and/or operational instructions to the operator; and a preferably wireless data transmission interface (13) configured to communicate with a data server (20).
SYSTEMS AND METHODS FOR ROBOTIC LEARNING OF INDUSTRIAL TASKS BASED ON HUMAN DEMONSTRATION
A system for performing industrial tasks includes a robot and a computing device. The robot includes one or more sensors that collect data corresponding to the robot and an environment surrounding the robot. The computing device includes a user interface, a processor, and a memory. The memory includes instructions that, when executed by the processor, cause the processor to receive the collected data from the robot, generate a virtual recreation of the robot and the environment surrounding the robot, and receive inputs from a human operator controlling the robot to demonstrate an industrial task. The system is configured to learn how to perform the industrial task based on the human operator's demonstration of the task, and to perform, via the robot, the industrial task autonomously or semi-autonomously.
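The record-then-execute loop described in this abstract can be sketched minimally as follows. A nearest-neighbor replay policy stands in for the patent's unspecified learning method, and the one-dimensional state and action labels are illustrative assumptions; a real system would generalize from the demonstration rather than replay it.

```python
class DemonstrationLearner:
    """Record operator-driven robot states during a teleoperated
    demonstration, then execute the task autonomously by reusing the
    recorded behavior (a 1-nearest-neighbor policy)."""

    def __init__(self):
        self.trajectory = []  # list of (sensed state, operator action)

    def record(self, sensor_state, operator_input):
        # Demonstration phase: log what the operator did in each sensed state.
        self.trajectory.append((sensor_state, operator_input))

    def act(self, sensor_state):
        # Autonomous phase: pick the action whose recorded state is
        # nearest to the current sensed state.
        nearest = min(self.trajectory, key=lambda sa: abs(sa[0] - sensor_state))
        return nearest[1]

learner = DemonstrationLearner()
for state, action in [(0.0, "open"), (1.0, "close"), (2.0, "lift")]:
    learner.record(state, action)
print(learner.act(1.2))  # close
```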
DETERMINATION OF EXTENTS OF A VIRTUAL REALITY (VR) ENVIRONMENT TO DISPLAY ON A VR DEVICE
A computer-implemented method, according to one approach, includes identifying machines involved in performance of a manufacturing process at a manufacturing location, and identifying a workflow sequence of execution of the machines. Conditions associated with remote operators using virtual reality (VR) devices to remotely control the machines to perform the workflow sequence of execution at the manufacturing location are received. The method further includes determining, for each of the VR devices, an extent of a VR collaborative environment to display. The extents are determined based on the conditions, thereby reducing latency in performance of the workflow sequence of execution at the manufacturing location. The method further includes outputting the extents to the VR devices.
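The per-device extent determination described in this abstract can be sketched as below. Using bandwidth as the condition and a one-machine neighborhood as the reduced extent are illustrative assumptions; the idea is that trimming what each VR device must render reduces latency in the shared workflow.

```python
def extent_for_device(bandwidth_mbps, workflow_machines, operator_machine):
    """Choose how much of the shared VR collaborative environment to
    display on one operator's device. Under constrained conditions,
    show only the operator's machine plus its immediate workflow
    neighbours, shrinking the scene and hence the rendering latency."""
    idx = workflow_machines.index(operator_machine)
    if bandwidth_mbps >= 100:
        return workflow_machines                         # full collaborative scene
    lo, hi = max(0, idx - 1), min(len(workflow_machines), idx + 2)
    return workflow_machines[lo:hi]                      # local slice only

workflow = ["cutter", "press", "welder", "painter"]
print(extent_for_device(20, workflow, "welder"))  # ['press', 'welder', 'painter']
```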
Interactive system control apparatus and method
A mixed reality control apparatus for a system having at least one remote data source requiring at least one physical control input, the apparatus comprising a headset for placing over a user's eyes, in use, the headset including a screen, the apparatus further including a processor configured to receive data from the at least one remote data source and display the data on the screen within a three-dimensional virtual environment, and image capture means for capturing images of the real world environment in the vicinity of the user, the processor being further configured to: blend at least portions or objects of the images of the real world environment into the three-dimensional virtual environment to create a mixed reality environment, including the data, to be displayed on the screen; and generate a virtual representation of the at least one physical control input and blend the virtual representation into the mixed reality environment at a selected location, and generate a marker representative of the selected location; wherein the apparatus is configured to receive a user input in association with the marker, identify the user input as an action in respect of the control input, or the virtual representation thereof, and generate a control signal to effect the action accordingly.
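The final step of this claim, receiving a user input in association with a marker and generating a control signal for the remote system, can be sketched as follows. The marker structure, the distance test, and the control-signal format are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Marker:
    """Marks the selected location of a virtual control input
    blended into the mixed reality environment."""
    control_id: str
    position: tuple  # (x, y, z) in scene coordinates

def handle_user_input(touch_position, markers, tolerance=0.05):
    """Identify a user gesture near a marker as an action on the
    corresponding physical control input and emit a control signal."""
    for m in markers:
        dist = sum((a - b) ** 2 for a, b in zip(touch_position, m.position)) ** 0.5
        if dist <= tolerance:
            return {"action": "activate", "control": m.control_id}
    return None  # gesture was not associated with any marker

markers = [Marker("throttle", (0.1, 0.2, 0.3))]
print(handle_user_input((0.1, 0.21, 0.3), markers))
# {'action': 'activate', 'control': 'throttle'}
```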