APPARATUS AND METHOD FOR MANIPULATING OBJECTS WITH GESTURE CONTROLS
20230122811 · 2023-04-20
Assignee
Inventors
CPC classification
G06F3/017
PHYSICS
International classification
Abstract
An apparatus for manipulating an object includes first and second gesture controllers, each operatively connected to the object and structured and programmed such that, in a first-action active state, each can cause a first action to be carried out on the object by an appropriate first-action gesture made in the gesture controller. Only one of the first and second gesture controllers at any given time is capable of being in the first-action active state, and the first-action active state is transferable between the first and second gesture controllers upon the detection of a first-action transfer gesture by one of said first gesture controller and said second gesture controller. Specific gesture control apparatus and methods for manipulating an object are also disclosed.
Claims
1-20. (canceled)
21. A system for manipulating an object comprising: a first gesture controller operated by a secondary user, the first gesture controller comprising a first sensor, the first sensor configured to produce a first sensing field, the first gesture controller operatively connected to the object, wherein the secondary user manipulates the object by performing a series of gestures in the first sensing field; and a second gesture controller operated by a primary user, the second gesture controller comprising a second sensor, the second sensor configured to produce a second sensing field, the second gesture controller operatively connected to the object, wherein the primary user monitors manipulation of the object by the secondary user in the first sensing field, the primary user configured to assume control of manipulating the object when the series of gestures deviates from a desired operation.
22. The system of claim 21, wherein the first gesture controller is in an active state when the secondary user manipulates the object and the second gesture controller is in an inactive state when the secondary user manipulates the object.
23. The system of claim 22, wherein the first gesture controller is placed in an inactive state and the second gesture controller is placed in an active state prior to the primary user assuming control over the object.
24. The system of claim 21, wherein the second gesture controller detects an acquire gesture performed by the primary user in the second sensing field.
25. The system of claim 24, wherein responsive to the second gesture controller detecting the acquire gesture in the second sensing field, the second gesture controller places the first gesture controller in an inactive state.
26. The system of claim 25, wherein the second gesture controller detects a pass gesture performed by the primary user in the second sensing field.
27. The system of claim 26, wherein responsive to the second gesture controller detecting the pass gesture performed in the second sensing field, the second gesture controller activates the first gesture controller.
28. The system of claim 21, wherein the first gesture controller and the second gesture controller cannot simultaneously manipulate the object.
29. The system of claim 21, wherein the object is a physical object.
30. The system of claim 21, wherein the object is a virtual object.
31. A method of manipulating an object, comprising: receiving, by a computing system from a trainee gesture controller of a set of gesture controllers, an indication of a first gesture within a first sensing field of the trainee gesture controller; determining, by the computing system, an operation performable on a first object corresponding to the first gesture; based on the determining, causing, by the computing system, the operation to be performed to the first object; transferring, by the computing system, control from the trainee gesture controller to a primary gesture controller in the set of gesture controllers during training of the trainee; receiving, by the computing system, a second indication of a second gesture within a second sensing field of the primary gesture controller; determining, by the computing system, that the second gesture corresponds to a second operation to be performed on a second object; and based on the determining, causing, by the computing system, the second operation to be performed to the second object.
32. The method of claim 31, wherein the first object is a virtual object and the second object is a physical object, or the first object is a physical object and the second object is a virtual object.
33. The method of claim 31, further comprising: placing, by the computing system, the trainee gesture controller in an active state before causing the operation to be performed to the first object.
34. The method of claim 33, further comprising: setting, by the computing system, the trainee gesture controller to an inactive state prior to transferring control to the primary gesture controller.
35. The method of claim 34, further comprising: setting, by the computing system, the primary gesture controller in the active state before causing the second operation to be performed to the second object.
36. The method of claim 31, wherein transferring, by the computing system, control from the trainee gesture controller to the primary gesture controller in the set of gesture controllers during training of the trainee comprises: detecting an acquire gesture performed in the second sensing field.
37. The method of claim 31, wherein transferring, by the computing system, control from the trainee gesture controller to the primary gesture controller in the set of gesture controllers during training of the trainee comprises: detecting a pass gesture performed in the first sensing field.
38. The method of claim 31, further comprising: detecting, by the computing system, a third indication of a third gesture within the first sensing field of the trainee gesture controller; determining, by the computing system, that the trainee gesture controller is in an inactive state; and based on the determining, disregarding, by the computing system, the third indication.
39. The method of claim 31, further comprising: detecting, by the computing system, a third indication of a third gesture within the first sensing field of the trainee gesture controller; determining, by the computing system, that the trainee gesture controller is in an active state; and based on the determining, processing, by the computing system, the third indication.
40. The method of claim 31, wherein the first object and the second object are the same.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031]
[0032]
[0033]
[0034]
[0035]
[0036]
[0037]
[0038]
[0039]
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0040] The present invention teaches novel ways to structure systems that employ multiple gesture controllers to manipulate an object, whether that object be virtual or physical. With reference to
[0041] A “gesture” as used herein can include static and dynamic gestures. Static gestures include making shapes or signs with a gesturing device, often a hand or other appendage. Making an open hand, making a closed fist, and making a one, two or three with the fingers are all examples of static gestures. These are often useful for programmed actions. For example, making an open hand gesture could open the grip of a machine, and making a closed fist could close the grip. Dynamic gestures involve motion. For example, twisting the hand might result in a rotation of an object or a 3D computer model thereof, with the rotation dynamically following movement of the hand.
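The static/dynamic distinction above can be sketched in Python; the gesture names, actions, and 1:1 rotation mapping here are illustrative assumptions, not details taken from the disclosure.

```python
# Illustrative sketch only: gesture names and actions are hypothetical.

STATIC_ACTIONS = {
    "open_hand": "open_grip",     # programmed action: open the machine's grip
    "closed_fist": "close_grip",  # programmed action: close the grip
}

def handle_static(gesture):
    """A static gesture (a held shape or sign) maps to a discrete, programmed action."""
    return STATIC_ACTIONS.get(gesture, "no_op")

def handle_dynamic(hand_twist_deg):
    """A dynamic gesture involves motion; here the object's rotation
    simply follows the twist of the hand."""
    return hand_twist_deg  # 1:1 mapping for illustration
```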
[0042] The gesture controllers are operatively connected to each other and to one or more objects for manipulating the objects. Computers and/or other processing units, herein “processors,” interpret gestures and communicate with apparatus to cause the manipulation. The multiple gesture controllers need not be connected to a single computer but may be connected through the web, the cloud, mobile devices, etc. The present invention teaches that, at any given time, only one, non-conflicting command can be sent to the processors/apparatus that control particular manipulations of the object. These gesture controllers are shown simply for illustrative purposes, and it should be appreciated that virtually any gesture controller can be employed in the present system. There are a multitude of gesture controllers to choose from, including, without limitation, those described in U.S. Pat. Nos. 8,106,749 and 8,316,324 and U.S. Pub. No. 2013/0182902. Gesture controllers are well known to those of skill in the art and even to the general public, for example, through such gesture control systems as Microsoft's Kinect™ system offered with its Xbox™ gaming systems.
[0043] The first gesture controller 12 and second gesture controller 14 are operatively connected by appropriate processors and/or apparatus to operate as disclosed herein, the communication being shown at arrow 13, and, in some embodiments, the state of various components and actions and other data relevant to the system 10 and its implementation to manipulate an object can be displayed at operator interface 15 associated with the first gesture controller 12 and operator interface 17 associated with the second gesture controller. To operatively connect the first gesture controller 12 to the desired object, the sensor 16 communicates with processors and/or apparatus, with the communication being represented by arrow 24 and the processors and/or apparatus being represented by apparatus A and apparatus A′. Sensor 20 communicates with processors and/or apparatus, with the communication being represented by arrow 26, and the processors and/or apparatus being represented by apparatus A′ and apparatus A″. In
[0044] With respect to the first gesture controller 12, a gesture made in the sensing field 18 of sensor 16 is detected and processed to cause a particular action either between the multiple gesture controllers 12, 14 or on the virtual object t or physical object t′. With respect to the second gesture controller 14, a gesture made in the sensing field 22 of sensor 20 is detected and processed to cause a particular action either between the multiple gesture controllers 12, 14 or on an object t, t′. Some gestures will be interpreted to cause apparatus A or apparatus A″ to effect manipulation of a virtual object t, in some embodiments, or a physical object t′, in other embodiments. The present system 10 can be employed to manipulate virtual objects t, such as icons and/or images on a computer screen, or it can be employed to physically manipulate a physical object t′. Virtual objects could include computer-drafted images and 3D models, with a 3D model being represented in
[0045] With this general understanding of the system 10, the particular structure and programming of the plurality of gesture controllers in various embodiments is next disclosed. The disclosure focuses on the first gesture controller 12, being represented in flow charts by GC1. When necessary, the second gesture controller 14 is referenced as GC2. The flow charts also focus on particular broadly defined actions, but it will be appreciated that each gesture controller in the system can have any number of actions, each action relating to a specific gesture. In some embodiments, the present invention introduces novel safety measures by ensuring that actions shared by multiple gesture controllers can only be effected by one gesture controller at a time.
[0046] Referring now to
[0047] Through a first-action gesture 200, the object, whether virtual object t or physical object t′, may be manipulated in a first type of way according to a gesture made in the sensing field. The term “first” is employed only to indicate the action is a distinct manipulation of the object t, t′. The gesture controllers 12, 14, and others when employed, can be programmed to recognize any number of second, third, etc. gestures and communicate with the apparatus A, A′, A″ so as to manipulate the object in second, third, etc. ways. Through the transfer gesture 300, the ability to manipulate the object in a particular way through a given gesture, for example the first-action gesture 200, is transferred from one gesture controller to another. Through the pause gesture 400, the ability to manipulate the object in a particular way through a given gesture, for example the first-action gesture 200, is disabled in one or more gesture controllers. More particulars will be apparent from further disclosures below.
[0048] Referring now to
[0049] In some embodiments, only one gesture controller can be in an active state for a given gesture resulting in a manipulation of the object t, t′, i.e., for any given first-action gesture, second-action gesture, third-action gesture, etc., at any given time. Thus, though multiple gesture controllers might be programmed similarly to carry out various actions, it is important, in some embodiments, that two different gesture controllers cannot control a particular action at the same time. For instance, it may be important to ensure that a first operator of a first gesture controller 12 cannot attempt an action that conflicts with an action taken by an operator of a second gesture controller 14. This safety feature will be found very important as gesture-controlled systems become more commonly implemented. Notably, for purposes of this invention, there may be certain actions that two or more controllers can carry out without being required to be in the active state; that is, there may be some manipulations of the object that are acceptable for always being controllable by two or more gesture controllers. For purposes of this invention, however, the system has at least one action that can only be carried out by one gesture controller at a time, thus requiring determination of an “active state” for the gesture that causes that action, as disclosed above. In some embodiments, every action might require an active state.
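The exclusivity rule above can be sketched as a small arbiter. This is a minimal illustration, assuming a per-action registry; the class and method names are hypothetical and do not appear in the disclosure.

```python
class ActionArbiter:
    """Sketch of the safety rule above: at most one gesture controller
    holds the active state for any given action at a time."""

    def __init__(self):
        self._active = {}  # action name -> id of the controller holding it

    def activate(self, action, controller):
        """Grant the active state unless a different controller holds it."""
        holder = self._active.get(action)
        if holder is not None and holder != controller:
            return False  # conflicting command refused
        self._active[action] = controller
        return True

    def holder(self, action):
        return self._active.get(action)
```

Under this sketch, a second controller's request for the same action is simply refused until the active state is transferred or released.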
[0050] Referring now to
[0051] The pass gesture allows one to pass the active state for a given action to another controller. For example the first-action active state can be passed from a first gesture controller 12 to a second gesture controller 14. This ability might always be available, or, as shown in
[0052] The inactive state as discussed herein for various functions can be achieved in a number of ways, and the present invention is not limited to or by any particular apparatus or programming in this regard. The apparatus A, A′, A″ might remain engaged with the object t, t′ but be unable to receive commands from the gesture controller and related processors to cause motion. Other methods to achieve the inactive state include disengaging the signal from the apparatus to the object, disengaging the signal from any processors to the apparatus, and disengaging the signal from the gesture controller sensor to a processor. Combinations of these methods may also be employed. Logic relays, solid-state relays or other components not shown but commonly known can be added to the gesture controllers to permit the implementation of the inactive state.
[0053] In some embodiments, as shown at 308, a determination must first be made as to whether or not the second gesture controller 14 (GC2) is permitted to and is ready to accept the first-action active state being passed to that controller. If it is not permitted, the transfer will be denied. A notice could be made to the individual attempting the pass gesture, as, for example, at operator interface 15, shown as an informative screen in
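The pass flow above, including the permission/readiness determination at 308, can be sketched as follows. The function and state names are hypothetical, and the single dictionary stands in for whatever processors track the active state.

```python
def pass_active_state(active, action, source, target, target_ready):
    """Sketch of the pass-gesture flow: the holder of an action's active
    state may pass it to another controller, subject to the readiness
    determination corresponding to 308. Names are hypothetical."""
    if active.get(action) != source:
        return "denied"        # only the current holder may pass
    if not target_ready:
        return "denied"        # transfer refused; the operator can be notified
    active[action] = target    # source drops to the inactive state
    return "passed"
```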
[0054] The acquire gesture is now addressed. The acquire gesture allows one to acquire the active state for a given action from another gesture controller. Here, a first-action gesture is the focus. This ability might always be available, or, as shown in
[0055] The acquire gesture and its implementation can have wide application in education and training, allowing a mentor at one gesture controller to monitor the manipulation of the object as carried out by one or more trainees at separate and distinct gesture controllers. The mentor can take possession of certain actions if needed to assist in education or training or for safety reasons.
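The mentor/trainee scenario above suggests the following sketch of the acquire gesture. The names are hypothetical; the key point is that acquisition does not require the current holder's cooperation.

```python
def acquire_active_state(active, action, acquirer):
    """Sketch of the acquire gesture: a controller (e.g. a mentor's) takes
    the active state for an action from whichever controller currently
    holds it, which then becomes inactive. Names are hypothetical."""
    previous = active.get(action)  # e.g. a trainee's gesture controller
    active[action] = acquirer
    return previous
```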
[0056] Referring now to
[0057] As shown in
[0058] In some embodiments, the paused action can be reinstated. In some embodiments, any gesture controller can be used to restart, while in other embodiments, only specifically dedicated gesture controllers can be used to restart. In specific embodiments, only the gesture controller that was employed to pause the action can be used to restart the action, i.e., the restart gesture must be made in the sensing field of the same gesture controller in which the pause gesture was made. As at 406, the first gesture controller 12 awaits a restart gesture for the first-action gesture. If the restart gesture is detected in the sensing field 18 of the first gesture controller 12, one or more gesture controllers are placed in the first-action active state, as at 408. In some embodiments, only the gesture controller in which the restart gesture is made is placed in the active state.
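The pause/restart behavior, in the specific embodiment where only the pausing controller may restart, can be sketched as follows. The state layout and names are assumptions for illustration.

```python
def pause_action(state, action, controller):
    """Sketch of the pause gesture: the action is disabled in all
    controllers and the pausing controller is recorded."""
    state["paused_by"][action] = controller
    state["active"].pop(action, None)

def restart_action(state, action, controller):
    """In the specific embodiment described above, only the controller
    that paused the action may restart it, and only that controller is
    returned to the active state."""
    if state["paused_by"].get(action) != controller:
        return False
    del state["paused_by"][action]
    state["active"][action] = controller
    return True
```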
[0059] Another embodiment of this invention provides safety measures and transfer gesture capabilities based on the sensing field dimensions. More particularly, removing a gesturing object from the boundaries of the sensing field can cause various desired actions. Herein, simply for convenience, reference will be made to a hand making gestures. With reference to
[0060] In some embodiments, when a gesture is made touching upon or leaving the boundaries of the sensing field of a gesture controller, the action being performed is stopped. For example, if movement of the hand to the right within the sensing field causes movement of the object t, t′ to the right, the hand touching upon or leaving the boundaries of the sensing field will result in a stopping of the object movement to the right. In some such embodiments, the gesture controller is placed in the inactive state with respect to a given action when a gesture is made touching upon or leaving the boundaries of the sensing field. In some embodiments, the gesture must fully leave the boundaries of the sensing field (in the example shown, the entire hand would have to leave the boundaries). For the purposes of this disclosure, the above concept will be referred to as a “boundary switch”. Other ways may be possible to achieve a boundary switch, but the concept of said switch is unique to the safe function of the invention.

[0061] In some embodiments, when the pause gesture disclosed above with respect to
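The boundary switch described above can be sketched in a few lines. The box-shaped field, the coordinate representation, and the tracked hand points are all assumptions for illustration.

```python
def fully_inside(hand_points, lo, hi):
    """True only while every tracked point of the hand is strictly within
    the box-shaped sensing field bounded by lo and hi on each axis
    (the box model and coordinates are illustrative assumptions)."""
    return all(lo[i] < p[i] < hi[i] for p in hand_points for i in range(3))

def boundary_switch(hand_points, lo, hi, moving):
    """Sketch of the boundary switch: a gesture touching upon or leaving
    the field's boundaries stops the motion and, in some embodiments,
    places the controller in the inactive state for that action."""
    if moving and not fully_inside(hand_points, lo, hi):
        return "stopped_inactive"
    return "moving" if moving else "idle"
```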
[0061] In some embodiments, the pass gesture disclosed above with respect to
[0062] In one embodiment, a removal gesture across a right boundary causes a passing of the active state to a particular gesture controller, a removal gesture across a left boundary causes passing of the active state to a separate and distinct gesture controller, and removal across the near boundary (e.g., boundary plane afg) causes the gesture-based action being performed to stop.
[0063] In other embodiments, any boundary (not just the near boundary) could be used to stop a desired action being performed.
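The boundary-to-action mapping described in the embodiment above can be sketched as a simple lookup; the boundary labels and controller names are hypothetical.

```python
def removal_action(exit_boundary):
    """Sketch of the embodiment above: which boundary the hand is
    removed across selects the resulting action. The specific mapping
    is an illustrative assumption."""
    return {
        "right": "pass_active_state_to_GC2",  # pass to one controller
        "left": "pass_active_state_to_GC3",   # pass to a distinct controller
        "near": "stop_action",                # e.g. boundary plane afg
    }.get(exit_boundary, "no_op")
```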
[0064] Referring now to
[0065] In
[0066] It can be appreciated that movement of the hand is limited by the boundaries of the sensing area, and different movement systems might benefit from the ability to create a home position. If the object is moved from its current position only when the hand moves, the distance the object can move to the right is limited by the right boundary because the operator cannot leave the boundary and still effect movement. Thus, in
[0067] Thus, in some embodiments, the present invention provides a gesture control system for manipulating an object wherein a gesture controller is operatively connected to the object and structured and programmed such that the object is moved upon movement of a gesturing object in the gesture controller relative to a home position, and wherein a new home position is established in the gesture controller when a home gesture is detected by the gesture controller.
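The home-position mechanism above can be sketched in one dimension. The state layout and names are hypothetical; the sketch assumes the home gesture is made at the hand's new location, so the object holds position while the hand is repositioned.

```python
def track(state, hand_pos, gesture=None):
    """Sketch of home-position motion (1-D for simplicity): the object
    follows the hand relative to the current home position, and a home
    gesture re-zeroes that reference so travel is not limited by the
    sensing-field boundary. Names are illustrative assumptions."""
    if gesture == "home":
        state["home"] = hand_pos     # new home; object holds its position
        state["base"] = state["obj"]
        return state["obj"]
    state["obj"] = state["base"] + (hand_pos - state["home"])
    return state["obj"]
```

Re-homing near a boundary lets the operator return the hand toward the center of the field and continue moving the object in the same direction.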
[0068] In some embodiments, as seen in
[0069] Aspects of the invention relating to transferring the active state between multiple controllers have broad application in any gesture control system employing multiple controllers. Aspects of motion as disclosed with respect to the boundary switch and the creation of new home positions for motion have application in gesture control systems employing one or more gesture controllers.
[0070] In particular embodiments, one or more aspects of the present invention are implemented in a microscopy assembly having a microscope and a specimen on a translation stage. A translation stage serves to move the specimen under the microscope to change the portion of the specimen that is imaged thereby. The movement can be in x, y and/or z directions. In some embodiments, the specimen and/or portions of the microscope can be manipulated to image the desired portion. The microscope could be movable relative to the specimen, or a combination of movement of the microscope and movement of the specimen could be practiced. General aspects of using gesture controls in microscopy are shown in patent DE 102012220195.
[0071] In light of the foregoing, it should be appreciated that the present invention significantly advances the art of gesture control systems. While particular embodiments of the invention have been disclosed in detail herein, it should be appreciated that the invention is not limited to or by any specific example, as variations on the invention herein will be readily appreciated by those of ordinary skill in the art. The scope of the invention shall be appreciated from the claims that follow.