Generating and utilizing spatial affordances for an object in robotics applications
10853646 · 2020-12-01
Assignee
Inventors
- Adrian Li (San Francisco, CA, US)
- Nicolas Hudson (San Mateo, CA, US)
- Aaron Edsinger (Port Costa, CA, US)
CPC classification
B25J9/1633
PERFORMING OPERATIONS; TRANSPORTING
B25J9/1661
PERFORMING OPERATIONS; TRANSPORTING
B25J9/161
PERFORMING OPERATIONS; TRANSPORTING
G06V10/7788
PHYSICS
Y10S901/47
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
B25J9/1669
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
Methods, apparatus, systems, and computer-readable media are provided for generating spatial affordances for an object, in an environment of a robot, and utilizing the generated spatial affordances in one or more robotics applications directed to the object. Various implementations relate to applying vision data as input to a trained machine learning model, processing the vision data using the trained machine learning model to generate output defining one or more spatial affordances for an object captured by the vision data, and controlling one or more actuators of a robot based on the generated output. Various implementations additionally or alternatively relate to training such a machine learning model.
Claims
1. A method implemented by one or more processors of a robot, the method comprising: receiving vision data, the vision data generated based on output from one or more vision sensors of a vision component viewing an object in an environment of a robot; applying the vision data as input to at least one trained neural network model; processing the vision data using the trained neural network model to generate output defining multiple spatial affordances for the object in the environment, wherein processing of the vision data is based on trained parameters of the trained neural network model, wherein the output is generated directly by the processing of the vision data using the trained neural network model, and wherein the multiple spatial affordances defined by the output include a first spatial affordance for a first spatial region of the object and a second spatial affordance for a second spatial region of the object; determining that the first spatial affordance is a target affordance for the object; and based on the first spatial affordance being the target affordance for the object and being defined for the first spatial region of the object: controlling one or more actuators of the robot to cause one or more components of the robot to interact with the first spatial region of the object to perform the first spatial affordance through interaction of the one or more components with the first spatial region of the object.
2. The method of claim 1, wherein the first spatial affordance further defines an affordance application parameter for the first spatial affordance for the first spatial region.
3. The method of claim 2, wherein the affordance application parameter defines at least one magnitude of force to be applied in performance of the first spatial affordance for the first spatial region.
4. The method of claim 2, wherein the affordance application parameter defines at least one direction of force to be applied in performance of the first spatial affordance for the first spatial region.
5. The method of claim 1, wherein the first spatial affordance defines a collection of affordances.
6. The method of claim 5, wherein the collection of affordances is an ordered collection of affordances.
7. The method of claim 1, wherein the output defines the first spatial affordance for the first spatial region based on the output including a probability, that corresponds to the first spatial region and to the first spatial affordance, satisfying a first threshold.
8. The method of claim 1, wherein the vision data comprises a plurality of pixels or voxels, and wherein the first spatial region defines a first pixel or first voxel of the plurality of pixels or voxels and the second spatial region defines a second pixel or voxel of the plurality of pixels or voxels.
9. The method of claim 8, wherein the first spatial region defines only the first pixel or voxel and the second spatial region defines only the second pixel or voxel.
10. The method of claim 8, wherein the first spatial region defines a first collection of contiguous pixels or voxels that include the first pixel or voxel, and wherein the second spatial region defines a second collection of contiguous pixels or voxels that include the second pixel or voxel and that exclude the first pixel or voxel.
11. A robot, comprising: a vision component that comprises one or more vision sensors; actuators; an end effector; memory storing at least one trained machine learning model; one or more processors configured to: receive vision data generated by the vision component, the vision data capturing an object in an environment of the robot; apply the vision data as input to the at least one trained machine learning model; process the vision data using the trained machine learning model to generate output defining multiple spatial affordances for the object in the environment, wherein processing of the vision data is based on trained parameters of the trained machine learning model, wherein the output is generated directly by the processing of the vision data using the trained machine learning model, and wherein the multiple spatial affordances defined by the output include a first spatial affordance for a first spatial region of the object and a second spatial affordance for a second spatial region of the object; determine that the first spatial affordance is a target affordance for the object; and in response to the first spatial affordance being the target affordance for the object and being defined for the first spatial region of the object: control one or more of the actuators to cause performance of the first spatial affordance through interaction with the first spatial region of the object.
12. The robot of claim 11, wherein the vision data comprises a plurality of pixels or voxels, and wherein the first spatial region defines a first pixel or first voxel of the plurality of pixels or voxels and the second spatial region defines a second pixel or voxel of the plurality of pixels or voxels.
13. The robot of claim 12, wherein the first spatial region defines only the first pixel or voxel and the second spatial region defines only the second pixel or voxel.
14. The robot of claim 12, wherein the first spatial region defines a first collection of contiguous pixels or voxels that include the first pixel or voxel, and wherein the second spatial region defines a second collection of contiguous pixels or voxels that include the second pixel or voxel and that exclude the first pixel or voxel.
15. The robot of claim 11, wherein the first spatial affordance further defines an affordance application parameter for the first spatial affordance for the first spatial region.
16. The robot of claim 15, wherein the affordance application parameter defines at least one magnitude of force, and at least one direction of force, to be applied in performance of the first spatial affordance for the first spatial region.
17. The robot of claim 15, wherein the affordance application parameter defines at least one direction of force to be applied in performance of the first spatial affordance for the first spatial region.
18. The robot of claim 11, wherein the first spatial affordance defines an ordered collection of affordances.
19. The robot of claim 11, wherein the output defines the first spatial affordance for the first spatial region based on the output including a probability, that corresponds to the first spatial region and to the first spatial affordance, satisfying a first threshold.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(12) In some implementations, processor(s) 102 may be operably coupled with one or more actuators 104.sub.1-n, at least one end effector 106, and/or one or more sensors 108.sub.1-m, e.g., via one or more buses 110. The robot 100 may have multiple degrees of freedom and each of the actuators 104.sub.1-n may control actuation of the robot 100 within one or more of the degrees of freedom responsive to control commands. The control commands are generated by one or more of the processor(s) 102 and provided to the actuators 104.sub.1-n (e.g., via one or more of the buses 110) to control the robot 100. As described herein, various control commands can be generated based on spatial affordances determined according to techniques described herein. As used herein, actuator encompasses a mechanical or electrical device that creates motion (e.g., a motor), in addition to any driver(s) that may be associated with the actuator and that translate received control commands into one or more signals for driving the actuator. Accordingly, providing a control command to an actuator may comprise providing the control command to a driver that translates the control command into appropriate signals for driving an electrical or mechanical device to create desired motion.
(13) As used herein, "end effector" may refer to a variety of tools that may be operated by robot 100 in order to accomplish various tasks. For example, some robots may be equipped with an end effector 106 that takes the form of a claw with two opposing fingers or digits. Such a claw is one type of gripper known as an impactive gripper. Other types of grippers may include but are not limited to ingressive (e.g., physically penetrating an object using pins, needles, etc.), astrictive (e.g., using suction or vacuum to pick up an object), or contigutive (e.g., using surface tension, freezing, or adhesive to pick up an object). More generally, other types of end effectors may include but are not limited to drills, brushes, force-torque sensors, cutting tools, deburring tools, welding torches, containers, trays, and so forth. In some implementations, end effector 106 may be removable, and various types of modular end effectors may be installed onto robot 100, depending on the circumstances.
(14) Sensors 108.sub.1-m may take various forms, including but not limited to vision components (e.g., laser scanners, stereographic cameras, monographic cameras), force sensors, pressure sensors, pressure wave sensors (e.g., microphones), proximity sensors (also referred to as distance sensors), torque sensors, bar code readers, radio frequency identification (RFID) readers, accelerometers, gyroscopes, compasses, position sensors (e.g., odometer, a global positioning system), speedometers, edge detectors, and so forth. While sensors 108.sub.1-m are depicted as being integral with robot 100, this is not meant to be limiting. In some implementations, sensors 108.sub.1-m may be located external to, but may be in direct or indirect communication with, robot 100.
(15) Also illustrated in
(16) The robot 100A also includes a vision component 108A. The vision component 108A includes one or more vision sensors and may be, for example, a stereographic camera, a monographic camera, or a laser scanner. Vision data described herein can be generated based on output from vision sensor(s) of the vision component 108A. For example, the output can be raw output from the vision sensor(s), or processed output. In some implementations, a stereographic camera includes two or more sensors (e.g., charge-coupled devices (CCDs)), each at a different vantage point. Vision data can be generated based on sensor data generated by the two sensors at a given instance, such as vision data that is a two-and-a-half-dimensional (2.5D) (2D with depth) image, where each of the pixels of the 2.5D image defines an X, Y, and Z coordinate of a surface of a corresponding object, and optionally color values (e.g., R, G, B values) and/or other parameters for that coordinate of the surface. In some other implementations, a stereographic camera may include only a single sensor and one or more mirrors utilized to effectively capture sensor data from two different vantage points. A monographic camera includes a single sensor and captures two-dimensional (2D) vision data. A laser scanner includes one or more lasers that emit light and one or more sensors that generate vision sensor data related to reflections of the emitted light. The vision data generated based on sensor output from a laser scanner may be 2.5D point cloud data. A laser scanner may be, for example, a time-of-flight laser scanner or a triangulation-based laser scanner and may include a position sensitive detector (PSD) or other optical position sensor. In some implementations, vision data can be a voxel map as described herein. In some of those implementations, the voxel map is generated by processing of multiple instances of vision data. For example, multiple 2.5D images and/or multiple 2.5D instances of point cloud data from multiple different vantages can be processed to generate a voxel map of at least a portion of an environment of a robot.
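As an illustrative sketch (not part of the patent), a pixel of a 2.5D image can be mapped to an X, Y, Z surface coordinate with a standard pinhole camera model; the intrinsic parameters below are hypothetical values, not values from the disclosure:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def deproject(u, v, depth):
    """Map a pixel (u, v) with a depth reading to an (X, Y, Z) point."""
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# A point at the principal point, 1 meter away, lies on the optical axis.
point = deproject(319.5, 239.5, 1.0)
```

Deprojecting every pixel this way yields the per-pixel X, Y, Z surface coordinates described above, which can then be accumulated into a point cloud or voxel map.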
(17) As described herein, robot 100A may operate autonomously at least part of the time and control actuators thereof in performance of various actions. For example, in performing various actions, one or more processors of the robot 100A may provide control commands to actuators associated with the wheels 107A1 and/or 107A2, the robot arm 105A and/or the end effector 106A. Further, in various situations the control commands provided at a given instance can be generated based at least in part on spatial affordances as described herein.
(18) Control system 150 is also illustrated in
(19) The spatial affordances engine 152 receives vision data that is generated based on output from one or more vision components of the sensors 108.sub.1-m. The spatial affordances engine 152 uses one or more trained neural network models 160T to process the received vision data to generate output that defines spatial affordances for one or more objects captured by the vision data. For example, the received vision data can be a 3D voxel map and the spatial affordances engine 152 can process the 3D voxel map using the trained neural network model 160T to generate output that defines multiple spatial affordances. The multiple spatial affordances can each define at least: a corresponding spatial region that corresponds to a portion of the voxel map (e.g., to a single voxel, or to a collection of contiguous voxels), and a corresponding affordance for the corresponding spatial region. In some implementations, an affordance of a spatial affordance is indicated by a probability, in the output, that corresponds to the spatial region and to the affordance. In some of those implementations, an affordance can be considered to be defined for a spatial affordance if the probability satisfies a threshold (e.g., a probability of greater than 0.3 where probabilities range from 0 to 1.0). As another example, the received vision data can be a 2D or 2.5D image, and the spatial affordances engine 152 can process the 2D or 2.5D image using one or more of the trained neural network models 160T to generate output that defines multiple spatial affordances. The multiple spatial affordances can each define at least: a corresponding spatial region that corresponds to a portion of the 2D or 2.5D image (e.g., to a single pixel, or to a collection of contiguous pixels), and a corresponding affordance for the corresponding spatial region.
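The thresholding described above can be sketched as follows; the affordance names, the (H, W, A) array layout, and the 0.3 cutoff are illustrative assumptions layered on the text's example, not the patent's implementation:

```python
import numpy as np

# Illustrative affordance types, one output channel per type.
AFFORDANCES = ["push", "lift", "rotate", "open"]
THRESHOLD = 0.3  # a probability above this defines an affordance

def extract_spatial_affordances(prob_map):
    """Return (row, col, affordance, probability) tuples for every
    per-pixel probability that satisfies the threshold."""
    hits = []
    rows, cols, chans = np.nonzero(prob_map > THRESHOLD)
    for r, c, a in zip(rows, cols, chans):
        hits.append((int(r), int(c), AFFORDANCES[a], float(prob_map[r, c, a])))
    return hits

prob_map = np.zeros((2, 2, 4))
prob_map[0, 1, 0] = 0.9   # strong "push" at pixel (0, 1)
prob_map[1, 0, 3] = 0.2   # weak "open": below threshold, ignored
affs = extract_spatial_affordances(prob_map)
```

Here each surviving tuple plays the role of a spatial affordance with a single-pixel spatial region; grouping contiguous hits would give region-level spatial affordances instead.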
(20) The selection engine 154 selects a spatial affordance generated by the spatial affordances engine. The selection engine 154 can select a spatial affordance based on conformance of the selected spatial affordance to a target affordance, and/or based on other criteria such as other parameter(s) of the selected spatial affordance and/or task criteria.
(21) A target affordance is an affordance to be performed by the robot. In some implementations, the target affordance is generated by one or more other components of the control system 150, such as a task planner, and/or is generated based on user interface input. For example, in performance of a "clean the dining table" task by a robot, a task planner can dictate that a "pick up" affordance needs to be performed for all items on a dining table. Accordingly, the target affordance can be a "pick up" affordance. The selection engine 154 can then select one or more spatial affordances based on those spatial affordances defining a "pick up" affordance and optionally based on those spatial affordances defining a spatial region that is atop or otherwise close to an area classified as a dining table (e.g., classified by a separate object detection and localization engine). Additional or alternative criteria can optionally be utilized by the selection engine 154 in selecting the spatial affordances. For example, a spatial affordance can be selected based on a probability defined by the spatial affordance satisfying a threshold (e.g., a set threshold or a threshold relative to other spatial affordance(s)). Also, for example, a spatial affordance can be selected based on affordance application parameter(s) defined by the spatial affordance being achievable by the robot (e.g., a magnitude of force indicated by an affordance application parameter actually being achievable by the robot). For instance, the selection engine may not choose a candidate spatial affordance based on that candidate spatial affordance defining an affordance application parameter that is not achievable by the robot (e.g., the robot is incapable of applying a magnitude of force indicated by the affordance application parameter).
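A minimal sketch of the selection criteria just described (matching the target affordance, a probability threshold, and robot-achievable force); the candidate dictionary format and the force limit are hypothetical, not from the patent:

```python
# Hypothetical maximum force the robot's actuators can apply.
MAX_ROBOT_FORCE_N = 40.0

def select(candidates, target_affordance, min_prob=0.3):
    """Return the most probable candidate spatial affordance that matches
    the target affordance and is achievable by the robot, or None."""
    viable = [
        c for c in candidates
        if c["affordance"] == target_affordance
        and c["probability"] >= min_prob
        and c.get("force_n", 0.0) <= MAX_ROBOT_FORCE_N
    ]
    return max(viable, key=lambda c: c["probability"], default=None)

candidates = [
    {"affordance": "pick up", "probability": 0.8, "force_n": 10.0},
    {"affordance": "pick up", "probability": 0.9, "force_n": 90.0},  # force not achievable
    {"affordance": "push", "probability": 0.95, "force_n": 5.0},     # wrong affordance
]
best = select(candidates, "pick up")
```

The 0.9-probability candidate is rejected because its affordance application parameter exceeds the robot's force limit, leaving the 0.8-probability candidate as the selection.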
(22) As another example, user interface input provided by a user can directly indicate a target affordance, or can be processed to determine the target affordance. The user interface input can be, for example, spoken user interface input provided via a microphone, of the sensors 108.sub.1-m of the robot 100, and/or provided via a separate client computing device and transmitted to the control system 150 (optionally after pre-processing by the client computing device and/or other computing devices). For example, a user can provide spoken input of "open the drawer", and a target affordance of "open" can be determined based on the spoken input. The selection engine 154 can then select one or more spatial affordances based on those spatial affordances defining an "open" affordance and optionally based on those spatial affordances defining a spatial region that is on or otherwise close to an area classified as a drawer (e.g., classified by a separate object detection and localization engine).
(23) The implementation engine 156 uses the selected spatial affordance to control actuators 104 of the robot 100. For example, the implementation engine 156 can control actuators 104 of the robot 100 to cause the actuators 104 to interact with all or portions of the spatial region defined by the selected spatial affordance in performing the affordance, and/or to perform the affordance based on an affordance application parameter defined by the selected spatial affordance. As one example, for a push affordance, the implementation engine 156 can determine a target area for the push and/or an approach vector for the push based on the spatial region defined by the selected spatial affordance. For instance, the implementation engine 156 can determine an approach vector for the push affordance based on one or more surface normals for the spatial region. A surface normal can be determined, for example, based on one or more vision data points that correspond to the spatial region.
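One way to derive an approach vector from a spatial region's surface normals, as the paragraph above describes, is to fit a plane to the region's 3D vision data points and approach against the outward normal. This sketch uses a least-squares plane fit via SVD; it is an assumed method for illustration, not the patent's stated one:

```python
import numpy as np

def approach_vector(points):
    """points: (N, 3) array of vision data points for the spatial region.
    Returns a unit vector pointing into the surface (along -normal)."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value (the direction of least variance).
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    if normal[2] > 0:          # orient the normal toward the camera (-Z)
        normal = -normal
    return -normal             # approach opposite the outward normal

# Points on the z = 1 plane: the approach vector points along +Z.
pts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
vec = approach_vector(pts)
```

A motion planner could then use this vector as the end-effector approach direction for the push, with a point of the spatial region as the target point.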
(24) In various implementations, the implementation engine 156 can include, and/or be in communication with, one or more action planners such as a path planner, a motion planner, a grasp planner, etc. In some of those implementations, the action planners can utilize the spatial region and/or other parameters of the spatial affordance as constraint(s) in planning a corresponding action. For example, a point of the spatial region can be utilized as a target point for an action planner and/or a direction and/or magnitude defined by the affordance application parameter can be utilized as a target direction and/or target magnitude for interaction with the target point.
(25) The implementation engine 156 can provide control commands to one or more actuators of the robot to effectuate planned actions. For example, the implementation engine 156 can include a real-time module that generates real-time control commands to provide to actuators 104 of the robot 100 to effectuate one or more actions that have been determined based on selected spatial affordance(s).
(26) Also illustrated in the environment of
(27) With reference to
(29) The training instance 167A also includes a training instance output 682, that defines one or more spatial affordances for one or more objects captured by the vision data of the training instance input 681. One example of training instance output 682 is illustrated by training instance output 682A of
(30) As one example, assume that for Region.sub.1, Affordance.sub.1 is the only affordance achievable, and has an affordance application parameter that is a magnitude of force of 15 Newtons. In such an example, the training instance output for Region.sub.1 can include a vector of values of <1.0, 15, 0, 0, 0, . . . 0>, where 1.0 and 15 are the only non-zero values, and 1.0 represents a maximum probability, and 15 represents the force in Newtons. The remaining zero values of the vector represent that the corresponding affordances are not defined for Region.sub.1 in the training instance output 682A. Thus, in such an example, the first position in the vector of values corresponds to the Affordance.sub.1 Probability, the second position corresponds to the Affordance.sub.1 Application Parameter, the third position corresponds to the Affordance.sub.2 Probability, the fourth position corresponds to the Affordance.sub.2 Application Parameter, and so forth. For instance, if the N in Affordance.sub.N were equal to 10, then probabilities and affordance application parameters for 10 affordances would be described by the vector, and the vector would have a dimension of 20 (a probability and an affordance application parameter for each of the 10 affordances). It is noted that in such an example, each affordance is indicated by its corresponding probability, and its corresponding probability's position in the vector of values. Further, the corresponding probability defines whether the affordance is present in the spatial region (e.g., whether a spatial affordance is included in the training example that defines the affordance for the spatial region).
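The interleaved (probability, application parameter) layout in the example above can be decoded as sketched below; the 0.3 threshold and 1-based affordance numbering are assumptions for illustration:

```python
N_AFFORDANCES = 10  # the example's N = 10, giving a 20-dimensional vector

def decode(vector, threshold=0.3):
    """Return {affordance_number: (probability, application_parameter)}
    for affordances whose probability satisfies the threshold."""
    assert len(vector) == 2 * N_AFFORDANCES
    out = {}
    for i in range(N_AFFORDANCES):
        prob, param = vector[2 * i], vector[2 * i + 1]
        if prob >= threshold:
            out[i + 1] = (prob, param)  # 1-based affordance numbering
    return out

# The example above: Affordance 1 with probability 1.0 and 15 N of force,
# all other affordances zeroed out.
vector = [1.0, 15.0] + [0.0] * 18
decoded = decode(vector)
```

Decoding recovers exactly one spatial affordance for the region: Affordance 1 with probability 1.0 and a 15 N application parameter.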
(31) In some implementations, each of the spatial regions is defined by its dimensional position in the training instance output 682. For example, where the training instance output 682A defines a vector of values for each of the spatial regions, a first spatial region can correspond to the first vector of values, a second spatial region can correspond to the second vector of values, and so forth. In some of those and other implementations, each spatial region corresponds to a single pixel or a single voxel of the corresponding training instance input. For example, if the training instance input has a dimension of 256×256, with 4 channels, the training instance output 682A can have a dimension of 256×256, with 20 channels, where each 1×1×20 stack is a vector of values describing the affordances (e.g., via the affordance probabilities), affordance probabilities, and affordance application parameters for a corresponding single pixel or voxel. In some other implementations, each spatial region corresponds to a collection of pixels or voxels of the corresponding training instance input. For example, if the training instance input has a dimension of 256×256×4, the training instance output 682A can have a dimension of 64×64×20, where each 1×1×20 stack is a vector of values describing the affordances, affordance probabilities, and affordance application parameters for 4 contiguous pixels or voxels.
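The tensor shapes in the examples above can be made concrete with numpy arrays standing in for model input and output (the sizes are the example's; the arrays themselves are illustrative):

```python
import numpy as np

# 256x256 training instance input with 4 channels (e.g., depth + R, G, B).
train_input = np.zeros((256, 256, 4))

# Per-pixel variant: one 20-value vector for every input pixel.
per_pixel_output = np.zeros((256, 256, 20))

# Per-region variant: one 20-value vector per block of contiguous pixels.
per_region_output = np.zeros((64, 64, 20))

# Each 1x1x20 stack along the last axis is the vector of probabilities
# and application parameters for one spatial region.
region_vector = per_region_output[0, 0, :]
```

Indexing the last axis of either output recovers the interleaved probability/parameter vector for the spatial region at that dimensional position.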
(32) Turning again to
(33) It is noted that in some implementations, the neural network model 160 is trained to predict, for each of a plurality of spatial regions of vision data, multiple probabilities that each indicate whether a corresponding one of multiple disparate affordances is achievable for the spatial region. For example, the neural network model 160 can be trained to predict, based on applied vision data, a first probability that a given spatial region has a push affordance, a second probability that the given spatial region has a lift affordance, a third probability that the given spatial region has a rotate affordance, a fourth probability that the given spatial region has an open affordance, etc. However, in some other implementations multiple neural network models may be trained and subsequently utilized in combination, with each being trained for only a subset of affordances (e.g., one or more being trained for only a single affordance).
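Independent per-affordance probabilities of this kind are commonly produced with per-channel sigmoid outputs rather than a softmax, so that several disparate affordances can simultaneously be predicted achievable for one region. A sketch (illustrative, not the patent's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical logits for one spatial region over [push, lift, rotate, open].
logits = np.array([2.0, -1.0, 0.5, -3.0])
probs = sigmoid(logits)

# More than one affordance can pass the threshold at once, since each
# probability is computed independently of the others.
achievable = probs > 0.3
```

With a softmax the four probabilities would compete and sum to one; with sigmoids, both "push" and "rotate" can exceed the threshold here.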
(35) Turning now to
(36) The door 180 is illustrated in
(37) The affordances indicated by
(38) In
(39) The diagonal line shading 184C indicates those areas of the door 180 for which a "close by push" affordance can be defined, and indicates a magnitude of 5 N for the "close by push" affordance. The shading 184C further indicates that an "open by rotate, then pull" affordance can also be defined, and indicates a magnitude of 5 N for the "rotate" portion of the affordance, and a magnitude of 20 N for the "pull" portion of the affordance.
(40) The affordances and corresponding magnitudes indicated by
(41) A neural network model trained based on such training examples with affordances and corresponding magnitudes can enable prediction, on a spatial region by spatial region basis, of affordance(s) and corresponding magnitudes for each of a plurality of spatial regions. For example, output generated based on vision sensor data that captures a new unique door can indicate, for each of a plurality of spatial regions of the new door, a probability that Affordance 1 is achievable for the spatial region, a predicted magnitude for Affordance 1 for the spatial region, a probability that Affordance 2 is achievable for the spatial region, a predicted magnitude for Affordance 2 for the spatial region, and optionally probabilities for additional affordances for the spatial region and corresponding magnitudes for the spatial region. Although particular magnitudes of force are illustrated in
(42) In
(43) The diagonal line shading 184D indicates those areas of the door 180 for which a "close by push" affordance can be defined, and indicates a probability of 0.4 for the "close by push" affordance. The shading 184D further indicates that an "open by rotate, then pull" affordance can also be defined, and indicates a probability of 1.0 for the "open by rotate, then pull" affordance.
(44) The affordances and corresponding probabilities indicated by
(45) Turning now to
(46) The cup 190 is illustrated in
(47) The affordances indicated by
(48) Referring now to
(49) At block 502, the system receives vision data. The vision data can be generated based on output from one or more vision sensors of a vision component of a robot, and captures an object in the environment of the robot.
(50) At block 504, the system applies the vision data as input to a trained machine learning model. For example, the system can apply the vision data as input to a trained neural network model, such as trained neural network model 160T of
(51) At block 506, the system processes the vision data using the trained machine learning model to generate output defining spatial affordance(s) for an object captured by the vision data. The spatial affordances can each define: a spatial region of the object, an affordance for the spatial region, and optionally: an affordance application parameter for the affordance for the spatial region, and/or a probability for the affordance for the spatial region.
(52) At block 508, the system identifies a target affordance. In some implementations, the target affordance is generated by one or more other components of the system, such as a task planner, and/or is generated based on user interface input.
(53) At block 510, the system selects a generated spatial affordance based on the target affordance and/or other parameters. The other parameters based on which the system selects a generated spatial affordance can include, for example, a probability defined by the spatial affordance satisfying a threshold (e.g., a set threshold or a threshold relative to other spatial affordance(s)) and/or affordance application parameter(s) defined by the spatial affordance being achievable by the robot (e.g., a magnitude of force indicated by an affordance application parameter actually being achievable by the robot). The other parameters based on which the system selects a generated spatial affordance can additionally or alternatively include one or more criteria associated with the target affordance, such as location criteria. For example, the system can select the generated spatial affordance based on it being in an area defined as an area of interest for the target affordance. For instance, the target affordance can be "pick up" and associated with a criterion of objects that are on a table, and the generated spatial affordance selected based on it being in an area that is atop, and/or near, an area classified as a table.
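The location criterion at block 510 can be sketched as a simple containment test over a classified area; the bounding-box values and centroid representation here are hypothetical:

```python
# Hypothetical bounding box of an area classified as a table (meters).
TABLE_BOX = {"x": (0.0, 1.0), "y": (0.0, 2.0), "top_z": 0.7}

def atop_table(centroid, box=TABLE_BOX, max_height=0.5):
    """True if a spatial region's centroid lies within the table's
    footprint and within max_height above the tabletop."""
    x, y, z = centroid
    return (box["x"][0] <= x <= box["x"][1]
            and box["y"][0] <= y <= box["y"][1]
            and box["top_z"] <= z <= box["top_z"] + max_height)

on_table = atop_table((0.5, 1.0, 0.75))   # just above the tabletop
off_table = atop_table((0.5, 1.0, 0.2))   # below the tabletop
```

A candidate "pick up" spatial affordance would then be retained only when this test passes, in addition to the probability and achievability checks.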
(54) At block 512, the system controls actuator(s) of the robot to cause robot component(s) to interact with the object in conformance with the selected spatial affordance. For example, the system can provide one or more points of the spatial region, of the selected spatial affordance, as a target point to a motion planner, and the motion planner can generate a trajectory based on the target point and/or one or more other criteria. The system can then generate control commands to effectuate the trajectory. As another example, the system can utilize surface normal(s) of the spatial region, of the selected spatial affordance, to determine an approach vector for an end effector of the robot. The system can then generate control commands to cause the approach vector to be followed by the end effector.
(55) Referring now to
(56) At block 602, the system selects a vision data instance.
(57) At block 604, the system assigns the vision data instance as training instance input of a training instance.
(58) At block 606, the system annotates spatial affordances for an object captured by the vision data. The spatial affordances each define: a spatial region of the object, an affordance for the spatial region, and optionally an affordance application parameter and/or probability for the affordance for the spatial region. In some implementations, the system annotates the spatial affordances based on user interface input provided by a user via a computing device in manually annotating the spatial affordances in the vision data.
(59) At block 608, the system assigns the spatial affordances as training instance output of the training instance.
(60) At block 610, the system applies the training instance input to a machine learning model, such as the neural network model 160 of
(61) At block 612, the system processes the training instance input, using the machine learning model, to generate output.
(62) At block 614, the system updates the machine learning model parameters based on comparison of the output generated at block 612 to the training instance output of block 608. For example, the system can generate an error based on differences between the generated output and the training instance output of the training instance, and backpropagate the error through the machine learning model to update the model. Although method 600 is described with respect to a single training instance, it is understood that the machine learning model will be trained based on a large quantity of training instances (e.g., thousands of training instances).
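Blocks 610-614 can be sketched with a stand-in linear model in place of the neural network; the shapes, learning rate, and squared-error loss are illustrative assumptions, not the patent's training configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 2))          # 4 input features -> 2 outputs

def train_step(x, target, lr=0.1):
    """One iteration of blocks 610-614 on a single training instance."""
    global weights
    output = x @ weights                   # block 612: generate output
    error = output - target                # block 614: compare to target output
    weights -= lr * np.outer(x, error)     # update parameters from the error
    return float((error ** 2).mean())

x = np.array([1.0, 0.0, -1.0, 0.5])        # stand-in training instance input
target = np.array([0.2, -0.3])             # stand-in training instance output
losses = [train_step(x, target) for _ in range(50)]
```

Repeating the step drives the generated output toward the training instance output; with a neural network the same comparison would be backpropagated through all layers rather than applied to a single weight matrix.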
(63)
(64) User interface input devices 722 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term input device is intended to include all possible types of devices and ways to input information into computer system 710 or onto a communication network.
(65) User interface output devices 720 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term output device is intended to include all possible types of devices and ways to output information from computer system 710 to the user or to another machine or computer system.
(66) Storage subsystem 724 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 724 may include the logic to perform selected aspects of method 500, method 600, and/or to implement one or more aspects of robot 100 or control system 170. Memory 727 used in the storage subsystem 724 can include a number of memories including a main random access memory (RAM) 730 for storage of instructions and data during program execution and a read only memory (ROM) 732 in which fixed instructions are stored. A file storage subsystem 726 can provide persistent storage for program and data files, and may include a hard disk drive, a CD-ROM drive, an optical drive, or removable media cartridges. Modules implementing the functionality of certain implementations may be stored by file storage subsystem 726 in the storage subsystem 724, or in other machines accessible by the processor(s) 714.
(67) Bus subsystem 712 provides a mechanism for letting the various components and subsystems of computer system 710 communicate with each other as intended. Although bus subsystem 712 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
(68) Computer system 710 can be of varying types including a workstation, server, computing cluster, blade server, server farm, smart phone, smart watch, smart glasses, set top box, tablet computer, laptop, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 710 depicted in
(69) While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.