INTELLIGENCE SYSTEMS, METHODS, AND DEVICES
20230229719 · 2023-07-20
Inventors
- Peter H. Diamandis (Santa Monica, CA, US)
- Morgan Rawls-McDermott (Portland, OR, US)
- Eben Pagan (Sunny Isles Beach, FL, US)
CPC classification
G06F16/9535 (PHYSICS)
Abstract
A method includes performing a search on a body of information based on a first perspective, wherein the first perspective is determined using a first corpus of information associated with a first particular set of people; and providing at least some results of the search. The search may be a perspective search. Results of the search may be evaluated based on a second perspective, which is based on a second corpus of information associated with a second particular set of people. The perspective may be determined using an avatar mechanism that was trained using a corpus of information.
Claims
1. A computer-implemented method comprising: (A) performing a perspective search on a body of information based on a first perspective, wherein the first perspective is determined using a first corpus of information associated with a first particular set of people; and (B) providing at least some results of the search.
2. (canceled)
3. The method of claim 1, wherein the first corpus of information associated with the first particular set of people comprises a first one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the first particular set of people.
4. The method of claim 1, further comprising: (C) evaluating the at least some results of the search based on a second perspective, wherein the second perspective is based on a second corpus of information associated with a second particular set of people.
5. The method of claim 4, further comprising: (D) providing an indication of agreement between the first perspective and the second perspective.
6. The method of claim 1, wherein the first particular set of people comprises a first particular person.
7. The method of claim 4, wherein the second particular set of people comprises a second particular person.
8. The method of claim 1, wherein the first perspective was determined using a first avatar mechanism that was trained using the first corpus of information.
9. The method of claim 6, wherein the first perspective corresponds to a first perspective of the first particular person, as emulated by a first avatar mechanism that was trained using the first corpus of information.
10. The method of claim 4, wherein the second perspective was determined using a second avatar mechanism that was trained using the second corpus of information.
11. A computer-implemented method comprising: performing a perspective search on a body of information; and providing results of the search based on a first avatar mechanism that was trained using a first corpus of information.
12. The method of claim 11, further comprising: training the first avatar mechanism using the first corpus of information.
13. (canceled)
14. The method of claim 11, wherein the results of the search are based on a perspective of the first avatar mechanism.
15. The method of claim 11, wherein the first corpus of information comprises information associated with a first set of one or more people.
16. The method of claim 15, wherein the first set of one or more people comprises a first particular person.
17. The method of claim 16, wherein the results of the search are based on a perspective of the first particular person, as determined by the first avatar mechanism.
18. The method of claim 16, wherein a perspective of the first avatar mechanism emulates a particular perspective of the first particular person, and wherein the first corpus of information comprises at least some information associated with the first particular person.
19. The method of claim 16, wherein the information associated with the first particular person comprises one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the first particular person.
20. The method of claim 14, wherein the providing comprises filtering and/or sifting a set of search results based on the perspective of the first avatar mechanism.
21. The method of claim 11, further comprising: using a second avatar mechanism to evaluate the results of the search, wherein the second avatar mechanism was trained using a second corpus of information.
22. The method of claim 21, wherein the using comprises: determining whether the second avatar mechanism agrees with the first avatar mechanism.
23. The method of claim 21, wherein the second corpus of information comprises second information associated with a second set of one or more people.
24. The method of claim 23, wherein the second set of one or more people comprises a second particular person.
25. The method of claim 22, wherein at least some of the results of the search are provided in a user interface, and wherein agreement of the second avatar mechanism with the first avatar mechanism is indicated in the user interface.
26. A computer-implemented method comprising: (A) performing a perspective search on a body of information based on a corresponding perspective of at least one of one or more avatar mechanisms, wherein each of the one or more avatar mechanisms has a corresponding perspective based on a corresponding corpus of information; and (B) providing at least some results of the perspective search.
27. The method of claim 26, wherein each of the one or more avatar mechanisms was trained using the corresponding corpus of information.
28. The method of claim 26, further comprising: training the one or more avatar mechanisms.
29. The method of claim 26, further comprising: evaluating a result of the perspective search based on a corresponding perspective of at least one other of the one or more avatar mechanisms.
30. The method of claim 26, further comprising: providing an indication of agreement between the at least some results and the at least one other of the one or more avatar mechanisms.
31. The method of claim 26, wherein the corresponding corpus of information for each avatar mechanism comprises information associated with a corresponding one or more people.
32. The method of claim 31, wherein the corresponding one or more people comprise a particular person.
33. The method of claim 31, wherein the information associated with the corresponding one or more people comprises one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the corresponding one or more people.
34. The method of claim 26, wherein the at least some results of the perspective search are based on a perspective of at least one particular person, as determined by the at least one of the one or more avatar mechanisms.
35-63. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0170] Other objects, features, and characteristics of the present invention as well as the methods of operation and functions of the related elements of structure, and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification. None of the drawings are to scale unless specifically stated otherwise.
DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EXEMPLARY EMBODIMENTS
[0178] As used herein, the term “mechanism” refers to any device(s), process(es), service(s), or combination thereof. A mechanism may be implemented in hardware, software, firmware, using a special-purpose device, or any combination thereof. A mechanism may be integrated into a single device, or it may be distributed over multiple devices. The various components of a mechanism may be co-located or distributed. The mechanism may be formed from other mechanisms. In general, as used herein, the term “mechanism” may thus be considered shorthand for the term device(s) and/or process(es) and/or service(s).
Overview and Structure
[0180] The users 104 may have distinct roles and may be provided with role-specific access interfaces and/or mechanisms.
[0181] Each user 104 may access the avatar system 102 using one or more computing devices, as is known in the art. The avatar system 102 may also access and be accessible by various external systems and/or databases 108. These external systems and/or databases 108 may include social media sites (e.g., Twitter, Facebook), email, blogs, stock and/or commodity price databases, published articles, books, magazines, etc.
[0182] As shown in
[0183] The database(s) 112 may be or comprise multiple separate or integrated databases, at least some of which may be distributed. The database(s) 112 may be implemented in any manner, and, when made up of more than one database, the various databases need not all be implemented in the same manner. It should be appreciated that the system is not limited by the nature or location of database(s) 112 or by the manner in which they are implemented.
[0184] Each of the applications 110 is essentially a mechanism (as defined above, e.g., a software application) that may provide one or more services via an appropriate interface. Although shown as separate mechanisms for the sake of this description, it should be appreciated that some or all of the various mechanisms/applications 110 may be combined. The various mechanisms/applications 110 may be implemented in any manner and need not all be implemented in the same manner (e.g., with the same languages or interfaces or protocols).
[0185] The applications 110 may include one or more of the following mechanisms:
[0186] 1. perspective search mechanism(s) 114
[0187] 2. avatar mechanism(s) 116
[0188] 3. avatar social network mechanism(s) 118
[0189] 4. intake mechanism(s) 120
[0190] 5. Interaction and presentation mechanism(s) 122
[0191] 6. search mechanism(s) 124
[0192] 7. conversation mechanism(s) 126
[0193] 8. animation mechanism(s) 128
[0194] 9. Miscellaneous/auxiliary mechanisms 130
[0195] Note that the above list of mechanisms is exemplary and is not intended to limit the scope of the system 100 in any way. Those of ordinary skill in the art will appreciate and understand, upon reading this description, that the system 100 may include any other types of data processing mechanisms, image recognition mechanisms, and/or other types of mechanisms that may be necessary for the system 100 to generally perform its functionalities as described herein. In addition, as should be appreciated, embodiments or implementations of the system 100 need not include all of the mechanisms listed, and some or all of the mechanisms may be optional.
[0196] The database(s) 112 may include one or more of the following database(s):
[0197] 1. Avatar database(s) 132
[0198] 2. Miscellaneous and auxiliary database(s) 134
[0199] The above list of databases is exemplary and is not intended to limit the scope of the system 100 in any way.
[0200] As shown in
[0201] Various mechanisms in the avatar system 102 may be accessible via application interface(s) 136. These application interfaces 136 may be provided in the form of APIs (application programming interfaces) or the like, made accessible to external users 104 via one or more gateways and interfaces 138. For example, the avatar mechanism(s) 116 may provide APIs thereto (via application interface(s) 136), and the system 102 may provide external access to aspects of the presentation mechanism(s) 122 (to users 104) via appropriate gateways and interfaces 138 (e.g., via a web-based mechanism and/or a mechanism running on a user's device).
Mechanisms And Data Structures
[0202] Details of various mechanisms, applications, processes and functionalities of an exemplary avatar system 102 are now described.
Perspective Search
[0203] In many situations, it is useful to be able to search through textual documents given a perspective defined by a weighted collection of natural language statements. A goal of this approach is to search semantically (based on the meaning of a query, not on its phrasing) utilizing meaning-information from all query texts to determine the extent to which each element of a search corpus is similar to and agrees or disagrees with the aggregate query texts. An example of this approach is shown in
[0204] Aspects of an exemplary perspective search mechanism 114 (implementing a perspective search algorithm) include one or more of:
[0205] The mechanism takes as inputs all query texts and the entire search corpus.
[0206] The mechanism outputs a score for each element of the search corpus, or for some subset of the search corpus (for efficiency). This score may reflect the extent to which an element in the search corpus agrees with the aggregate meaning of the query texts.
[0207] The score function for each element of the search corpus is likely non-linear and considers all (or some subset) of the provided query texts.
[0208] Various score functions may be used, alone or in combination, and, as should be appreciated, different score functions provide different results or degrees of accuracy.
[0209] Some exemplary score functions are listed here:
[0210] The score function may be the sum of the Jaccard distances between each search corpus text and all query texts, although this would be a non-semantic metric.
[0211] The score function may be the average cosine similarity between neural-network-generated text embeddings of all query texts and neural-network-generated text embeddings for each element of the search corpus.
[0212] The score function may be the sum of the squared distances between neural-network-generated text embeddings of all query texts and neural-network-generated text embeddings for each element of the search corpus.
[0213] The score may be the direct output of a neural network which takes as input (or considers during its training) all query texts and also takes as input one or more search corpus texts, then outputs a score or multiple scores representing the extent to which the meaning of the search text agrees with the aggregate meaning of the query texts.
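As a minimal sketch, the average-cosine-similarity variant of such a score function might look like the following (NumPy assumed; all names and the toy embeddings are illustrative, not part of the disclosed system):

```python
import numpy as np

def average_cosine_score(query_embeddings, corpus_embedding):
    # Average cosine similarity between one search-corpus embedding
    # and the embeddings of all query texts.
    q = np.asarray(query_embeddings, dtype=float)   # shape (m, d)
    c = np.asarray(corpus_embedding, dtype=float)   # shape (d,)
    sims = (q @ c) / (np.linalg.norm(q, axis=1) * np.linalg.norm(c))
    return float(sims.mean())

# Score each element of a toy 2-dimensional search corpus against
# two query-text embeddings.
queries = [[1.0, 0.0], [0.8, 0.2]]
corpus = [[1.0, 0.1], [0.0, 1.0]]
scores = [average_cosine_score(queries, c) for c in corpus]
```

A corpus element closely aligned with the aggregate query embeddings scores near 1; an orthogonal element scores near 0.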
Example Perspective Search Algorithm
[0214] An example perspective search algorithm, implemented, e.g., in a perspective search mechanism 114, is described here. Those of skill in the art will understand, upon reading this description, that different and/or other perspective search algorithms may be used.
[0215] Producing scores for all search texts and all query texts for a search corpus scales at best like O(m*n*d), where m is the number of query texts, n is the number of search texts, and d is the size (dimension) of the embedding vectors. As can thus be appreciated, for a large search corpus, utilizing a brute-force approach to calculate a score for all search texts given all query texts may be infeasible due to computational limitations.
[0216] Accordingly, the system may use a far more efficient approach than a brute force methodology for performing a perspective search with a large corpus of search texts.
[0217] In some embodiments, this exemplary algorithm performs k-Nearest-Neighbor queries operating on an efficient data structure for each query text, relying on overlapping k-Nearest-Neighbor search results across multiple query texts to identify the results being searched for. In practice, this algorithm provides a very good approximation of the non-optimized approach when common non-linear scoring functions are utilized.
[0218] To further refine results, one or more transformations may be applied to each embedded query-text vector before performing the k-Nearest-Neighbor search. These transformation(s) may include:
[0219] Masking (setting to 0 or reducing in magnitude) components in the vector determined to be irrelevant to, or whose removal would likely increase the quality of, this particular search.
[0220] Increasing the magnitude of certain components in the vector determined to be particularly relevant to, or likely to increase the quality of, this particular search.
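Such a transformation might be sketched as follows (a hypothetical illustration; the component indices would in practice be determined per search):

```python
import numpy as np

def transform(query_vec, mask_idx=(), boost_idx=(), boost=2.0):
    # Illustrative query-vector transformation: zero out components
    # judged irrelevant to this search, and amplify components judged
    # particularly relevant.
    v = np.asarray(query_vec, dtype=float).copy()
    v[list(mask_idx)] = 0.0       # masking: set irrelevant components to 0
    v[list(boost_idx)] *= boost   # increase magnitude of relevant components
    return v

t = transform([1.0, 2.0, 3.0], mask_idx=[0], boost_idx=[2])
```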
[0221] An exemplary perspective algorithm may include the following (with reference to the flowchart in
TABLE-US-00001
V = set of embedded search texts
Q = set of embedded query texts
MIN_SCORE = minimum score desired for results
MIN_RESULT_LENGTH = minimum number of results desired
K = 1
results = Empty List
WHILE K < log(|V|):
    result_scores = Map/Dictionary/Hash from elements in V to a
        numerical score. Scores not present are assumed to be 0.
    FOR q IN Q:
        potential_results = Find the K-Nearest-Neighbors to transform(q)
            in V using the index from (3)
        FOR r IN potential_results:
            result_scores[r] += score(q, r)
    FOR r IN result_scores:
        IF result_scores[r] > MIN_SCORE:
            results.add(r)
    IF len(results) > MIN_RESULT_LENGTH:
        return results
    K = K + 1
return results
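Assuming K increments each pass and cosine similarity serves as the score function, the loop above might be sketched in runnable form as follows (the brute-force `knn` helper is only a stand-in for an efficient nearest-neighbor index; all names are illustrative):

```python
import math
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def knn(V, q, k):
    # Brute-force stand-in for an efficient k-Nearest-Neighbor index;
    # a real system would use an approximate-nearest-neighbor structure.
    sims = [cosine(v, q) for v in V]
    return sorted(range(len(V)), key=lambda i: -sims[i])[:k]

def perspective_search(V, Q, min_score, min_result_length,
                       transform=lambda q: q):
    V = [np.asarray(v, dtype=float) for v in V]
    Q = [np.asarray(q, dtype=float) for q in Q]
    k = 1
    results = []
    # max(2, ...) guards the loop for illustratively tiny corpora.
    while k < max(2.0, math.log(len(V))):
        result_scores = {}                      # absent keys score 0
        for q in Q:
            for r in knn(V, transform(q), k):   # overlapping k-NN hits
                result_scores[r] = result_scores.get(r, 0.0) + cosine(q, V[r])
        results = [r for r, s in result_scores.items() if s > min_score]
        if len(results) > min_result_length:
            return results
        k += 1
    return results

found = perspective_search(
    V=[[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]],
    Q=[[1.0, 0.0], [1.0, 0.05]],
    min_score=1.5, min_result_length=0)
```

Only elements retrieved by the k-NN queries of multiple query texts accumulate enough score to clear the threshold, which is the overlap effect the algorithm relies on.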
[0225] This algorithm's runtime scales according to
[0226] O(|Q|*log(|V|)*log(|V|)*d),
[0227] which is clearly better than the O(m*n*d) of the brute force approach.
Avatar Mechanisms
[0228] Based on the textual digital footprints of individuals (e.g., from blog posts, social media posts, digitized books, transcribed videos, etc.), the perspective search mechanism 114 (e.g., as above) may treat the digital footprint of a given person as an “Avatar,” which may be used to search through a corpus of textual data (e.g., data obtained from external systems and/or databases 108). The score for each result may reflect, in this case, the extent to which the given person might agree with, and be interested in, a given statement.
[0229] An avatar mechanism 116 (also referred to as an avatar) may be constructed via any combination of textual data of interest. For instance, all content from particular website(s), or from particular book(s), or from multiple different peoples' digital footprints, may be used.
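As an illustrative sketch (the names and the toy keyword-count embedding are hypothetical, not the disclosed embedding), constructing an avatar from a textual footprint could amount to embedding each text in the footprint for later use as perspective-search query texts:

```python
def build_avatar(footprint_texts, embed):
    # An "avatar" here is simply the set of embedded texts from a
    # person's digital footprint, usable as query texts in a
    # perspective search. `embed` is any text-embedding function.
    return [embed(t) for t in footprint_texts]

# Toy embedding: 2-dimensional keyword counts (purely illustrative;
# a real system would use neural-network-generated embeddings).
def toy_embed(text):
    words = text.lower().split()
    return [float(words.count("space")), float(words.count("health"))]

avatar = build_avatar(
    ["space exploration and space tourism", "health and longevity"],
    toy_embed)
```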
[0230] As an example, with reference to
[0231] With reference again to
Avatar Social Networks
[0232] Digital avatars or avatar mechanisms for multiple people may interact with one another, and optionally with users, to perform actions typical in a social network. This approach is based, in part, on an understanding that the digital footprint of any person may be used to determine the extent to which they would read and/or share a given piece of content.
[0233] In some cases, this process may work as follows:
[0234] Avatar mechanisms (or avatars) “select” or “post” content based on a score provided by a perspective search based upon the content presented and on the avatar's digital footprint. This selection may use a score threshold (selecting all content with a score greater than some number) or may select the top K (e.g., 3) pieces of content.
[0235] Content from the previous step may be input into a perspective search for other avatar mechanisms using their respective digital footprints. If the score output is above a certain (lower) threshold, the system may record that the second avatar “Likes” this content.
[0236] The final selection of content, and the “likes” associated therewith, may be presented to an end user. Preferably it is made clear which person's avatar (i.e., which avatar mechanism(s)) selected which piece of content, and which people's avatar mechanisms have liked each piece of content.
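A minimal sketch of this select/like flow, assuming per-avatar perspective-search scores for each piece of content have already been computed (all names are illustrative):

```python
def avatar_social_feed(content_scores_by_avatar, select_threshold,
                       like_threshold):
    # Each avatar "posts" content it scores above select_threshold;
    # every other avatar "likes" posted content it scores above the
    # (lower) like_threshold.
    feed = []
    for poster, scores in content_scores_by_avatar.items():
        for content, s in scores.items():
            if s > select_threshold:
                likes = [other for other, other_scores
                         in content_scores_by_avatar.items()
                         if other != poster
                         and other_scores.get(content, 0.0) > like_threshold]
                feed.append({"content": content, "posted_by": poster,
                             "liked_by": likes})
    return feed

scores = {
    "avatar_a": {"article-1": 0.9, "article-2": 0.2},
    "avatar_b": {"article-1": 0.6, "article-2": 0.8},
}
feed = avatar_social_feed(scores, select_threshold=0.7, like_threshold=0.5)
```

The resulting feed records, for each posted piece of content, which avatar selected it and which other avatars liked it, as described above.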
Conversational Avatars
[0237] An algorithm may use language-model text generation and the perspective-search methodology to respond to user input based upon a given prompt, much as the real person might do. Those of skill in the art will understand, upon reading this description, that using perspective search as an avatar mechanism's “Memory” allows the system to bypass a primary issue with language-model text generation: typically, the generated text does not embody precise facts or knowledge.
[0238] Aspects of this approach may include:
[0239] 1. Fine-Tune a Language Model on a user's Digital Footprint (optional). That is, training a pre-existing language model (a machine learning model that predicts the next token in a sequence) that has been trained on a large corpus to produce the text present in the user's digital footprint. This step is optional, since language models may not need fine-tuning to contextually infer the linguistic style and knowledge of an individual given the perspective search results (see (2)) alone.
[0240] 2. Memory: Search through a person's digital footprint using perspective search for content that matches (A) the user-provided prompt, and (B) the previous conversational context between the avatar mechanism and the user (optionally with a discounted weight applied), if applicable.
[0241] 3. Generation: By feeding (A) the user-provided prompt, (B) the previous conversational context, and (C) the most relevant statements from a person's digital footprint into this language model, the system may produce text much like the real person might. This information may be fed into the language model in numerous ways:
[0242] a) As special input to any underlying part of the language model, separate from the provided prompt (e.g., as numerical input to some layer(s) of the original language model's neural network)
[0243] b) As combined plain text, such as:
[0244] [Formatted Perspective Search Results]
[0245] Hal: My name is Hal
[0246] Hal: I am an artificial intelligence
[0247] [Conversational Context]
[0248] User: Hello, Hal
[0249] Hal: Hello.
[0250] [Prompt]
[0251] User: What is your name?
[0252] Hal:
[0253] 4. The final generated text can be presented to a user, and the user can then provide an additional response in return.
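The combined plain-text option in 3(b) might be assembled as follows (a sketch; the function and parameter names are illustrative):

```python
def build_prompt(memory_results, context, user_prompt, avatar_name="Hal"):
    # Assemble the combined plain-text language-model input: formatted
    # perspective-search results, then the conversational context, then
    # the new user prompt, ending with the avatar's turn marker.
    lines = ["[Formatted Perspective Search Results]"]
    lines += [f"{avatar_name}: {m}" for m in memory_results]
    lines += ["[Conversational Context]"] + list(context)
    lines += ["[Prompt]", f"User: {user_prompt}", f"{avatar_name}:"]
    return "\n".join(lines)

prompt = build_prompt(
    ["My name is Hal", "I am an artificial intelligence"],
    ["User: Hello, Hal", "Hal: Hello."],
    "What is your name?")
```

The string ends with the avatar's turn marker, so a language model completing the text produces the avatar's next utterance.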
[0254] This process may also be used to allow the avatars to produce commentary, in general, for any textual input. For example, by replacing the Prompt in the above process with the contents of a news article, the language model may produce commentary about the article using knowledge from the Avatar's source digital footprint.
Exemplary Operation
[0256] An overview of exemplary operation of a framework/system 100 is described here with reference to the screenshots in
EXAMPLE 1.1
[0257] In one example implementation, with reference to the flowchart in
[0258] The search may be a perspective search.
[0259] The corpus of information associated with the first particular set of people may include a first one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the first particular set of people.
EXAMPLE 1.2
[0260] The example implementation of Example 1.1, further includes evaluating at least some results of the search based on a second perspective, wherein the second perspective is based on a second corpus of information associated with a second particular set of people.
EXAMPLE 1.3
[0261] The example implementation of Example 1.2, further includes providing an indication of agreement between the first perspective and the second perspective.
EXAMPLE 1.4
[0262] In the example implementation of any of Examples 1.1 to 1.3, the first perspective was determined using a first avatar mechanism that was trained using the first corpus of information.
[0263] The first perspective may correspond to a first perspective of the first particular person, as emulated by a first avatar mechanism that was trained using the first corpus of information.
EXAMPLE 1.5
[0264] The example implementation of Examples 1.1 to 1.4, where the second perspective was determined using a second avatar mechanism that was trained using the second corpus of information.
EXAMPLE 2.1
[0265] In another example implementation, with reference to the flowchart in
[0266] The search is preferably a perspective search. The results of the search are preferably based on a perspective of the first avatar mechanism.
[0267] The corpus of information comprises information associated with a first set of one or more people, and the results of the search are based on a perspective of the first set of one or more people, as determined by the first avatar mechanism.
[0268] In some implementations the information associated with the first set of one or more people comprises one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the first set of one or more people.
[0269] In some implementations, the providing (at 414) includes filtering and/or sifting a set of search results based on the perspective of the first avatar mechanism.
EXAMPLE 2.2
[0270] The example implementation of Example 2.1, further includes using a second avatar mechanism to evaluate the results of the search, wherein the second avatar mechanism was trained using a second corpus of information.
[0271] The implementation may determine whether the second avatar mechanism agrees with the first avatar mechanism.
[0272] The second corpus of information may include second information associated with a second set of one or more people.
EXAMPLE 2.3
[0273] The example implementation of Example 2.2, further includes providing at least some of the results of the search in a user interface, and indicating agreement of the second avatar mechanism with the first avatar mechanism in the user interface.
EXAMPLE 3.1
[0274] In one example implementation, with reference to the flowchart in
[0275] The first perspective corresponds to a first perspective of the first particular person, as emulated by the first avatar mechanism.
[0276] The search is preferably a perspective search.
[0277] The first corpus of information associated with the first particular person comprises a first one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the first particular person.
EXAMPLE 3.2
[0278] The example implementation of Example 3.1, further includes evaluating the at least some results of the search based on a second perspective of a second particular person, wherein the second perspective is emulated using a second avatar mechanism that was trained on a second corpus of information associated with the second particular person.
EXAMPLE 3.3
[0279] The example implementation of Example 3.2, further includes providing an indication of agreement between the first avatar mechanism and the second avatar mechanism. The indication of agreement may, e.g., be a “thumbs up” image or the like.
EXAMPLE 4.1
[0280] In another example implementation, with reference to the flowchart in
[0281] The search is preferably a perspective search.
[0282] The results of the search are preferably based on a perspective of the first avatar mechanism.
[0283] The corpus of information comprises information associated with a first particular person, and the results of the search are based on a perspective of the first particular person, as determined by the first avatar mechanism.
[0284] In some implementations a perspective of the first avatar mechanism emulates a particular perspective of the first particular person, as determined based on at least some of the information associated with the first particular person.
[0285] In some implementations the information associated with the first particular person comprises one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the first particular person.
[0286] In some implementations, the providing (at 424) includes filtering and/or sifting a set of search results based on the perspective of the first avatar mechanism.
EXAMPLE 4.2
[0287] The example implementation of Example 4.1, further includes training a second avatar mechanism on a second corpus of information; and using the second avatar mechanism to evaluate the results of the search.
[0288] The implementation may determine whether the second avatar mechanism agrees with the first avatar mechanism.
[0289] The second corpus of information may include second information associated with a second particular person.
[0290] In some exemplary implementations, at least some of the results of the search are provided in a user interface, and agreement of the second avatar mechanism with the first avatar mechanism is indicated in the user interface.
EXAMPLE 5.1
[0291] In another example implementation, with reference to the flowchart in
EXAMPLE 5.2
[0292] The example implementation of Example 5.1, further includes evaluating (at 432) a result of the perspective search based on a corresponding perspective of at least one other of the one or more avatar mechanisms.
[0293] In some implementations, the method includes providing an indication of agreement between the avatar mechanisms on the search results.
[0294] The corpus of information used for each avatar mechanism comprises information associated with a corresponding particular person and may include one or more of: books, magazine articles, online posts, social network posts, blog posts, social media posts, digitized books, and/or transcribed videos of or by or including the corresponding particular person.
Example Implementations
[0295] In an example implementation, the framework 100 was used to form avatar mechanisms for Peter Diamandis and Ray Kurzweil. Each avatar mechanism was formed using a corpus of data from their respective writings (including, e.g., books, blog postings, social network postings, etc.).
[0296] These avatar mechanisms were used to review certain news articles. As shown in the screenshot in
[0297] Similarly, as shown in the screenshot in
Computing
[0298] The services, mechanisms, operations, and acts shown and described above are implemented, at least in part, by software running on one or more computers or computer systems or devices. It should be appreciated that each user device is, or comprises, a computer system.
[0299] Programs that implement such methods (as well as other types of data) may be stored and transmitted using a variety of media (e.g., computer readable media) in a number of manners. Hard-wired circuitry or custom hardware may be used in place of, or in combination with, some or all of the software instructions that can implement the processes of various embodiments. Thus, various combinations of hardware and software may be used instead of software only.
[0300] One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that the various processes described herein may be implemented by, e.g., appropriately programmed general purpose computers, special purpose computers and computing devices. One or more such computers or computing devices may be referred to as a computer system.
[0302] According to the present example, the computer system 500 includes a bus 502 (i.e., interconnect), one or more processors 504, a main memory 506, read-only memory (ROM) 508, removable storage media 510, mass storage 512, and one or more communications ports 514. Communication port(s) 514 may be connected to one or more networks (not shown) whereby the computer system 500 may receive and/or transmit data.
[0303] As used herein, a “processor” means one or more microprocessors, central processing units (CPUs), graphics processing units (GPUs), computing devices, microcontrollers, digital signal processors, or like devices or any combination thereof, regardless of their architecture. An apparatus that performs a process can include, e.g., a processor and those devices such as input devices and output devices that are appropriate to perform the process.
[0304] Processor(s) 504 can be (or include) any known processor, such as, but not limited to, an Intel® Itanium® or Itanium 2® processor(s), AMD® Opteron® or Athlon MP® processor(s), or Motorola® lines of processors, and the like.
[0305] Communications port(s) 514 can be any of an RS-232 port for use with a modem-based dial-up connection, a 10/100 Ethernet port, a Gigabit port using copper or fiber, or a USB port, and the like. Communications port(s) 514 may be chosen depending on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), a CDN, or any network to which the computer system 500 connects. The computer system 500 may be in communication with peripheral devices (e.g., display screen 516, input device(s) 518) via Input/Output (I/O) port 520. Some or all of the peripheral devices may be integrated into the computer system 500, and the input device(s) 518 may be integrated into the display screen 516 (e.g., in the case of a touch screen).
[0306] Main memory 506 may be Random Access Memory (RAM), or any other dynamic storage device(s) commonly known in the art. Read-only memory 508 can be any static storage device(s) such as Programmable Read-Only Memory (PROM) chips for storing static information such as instructions for processor(s) 504. Mass storage 512 can be used to store information and instructions. For example, hard disks such as the Adaptec® family of Small Computer System Interface (SCSI) drives, an optical disc, an array of disks such as a Redundant Array of Independent Disks (RAID), such as the Adaptec® family of RAID drives, or any other mass storage devices may be used.
[0307] Bus 502 communicatively couples processor(s) 504 with the other memory, storage, and communications blocks. Bus 502 can be a PCI/PCI-X, SCSI, Universal Serial Bus (USB) based system bus, or other bus, depending on the storage devices used, and the like. Removable storage media 510 can be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc—Read Only Memory (CD-ROM), Compact Disc—Re-Writable (CD-RW), Digital Versatile Disk—Read Only Memory (DVD-ROM), etc.
[0308] Embodiments herein may be provided as one or more computer program products, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. As used herein, the term “machine-readable medium” refers to any medium, a plurality of the same, or a combination of different media, which participate in providing data (e.g., instructions, data structures) which may be read by a computer, a processor or a like device. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random-access memory, which typically constitutes the main memory of the computer. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor. Transmission media may include or convey acoustic waves, light waves, and electromagnetic emissions, such as those generated during radio frequency (RF) and infrared (IR) data communications.
[0309] The machine-readable medium may include, but is not limited to, floppy diskettes, optical discs, CD-ROMs, magneto-optical disks, ROMs, RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions. Moreover, embodiments herein may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., modem or network connection).
[0310] Various forms of computer readable media may be involved in carrying data (e.g., sequences of instructions) to a processor. For example, data may be (i) delivered from RAM to a processor; (ii) carried over a wireless transmission medium; (iii) formatted and/or transmitted according to numerous formats, standards, or protocols; and/or (iv) encrypted in any of a variety of ways well known in the art.
[0311] A computer-readable medium can store (in any appropriate format) those program elements that are appropriate to perform the methods.
[0312] As shown, main memory 506 is encoded with application(s) 522 that support(s) the functionality as discussed herein (an application 522 may be a mechanism that provides some or all of the functionality of one or more of the mechanisms described herein). Application(s) 522 (and/or other resources as described herein) can be embodied as software code such as data and/or logic instructions (e.g., code stored in the memory or on another computer readable medium such as a disk) that supports processing functionality according to different embodiments described herein.
[0313] During operation of one embodiment, processor(s) 504 accesses main memory 506 via the use of bus 502 in order to launch, run, execute, interpret, or otherwise perform the logic instructions of the application(s) 522. Execution of application(s) 522 produces processing functionality of the service(s) or mechanism(s) related to the application(s). In other words, the process(es) 524 represents one or more portions of the application(s) 522 performing within or upon the processor(s) 504 in the computer system 500.
[0314] It should be noted that, in addition to the process(es) 524 that carry out operations as discussed herein, other embodiments herein include the application 522 itself (i.e., the un-executed or non-performing logic instructions and/or data). The application 522 may be stored on a computer readable medium (e.g., a repository) such as a disk or in an optical medium. According to other embodiments, the application 522 can also be stored in a memory type system such as in firmware, read-only memory (ROM), or, as in this example, as executable code within the main memory 506 (e.g., within Random Access Memory or RAM). For example, application 522 may also be stored in removable storage media 510, read-only memory 508, and/or mass storage device 512.
[0315] Those skilled in the art will understand that the computer system 500 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources.
[0316] As discussed herein, embodiments of the present invention include various steps or operations. A variety of these steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the operations. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware. The term “module” refers to a self-contained functional component, which can include hardware, software, firmware, or any combination thereof.
[0317] One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that embodiments of an apparatus may include a computer/computing device operable to perform some (but not necessarily all) of the described process.
[0318] Embodiments of a computer-readable medium storing a program or data structure include a computer-readable medium storing a program that, when executed, can cause a processor to perform some (but not necessarily all) of the described process.
[0319] Where a process is described herein, those of ordinary skill in the art will appreciate that the process may operate without any user intervention. In another embodiment, the process includes some human intervention (e.g., a step is performed by or with the assistance of a human).
Conclusion
[0320] As used in this description, the term “portion” means some or all. So, for example, “A portion of X” may include some of “X” or all of “X.” In the context of a conversation, the term “portion” means some or all of the conversation.
[0321] As used herein, including in the claims, the phrase “at least some” means “one or more,” and includes the case of only one. Thus, e.g., the phrase “at least some ABCs” means “one or more ABCs,” and includes the case of only one ABC.
[0322] As used herein, including in the claims, the phrase “based on” means “based in part on” or “based, at least in part, on,” and is not exclusive. Thus, e.g., the phrase “based on factor X” means “based in part on factor X” or “based, at least in part, on factor X.” Unless specifically stated by use of the word “only,” the phrase “based on X” does not mean “based only on X.”
[0323] As used herein, including in the claims, the phrase “using” means “using at least,” and is not exclusive. Thus, e.g., the phrase “using X” means “using at least X.” Unless specifically stated by use of the word “only,” the phrase “using X” does not mean “using only X.”
[0324] In general, as used herein, including in the claims, unless the word “only” is specifically used in a phrase, it should not be read into that phrase.
[0325] As used herein, including in the claims, the phrase “distinct” means “at least partially distinct.” Unless specifically stated, distinct does not mean fully distinct. Thus, e.g., the phrase, “X is distinct from Y” means that “X is at least partially distinct from Y,” and does not mean that “X is fully distinct from Y.” Thus, as used herein, including in the claims, the phrase “X is distinct from Y” means that X differs from Y in at least some way.
[0326] As used herein, including in the claims, a list may include only one item, and, unless otherwise stated, a list of multiple items need not be ordered in any particular manner. A list may include duplicate items. For example, as used herein, the phrase “a list of XYZs” may include one or more “XYZs.”
[0327] It should be appreciated that the words “first” and “second” in the description and claims are used to distinguish or identify, and not to show a serial or numerical limitation. Similarly, letter or numerical labels (such as “(a),” “(b),” and the like) are used to help distinguish and/or identify, and not to show any serial or numerical limitation or ordering.
[0328] No ordering is implied by any of the labeled boxes in any of the flow diagrams unless specifically shown and stated. When disconnected boxes are shown in a diagram the activities associated with those boxes may be performed in any order, including fully or partially in parallel.
[0329] As used herein, including in the claims, singular forms of terms are to be construed as also including the plural form and vice versa, unless the context indicates otherwise. Thus, it should be noted that as used herein, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0330] Throughout the description and claims, the terms “comprise,” “including,” “having,” and “contain” and their variations should be understood as meaning “including but not limited to” and are not intended to exclude other components.
[0331] The present invention also covers the exact terms, features, values, and ranges, etc., in case these terms, features, values, and ranges, etc., are used in conjunction with terms such as about, around, generally, substantially, essentially, at least, etc. (e.g., “about 3” shall also cover exactly 3, and “substantially constant” shall also cover exactly constant).
[0332] It will be appreciated that variations to the foregoing embodiments of the invention can be made while still falling within the scope of the invention. Alternative features serving the same, equivalent or similar purpose can replace features disclosed in the specification, unless stated otherwise. Thus, unless stated otherwise, each feature disclosed represents one example of a generic series of equivalent or similar features.
[0333] Use of exemplary language, such as “for instance,” “such as,” “for example” and the like, is merely intended to better illustrate the invention and does not indicate a limitation on the scope of the invention unless so claimed. Any steps or acts described in the specification may be performed in any order or simultaneously, unless the context clearly indicates otherwise.
[0334] While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.