Apparatus and Method for Generating Navigational Plans
20220316906 · 2022-10-06
Assignee
Inventors
- Yves Hoppenot (Notre Dame de Mesage, FR)
- Michel Langlais (Pont de Claix, FR)
- Christophe LEGRAS (Montbonnot Saint-Martin, FR)
- Jérôme POUYADOU (Grenoble, FR)
CPC classification
G01C21/3644
PHYSICS
G01C21/3647
PHYSICS
G01C21/3602
PHYSICS
G01C21/3476
PHYSICS
G01C21/3446
PHYSICS
G06V20/56
PHYSICS
G06V10/74
PHYSICS
International classification
Abstract
There is provided an approach for using street view images, captured from a selected geographical area, to obtain one or more of a landmark saliency score and a street crossing simplicity score, with each score reflecting a degree to which a computer-implemented circuit (including image recognition engines and a visual element matching module) can recognize and identify a landmark in at least one of the street view images. In turn, a navigational plan for a selected geographical area, including travel directions, is generated with the one or more of the landmark saliency score and the street crossing simplicity score.
Claims
1. A computer-implemented method for generating a navigational plan for a user in a geographical area that includes a plurality of streets upon which at least one point of interest is present, comprising: generating one or more travel directions with a landmark saliency score for the at least one point of interest, the landmark saliency score representing a measure reflecting a degree to which a computer-based image recognition system can recognize a visual element in at least one electronic image, the visual element in the at least one electronic image serving to identify the at least one point of interest; and outputting the navigational plan that includes the one or more travel directions; wherein said generating obtains the landmark saliency score for the at least one point of interest from a plurality of electronic images captured along at least one of the plurality of streets in the geographical area, the plurality of electronic images including the at least one electronic image, wherein said obtaining includes using the computer-based image recognition system to (i) recognize the visual element in the at least one electronic image, (ii) compare the visual element in the at least one electronic image with a previously stored visual element where the previously stored visual element is associated with a point of interest, and (iii) determine that a selected relationship exists between the visual element in the at least one electronic image and the previously stored visual element.
2. The computer-implemented method of claim 1, wherein the visual element in the at least one electronic image is one of a text portion and a logo.
3. The computer-implemented method of claim 2, in which the one of the text portion and the logo comprises a logo and wherein said using the computer-based image recognition system includes recognizing the logo with a logo recognition engine.
4. The computer-implemented method of claim 2, in which the one of the text portion and the logo comprises text and said using the computer-based image recognition system includes recognizing the text with a text recognition engine.
5. The computer-implemented method of claim 1, wherein said determining that a selected relationship exists includes using image matching to determine whether the selected relationship exists between the visual element in the at least one electronic image and the previously stored visual element.
6. The computer-implemented method of claim 5, in which the visual element includes a text portion, wherein said image matching includes using fuzzy matching to determine that the selected relationship exists between the text portion in the at least one electronic image and a text portion in the previously stored visual element.
7. The computer-implemented method of claim 1, further comprising: determining, responsive to said comparing, that an image match exists between the visual element in the at least one electronic image and the previously stored visual element; responsive to said determining that the image match exists, assigning an image match score; and wherein said determining that a selected relationship exists includes determining that the image match score is equal to or greater than a selected image match threshold.
8. The computer-implemented method of claim 7, wherein said obtaining of the landmark saliency score further comprises assigning a cognitive score to the at least one point of interest, the cognitive score reflecting a degree to which the at least one point of interest would be identified by a human in accordance with common knowledge of points of interest.
9. The computer-implemented method of claim 8, further comprising storing the at least one point of interest in a database.
10. The computer-implemented method of claim 8, wherein the landmark saliency score varies as a function of the image match score, the cognitive score and a distance calculated from at least one of the plurality of electronic images, and wherein the distance calculated from at least one of the plurality of electronic images corresponds with a maximized user recognition limit.
11. The computer-implemented method of claim 1 in which the geographical area includes a plurality of neighborhoods of varying respective sizes, further comprising normalizing the landmark saliency score to accommodate for differences in neighborhood size.
12. The computer-implemented method of claim 1, further comprising selecting the landmark saliency score from a list of ranked landmark saliency scores.
13. A computer-implemented method for generating a navigational plan for a user in a geographical area that includes a plurality of streets with at least two of the streets forming a street crossing, comprising: generating one or more travel directions with a street crossing simplicity score, the street crossing simplicity score representing a measure reflecting a degree to which a computer-based image recognition system can recognize a visual element in at least one electronic image, the visual element in the at least one electronic image serving to identify at least one point of interest within a selected distance of a location associated with the street crossing; and outputting the navigational plan that includes the one or more travel directions; wherein said generating obtains the street crossing simplicity score from a plurality of electronic images captured along at least one of the plurality of streets in the geographical area, the plurality of electronic images including the at least one electronic image, wherein said obtaining includes using the computer-based image recognition system to (i) recognize the visual element in the at least one electronic image, (ii) compare the visual element in the at least one electronic image with a previously stored visual element where the previously stored visual element is associated with a point of interest, and (iii) determine that a selected relationship exists between the visual element in the at least one electronic image and the previously stored visual element.
14. The computer-implemented method of claim 13, wherein said generating includes (a) generating a plurality of navigational plans, and (b) selecting a navigational plan, from the plurality of navigational plans, that optimizes both ease of street crossing traversal and total travel time.
15. The computer-implemented method of claim 14 in which a simplicity score is calculated for one or more street crossings in each one of the plurality of navigational plans, and an estimated total travel time is calculated for each one of the plurality of navigational plans, wherein said selecting a navigational plan includes selecting a navigational plan in which both the simplicity score is maximized and the total travel time is less than or equal to a selected maximum acceptable travel time.
16. The computer-implemented method of claim 14 in which, for each one of the plurality of navigational plans, a traversal time for each pertinent street crossing and each pertinent road segment time are determined, and in which, for each one of the plurality of navigational plans, an estimated travel time is equal to the sum of all pertinent street crossing traversal times and all pertinent road segment times, wherein said selecting a navigational plan includes selecting the navigational plan with a minimum estimated travel time.
17. The computer-implemented method of claim 13, wherein the visual element in the at least one electronic image is one of a text portion and a logo.
18. The computer-implemented method of claim 17, in which the one of the text portion and the logo is a logo and wherein said using the computer-based image recognition system comprises recognizing the logo with a logo recognition engine.
19. The computer-implemented method of claim 17, in which the one of the text portion and the logo comprises text and said using the computer-based image recognition system comprises recognizing the text with a text recognition engine.
20. The computer-implemented method of claim 13, wherein said determining that a selected relationship exists comprises using image matching to determine whether the selected relationship exists between the visual element in the at least one electronic image and the previously stored visual element.
21. The computer-implemented method of claim 20 in which the visual element in the at least one electronic image includes a text portion, wherein said image matching includes using fuzzy matching to determine whether the selected relationship exists between the text portion in the at least one electronic image and a text portion in the previously stored visual element.
22. The computer-implemented method of claim 13, further comprising: determining, responsive to said comparing, that an image match exists between the visual element in the at least one electronic image and the previously stored visual element; responsive to determining that an image match exists, assigning an image match score; and wherein said selected relationship exists when the image match score is equal to or greater than a selected image match threshold.
23. The computer-implemented method of claim 22, wherein said obtaining of the street crossing simplicity score further comprises assigning a cognitive score to the at least one point of interest, the cognitive score reflecting a degree to which the at least one point of interest would be identified by a human in accordance with common knowledge of points of interest in general.
24. The computer-implemented method of claim 23, wherein: said obtaining of the street crossing simplicity score further comprises calculating a visibility score for each identifiable point of interest around at least one street crossing; the visibility score varies as a function of the image match score, the cognitive score and a distance parameter; and for each one of the plurality of electronic images, the distance parameter is defined as a distance between a geographic location associated with the electronic image and a corresponding street crossing location.
25. The computer-implemented method of claim 13 in which a plurality of visibility scores are calculated for one street crossing, wherein the street crossing simplicity score for the one street crossing is obtained by adding the plurality of visibility scores together.
26. An apparatus for generating information relating to at least one point of interest from a plurality of electronic images, the information relating to the at least one point of interest being usable to generate travel directions for a navigational plan, comprising: an image recognition platform for performing image recognition on at least one of the plurality of electronic images to identify at least one of a text portion and a logo; an image matching module for comparing the at least one of the text portion and the logo with each text portion or logo in a points of interest database to obtain an image recognition score for the at least one of the text portion and the logo; said image matching module determining whether a selected relationship exists between the at least one of the text portion and the logo and at least one point of interest designated in the points of interest database; a cognitive scoring module, said cognitive scoring module assigning a cognitive score to a point of interest corresponding with the at least one of the text portion and the logo when the selected relationship exists, the cognitive score reflecting a degree to which the point of interest corresponding with the at least one of the text portion and the logo can be identified by a human in accordance with common knowledge of points of interest; and an enhanced points of interest database, the point of interest corresponding with the at least one of the text portion and the logo being stored in said enhanced points of interest database; wherein the information relating to the at least one point of interest includes one of a landmark saliency score and a street crossing simplicity score, each one of the landmark saliency score and the street crossing simplicity score representing a measure reflecting a degree to which said image recognition platform recognizes the at least one of the text portion and the logo in one of the plurality of electronic images.
27. The apparatus of claim 26, wherein the image recognition score for the at least one of the text portion and the logo is greater than or equal to a selected threshold.
28. The apparatus of claim 26 in which the at least one of a text portion and a logo comprises a text portion, wherein said image matching module uses fuzzy matching to determine whether the selected relationship exists between the text portion and at least one point of interest designated in the points of interest database.
29. The apparatus of claim 26, wherein the information relating to the at least one point of interest includes one of a landmark saliency score and a street crossing simplicity score, each one of the landmark saliency score and the street crossing simplicity score representing a measure reflecting a degree to which said image recognition platform recognizes the at least one of the text portion and the logo in one of the plurality of electronic images.
30. A computer-implemented method for generating a navigational plan for a user in a geographical area that includes a plurality of streets (a) upon which at least one point of interest is present and (b) with at least two of the streets forming a street crossing, comprising: generating one or more travel directions using one or more of (x) a landmark saliency score for the at least one point of interest, the landmark saliency score representing a measure reflecting a degree to which a computer-based image recognition system can recognize a visual element in at least one electronic image, and (y) a street crossing simplicity score, the street crossing simplicity score representing a measure reflecting a degree to which a computer-based image recognition system can recognize a visual element in at least one electronic image, the visual element in the at least one electronic image serving to identify, respectively, (v) the at least one point of interest, or (w) at least one point of interest within a selected distance of a location associated with the street crossing; and outputting the navigational plan that includes the one or more travel directions; wherein said generating obtains one or more of the landmark saliency score for the at least one point of interest and the street crossing simplicity score from a plurality of electronic images captured along at least one of the plurality of streets in the geographical area, the plurality of electronic images including the at least one electronic image, wherein said obtaining includes using the computer-based image recognition system to (i) recognize the visual element in the at least one electronic image, (ii) compare the visual element in the at least one electronic image with a previously stored visual element where the previously stored visual element is associated with a point of interest, and (iii) determine that a selected relationship exists between the visual element in the at least one electronic image and the previously stored visual element.
Description
DESCRIPTION OF THE DRAWINGS
[0026] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
[0027] The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein:
[0046] In the drawings, reference numbers may be reused to identify similar and/or identical elements.
DETAILED DESCRIPTION
1. System Implementation
[0047] It should be appreciated that the disclosed embodiments can be implemented in numerous ways, including as a process, an apparatus, a system, a device, a method, or a computer readable medium such as a computer readable storage medium containing computer readable instructions or computer program code, or a computer network wherein computer readable instructions or computer program code are sent over communication links. Applications, software programs or computer readable instructions may be referred to as components or modules. Applications may take the form of software executing on a general-purpose computer or be hardwired or hard coded in hardware. Applications may also be downloaded in whole or in part through the use of a software development kit, framework, or toolkit that enables the creation and implementation of the disclosed embodiments. In general, the order of the steps of disclosed processes may be altered within the scope of the disclosed embodiments.
[0048] Referring to
[0049] The text element recognition engine 106 employs optical character recognition (OCR), a well-known approach capable of being programmed to automatically read text in a street view image. OCR techniques use various approaches to segment images for locating textual areas, sequencing each character found in those textual areas, and recombining the characters to understand the word they form. One OCR approach, as demonstrated by Akbani, A., Gokrani, A., Quresh, M., Kahn, F. M., Behim, S. I. and Syed, T. Q., Character Recognition in Natural Scene Images, ICICT 2015, the entire disclosure of which is incorporated herein by reference, is effective for recognizing text in images of natural scenes. Another OCR approach, as demonstrated in U.S. Pat. No. 9,008,447, the entire disclosure of which is incorporated herein by reference, is effective for recognizing text in printed documents.
[0050] These types of OCR approaches are typically dependent on text contrast, representation, and distance and orientation with respect to a capture device (such as a camera) with which they are used. This may result in occasional erroneous recognition or misspelling of words. Such errors are quite similar to those a human makes when recognizing and reading text: one may erroneously recognize text that is too small, too far away, too fancy, or distorted by perspective. As will become apparent from this Description, the disclosed embodiments use the text recognizing aspect of the text element recognition engine 106 to mimic human text recognition in a natural scene, accommodating for its technical limitations (misrecognized text).
[0051] The image element recognition engine 108 can employ one of several known techniques for recognizing logos in street view images. In one example, as disclosed in U.S. Pat. No. 9,508,021, the entire disclosure of which is incorporated herein by reference, the image element recognition engine 108 uses techniques for recognizing similarities among two or more images. Local features of a given street view image may be compared to local features of one or more reference images to determine if the local features of the given street view image comprise a particular pattern to be recognized.
[0052] In another example, the image element recognition engine 108 could suitably use, as disclosed in U.S. Pat. No. 10,007,863, the entire disclosure of which is incorporated herein by reference, saliency analysis, segmentation techniques, and character stroke analysis. Saliency detection relies on the fact that logos have significant information content compared to the background. Multi-scale similarity comparison is performed to remove less interesting regions such as text strings within a sea of text or other objects.
[0053] In yet another example, the image element recognition engine 108 could suitably use a machine learning based approach for detecting logos in video or image data of the type disclosed in U.S. Pat. No. 10,769,496, the entire disclosure of which is incorporated herein by reference.
[0054] Although the above-mentioned examples of logo recognition focus on techniques for recognizing logos, logo recognition may be considered a subset of object or pattern recognition. Typically, logos may include a variety of objects having a planar surface. Accordingly, although embodiments described may apply to logos, images, patterns, or objects, claimed subject matter is not limited in this respect. A process of computer recognition may be applied to recognizing a logo, a geometrical pattern, an image of a building in a photo, lettering, a landscape in a photo, or other such object of an image or photo, just to name a few examples.
[0055] Referring still to
[0056] The POI database 110 is enhanced with logos from an off-the-shelf logo database.
[0057] Visual element recognition platform 104 and POI database 110 communicate with a visual element matching module 114. In the embodiments, the visual element matching module 114 could include one or more image matching subsystems of the type disclosed in U.S. Pat. No. 8,315,423, the entire disclosure of which is incorporated herein by reference. In the embodiments, visual element matching module 114 could employ fuzzy matching logic of the type disclosed in U.S. Pat. No. 8,990,223, the entire disclosure of which is incorporated herein by reference. The purpose of the visual element matching module 114, as will appear, is to determine if a sufficient match exists between a visual element in a street view image and a visual element listed in the database 110.
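For illustration only, the kind of fuzzy matching performed by the visual element matching module 114 might be sketched as follows, using Python's difflib similarity ratio as a stand-in for the matching logic of the incorporated patents; the helper names and the 0.8 threshold are hypothetical, not taken from the disclosure:

```python
import difflib

def visual_element_match_score(recognized_text: str, poi_name: str) -> float:
    """Rate how closely OCR output matches a stored POI name (0.0-1.0).

    A stand-in for the disclosed fuzzy matching: normalize case and
    spacing, then use difflib's sequence similarity ratio.
    """
    a = recognized_text.lower().replace(" ", "")
    b = poi_name.lower().replace(" ", "")
    return difflib.SequenceMatcher(None, a, b).ratio()

def best_poi_match(recognized_text, poi_names, threshold=0.8):
    """Return the best-matching POI name and its score, or None when no
    score reaches the threshold (i.e., no 'selected relationship')."""
    scored = [(name, visual_element_match_score(recognized_text, name))
              for name in poi_names]
    name, score = max(scored, key=lambda pair: pair[1])
    return (name, score) if score >= threshold else None
```

For example, the OCR output "AUBUREAU" would match the stored POI name "Au Bureau" perfectly once case and spacing are normalized.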
[0058] Results from the visual element matching module 114 are communicated to a cognitive scoring module 116. The cognitive scoring module 116 serves to classify the output of the visual element matching module 114 with a POI categories dictionary 118. The POI categories of the dictionary can be obtained from OSM or developed from scratch. A cognitive score resulting from the classification reflects how readily a given POI is recognized by a human according to the knowledge the human would typically possess with respect to the given POI. For instance, a higher cognitive score would be assigned to a fast food restaurant than to a cleaning service agency.
[0059] Information from the cognitive scoring module 116, regarding enhanced POIs, is communicated to an enhanced POI database 120.
2. System Functionality
[0060] Referring to
[0061] Referring to
[0062] The experimentation generally focused on, among other things, two constraints: buildings hosting POIs and streets covered by street view images. The intersection between these two constraints is illustrated by
[0063] Referring again to
[0064] In one example, the comparison is performed with fuzzy logic; however, as indicated above, other visual element matching technologies could be employed to rate the extent to which the recognized visual element(s) corresponds to at least one of the POIs in the POI database 110. Referring to 208, if no match exists between a recognized visual element(s) and a given POI, then the system determines, at 210, if processing of additional street view images is warranted. If further processing of street view images is warranted, then another image is, via 211, fetched from memory 102. If, on the other hand, all currently stored street view images have been assessed with respect to the POI database 110, then the process ends at 220 until additional street view images are supplied to the memory 102.
[0065] Referring still to
[0066] Each pair having a visual element_match score equal to or greater than the visual element_match_threshold is passed along to the cognitive scoring module 116. By way of 216, the POI of each pair having a suitable visual element_match score is classified in accordance with the POI categories dictionary 118 and an appropriate cognitive_score reflecting such classification is assigned. As will be appreciated by those skilled in the art, other approaches, such as crowdsourcing, could be employed to rate the degree to which various POIs are recognizable, based on common knowledge.
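The thresholding and cognitive scoring of steps 214-216 can be sketched as follows; the category dictionary, its score values, and the field names are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical stand-in for the POI categories dictionary 118; the
# categories and score values are illustrative only.
POI_CATEGORY_COGNITIVE_SCORE = {
    "fast_food": 1.0,
    "supermarket": 0.9,
    "restaurant": 0.8,
    "cleaning_service": 0.4,
}

def cognitive_score(poi_category: str, default: float = 0.5) -> float:
    """Map a POI category to a score reflecting how readily a human
    recognizes that kind of POI from common knowledge."""
    return POI_CATEGORY_COGNITIVE_SCORE.get(poi_category, default)

def score_pairs(pairs, match_threshold=0.8):
    """Keep street view-POI pairs whose visual element match score meets
    the threshold and attach a cognitive_score, as in steps 214-216.
    Each pair is a dict with 'category' and 'visual_element_match' keys."""
    return [
        {**pair, "cognitive_score": cognitive_score(pair["category"])}
        for pair in pairs
        if pair["visual_element_match"] >= match_threshold
    ]
```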
[0067] Each fully scored street view—POI pair is, via 218, stored in the enhanced POI database 120. In one embodiment, a new cross table is available on the enhanced POI database, pairing street view images with POIs through their visual element_match score and cognitive_score. It is further contemplated that street view images and their locations, as well as POI name, category and location, are also available in the database 120. As described below, with the information stored in database 120 (
a. Scoring Buildings as Relevant Landmarks
[0068] In one example, the landmark_score corresponds with a POI (such as a building) on a map. The landmark_score (also referred to herein as the "saliency score") represents, among other things, the capability of a POI to be easily recognized and identified as a landmark by a human. As will appear from the following, that capability can be assessed from the capacity of the system 100 to recognize, from street view images, visual elements (e.g., text and/or logos) associated with POIs. The landmark_score varies as a function of the following three parameters, the three parameters being extractable from the enhanced POI database 120 (
[0072] In another example, the landmark_score for a selected POI may be expressed as the maximum of the product of the three parameters:
Landmark_score_POI = max(visual element_match score × cognitive score × distance)
[0073] As can be recognized, in the above exemplary formula, Landmark_score_POI is maximized when a good visual element match can be obtained from a relatively long distance. While the above formula expresses the three parameters as a product with no weighting, in another example, each of the three parameters could be weighted to accommodate for perceived importance.
[0074] Also, a landmark score for a given building including a POI may be defined as follows:
Landmark_score_Building = max(Landmark_score_POI) [over each POI in the building]
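The two exemplary formulas above can be sketched minimally as follows; the data layout (one tuple per street view image that recognized the POI) is an assumption for illustration:

```python
def landmark_score_poi(observations):
    """Landmark (saliency) score for one POI: the maximum, over all street
    view images that recognized the POI, of the product
    visual_element_match x cognitive_score x distance.

    Each observation is a tuple
    (visual_element_match, cognitive_score, distance_m)."""
    return max(m * c * d for m, c, d in observations)

def landmark_score_building(poi_scores):
    """A building's landmark score is the best landmark score among the
    POIs the building hosts."""
    return max(poi_scores)
```

The maximum over observations captures the idea that a POI recognizable from far away (large distance with a strong match) is the most useful landmark.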
b. Exemplary Application of Scoring Buildings as Relevant Landmarks
[0075] Referring to
The exemplary approach includes a set of street view images a-j. In accordance with the embodiments, this set of street view images is filtered as follows: [0076] (i) street view images "close" to the POI (i.e., distance < max_distance): [b-i]; [0077] (ii) street view images for which visual element_match > visual element_match_threshold: [e, f, g] (note that street view images with matches correspond with dotted lines and non-matching street view images correspond with dashed lines); and [0078] (iii) number of street view images recognizing one POI > nb_images: [e, f, g].
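The three filtering steps above can be sketched as follows, assuming each street view image is represented as a dictionary with hypothetical 'distance' and 'visual_element_match' fields (the representation is an assumption, not the disclosed one):

```python
def filter_street_view_images(images, max_distance, match_threshold, nb_images):
    """Apply the three filters of the example for one POI:
    (i) keep images captured within max_distance of the POI,
    (ii) keep those whose visual element match exceeds the threshold,
    (iii) require that more than nb_images images recognize the POI,
    returning [] otherwise."""
    close = [im for im in images if im["distance"] < max_distance]
    matched = [im for im in close
               if im["visual_element_match"] > match_threshold]
    return matched if len(matched) > nb_images else []
```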
[0079] Referring to
[0080] Using the exemplary formulas above for determining landmark_score.sub.POI and landmark_score.sub.Building,
[0081] Referring still to
[0082] Referring to Table 1, which sets out calculated landmark scores for visual element recognition candidates a, b, c, d, e, and f, the above-described formula was used to calculate Landmark_score_POI (referred to in Table 1 as "land_score"):
TABLE 1
POI  Name         OCR       OCR_Match  cognitive_score  Distance  land_score
a    Au Bureau    AUBUREAU  94%        100%             49 m      100%
b    Taksim       Taksim    100%       100%             45 m      97%
c    Monoprix     MONOPRIX  100%       90%              49 m      88%
d    Le Lyonnais  Lyonnais  84%        100%             9 m       16%
e    Le Rossini   Rossini   82%        100%             8 m       14%
f    L'Eau Vive   Eauvive   82%        60%              6 m       6%
[0083] Referring still to
c. Using landmark_score.sub.POI in Generating Travel Directions
[0084] Referring to
[0085] Referring still to
d. Scoring the Degree of Understandability of a Crossing
[0086] In addition to scoring buildings as relevant landmarks, the simplicity of street crossings can be scored by processing street view images (in memory 102 [
[0087] In calculating a simplicity score for a selected crossing, a visibility score with respect to each identifiable POI around a given crossing may be calculated with the following exemplary formula:
visibility_score_POI = max(visual element_match × cognitive_score × (max_distance − distance))
As with the calculation of Landmark_score_POI, parameters for calculating visibility_score_POI could be weighted to accommodate for perceived importance. The simplicity score for POIs around the crossing is then calculated with the following formula:
simplicity_score = Σ visibility_score_POI
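A minimal sketch of these two formulas, assuming the same per-image tuple layout used for the landmark score (an assumption for illustration), and reading the garbled distance term as (max_distance − distance) so that POIs visible from far away score higher:

```python
def visibility_score_poi(observations, max_distance):
    """Visibility score of one POI around a crossing: the maximum, over
    street view images, of
    visual_element_match x cognitive_score x (max_distance - distance).

    Each observation is (visual_element_match, cognitive_score, distance_m),
    where distance_m is measured from the image location to the crossing."""
    return max(m * c * (max_distance - d) for m, c, d in observations)

def simplicity_score(pois_observations, max_distance):
    """Crossing simplicity score: the sum of the visibility scores of all
    identifiable POIs around the crossing."""
    return sum(visibility_score_poi(obs, max_distance)
               for obs in pois_observations)
```

Summing (rather than taking the maximum) reflects that a crossing surrounded by many identifiable POIs is easier to understand than one with a single salient POI.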
[0088] Referring to
[0092] As described below, the simplicity score can be advantageously used in travel planning (e.g., generating travel directions). One goal would be to generate travel directions promoting use of crossings with higher simplicity scores. Indeed, the prior art teaches that a typical user will accept up to 16% longer trip time if the recommended path is simpler to follow. When assessing what crossings to include in a given route, one possible consideration is the extra time associated with traversing each crossing. The simplicity score can be applied to this extra time by reducing it proportionally. For instance, the simplest crossing in an urban area would have no extra time, while a less simple crossing or a crossing without a simplicity score would require the maximum extra time.
[0093] The time required to traverse a given crossing can be estimated with the following simplified exemplary expression (noting that the expression does not accommodate for the impact of such impairments as traffic level or signs):
Crossing Traversal Time = crossing_size × speed⁻¹ + structural_complexity × extra_time × (1 − normalized_simplicity_score)
[0094] where: [0095] (i) crossing_size varies as a function of the physical size of the crossing; [0096] (ii) speed is the average speed of the user; [0097] (iii) structural_complexity varies as a function of the structural complexity of the crossing; and [0098] (iv) extra_time is an estimated constant based on crossing complexity.
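The expression above can be sketched directly; units and parameter names are assumptions for illustration (meters, meters per second, seconds):

```python
def crossing_traversal_time(crossing_size_m, speed_m_s, structural_complexity,
                            extra_time_s, normalized_simplicity_score):
    """Estimated time to traverse a crossing per the simplified exemplary
    expression: base walking time plus a complexity-dependent extra time,
    with the extra time reduced in proportion to the crossing's
    normalized simplicity score (1.0 = simplest, no extra time)."""
    base = crossing_size_m / speed_m_s
    penalty = (structural_complexity * extra_time_s
               * (1.0 - normalized_simplicity_score))
    return base + penalty
```

For a 20 m crossing walked at 1.25 m/s, the simplest crossing (score 1.0) takes only the 16 s base time, while a crossing with score 0.0 incurs the full extra time.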
d.1 Exemplary Application of Scoring the Degree of Understandability of a Crossing
[0100] Referring to
[0101] Referring to
[0102] Referring to
TABLE 2
Crossing  Simplicity score (normalized)  Rank  Number of visible POI
a         1.000                          1     12
b         0.856                          2     8
c         0.026                          41    1
d         0.004                          42    1
[0103] As illustrated by the crossings heat map of
e. Using simplicity score in Generating Travel Directions
[0104] Referring to
[0105] Referring to 1606 of
[0106] Referring to
[0107] Then, at 1702, each possible acceptable set of travel directions for the route is generated, noting that each "acceptable set of travel directions" has an estimated travel length that is less than or equal to a selected maximum travel length. In 1704, in accordance with the above-described approach, a simplicity score is calculated for the crossings of each possible acceptable set of travel directions.
[0108] Referring to 1706, traversal crossing time for each crossing can be obtained using the crossing travel time formula described above (in which normalized simplicity, among other variables, is employed). Also, at 1708, road segment time can be calculated, by reference to data in a conventional database, as indicated above. At 1710, the estimated travel time for each set of travel directions may be determined by adding corresponding road segment times and corresponding traversal crossing times. Travel time is then optimized, at 1712, by selecting the set of travel directions having the minimum estimated travel time. At 1714, the travel directions are stored in memory or buffered for eventual output.
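The selection of steps 1710-1712 can be sketched as follows; the plan representation (per-plan lists of precomputed segment and crossing times, in seconds) and the function names are assumptions for illustration:

```python
def select_travel_directions(candidate_plans, max_travel_time_s):
    """Pick, among candidate travel-direction sets, the one with the
    minimum estimated travel time (sum of road segment times and crossing
    traversal times), subject to a maximum acceptable travel time.

    Each plan is a dict with 'segment_times' and 'crossing_times' lists;
    returns None when no plan is acceptable."""
    def estimated_time(plan):
        return sum(plan["segment_times"]) + sum(plan["crossing_times"])

    acceptable = [p for p in candidate_plans
                  if estimated_time(p) <= max_travel_time_s]
    return min(acceptable, key=estimated_time) if acceptable else None
```

Because the crossing times fed into each plan already embed the simplicity-score reduction from the traversal-time expression, minimizing this sum jointly favors simpler crossings and shorter routes.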
[0109] Various advantages of the above-described embodiments should now be apparent to those skilled in the art.
[0110] First, generating a navigational plan with a landmark saliency score significantly increases the usability of the corresponding plan. That is, through use of such a score in generating a plan, there is assurance that the landmarks referenced in corresponding navigational plans will be readily identifiable by a human user. In contrast to conventional landmark recognition approaches, where visual element recognition is typically used to identify landmarks under various ideal conditions, the above-described technique employs a machine-based implementation capable of mimicking human visual acuity. In essence, the technique accounts for the actual difficulty a human might encounter in recognizing visual elements (such as text and logos) associated with landmarks.
[0111] Second, generating a navigational plan with a street crossing simplicity score also significantly increases the usability of the corresponding navigational plan. Use of the street crossing simplicity score in a corresponding navigational plan assures that landmarks around a given street crossing will be readily identifiable by a human user. In contrast to prior art approaches, where visual elements associated with landmarks typically provide an indication of direction, the embodiments use visual elements for the sake of identifying landmarks. Additionally, the street crossing simplicity score is particularly useful in determining the amount of time required to traverse a given street crossing, and use of the street crossing simplicity score in generating a navigational plan results in a plan optimizing both ease of street crossing traversal and total travel time.
[0112] Finally, the embodiments disclose a robust computer-implemented circuit for determining landmark saliency and street crossing simplicity scores. By comparing a visual element match score with a suitably selected threshold, the capability to mimic human visual acuity is achieved. That is, by setting the threshold at an appropriate level, there is reasonably good assurance a human user will be able to recognize and identify associated landmarks referenced in a navigational plan. Additionally, by accounting for the user recognition limit (“distance”) and cognitive cues (by way of assigning cognitive scores with the circuit), the capability of either the landmark saliency score or the street crossing simplicity score to reflect a human user's ability to identify a corresponding landmark is further enhanced.
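The threshold comparison described above can be sketched minimally. The 0.8 default is an illustrative placeholder; the disclosure says only that the threshold is suitably selected to mimic human visual acuity.

```python
def landmark_recognizable(match_score, threshold=0.8):
    """Decide whether a recognized visual element counts as identifying
    a landmark, by comparing the visual element match score with a
    selected threshold. A score at or above the threshold is taken to
    mean a human user could also recognize the landmark.
    """
    return match_score >= threshold


# A strong match passes; a weak match is excluded from the plan.
landmark_recognizable(0.93)  # True
landmark_recognizable(0.41)  # False
```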
3. General
[0113] The foregoing description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. The broad teachings of the disclosure may be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure. Further, although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure may be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of one or more embodiments with one another remain within the scope of this disclosure.
[0114] It will be appreciated that various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. For example, the methods above for computing a landmark saliency score or a street crossing simplicity score may be combined to operate together to generate travel directions. Also, various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art and are also intended to be encompassed by the following claims.