IMAGE PROCESSING SYSTEMS AND/OR METHODS
20170351713 · 2017-12-07
Abstract
The present invention provides a method (100,200) for identifying, retrieving and/or processing one or more images (12.sub.n) from one or more source network locations (14.sub.n) for display at one or more predetermined target network locations (16.sub.n). The method includes the steps of: acquiring an address (36.sub.n) for each of the one or more source network locations (14.sub.n); perusing data available at each of the one or more source network locations (14.sub.n) to identify one or more images (12.sub.n) suitable for display at the one or more target network locations (16.sub.n); retrieving any images (12.sub.n) identified as being suitable for display at the one or more target network locations (16.sub.n); processing the retrieved images (12.sub.n), as required or desired, in order to adapt the images (12.sub.n) for display at the one or more target network locations (16.sub.n); and, selectively displaying the retrieved and/or processed image or images (12.sub.n) at the one or more target network locations (16.sub.n). Also provided is an associated system (10) for use with the method (100,200) of the invention.
Claims
1. A method for identifying, retrieving and/or processing one or more images from one or more source network locations for display at one or more predetermined target network locations, the method including the steps of: acquiring an address for each of the one or more source network locations; perusing data available at each of the one or more source network locations to identify one or more images suitable for display at the one or more target network locations; retrieving any images identified as being suitable for display at the one or more target network locations; processing the retrieved images, as required or desired, in order to adapt the images for display at the one or more target network locations; and, selectively displaying the retrieved and/or processed image or images at the one or more target network locations.
2. The method of claim 1, wherein the step of acquiring an address for each of the one or more source network locations includes: performing a network and/or database search in response to a search query; identifying one or more source network locations that contain data related to the search query; and, obtaining at least the address for each of the one or more source network locations that were identified as part of the network and/or database search.
3. The method of claim 2, further including the step of: obtaining and/or compiling text-based search results data from/for each of the one or more source network locations that were identified as part of the network and/or database search.
4. The method of claim 3, wherein the step of perusing data available at each of the one or more source network locations to identify one or more images suitable for display at the one or more target network locations includes: utilising the acquired address or addresses to send network crawlers or algorithmic commands to each of the one or more source network locations to identify and analyse any available images for suitability for display at the one or more target network locations.
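By way of a non-limiting sketch, the perusal step of claim 4 — visiting a source network location and identifying its available images — could be approximated with a markup parser that collects image references. The patent does not prescribe any implementation; the class and function names below are illustrative, and a production crawler would add fetching, politeness rules and error handling.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImageTagCollector(HTMLParser):
    """Collects the src attributes of <img> tags found in a page's markup."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths against the source location's address
                self.image_urls.append(urljoin(self.base_url, src))

def find_candidate_images(html, base_url):
    """Return every image address referenced by the given page markup."""
    collector = ImageTagCollector(base_url)
    collector.feed(html)
    return collector.image_urls
```

The returned addresses would then be passed to the suitability analysis of claim 7 before any retrieval takes place.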
5. The method of claim 4, further including the step of: obtaining and/or compiling text-based data associated with one or more images identified and analysed at each of the one or more source network locations.
6. The method of claim 5, wherein the text-based data associated with the one or more images identified and analysed at each of the one or more source network locations includes: text-based data extracted from metadata of the one or more images; text-based data associated with and displayed alongside the one or more images at their respective one or more source network locations; and/or, text-based data extracted from metadata contained within modules, fields, graphic tiles, blocks or regions provided at the respective one or more source network locations.
7. The method of claim 1, wherein the step of identifying one or more images suitable for display at the one or more target network locations includes one or more of the following processes: utilising advanced data mining, deep learning, machine learning and/or artificial intelligence to make informed decisions about the existence and suitability of any images available at each source network location; mining source code data and/or embedded link data available at each source network location to determine the size and order of any available images in order to make decisions about the most appropriate or suitable image or images available at each source network location; utilising individual or aggregated user data to make determinations about the most appropriate or suitable image or images available at each source network location; ignoring images of a predetermined and/or unusual shape and/or size; recognising any advertisements and/or third party embedded logos at each source network location and ignoring any images associated with the/those advertisement/third party logos in favour of the selection of other images available at each source network location; utilising one or more image tagging protocols to determine the existence and suitability of any images available at each source network location; scanning and/or analysing metadata of any available image or images to determine the most appropriate or suitable image or images available at each source network location; and/or, analysing and comparing the characteristics of any available images to that of the characteristics of offensive images to make determinations about the most appropriate or suitable image or images available at each source network location.
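Several of the suitability processes listed in claim 7 — ignoring images of unusual shape or size, and skipping advertisements or third-party logos — can be sketched as a simple heuristic filter. All thresholds and marker strings below are illustrative assumptions, not values taken from the patent, and the crude substring match stands in for the more advanced data-mining and machine-learning approaches the claim contemplates.

```python
def is_suitable(width, height, alt_text="", url=""):
    """Heuristic filter in the spirit of claim 7: reject images whose shape,
    size, or associated text suggests they are icons or advertisements."""
    if width < 100 or height < 100:
        return False  # too small to serve as a meaningful result image (assumed threshold)
    aspect = width / height
    if aspect > 4 or aspect < 0.25:
        return False  # unusual shape, e.g. banner strips (assumed threshold)
    ad_markers = ("sponsor", "banner", "advert", "logo")
    text = (alt_text + " " + url).lower()
    if any(marker in text for marker in ad_markers):
        return False  # likely an advertisement or third-party logo (crude substring check)
    return True
```

An image passing this filter would still be subject to the further metadata and offensive-content analyses the claim describes.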
8. The method of claim 1, wherein the step of retrieving any images identified as being suitable for display at the one or more target network locations includes: selectively compressing or reducing the size of the image or images prior to or during retrieval so as to reduce computational overhead or bandwidth usage.
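The size reduction of claim 8 reduces to straightforward arithmetic: scale both dimensions by the same factor so the longest edge fits a cap, preserving aspect ratio. The 640-pixel cap below is an assumed default, not a figure from the patent.

```python
def scaled_dimensions(width, height, max_edge=640):
    """Compute reduced dimensions that preserve aspect ratio, per the
    bandwidth-saving retrieval of claim 8. max_edge is an assumed cap."""
    longest = max(width, height)
    if longest <= max_edge:
        return width, height  # already small enough; no resampling needed
    scale = max_edge / longest
    return max(1, round(width * scale)), max(1, round(height * scale))
```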
9. The method of claim 5, wherein if it is determined that there is no suitable image or images available at one or more of the source network locations then the method further includes the step of: obtaining and/or generating a predetermined image or images for each of those source network locations so that the predetermined image or images may be displayed at the one or more target network locations.
10. The method of claim 1, wherein if it is determined that one or more suitable moving images are available at one or more of the source network locations then the method further includes the steps of: acquiring the identification string or source location details for each of the moving images; obtaining and/or generating a thumbnail or other suitable image for each of the moving images for display at the one or more target network locations; and, utilising the acquired identification string or source location details to enable each of the moving images or a portion thereof to be selectively or automatically played at the one or more target network locations by way of selective or automatic activation of the respective thumbnail or other suitable image.
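Acquiring the identification string of claim 10 might, for common video hosts, mean parsing it out of the moving image's address. The `v` query parameter and `/embed/` path conventions below are assumptions about typical hosts; the patent does not fix any URL format.

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url):
    """Pull an identification string from a video URL so the moving image can
    later be replayed upon activation of its thumbnail (claim 10)."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    if "v" in params:
        return params["v"][0]  # e.g. .../watch?v=<id> style addresses
    parts = [p for p in parsed.path.split("/") if p]
    if "embed" in parts:
        idx = parts.index("embed")
        if idx + 1 < len(parts):
            return parts[idx + 1]  # e.g. .../embed/<id> style addresses
    return None  # no recognisable identification string
```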
11. The method of claim 1, wherein the step of processing the retrieved images, as required or desired, in order to adapt the images for display at the one or more target network locations includes one or more of the following processes: analysing the pixels of each image to determine the highest variation area of pixels, selecting a region of predetermined dimensions surrounding the highest pixel variation area, and then adapting each image by removing the portions of each image that are outside of the selected region; analysing the file name and/or metadata of each image in order to locate a specified predetermined pixel point which identifies a desired portion of the image that is to be used for display at the one or more target network locations, selecting a region of predetermined dimensions surrounding the specified predetermined pixel point, and then adapting each image by removing the portions of each image that are outside of the selected region; allowing one or more users to select a region of predetermined dimensions surrounding a desired area of each image, and then adapting each image by removing the portions of each image that are outside of the selected region; analysing one or more pixels of each image to determine whether or not an image contains areas of transparent or no pixels, and if it is determined that an image contains areas of transparent or no pixels, adapting the image by adding a predetermined contrasting background colour(s) and/or effect(s) to the image; and/or, analysing the pixels of any partially transparent images in order to determine the portion and/or size of the non-transparent pixels in relation to the total size of the image, selecting a region of predetermined dimensions surrounding the most appropriate portion of the image which contains non-transparent pixels, and then adapting each image by removing the portions of each image that are outside of the selected region.
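The first process of claim 11 — finding the highest-variation area of pixels and cropping to a region around it — can be sketched as a brute-force window scan over a grayscale intensity grid. A real system would use integral images or a vision library for speed; this direct version only illustrates the selection logic.

```python
def highest_variation_window(pixels, win):
    """Scan a grayscale image (list of rows of intensities) with a win x win
    window and return the (row, col) of the window with the greatest pixel
    variance -- the 'highest variation area' of claim 11."""
    rows, cols = len(pixels), len(pixels[0])
    best, best_pos = -1.0, (0, 0)
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            vals = [pixels[r + i][c + j] for i in range(win) for j in range(win)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var > best:
                best, best_pos = var, (r, c)
    return best_pos

def crop(pixels, top, left, win):
    """Adapt the image by removing everything outside the selected region."""
    return [row[left:left + win] for row in pixels[top:top + win]]
```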
12. The method of claim 11, wherein the predetermined contrasting background colour(s) and/or effect(s) that is added to one or more of the images determined to contain areas of transparent or no pixels is selected, generated and/or added by way of one or more of the following processes: analysing the non-transparent pixels of the respective image and generating and adding a contrasting coloured background, or drop shadow or visual effect, to the image which enhances the viewing experience of the non-transparent pixels of the image; mining source code data available at the source network location that corresponds to the respective image, and generating and adding a contrasting coloured background, or drop shadow or visual effect, to the image which corresponds to, or complements, a theme or dominant feature of other data residing at the source network location; and/or, analysing the file name and/or metadata of the respective image in order to locate specified predetermined background information which identifies a desired background colour(s), or drop shadow or visual effect that is to be used with that image, and generating and adding a contrasting coloured background, or drop shadow or visual effect, to the image which corresponds to that specified predetermined background information.
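One plausible reading of the first process of claim 12 — analysing the non-transparent pixels and generating a contrasting background — is to average the opaque pixels' colour and return its RGB complement. This is a deliberately simple stand-in; the claim also allows drop shadows, visual effects, and source-theme analysis.

```python
def contrasting_background(pixels):
    """Choose a background colour for a partially transparent image by
    averaging its opaque pixels and returning the complementary colour.
    pixels is a flat list of (r, g, b, a) tuples."""
    opaque = [(r, g, b) for r, g, b, a in pixels if a > 0]
    if not opaque:
        return (255, 255, 255)  # nothing visible; fall back to white (assumed default)
    n = len(opaque)
    avg = tuple(sum(ch) // n for ch in zip(*opaque))
    return tuple(255 - ch for ch in avg)  # simple RGB complement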
13. The method of claim 11, wherein the process of analysing the pixels or areas of any partially transparent images in order to determine the portion and/or size of the non-transparent pixels in relation to the total size of the image, selecting a region of predetermined dimensions surrounding the most appropriate portion of the image which contains non-transparent pixels, and then adapting each image by removing the portions of each image that are outside of the selected region, further includes one or both of the following steps: reducing the viewable area of the portion of the image that corresponds to the selected region, to a percentage smaller than the full width and/or height of the predetermined dimensions, so as to generate a border area around the non-transparent pixels of each image; and/or, centering the non-transparent pixel content within the selected region of predetermined dimensions prior to removing the portions of each image that are outside of the selected region.
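The border-and-centering steps of claim 13 amount to: find the bounding box of the non-transparent pixels, reserve a border fraction of the target region, scale the content to fit the remainder, and centre it. The 10% default border is an illustrative assumption.

```python
def centred_crop_box(pixels, region_w, region_h, border=0.1):
    """Find the bounding box of non-transparent pixels and compute the scale
    and offsets that centre the content, with a border, inside a region of
    the given dimensions (claim 13). pixels is a 2D grid of alpha values."""
    coords = [(r, c) for r, row in enumerate(pixels)
              for c, a in enumerate(row) if a > 0]
    if not coords:
        return None  # fully transparent image; nothing to place
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    content_h = max(rows) - min(rows) + 1
    content_w = max(cols) - min(cols) + 1
    # Usable area once the border fraction is reserved on every side
    usable_w = region_w * (1 - 2 * border)
    usable_h = region_h * (1 - 2 * border)
    scale = min(usable_w / content_w, usable_h / content_h)
    # Offsets that centre the scaled content within the target region
    off_x = (region_w - content_w * scale) / 2
    off_y = (region_h - content_h * scale) / 2
    return scale, off_x, off_y
```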
14. The method of claim 9, further including the step of: selectively and/or temporarily storing the retrieved and/or processed image or images, the obtained and/or generated predetermined image or images, the text-based search results data, the text-based data associated with the one or more images identified and analysed at each of the one or more source network locations, and/or data pertaining thereto, in at least one repository, so as to streamline future processing in instances where the same source network locations are identified as part of a future network and/or database search.
15. The method of claim 9, wherein the one or more target network locations include one or more network and/or database search applications or GUIs residing on one or more user operable terminals.
16. The method of claim 15, wherein the step of selectively displaying the retrieved and/or processed image or images at the one or more target network locations includes: selectively displaying the retrieved and/or processed image or images, and/or the obtained and/or generated predetermined image or images, within the one or more network and/or database search applications or GUIs after a network and/or database search has been performed.
17. The method of claim 16, wherein for each source network location that was identified as part of the network and/or database search, the retrieved and/or processed image or images, and/or the obtained and/or generated predetermined image or images, that correspond to that source network location are disposed within at least one activatable tile or region which when selectively or automatically activated links through to the respective source network location.
18. The method of claim 17, further including the step of: for each source network location that was identified as part of the network and/or database search, selectively displaying the obtained text-based search results data and/or the obtained text-based data associated with the one or more images identified and analysed at the source network location, alongside the corresponding retrieved and/or processed image or images, and/or the corresponding obtained and/or generated predetermined image or images, within the at least one activatable tile or region.
19. The method of claim 17, further including the step of: for each source network location that was identified as part of the network and/or database search, audibly conveying the obtained text-based search results data and/or the obtained text-based data associated with the one or more images identified and analysed at the source network location, upon request, or upon it being determined that a user is viewing the corresponding retrieved and/or processed image or images, and/or the corresponding obtained and/or generated predetermined image or images, disposed within the at least one activatable tile or region.
20. The method of claim 17, wherein upon selective or automatic activation of the at least one activatable tile or region corresponding to a selected source network location, network content available at that selected source network location is displayed alongside, and simultaneously with, at least selected ones of the activatable tiles or regions so that those activatable tiles or regions remain accessible to a user should they wish to access and view network content associated with a different source network location.
21. The method of claim 20, wherein the activatable tiles or regions are disposed within a region, sidebar or frame of the one or more network and/or database search applications or GUIs.
22. A non-transitory computer readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method for identifying, retrieving and/or processing one or more images from one or more source network locations for display at one or more predetermined target network locations, the method including the steps of: acquiring an address for each of the one or more source network locations; perusing data available at each of the one or more source network locations to identify one or more images suitable for display at the one or more target network locations; retrieving any images identified as being suitable for display at the one or more target network locations; processing the retrieved images, as required or desired, in order to adapt the images for display at the one or more target network locations; and, selectively displaying the retrieved and/or processed image or images at the one or more target network locations.
23. A system for identifying, retrieving and/or processing one or more images from one or more source network locations for display at one or more predetermined target network locations, the system including: one or more modules or applications for acquiring an address for each of the one or more source network locations and/or one or more modules, applications or functions for selectively activating one or more external modules or applications for returning an acquired address for each of the one or more source network locations; one or more modules or applications for perusing data available at each of the one or more source network locations and for identifying and retrieving one or more images suitable for display at the one or more target network locations; one or more modules or applications for processing the retrieved images, as required or desired, in order to adapt the images for display at the one or more target network locations; and, one or more modules or applications for selectively displaying the retrieved and/or processed image or images at the one or more target network locations.
24. A method for selecting a desired region of an image to be displayed at one or more predetermined target network locations, the image having specified predetermined pixel point information included within its file name and/or metadata which identifies the desired region of the image that is to be used for display at the one or more target network locations, the method including the steps of: analysing the file name and/or metadata of the image in order to locate the specified predetermined pixel point information; selecting a region of predetermined dimensions surrounding, or adjacent to, the specified predetermined pixel point information; and, adapting the image by removing the portions of the image that are outside of the selected region so that only the desired region of the image may then be displayed at the one or more predetermined target network locations.
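Claim 24 leaves open how the predetermined pixel point is encoded in the file name or metadata. As a sketch, assume a hypothetical `_fp-<x>x<y>` file-name suffix (a convention invented here purely for illustration), then clamp the surrounding region so it stays within the image bounds.

```python
import re

def focal_point_from_name(filename):
    """Read a focal pixel point encoded in a file name, e.g.
    'product_fp-120x80.png'. The '_fp-<x>x<y>' suffix is a hypothetical
    convention for this sketch; claim 24 does not fix a format."""
    m = re.search(r"_fp-(\d+)x(\d+)\.", filename)
    return (int(m.group(1)), int(m.group(2))) if m else None

def crop_box_around(point, img_w, img_h, box_w, box_h):
    """Region of predetermined dimensions surrounding the point, clamped so
    the selected region stays inside the image."""
    x, y = point
    left = min(max(x - box_w // 2, 0), max(img_w - box_w, 0))
    top = min(max(y - box_h // 2, 0), max(img_h - box_h, 0))
    return left, top, left + box_w, top + box_h
```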
25. A method for generating and adding a desired contrasting background colour(s) and/or effect to a partially transparent image, the partially transparent image having specified predetermined background information included within its file name and/or metadata which identifies the desired contrasting background colour(s) and/or effect, the method including the steps of: analysing the file name and/or metadata of the image in order to locate the specified predetermined background information; and, generating and adding a contrasting coloured background and/or effect to the image which corresponds to that specified predetermined background information.
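Similarly, claim 25 leaves the encoding of the predetermined background information open. A sketch under an assumed, hypothetical `_bg-<rrggbb>` file-name convention:

```python
import re

def background_from_name(filename):
    """Read a desired background colour encoded in a file name, e.g.
    'logo_bg-ff8800.png'. The '_bg-<rrggbb>' hex suffix is a hypothetical
    convention for this sketch; claim 25 leaves the encoding open."""
    m = re.search(r"_bg-([0-9a-fA-F]{6})\.", filename)
    if not m:
        return None  # no background information present in the name
    h = m.group(1)
    # Split the hex string into its red, green and blue components
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))
```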
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0033] In order that the invention may be more clearly understood and put into practical effect there shall now be described in detail preferred constructions of an image processing system and/or method made in accordance with the invention. The ensuing description is given by way of non-limitative examples only and is with reference to the accompanying drawings, wherein:
MODES FOR CARRYING OUT THE INVENTION
[0045] In the following detailed description of the invention, reference is made to the drawings in which like reference numerals refer to like elements throughout, and which are intended to show by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilised and that procedural and/or structural changes may be made without departing from the spirit and scope of the invention.
[0046] Unless specifically stated otherwise as apparent from the following discussion, it is to be appreciated that throughout the description, discussions utilising terms such as “processing”, “computing”, “calculating”, “acquiring”, “transmitting”, “receiving”, “retrieving”, “identifying”, “determining”, “manipulating” and/or “displaying”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0047] Discussions regarding apparatus for performing the operations of the invention are provided herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
[0048] The software modules, engines or applications, and displays presented or discussed herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialised apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
[0049] A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
[0051] In the preferred embodiments shown in the drawings, system 10 is specifically configured for identifying, retrieving and processing images 12.sub.n for display within a search results screen or page of a search engine GUI 18.sub.n after a search has been performed. As will be described in further detail below, the retrieved images 12.sub.n may be displayed (within search engine GUI 18.sub.n) alongside corresponding text-based or other search results data (see, for example,
[0052] System 10 includes at least one network server 24.sub.n, which in the present embodiment is a search engine or network search service or provider 24.sub.n, and which includes at least one computing device 26.sub.n, which may host and/or maintain a plurality of tools or applications (not shown, but which may be, for example, software and/or hardware modules or applications, etc.) and databases/storage devices 28.sub.n, that together at least provide a means of searching communications network(s) 22.sub.n, but which may also provide a means of identifying, retrieving and/or processing one or more images 12.sub.n (and any desired available associated data, e.g. text-based data associated with an image(s) 12.sub.n, as will be described in further detail below), from one or more source network locations 14.sub.n, for display at one or more predetermined target network locations 16.sub.n, such as, for example, within one or more search engine GUI's 18.sub.n installed on a user operable terminal 20.sub.n, as shown in
[0053] As will be described in further detail below with reference to the preferred flow diagrams of
[0054] Network server 24.sub.n is configured to receive/transmit data, including at least search request and results data, from/to at least one user operable terminal 20.sub.n, via communications network 22.sub.n. The term “user operable terminal(s) 20.sub.n” refers to any suitable type of computing device or software application, etc., capable of transmitting, receiving, conveying and/or displaying data as described herein, including, but not limited to, a mobile or cellular phone, a smart phone, an App (e.g. iOS or Android) for a smart phone, a smart watch or other wearable electronic device, an augmented reality device (such as, for example, an augmented reality headset, eyeglasses or contact lenses, etc.), a connected Internet of Things (“IoT”) device; a Personal Digital Assistant (PDA), and/or any other suitable computing device, as for example a server, personal, desktop, tablet, or notebook computer.
[0055] As already discussed above, network server 24.sub.n is designed to at least perform search functions so as to, for example, retrieve text-based search results data from, and along with, details of associated source network locations 14.sub.n (e.g. the URL of each source network location 14.sub.n) available via communications network 22.sub.n, in response to search requests submitted via a user operable terminal 20.sub.n (either directly, or by way of, for example, a search engine application programming interface, hereinafter simply referred to as “API(s)”), and to return the search results data, etc., to user operable terminal(s) 20.sub.n. Should network server 24.sub.n side image 12.sub.n processing be desired, then network server 24.sub.n would also be configured to identify, retrieve, analyse and/or process (if necessary) images 12.sub.n (and any desired available associated data) before providing those images 12.sub.n (and any desired available associated data) to user operable terminal(s) 20.sub.n.
[0056] As is shown in
[0057] User operable terminals 20.sub.n are each configured to be operated by at least one user 32.sub.n of system 10. The term “user 32.sub.n” refers to any person in possession of, or stationed at, at least one user operable terminal 20.sub.n who is able to operate the user operable terminal 20.sub.n in order to transmit/receive data, including a search request and/or resultant search results data, and/or display/retrieve (at least) one or more images 12.sub.n within a search engine GUI(s) 18.sub.n installed on the user operable terminal 20.sub.n. User operable terminals 20.sub.n may include various types of software and/or hardware (not shown) required for capturing, transmitting, receiving, analysing, processing, conveying and/or displaying data and images 12.sub.n to/from network server 24.sub.n, source network locations 14.sub.n, and external server(s) 30.sub.n, via communications network 22.sub.n, in accordance with system 10 including, but not limited to: web-browser or other GUI 18.sub.n application(s) or App(s) (e.g. one or more search engine GUIs 18.sub.n), which could simply be an operating system installed on user terminal 20.sub.n that is capable of actively transmitting, receiving, conveying and/or displaying data on a screen without the need of a web-browser GUI, etc.; a plurality of tools or applications (not shown, but which may be, for example, software and/or hardware modules or applications, etc.) that provide a means of identifying, retrieving, analysing and/or processing one or more images 12.sub.n (and any desired available associated data, e.g. text-based data associated with an image(s) 12.sub.n, as will be described in further detail below), from one or more source network locations 14.sub.n, for display within a search engine GUI(s) 18.sub.n after search results data is returned by way of, for example, network server 24.sub.n; monitor(s) (touch sensitive or otherwise); GUI pointing device(s); keyboard(s); sound capture device(s) (e.g.
one or more microphone devices for capturing a user's voice commands, etc.); sound emitting device(s) (e.g. one or more loudspeakers and/or text to speech convertors, etc., for audibly conveying search results data and/or any text-based data associated with image(s) 12.sub.n); gesture capture device(s) (e.g. one or more cameras for capturing a user's gesture commands, etc.); augmented reality device(s); smart watch(es); and/or, any other suitable data acquisition, transmission, conveying and/or display device(s) (not shown).
[0058] A search request may be captured by a user operable terminal 20.sub.n directly by way of, e.g. a user 32.sub.n utilising their finger(s), thumb(s), a keyboard, a GUI pointing device(s), etc., or a voice command, physical motion or gesture, etc. Alternatively, a search request may be captured by way of a user 32.sub.n utilising a user interface (not shown), e.g. a smart watch, augmented reality device, etc., connected to the user operable terminal 20.sub.n. A search request may also not involve any user 32.sub.n directed input at all, but instead could be submitted to network server 24.sub.n, as desired by a user operable terminal 20.sub.n itself, based on algorithms, e.g. predictive algorithms, residing on the user operable terminal(s) 20.sub.n, which may determine that a user 32.sub.n has an interest in a particular topic or subject matter, by way of, for example, analysing a user's 32.sub.n behaviour or their geographical location. Similarly, one or more images 12.sub.n (and any desired available associated data), and possibly other search results data associated therewith, may be displayed to a user 32.sub.n by way of one or more screens or monitors of a user operable terminal 20.sub.n, or may be displayed to the user 32.sub.n by way of a user interface (not shown), e.g. a smart watch, augmented reality device, etc., connected to the user operable terminal 20.sub.n. In yet a further embodiment, (at least) the one or more images 12.sub.n may be displayed to a user 32.sub.n by way of one or more screens or monitors of a user operable terminal 20.sub.n (or may be displayed to the user 32.sub.n by way of a user interface (not shown), e.g.
a smart watch, augmented reality device, etc., connected to the user operable terminal 20.sub.n), whilst the search results data and/or any text-based data associated with image(s) 12.sub.n may be audibly conveyed to the user 32.sub.n by way of one or more sound emitting device(s) of (or connected to) the user operable terminal 20.sub.n. For example, and as will be described in further detail below, the one or more image(s) 12.sub.n retrieved from one or more source network locations 14.sub.n, may be displayed (by way of, for example, an augmented reality device(s), etc.) to a user 32.sub.n by way of the exemplary search engine GUI 18.sub.n of
[0059] Network server 24.sub.n is configured to communicate with user operable terminals 20.sub.n and external server(s) 30.sub.n via any suitable communications connection or network 22.sub.n (hereinafter referred to simply as a “network(s) 22.sub.n”). External server(s) 30.sub.n is/are configured to transmit and receive data to/from network server 24.sub.n and user operable terminals 20.sub.n, via network(s) 22.sub.n. User operable terminals 20.sub.n are configured to transmit, receive and/or display data and images 12.sub.n from/to network server 24.sub.n, source network locations 14.sub.n, and external server(s) 30.sub.n, via network(s) 22.sub.n. Each user operable terminal 20.sub.n and external server 30.sub.n may communicate with network server 24.sub.n (and each other, where applicable) via the same or a different network 22.sub.n. Suitable networks 22.sub.n include, but are not limited to: a Local Area Network (LAN); a Personal Area Network (PAN), as for example an Intranet; a Wide Area Network (WAN), as for example the Internet; a Virtual Private Network (VPN); a Wireless Application Protocol (WAP) network, or any other suitable telecommunication network, such as, for example, a GSM, 3G, 4G, etc., network; Bluetooth network; and/or any suitable WiFi network (wireless network). Network server 24.sub.n, external server(s) 30.sub.n, and/or user operable terminal 20.sub.n, may include various types of hardware and/or software necessary for communicating with one another via network(s) 22.sub.n, and/or additional computers, hardware, software, such as, for example, routers, switches, access points and/or cellular towers, etc. (not shown), each of which would be deemed appropriate by persons skilled in the relevant art.
[0060] For security purposes, various levels of security, including hardware and/or software, such as, for example, firewalls, tokens, two-step authentication (not shown), etc., may be used to prevent unauthorized access to, for example, network server 24.sub.n and/or external server(s) 30.sub.n. Similarly, network server 24.sub.n and/or external server(s) 30.sub.n may utilise security (e.g. hardware and/or software—not shown) to validate access by user operable terminals 20.sub.n, or when exchanging information between respective servers 24.sub.n, 30.sub.n. It is also preferred that network server 24.sub.n performs validation functions to ensure the integrity of data transmitted between external server(s) 30.sub.n and/or user operable terminals 20.sub.n. A person skilled in the relevant art will appreciate such technologies and the many options available to achieve a desired level of security and/or data validation, and as such a detailed discussion of same will not be provided. Accordingly, the present invention should be construed as including within its scope any suitable security and/or data validation technologies as would be deemed appropriate by a person skilled in the relevant art.
[0061] Communication and/or data transfer between network server 24.sub.n, external server(s) 30.sub.n and/or user operable terminals 20.sub.n, may be achieved utilising any suitable communication, software architectural style, and/or data transfer protocol, such as, for example, FTP, Hypertext Transfer Protocol (HTTP), Representational State Transfer (REST); Simple Object Access Protocol (SOAP); Electronic Mail (hereinafter simply referred to as “e-mail”), Unstructured Supplementary Service Data (USSD), voice, Voice over IP (VoIP), Transmission Control Protocol/Internet Protocol (hereinafter simply referred to as “TCP/IP”), Short Message Service (hereinafter simply referred to as “SMS”), Multimedia Message Service (hereinafter simply referred to as “MMS”), any suitable Internet based message service, any combination of the preceding protocols and/or technologies, and/or any other suitable protocol or communication technology that allows delivery of data and/or communication/data transfer between network server 24.sub.n, external server(s) 30.sub.n and/or user operable terminals 20.sub.n, in accordance with system 10. Similarly, any suitable data transfer or file format may be used in accordance with system 10, including (but not limited to): text; a delimited file format, such as, for example, a CSV (Comma-Separated Values) file format; a RESTful web services format; a JavaScript Object Notation (JSON) data transfer format; a PDF (Portable Document Format) format; and/or, an XML (Extensible Mark-Up Language) file format.
[0062] Access to network server 24.sub.n and the transfer of information between network server 24.sub.n, source network locations 14.sub.n, external server(s) 30.sub.n and/or user operable terminals 20.sub.n, may be intermittently provided (for example, upon request), but is preferably provided “live”, i.e. in real-time.
[0063] As already outlined above, system 10 is designed to provide an improved process for identifying, retrieving and processing one or more images 12.sub.n (and possibly any desired available associated data, e.g. text-based data associated with an image(s) 12.sub.n, as will be described in further detail below) from one or more source network locations 14.sub.n for display at one or more predetermined target network locations 16.sub.n (preferably within a search results screen or page of a search engine GUI 18.sub.n installed on a user operable terminal 20.sub.n after a search has been performed). To do this, system 10 provides various novel means for identifying and/or retrieving images 12.sub.n (and any desired available associated data) as required, and for analysing and/or processing/manipulating (if necessary) those images 12.sub.n for display within a search engine GUI 18.sub.n. All of this preferably occurs substantially in real-time.
[0064] Again as already briefly outlined above, network server 24.sub.n, user operable terminal(s) 20.sub.n and/or external server(s) 30.sub.n, may host and/or maintain a plurality of applications (not shown, but which may be, for example, software and/or hardware modules or applications, etc.) and database(s)/storage device(s) 28.sub.n (although only network server 24.sub.n database(s)/storage device(s) 28.sub.n are shown, others may be utilised where required) that enable multiple aspects of system 10 to be provided over network(s) 22.sub.n. These module(s) or application(s) (not shown) and database(s)/storage device(s) 28.sub.n may include, but are not limited to: one or more network server 24.sub.n and/or external server(s) 30.sub.n based database(s)/storage device(s) 26.sub.n for storing (whether temporarily or permanently) and/or indexing web data for the purpose of streamlining the provision of at least text-based search results data (and associated source network locations 14.sub.n addresses, e.g. URLs) in response to search requests submitted via user operable terminals 20.sub.n; one or more module(s) or application(s) for capturing search requests input via, or generated by, a user operable terminal 20.sub.n (or one or more user interfaces connected thereto), for submitting the search request to network server 24.sub.n (via network(s) 22.sub.n) for processing (which may be achieved by sending the search request to search engine database(s)/storage device(s) 28.sub.n either directly, or by way of a search engine API, etc.), and for retrieving/receiving the resultant search results data (e.g. 
at least text-based search results data and the corresponding URLs of the source network locations 14.sub.n) after the search has been performed; one or more module(s) or application(s) (such as, for example, web-crawlers, algorithmic commands, or the like) for scanning source network locations 14.sub.n identified in response to a search, and for identifying and retrieving one or more suitable image(s) 12.sub.n (and any desired available associated data) from each source network location 14.sub.n (as already discussed above, this/these such module(s) or application(s) may reside on network server 24.sub.n, user operable terminal(s) 20.sub.n and/or external server(s) 30.sub.n, as desired, depending on where such processing is to be performed (e.g. server 24.sub.n/30.sub.n side or user operable terminal 20.sub.n side)); one or more module(s) or application(s) for analysing and processing (if necessary) the retrieved images 12.sub.n, and for selecting which image or images 12.sub.n is/are to be displayed within search engine GUI(s) 18.sub.n (as already discussed above, this/these such module(s) or application(s) may reside on network server 24.sub.n, user operable terminal(s) 20.sub.n and/or external server(s) 30.sub.n, as desired, depending on where such processing is to be performed (e.g. server 24.sub.n/30.sub.n side or user operable terminal 20.sub.n side)); one or more module(s) or application(s) for generating or acquiring a thumbnail image(s) 12.sub.n and for locating and retrieving source moving image 12.sub.n file links (e.g. video file links, such as, for example, YouTube identification strings) in response to moving images 12.sub.n being located at source network locations 14.sub.n, for the purpose of enabling moving images 12.sub.n, or a portion thereof (e.g. 
a preview of the video file, etc.), to be played within search engine GUI(s) 18.sub.n automatically, or as desired by a user 32.sub.n (this/these such module(s) or application(s) may reside on network server 24.sub.n, user operable terminal(s) 20.sub.n and/or external server(s) 30.sub.n, as desired, depending on where such processing is to be performed (e.g. server 24.sub.n/30.sub.n side or user operable terminal 20.sub.n side)); one or more module(s) or application(s) and database(s) or storage device(s) (e.g. 28.sub.n) for generating and/or storing (whether temporarily or permanently) image(s) 12.sub.n for use in situations where it is determined that no suitable image(s) 12.sub.n is/are available at a source network location 14.sub.n, and/or for storing (whether temporarily or permanently) retrieved and/or processed image(s) 12.sub.n (and any associated data) for future use (this/these such module(s), application(s), database(s) and/or storage device(s) may reside on network server 24.sub.n, user operable terminal(s) 20.sub.n and/or external server(s) 30.sub.n, as desired, depending on where such processing is to be performed (e.g. server 24.sub.n/30.sub.n side or user operable terminal 20.sub.n side)); and/or, one or more user operable terminal 20.sub.n based module(s) or application(s) for generating and displaying the selected image(s) 12.sub.n within search engine GUI(s) 18.sub.n, along with any desired or required associated data (e.g. text-based search results data, URLs, and/or associated data retrieved along with the image(s) 12.sub.n, etc.) after a search has been performed (the image(s) 12.sub.n and any associated data preferably being presented in the form of an activatable tile or region 38.sub.n that when selected or otherwise activated links through to the respective source network location 14.sub.n).
[0065] Although separate modules, applications or engines (not shown) and database(s)/storage device(s) (e.g. 28.sub.n) have been outlined (each with reference to one or more of network server 24.sub.n, external server(s) 30.sub.n and user operable terminal(s) 20.sub.n), each for effecting specific preferred aspects (or combinations thereof) of system 10, it should be appreciated that any number of modules/applications/engines/databases/storage devices for performing any one, or any suitable combination of, aspects of system 10, could be provided (wherever required) in accordance with the present invention. A person skilled in the relevant art will appreciate many such module(s)/application(s)/engine(s) and database(s)/storage device(s) embodiments, modifications, variations and alternatives therefor, and as such the present invention should not be construed as limited to any of the examples provided herein and/or described with reference to the drawings.
[0066] In order to provide a more detailed understanding of the operation of preferred system 10 of the present invention, reference will now be made to the exemplary GUIs 18.sub.n (e.g. search engine GUI(s) 18.sub.n, as shown) shown in
[0067] Preferred search engine GUIs 18.sub.n of
[0068] In
[0069] As can be seen in
[0070] A flow diagram illustrating a first preferred image processing method 100 is shown in
[0071] As can be seen in
[0072] Upon user operable terminal 20.sub.n receiving the search results data 34.sub.n, 36.sub.n, in response to the search request (either upon receiving all search results data 34.sub.n, 36.sub.n, or upon receiving some of the search results data 34.sub.n, 36.sub.n, i.e. commencing immediately upon receiving some of the data and continuing simultaneously whilst the remaining data is being retrieved), method 100 may continue at step 106, whereat user operable terminal 20.sub.n then sends web-crawlers (not shown), algorithmic commands (not shown) or the like, to each of the source network locations 14.sub.n (i.e. network addresses or URLs 36.sub.n, etc.) that were identified as part of the search in an attempt to identify and retrieve one or more suitable image(s) 12.sub.n (and/or any desired available associated data 34.sub.n—as will be described in further detail below) from each source network location 14.sub.n. Thereafter, at step 108, it is checked whether or not one or more suitable image(s) 12.sub.n (and/or any desired associated data 34.sub.n) is/are available at each source network location 14.sub.n.
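The crawling of each source network location (as at steps 106 and 108) can be sketched with the Python standard library alone: the HTML available at a source network location is scanned for image tags and each image address is resolved against the page address. This is an illustrative sketch only; the function and class names are assumptions of this description, and a production crawler would additionally apply the suitability techniques discussed below.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _ImgCollector(HTMLParser):
    """Collects the src attribute of every <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def extract_image_urls(page_html, base_url):
    """Return absolute URLs of candidate images found in a page's HTML,
    resolving relative paths against the source network location address."""
    collector = _ImgCollector()
    collector.feed(page_html)
    return [urljoin(base_url, src) for src in collector.sources]
```

In use, the page HTML would first be fetched (for example with `urllib.request`) from each URL 36.sub.n returned with the search results, and the resulting candidate list passed on to the suitability checks of step 108.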
[0073] Preferred processes/techniques for identifying one or more suitable image(s) 12.sub.n (and/or any desired available associated data 34.sub.n) at each source network location 14.sub.n (in accordance with, e.g., steps 106 & 108, of preferred method 100) may include, but are not limited to: utilising advanced data mining, deep learning, machine learning and/or artificial intelligence processes as part of the scanning/crawling of source network location(s) 14.sub.n so as to make informed decisions about the existence and suitability of any image(s) 12.sub.n (and/or associated data 34.sub.n) available at the source network location(s) 14.sub.n; mining Hyper Text Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS), embedded link data (such as, for example, YouTube embedded link data), or other types of code available at source network location(s) 14.sub.n, to determine the size and order of image(s) 12.sub.n on that/those source network location(s) 14.sub.n, and utilising the acquired data to make decisions about the most appropriate or suitable image(s) 12.sub.n available at the source network location(s) 14.sub.n; utilising individual or aggregated user 32.sub.n data (e.g. a user's 32.sub.n browsing history or preferences and/or settings configured at an account or user operable terminal(s) 20.sub.n level, etc.) to make determinations about the most appropriate image(s) 12.sub.n suitable for display for an individual user 32.sub.n, or sub-group of users 32.sub.n, etc. 
(for example, if it is known that a particular user 32.sub.n has historically or recently been searching for information related to ‘small cars’ and an automotive related source network location(s) 14.sub.n is retrieved in response to a search query, system 10 or method 100 may favour the display of ‘small car’ image(s) 12.sub.n over ‘large car’ image(s) 12.sub.n from the/those source network location(s) 14.sub.n—thus tailoring the display of image(s) 12.sub.n to suit the predicted needs of users 32.sub.n, etc.); ignoring image(s) 12.sub.n of unusual shape or size, such as, for example, image(s) 12.sub.n smaller than a certain pixel height or width, very thin image(s) 12.sub.n, or very long image(s) 12.sub.n that may not be readily or effectively displayed within the predetermined image 12.sub.n display area(s) provided within search engine GUI(s) 18.sub.n; recognising advertisement(s) and/or third party embedded logo(s) (e.g. PayPal, VISA, AMEX, or other payment, security, web designer third party logo(s), etc.) 
at source network location(s) 14.sub.n and ignoring the image(s) 12.sub.n associated with the/those advertisement(s)/third party logo(s) in favour of the display of other image(s) 12.sub.n (if any) available at the source network location(s) 14.sub.n; utilising image 12.sub.n tagging protocols, such as, for example, commonly accepted tagging profiles like Facebook's Open Graph Mark-Up protocol, or Twitter's tagging protocol, or other known or proprietary protocols, to determine the existence and suitability of any image(s) 12.sub.n available at the source network location(s) 14.sub.n; scanning or analysing available image(s) 12.sub.n metadata to determine the suitability of image(s) 12.sub.n (and/or associated data 34.sub.n) available at source network location(s) 14.sub.n (should such metadata not be available, then large image(s) 12.sub.n, or moving image(s) 12.sub.n, etc., may be favoured over other image(s) 12.sub.n available at a source network location(s) 14.sub.n); and/or, utilising real time image(s) 12.sub.n processing to compare the characteristics of available/retrieved image(s) 12.sub.n to the characteristics of offensive image(s) 12.sub.n and selectively excluding image(s) 12.sub.n from display that may be likely to be offensive to users 32.sub.n (e.g. determining and ignoring image(s) 12.sub.n which include nudity, pornography and/or violent elements, themes, etc.—the exclusion of such image(s) 12.sub.n could be determined based on settings associated with a user 32.sub.n, or user operable terminal(s) 20.sub.n, e.g. based on parental controls, etc.). A skilled person will appreciate such preferred methods/techniques for identifying suitable image(s) 12.sub.n (and/or any desired associated data 34.sub.n) available at source network location(s) 14.sub.n, along with alternatives, variations or modifications thereof, and as such, the present invention should not be construed as limited to any one or more of the specific examples provided herein.
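As one illustration of the dimension-based heuristics above (ignoring images smaller than a certain pixel height or width, or of very thin or very long shape, and favouring large images where no metadata is available), the following Python sketch filters candidate images by size and aspect ratio. The function names and threshold values are assumptions chosen for illustration only, not values prescribed by the method.

```python
def is_suitable(width, height, min_side=120, max_aspect=4.0):
    """Reject images too small or too elongated to be effectively
    displayed within a predetermined display area of the GUI."""
    if width < min_side or height < min_side:
        return False  # smaller than a certain pixel height or width
    aspect = max(width, height) / min(width, height)
    return aspect <= max_aspect  # very thin/very long images are ignored

def pick_best(candidates):
    """From (url, width, height) tuples, keep the suitable images and
    favour the largest by area, mirroring the heuristic that large
    images are favoured when no better signal is available."""
    suitable = [c for c in candidates if is_suitable(c[1], c[2])]
    return max(suitable, key=lambda c: c[1] * c[2], default=None)
```

In practice such a filter would sit alongside the other signals described above (tagging protocols, metadata, user preferences), rather than act alone.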
[0074] If at step 108 it is determined that one or more suitable image(s) 12.sub.n (and/or any desired associated data 34.sub.n) are available at a/some/all source network location(s) 14.sub.n, then preferred method 100 continues at step 110, whereat the one or more suitable image(s) 12.sub.n (and/or associated data 34.sub.n) are retrieved (by user operable terminal 20.sub.n) from the/some/all source network location(s) 14.sub.n, before being analysed and processed (if necessary) at step 112 (described below). Although not specifically shown in
[0075] Alternatively, if at step 108 it is determined that one or more suitable image(s) 12.sub.n are not available at a/some/all network location(s) 14.sub.n, then preferred method 100 continues at step 114, whereat no image(s) 12.sub.n are retrieved from the/some/all source network location(s) 14.sub.n, and instead, at step 116, a predetermined image(s) 12.sub.n is/are loaded and/or generated by user operable terminal 20.sub.n for display within search engine GUI(s) 18.sub.n. It will be appreciated that steps 106, 108, 110 & 114, of preferred method 100 of
[0076] If one or more image(s) 12.sub.n (and/or any desired associated data 34.sub.n) are retrieved from a/some/all source network location(s) 14.sub.n at step 110, the/those image(s) 12.sub.n (and/or associated data 34.sub.n) are then analysed and processed (if necessary) by/at the user operable terminal 20.sub.n (at step 112), before the most suitable/appropriate image(s) 12.sub.n (and/or associated data 34.sub.n) are selected for display (and/or are selected to be audibly conveyed along with the display of image(s) 12.sub.n, in the case of any text-based search results or associated data 34.sub.n, etc.) within search engine GUI(s) 18.sub.n (again, at step 112). Preferred methods/techniques of/for analysing, processing and/or selecting suitable image(s) 12.sub.n for display within search engine GUI(s) 18.sub.n, each of which is suitable for use with step 112, of preferred method 100, will be described in further detail below (including with reference to the image 12.sub.n diagrams of
[0077] If, at step 108, it is determined that one or more moving image(s) 12.sub.n (e.g. videos or movies 12.sub.n) are available at a/some/all source network location(s) 14.sub.n, then the one or more module(s) or application(s) (not shown—but as already outlined above) for generating or acquiring a thumbnail image(s) 12.sub.n, and for locating and retrieving source moving image 12.sub.n file links (e.g. video file links, such as, for example, YouTube identification strings) for the purpose of enabling the/each moving image(s) 12.sub.n, or a portion thereof (e.g. a preview of the video file 12.sub.n, etc.), to be played (whether selectively or automatically) within a search engine GUI(s) 18.sub.n may be utilised at steps 110 and 112. The process of identifying and processing (at steps 108 to 112), for example, embedded video(s) 12.sub.n (e.g. embedded YouTube video(s) 12.sub.n, etc.) within a source network location 14.sub.n may involve, but is not limited to: scanning the network location 14.sub.n for the presence of embedded video links; acquiring the identification string or source location details for each link; generating a thumbnail or any other suitable image 12.sub.n of the/each video file 12.sub.n; overlaying an icon (e.g. a play symbol, etc.) on each thumbnail or other suitable image 12.sub.n that was generated so as to inform a user 32.sub.n that the respective source network location 14.sub.n contains moving image 12.sub.n content, as opposed to just still image(s) 12.sub.n; and, using the acquired identification string(s) to enable the/each video 12.sub.n and/or a portion thereof (e.g. 
a preview of the video 12.sub.n) to be selectively or automatically played within search engine GUI(s) 18.sub.n (this may be achieved by, for example, connecting to a third party video API(s), not shown, but which may be provided by an external server(s) 30.sub.n, such as, for example, a YouTube API, and accessing and streaming the video 12.sub.n directly from the YouTube API to the search engine GUI(s) 18.sub.n by matching the acquired video identification string found within the/each source network location(s) 14.sub.n to the same video 12.sub.n, etc., stored on YouTube, etc.). By enabling at least a preview of a moving image(s) 12.sub.n to be played within search engine GUI(s) 18.sub.n, a user 32.sub.n may readily watch/preview the moving image(s) 12.sub.n without having to navigate to the actual source network location(s) 14.sub.n to determine whether the image 12.sub.n or site 14.sub.n content is of interest to them.
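The scan-and-thumbnail treatment for embedded videos described above can be sketched as follows. The regular expression and the `img.youtube.com` thumbnail address pattern are assumptions based on common public YouTube embed forms, offered purely as an illustration of acquiring an identification string and deriving a still image from it; they are not part of the claimed method.

```python
import re

# Matches the 11-character identification string in common embed forms
# (iframe embeds, watch links, and short links).
_YT_ID = re.compile(
    r'(?:youtube\.com/(?:embed/|watch\?v=)|youtu\.be/)([A-Za-z0-9_-]{11})')

def find_video_ids(page_html):
    """Scan a source network location's HTML for embedded video links
    and return the acquired identification strings."""
    return _YT_ID.findall(page_html)

def thumbnail_url(video_id):
    """Derive a still thumbnail address for a video, suitable for the
    play-icon overlay treatment described in the text."""
    return f"https://img.youtube.com/vi/{video_id}/hqdefault.jpg"
```

The acquired identification string could then be matched, via a third party video API, to the hosted video so that a preview can be streamed directly within the search engine GUI.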
[0078] Referring back to step 108, if it is determined that no suitable image(s) 12.sub.n are available at a/some/all network location(s) 14.sub.n, then preferred method 100 continues at steps 114 & 116 as described previously. That is, no image(s) 12.sub.n are retrieved from the/some/all source network location(s) 14.sub.n (at step 114), and instead, at least one predetermined image(s) 12.sub.n for each source network location 14.sub.n is/are loaded and/or generated by user operable terminal 20.sub.n for display within search engine GUI(s) 18.sub.n (at step 116). It will be appreciated that step 116, of preferred method 100 of
[0079] Although not specifically shown in
[0080] Again, although not specifically shown in
[0081] Regardless of the way in which the image(s) 12.sub.n (and/or any associated data 34.sub.n) are selected (and possibly temporarily or permanently stored for future use, as described previously) for display within (and/or to be audibly conveyed along with) search engine GUI(s) 18.sub.n, at either of steps 112 or 116, method 100 then continues at steps 118 & 120, whereat the one or more user operable terminal 20.sub.n based module(s) or application(s) (not shown) for generating and displaying the selected image(s) 12.sub.n (and any desired associated data 34.sub.n, 36.sub.n, etc.) within search engine GUI(s) 18.sub.n, may be used: to generate the display of the combined image(s) 12.sub.n, and any desired search results and/or associated data 34.sub.n, 36.sub.n (if required—see, for example,
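The generation of an activatable tile or region 38.sub.n that links through to its source network location can be sketched, purely illustratively, as simple HTML templating. The function and class names here are hypothetical; an actual implementation would render the tile within the search engine GUI's own framework.

```python
from html import escape

def render_tile(image_url, title, source_url):
    """Render one activatable tile/region combining the selected image
    and its text-based data; selecting the tile links through to the
    respective source network location."""
    return (
        f'<a class="result-tile" href="{escape(source_url, quote=True)}">'
        f'<img src="{escape(image_url, quote=True)}" alt="{escape(title)}">'
        f'<span class="result-title">{escape(title)}</span>'
        f'</a>'
    )
```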
[0082] As already briefly outlined above, and as is shown in
[0083] A flow diagram illustrating a second preferred image processing method 200 is shown in
[0084] As can be seen from a comparison of the flow diagrams of
[0085] In
[0086] Referring to
[0087] Although not specifically shown in the drawings, an alternative preferred method/technique for manipulating/processing a large, wide or unusually shaped image 12.sub.n (such as the image 12.sub.n shown in
[0088] Again, although not specifically shown in the drawings, yet a further alternative method/technique for manipulating/processing a large, wide or unusually shaped image 12.sub.n (such as the image 12.sub.n shown in
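The cropping of a large or wide image to fit a predetermined display area, as discussed in the preceding paragraphs, reduces to a small piece of arithmetic: choose the largest rectangle of the source image whose aspect ratio matches the display area, then crop and scale. A minimal sketch follows; the function name and the choice of a centred crop are illustrative assumptions (a preferred implementation might instead centre on a detected region of interest).

```python
def centred_crop_box(src_w, src_h, target_w, target_h):
    """Compute the largest centred rectangle of the source image that
    matches the display area's aspect ratio, as (left, top, right,
    bottom); the caller then crops and scales to target_w x target_h."""
    target_aspect = target_w / target_h
    if src_w / src_h > target_aspect:
        # Source is wider than the display area: trim the sides.
        crop_w, crop_h = round(src_h * target_aspect), src_h
    else:
        # Source is taller than the display area: trim top and bottom.
        crop_w, crop_h = src_w, round(src_w / target_aspect)
    left = (src_w - crop_w) // 2
    top = (src_h - crop_h) // 2
    return (left, top, left + crop_w, top + crop_h)
```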
[0089] Referring to
[0090] As already outlined above, if it is determined that one or more image(s) 12.sub.n are partially transparent image(s) 12.sub.n, then at step 112 or 212, of preferred method 100 or 200, a contrasting or desired background colour(s), effect(s), etc., may be added to the partially transparent image(s) 12.sub.n as, for example, is illustrated by way of the resultant image(s) 12.sub.n shown in
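Adding a contrasting or desired background colour to a partially transparent image, as described above, amounts to standard "over" alpha compositing of each pixel against the chosen colour. A per-pixel sketch is given below; the function name is an assumption, and an actual implementation would operate on whole image buffers rather than single pixels.

```python
def composite_over(pixel_rgba, background_rgb):
    """Blend one partially transparent (R, G, B, A) pixel over an
    opaque background colour using 'over' alpha compositing."""
    r, g, b, a = pixel_rgba
    alpha = a / 255
    return tuple(
        round(fg * alpha + bg * (1 - alpha))
        for fg, bg in zip((r, g, b), background_rgb)
    )
```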
[0091] In accordance with a further aspect of the present invention, and as was outlined in the preceding paragraph, a novel image 12.sub.n file name, or image 12.sub.n metadata, protocol for specifying data required to generate a desired background colour(s), etc., for partially transparent image(s) 12.sub.n, may be utilised in accordance with step 112 or 212, of preferred method 100 or 200, of the present invention. In accordance with one preferred embodiment of this novel protocol, a web designer, etc., may add a code within the image(s) 12.sub.n file name (or may embed same within the image(s) 12.sub.n metadata) that indicates a reference to the background, followed by the background RGB values. This may include a string, such as, for example, “_BG_#000000_” specified within the image(s) 12.sub.n file name or metadata. In this example, the letters “BG” are intended to indicate “background”, whilst the RGB code “#000000” is intended to represent “100% black”. The presence of such exemplary information within the image(s) 12.sub.n file name or metadata would readily enable method 100 or 200, to generate a 100% black background for the respective image(s) 12.sub.n. A further exemplary string that may be specified (using, e.g. a HEX code instead of an RGB code) within a partially transparent image(s) 12.sub.n file name, or metadata, may include “_makebackgroundhexFFFFFF_”, which would readily indicate to method 100 or 200, that the desired background colour for the particular image(s) 12.sub.n is 100% white. Further exemplary strings, etc. (not shown), may utilise colour codes other than RGB or HEX, such as, for example, the so-called: HSL; HSV; and/or, CMYK colour codes. A skilled person will appreciate these and other suitable colour codes, identification strings, naming conventions, etc., that may be used in accordance with methods 100, 200, of the present invention. 
Accordingly, the present invention should not be construed as limited to the specific examples provided herein. This image file name, or image metadata, protocol could be made publicly available to, for example, web designers or copyright owners, so as to make it easy for them to make the relevant changes to (or to create) the file names, or metadata, of partially transparent image(s) 12.sub.n used on their sites 14.sub.n (e.g. source network location(s) 14.sub.n) and to quickly and easily test how those image(s) 12.sub.n display within search engine GUI(s) 18.sub.n.
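A sketch of a parser for the file-name/metadata protocol described above, recognising both the exemplary “_BG_#000000_” and “_makebackgroundhexFFFFFF_” forms, might look as follows. The function name is an assumption, and the patterns are limited to the two example strings given in the text.

```python
import re

# "_BG_#000000_" style (hex triplet after a BG marker) and
# "_makebackgroundhexFFFFFF_" style, as described in the text.
_BG_PATTERNS = (
    re.compile(r'_BG_#([0-9A-Fa-f]{6})_'),
    re.compile(r'_makebackgroundhex([0-9A-Fa-f]{6})_'),
)

def background_from_filename(filename):
    """Return the (r, g, b) background colour encoded in an image file
    name, or None if the name carries no background directive."""
    for pattern in _BG_PATTERNS:
        match = pattern.search(filename)
        if match:
            code = match.group(1)
            return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))
    return None
```

The returned colour triplet could then be used to generate the background against which the partially transparent image is composited for display.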
[0092] Referring again to
[0093] Reference will now be made to the alternative exemplary GUIs 18.sub.n (e.g. search engine GUI(s) 18.sub.n, as shown) shown in
[0094] As already outlined above, in
[0095] In
[0096] In
[0097] The present invention therefore provides novel and useful image processing systems and/or methods suitable for use in identifying, retrieving and processing one or more images from one or more source network locations for display within a search results screen or page of a search engine GUI(s) after a search has been performed. Many advantages of the present invention will be apparent from the detailed description of the preferred embodiments provided hereinbefore. Examples of those advantages include, but are not limited to: the ability to retrieve and process images (and/or associated image data) in real-time, or as close to real-time as possible, and hence, not being required to create an index of stored images beforehand; seamless processing and displaying of images (and/or associated image data) to users in response to search queries (whether user, or user operable terminal, generated search queries); simultaneous display of search results, including one or more image(s), and network content available at a selected one of the source network locations corresponding to a search result presented within a search engine GUI(s) after a search has been performed; and/or, improved methods/techniques for processing and/or manipulating images, including partially transparent images, retrieved from one or more source network locations, for display at one or more target network locations.
[0098] While this invention has been described in connection with specific embodiments thereof, it will be understood that it is capable of further modification(s). The present invention is intended to cover any variations, uses or adaptations of the invention following in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains and as may be applied to the essential features hereinbefore set forth.
[0099] As the present invention may be embodied in several forms without departing from the spirit of the essential characteristics of the invention, it should be understood that the above described embodiments are not to limit the present invention unless otherwise specified, but rather should be construed broadly within the spirit and scope of the invention as defined in the attached claims. Various modifications and equivalent arrangements are intended to be included within the spirit and scope of the invention. Therefore, the specific embodiments are to be understood to be illustrative of the many ways in which the principles of the present invention may be practiced.
[0100] Where the terms “comprise”, “comprises”, “comprised” or “comprising” are used in this specification, they are to be interpreted as specifying the presence of the stated features, integers, steps or components referred to, but not to preclude the presence or addition of one or more other features, integers, steps, components to be grouped therewith.