MESSAGE DISTRIBUTION SERVICE
20230012929 · 2023-01-19
Inventors
CPC classification
G09G2370/022
PHYSICS
G06T3/40
PHYSICS
G06F3/14
PHYSICS
International classification
G06F3/14
PHYSICS
G06T3/40
PHYSICS
Abstract
A method of distributing, over a messaging system, location-based message contents that are displayable on consumer devices present at associated locations. The method comprises, for each message of a set of messages, obtaining a message content and a message location search term, submitting the message location search term to a web mapping service so that a service application programming interface (API) searches with the message location search term, and receiving a result list including a plurality of message locations corresponding to the message. The method further comprises adding the message content and the plurality of message locations to a message distribution database, or set of linked databases, that is or are searchable by location. This facilitates the sending of relevant message location(s) to the consumer devices.
Claims
1.-15. (canceled)
16. A computer-implemented method of displaying content on a display of an electronic device, the method comprising: obtaining real-time augmented image data of an environment of the electronic device, the real-time augmented image data comprising image data augmented with depth information; identifying within the real-time augmented image data a display surface of the environment and an orientation of the display surface; configuring content data representing said content using the identified display surface and the orientation of the display surface to align and orient the content with the identified display surface; and displaying the configured content data and the real-time augmented image data on the display such that the content appears to be present on said display surface.
17. The computer-implemented method according to claim 16 further comprising obtaining said real-time augmented image data via an operating system application programming interface (API) or a native layer of the electronic device.
18. The computer-implemented method according to claim 16 further comprising capturing image data from the environment using one or more cameras and one or more LiDAR scanners of the electronic device.
19. The computer-implemented method according to claim 18 further comprising aligning image data obtained from the one or more cameras and image data obtained from the one or more LiDAR scanners using data provided by one or more motion sensors of the electronic device.
20. The computer-implemented method according to claim 16, wherein the configuring content data representing said content comprises scaling and setting a viewing perspective of the content data.
21. The computer-implemented method according to claim 16, wherein said display is a transparent display, and the configuring content data representing said content comprises configuring the content data so that the content is in focus on said display surface.
22. The computer-implemented method according to claim 16, wherein said content comprises content of a message received by the electronic device, content downloaded to the electronic device, or content generated at the electronic device.
23. The computer-implemented method according to claim 16, wherein the identifying within the real-time augmented image data a display surface comprises determining a display surface from received or stored data and searching the real-time augmented image data for the determined display surface.
24. The computer-implemented method according to claim 16, wherein said content comprises one or a combination of text data, picture data, or video data.
25. A non-transitory computer storage medium storing a computer program, wherein the computer program is configured to be executed by a computer device to cause the computer device to: obtain real-time augmented image data of an environment of the computer device, the real-time augmented image data comprising image data augmented with depth information; identify within the real-time augmented image data a display surface of the environment and an orientation of the display surface; configure content data representing said content using the identified display surface and the orientation of the display surface to align and orient the content with the identified display surface; and display the configured content data and the real-time augmented image data on a display of the computer device such that the content appears to be present on said display surface.
26. The non-transitory computer storage medium according to claim 25, wherein the computer program is configured as an app to run on a mobile device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032]
[0033]
[0034]
[0035]
[0036]
[0037]
[0038]
[0039]
[0040]
DETAILED DESCRIPTION
[0041] The following disclosure is concerned with a messaging application or “app” in which messages may be associated with location data, and where users can view messages in a geographic region (e.g. close to the user) via an interface. The interface may display a list of messages in the geographic region, display the messages overlaid on a map, or display the messages in an “augmented reality” view (i.e. with the message appearing to float in front of the associated location on displayed graphics, e.g. as captured by a device camera). More particularly, the disclosure is concerned with messages that are each associated with multiple locations, possibly even a very large number of locations. It will of course be appreciated that an augmented reality (AR) message can be displayed using a number of different approaches, e.g. under a displayed location in the case where the device is in the basement of a building, or on a location as a virtual billboard.
[0042] Consider the example of a chain of supermarkets which wishes to use the location-based messaging service to provide a given message content to customers in their marketing list, with the location tagged as the supermarket stores in the chain. The message content might include for example a discount code that a receiver can use to obtain a discount on items purchased (e.g. “Celebrate Valentine's Day; discount code 12345”).
[0043]
[0044] Referring again to
[0045] In step 201, a user of one of the sending clients 2010 chooses to create a “multi-position” message, i.e. a message containing content that is to be associated with a set of locations.
[0046] In step 202, the sending client 2010 sends this multi-position message to the server 2020. This may be done using a “business platform” interface having a field or fields for the message content and a field identifying the locations, e.g. “supermarket name”.
[0047] In step 203, the server identifies the multiple locations associated with the information provided by the sending client in the location field. These might be, for example, the addresses of stores in the chain and their geographic coordinates, i.e. latitude and longitude. The server may perform these steps using an appropriate API, such as the Google™ mapping service API. The resulting list of locations is added to an “Atlas” database, together with links to the associated message content. As further multi-position messages are sent by the same or different sending clients, the respective locations and content links are identified by the server and the Atlas updated. The result is an Atlas database containing multiple locations associated with various message content. These messages are referred to here as “business multi-position messages”, with the intended recipients being referred to as consumers (e.g. the users of the receiving clients are considered to be consumers of the business multi-position messages). Businesses may pay a subscription to use this service (via their respective sending clients 2010), or may pay on a per-message basis, or using some other payment model.
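By way of illustration, step 203 might be sketched as follows. This is a minimal sketch, not the disclosed implementation: the `geocode_search_term` stub stands in for the call to the web mapping service API, and all names, table layouts, and sample coordinates are hypothetical.

```python
import sqlite3

# Hypothetical stub standing in for a web mapping service API call that
# resolves a location search term (e.g. a supermarket chain name) into a
# list of matching store locations. A real system would query the service.
def geocode_search_term(term):
    sample = {
        "SuperMart": [
            ("SuperMart High St", 51.5074, -0.1278),
            ("SuperMart Riverside", 53.4808, -2.2426),
        ]
    }
    return sample.get(term, [])

def add_multi_position_message(db, content, location_term):
    """Store the message content once and link it to every matched location."""
    cur = db.cursor()
    cur.execute("INSERT INTO messages (content) VALUES (?)", (content,))
    msg_id = cur.lastrowid
    for name, lat, lon in geocode_search_term(location_term):
        cur.execute(
            "INSERT INTO locations (msg_id, name, lat, lon) VALUES (?, ?, ?, ?)",
            (msg_id, name, lat, lon),
        )
    db.commit()
    return msg_id

# Build an in-memory "Atlas": locations searchable by message, message
# content linked from each location row.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, content TEXT)")
db.execute("CREATE TABLE locations (msg_id INTEGER, name TEXT, lat REAL, lon REAL)")
msg_id = add_multi_position_message(
    db, "Celebrate Valentine's Day; discount code 12345", "SuperMart"
)
rows = db.execute("SELECT name FROM locations WHERE msg_id = ?", (msg_id,)).fetchall()
```

A single row of message content is thus shared by every location returned by the search, which keeps the Atlas compact even for chains with very many stores.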
[0048] It will be appreciated that the Atlas creation process is dynamic, and that the location of step 203 in the flow is merely exemplary.
[0049] In step 204, the server 2020 receives a further location update message from a given receiving client 2030. Once again, the server will identify any personal messages destined for the receiving client and deliver a notification and/or message content as described above.
[0050] In step 205, the server will also determine which if any of the multi-position messages are intended for the receiving client 2030. If the number of multi-position messages is small, all messages may be identified. However, it is more likely that a subset of the complete multi-position message set will be identified. This subset may be identified by, for example, matching metadata associated with respective messages (e.g. submitted by the sending client with the message request) against receiving client metadata (e.g. user behaviour, stated preferences, etc.).
[0051] In steps 206 and 207, the server determines which of the identified (intended) messages should actually be notified or sent to the receiving client. For each of the identified multi-position messages, the server determines at step 206 the location associated with that multi-position message that is closest to the client. The server then determines 207, for each of those locations, whether the location is within a “notification distance” of the client, and whether it is within a “sending distance” of the client (where the notification distance is greater than the sending distance, e.g. 50 km notification distance and 100 m sending distance). Alternatively, the two substeps may be performed in the opposite order—e.g. for each multi-position message the server first determines whether there are any locations within the notification distance and/or the sending distance, and then, for each message having at least one location within the notification distance, the server determines which location associated with that message is the closest.
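Steps 206 and 207 can be sketched as below. This is an illustrative sketch only: the haversine formula is one standard way to compute great-circle distance from latitude/longitude pairs, and the threshold values simply reuse the 50 km / 100 m example figures given above.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

NOTIFY_KM = 50.0  # example "notification distance"
SEND_KM = 0.1     # example "sending distance" (100 m)

def classify_message(client_pos, message_locations):
    """Step 206: find the location of the multi-position message closest to
    the client. Step 207: compare that distance with the two thresholds."""
    closest = min(
        message_locations,
        key=lambda loc: haversine_km(*client_pos, loc[0], loc[1]),
    )
    d = haversine_km(*client_pos, closest[0], closest[1])
    if d <= SEND_KM:
        return "send", closest      # deliver the message content
    if d <= NOTIFY_KM:
        return "notify", closest    # deliver a notification only
    return "ignore", closest

# Client near central London; the message has locations in London and
# Manchester, so the London store is selected and is within sending range.
action, loc = classify_message(
    (51.5080, -0.1280),
    [(51.5074, -0.1278), (53.4808, -2.2426)],
)
```

As the description notes, the two substeps could equally be run in the opposite order, filtering on distance first and only then selecting the closest qualifying location per message.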
[0052] In this example, the closest location is within the notification distance, so in step 208, the server sends a notification of the multi-position message to the receiving client. This notification comprises at least information regarding the closest location of the multi-position message, and may comprise additional data such as a message summary and/or the identity of the message sender. In step 209, the receiving client notifies the user of the closest location of the multi-position message, e.g. by display on a map or on an augmented reality display (as described in further detail later). At this stage, the user is aware that there is a message “waiting for them” at a particular location, but cannot access the contents of the message until they are closer to the location, i.e. within the sending distance.
[0053] In step 210, the receiving client sends a further location update, and in step 211 the server repeats steps 206 and 207 for this further location update, i.e. identifying the closest location of each multi-position message, and determining whether it is within the notification and/or sending distance.
[0054] In this example, the receiving client is within the sending distance, so in step 212, the server sends the message content of the multi-position message to the client, together with information regarding the closest location of the multi-position message (which may be a reference to the notification sent in step 208). In step 213, the receiving client displays the message to the user in an augmented reality interface. This may require the user to select a notification displayed in the AR interface, which then brings up the message contents.
[0055] Steps 206 (determining the closest location) and 207 (determining whether the closest location is within the notification and/or sending distance) will be performed each time the receiving client sends a location update, and step 205 will also be repeated to identify any new messages (which may be done in response to a location update, on a schedule, or in response to some other event).
[0056] In step 205, the server may only identify messages that have not yet been sent to the receiving client, and in step 207 the server may only consider the sending distance when determining whether to send a message or notification for a message which has already been notified to the receiving client.
[0057] If a location update places the receiving client within sending distance of a message which has not yet been notified to that client, then the server may include the message contents with the notification (effectively proceeding directly to step 212 from step 207).
[0058] In step 206, where a receiving client has already been notified of a multi-position message, the server may determine whether another of the locations is closer to the client than the previous closest location, and if so the server may resend the notification if that closest location is within the notification distance.
[0059] The information representing the location may be GPS coordinates or another suitable representation.
[0060] Instead of determining notification distance based on the actual location of the receiving client, the receiving client may send a request for notifications around a user-defined location, and in steps 206 and 207 the server may determine the “closest location” and “notification distance” based on that user-defined location. This may be useful, for example, if a user wishes to determine whether there are any messages close to a location they are travelling towards, before they actually get there. The user may identify the user-defined location by swiping across a displayed map. The “notification distance” may also be user-definable, i.e. provided in a location update by the receiving client, e.g. a user may define the distance by enlarging or reducing the size of a displayed map area. The “sending distance” may still be determined for the actual location of the device, even if the receiving client provides a user-defined location.
[0061] The message contents may include multimedia content, e.g. any combination of text, images, video, audio, additional location data (i.e. a location other than the associated location), etc. The message contents may include only static content (i.e. the same for each location of the set), or it may include both static and dynamic content, where the dynamic content depends on which of the set of associated locations is associated with the single-position message generated by the server. For example, the message contents may include a first image which is a product advertisement (static content), and a set of second images which are pictures of the storefronts at the associated locations (dynamic content), defined such that only the picture for the associated location will be sent by the server to the receiving client. Alternatively, the message contents may include text containing both static and dynamic content, e.g. “Come to your local shop at ((address)) for great deals today!”, where the data sent to the server comprises a lookup table of addresses for each of the set of associated locations, and the server substitutes the relevant address for “((address))” in the message contents prior to sending the single-position message to the receiving client.
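The address-substitution example above can be sketched as follows. The location identifiers and addresses are invented for illustration; only the `((address))` placeholder syntax comes from the description.

```python
# Hypothetical per-location address table supplied by the sending client
# along with the multi-position message.
ADDRESSES = {
    "loc_1": "12 High Street",
    "loc_2": "3 Riverside Way",
}

# Template mixing static text with the dynamic "((address))" placeholder.
TEMPLATE = "Come to your local shop at ((address)) for great deals today!"

def render_single_position_message(template, location_id, addresses):
    """Substitute the address of the location closest to the receiving
    client, producing the single-position message that is actually sent."""
    return template.replace("((address))", addresses[location_id])

msg = render_single_position_message(TEMPLATE, "loc_1", ADDRESSES)
```

The server performs this substitution per recipient, so each consumer sees only the address of their nearest store while the stored template remains a single record.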
[0062] While the above example has referred to a “sending client” and a “server”, the multi-position messages may be directly created at the server, rather than originally obtained from a sending client. For example, this may occur in a setup where an advertiser instructs the operator of the server to generate a message on their behalf.
[0063] In steps 209 and 213, the message or message notification is displayed on an augmented reality display. An augmented reality display is one which overlays display graphics on a real world environment. There are broadly two types of augmented reality displays. In the first type, display graphics are overlaid on an image (generally a live image) taken from a camera. This is the type commonly seen in AR apps for smartphones. In the second type, graphics are displayed on a transparent or translucent display, which the user can look through to see the real world beyond. This type is used for AR headsets, “smart glasses”, or “smart windows”, and has been proposed for “smart contact lenses”. The above disclosure could apply to any of the AR examples given, and will also be applicable, with appropriate modification, to future AR technologies such as holographic displays.
[0064] Message content may be associated with a passcode, such as a password or PIN code, such that the content can only be viewed or accessed after a receiver has entered the passcode into his or her device. The passcode may be derived from biometric data such as a fingerprint or the image of a face. In the case of a password, the user's device may provide a means for recovering a forgotten password, such as by way of displaying a password hint.
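One possible way to gate content behind a passcode is sketched below, assuming a salted hash is stored rather than the passcode itself. This is a generic sketch, not the disclosed mechanism; the function names and iteration count are illustrative.

```python
import hashlib
import hmac
import os

def hash_passcode(passcode, salt):
    """Derive a verifier from the passcode so only a hash need be stored."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

def unlock_message(stored_hash, salt, entered_passcode, content):
    """Release the message content only when the entered passcode matches."""
    candidate = hash_passcode(entered_passcode, salt)
    if hmac.compare_digest(stored_hash, candidate):
        return content
    return None  # wrong passcode: content stays locked

salt = os.urandom(16)
stored = hash_passcode("1234", salt)
unlocked = unlock_message(stored, salt, "1234", "discount code 12345")
denied = unlock_message(stored, salt, "9999", "discount code 12345")
```

Using `hmac.compare_digest` for the comparison avoids timing side channels; a biometric-derived passcode would feed the same verification path.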
[0065]
[0066] For the purpose of displaying a received message, known AR applications tend to be quite limited in the positioning of the message on the display or screen, and typically display the message at a fixed location on the display or screen, e.g. top left or bottom right. In order to make a messaging service more relevant and interesting to users, more flexible display solutions are desirable. Whilst the approach that will now be described is applicable to the multi-location messaging services described above, it is also applicable to many other messaging services and indeed to content display services in general.
[0067] The following disclosure is concerned with a messaging application or “app” in which messages may be associated with location data, and where users can view messages in a geographic region (e.g. close to the user) via an interface. An example of such an application is the ZOME™ app available on the Apple App Store™ and GooglePlay™. It will however be appreciated that this represents only an exemplary use of the described novel system and other uses are clearly within the scope of the invention.
[0068] The recently launched Apple iPad Pro™ is provided with a Light Detection and Ranging (LiDAR) scanner that is capable of measuring distances to surrounding objects up to 5 m away at nanosecond speeds. The device's processor is able to tightly integrate data generated by the LiDAR scanner with data collected by the device's cameras and motion sensors. It is expected that other devices including smartphones will in the near future be provided with LiDAR or other scanners (such as ultrasonic scanners) to enable the capture of 3D aspects of an environment. Systems may alternatively or additionally utilise multiple spaced apart cameras to capture images with depth information. It can also be expected that the range at which scanners operate will increase over time from the iPad's current 5 m range.
[0069] In order to make use of LiDAR and other data, e.g. camera data etc, Apple™ provides app developers with a software development kit (SDK) that consists of tools used for developing applications for the Apple iOS™. In common with other vendors, the Apple SDK includes an application programming interface (API) which serves as a link between software applications and the platform they run on. APIs can be built in many ways and include helpful programming libraries and other tools.
[0070] The introduction and development of this new technology makes possible a new message display paradigm.
[0071] In the case of Apple iOS, it is understood that the SDK allows a developer to create an app that obtains from the system image data that is a composite of data provided by a device's camera and depth data provided by the LiDAR scanner. The two are aligned using motion sensor data. Thus, for example, image data may be obtained that has, for each pixel of an image, a depth or distance value.
[0072] Returning to the location-based messaging service discussed above, e.g. ZOME™, a user of the device may be sent a message having as its location the location of the room. Whilst not in the room, the user will not be able to view the message content, although he or she might be provided with an indication that a message is available in the room. In the present context, the message location may be further specified as being on a particular surface of the room. This might be for example a whiteboard or wall mounted screen within the room. In that case, of course, the sender of the message may be required to identify the display location. Alternatively, the recipient may specify a display location for his or her incoming messages. For example, a received message may at first float in the environment when viewed on a display, with the user being able to pin that message to a surface by dragging the message onto the surface.
[0073] When the user enters the room and views the room on the device display, an appropriate algorithm running on the device's processor analyses the image data to identify the specified display location, e.g. the whiteboard. This may also utilise the data obtained by the LiDAR scanner and motion sensors. In any case, using all of this data, the device configures the message content for display on the device display so that, when presented, it appears as if it is actually on the whiteboard surface. Moreover, as the camera moves, the message content remains fixed in position relative to the whiteboard. Even where the display surface is at an angle to the device, e.g. see the whiteboard on the right hand wall of
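The perspective behaviour described above can be sketched with a basic pinhole projection: once the corners of the display surface are known in camera coordinates, content pinned to that surface is projected onto the screen so that the farther edge appears smaller. The focal length and the whiteboard geometry below are assumed values for illustration, not parameters from the disclosure.

```python
FOCAL_PX = 800.0  # assumed camera focal length, in pixels

def project(point_cam):
    """Project a 3D point in camera space (metres) onto the image plane."""
    x, y, z = point_cam
    return (FOCAL_PX * x / z, FOCAL_PX * y / z)

# Whiteboard corners in camera coordinates: 1 m wide, 0.6 m tall, angled so
# that its right edge is 0.5 m further from the camera than its left edge.
corners = [(-0.5, -0.3, 2.0), (0.5, -0.3, 2.5),
           (0.5, 0.3, 2.5), (-0.5, 0.3, 2.0)]
screen_quad = [project(c) for c in corners]
```

Re-running the projection each frame with the device's updated pose keeps the content fixed relative to the whiteboard as the camera moves, with the content warped to match the surface's angle.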
[0074] Referring now to
[0075]
[0076] Whilst the message content might be simple text, e.g. “remember to buy milk”, it can also be images, video (with accompanying audio) etc. It may also be content that is configured to interact with the display surface. One could imagine, for example, the case where the display surface is a painting, and the message content is an image overlaid on the painting, e.g. the content is a bird flying back and forth over a landscape within the painting.
[0077] Whilst the proposal above relates to a device having a camera and a display, the proposal can also be applied to transparent displays such as spectacles. In this case, a camera is still likely required to recognise a display location, but the content is presented as AR content over the transparent display. Other devices that might be used include smart windows such as vehicle windscreens. The proposal is also applicable, by way of example, to smart watches.
[0078] It will be further appreciated that the proposal is not restricted to messaging services but is applicable to many other services and applications. Such an application might be a note keeping or memo application where a user creates a memo using an app on his or her phone and pins this to a surface in the environment using the device's camera and display. When the user views that environment in the future, the memo will appear on the display surface. The memo (or indeed message) may be associated with a display time such that it appears and/or disappears at set times or after set time periods.
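The display-time behaviour mentioned above amounts to a visibility window check, sketched here with hypothetical names and example times:

```python
from datetime import datetime, timedelta

def memo_visible(now, appear_at, disappear_at):
    """Return True when a pinned memo should be rendered on its surface."""
    return appear_at <= now < disappear_at

# A memo set to appear at 09:00 and disappear 8 hours later.
appear = datetime(2023, 1, 19, 9, 0)
vanish = appear + timedelta(hours=8)
during = memo_visible(datetime(2023, 1, 19, 12, 0), appear, vanish)
after = memo_visible(datetime(2023, 1, 19, 18, 0), appear, vanish)
```

The rendering loop would evaluate this check each frame (or on a timer), so the memo simply stops being drawn on the surface once its window has elapsed.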