METHOD FOR ENHANCING A USER'S IMAGE WHILE E-COMMERCE SHOPPING FOR THE PURPOSE OF ENHANCING THE ITEM THAT IS FOR SALE

20220044311 · 2022-02-10

    Abstract

    One embodiment of this disclosure is a method for selling an item. The method includes providing an image to a machine, manipulating the image to provide a modified image, identifying at least one item, and displaying the at least one item on the modified image.

    Claims

    1. A method for selling an item, comprising: providing an image to a machine; manipulating the image to provide a modified image; identifying at least one item; and displaying the at least one item on the modified image.

    2. The method of claim 1, wherein the machine utilizes artificial intelligence to provide the modified image.

    3. The method of claim 1, wherein the image is uploaded to the machine from a remote location.

    4. The method of claim 1, wherein the image is uploaded to the machine from a remote device.

    5. The method of claim 1, wherein the image is captured by the machine through a camera coupled to the machine.

    6. The method of claim 1, wherein the image is uploaded from a database.

    7. The method of claim 1, wherein the image is uploaded from a bar code.

    8. The method of claim 1, wherein the image is one of a still image, a real time image, or a video image.

    9. The method of claim 1, wherein the image is a real time image of a user and the modified image comprises a change to at least one of the user's body contour, skin complexion, eye color, eye clarity, teeth alignment and color, smile, and hair.

    10. The method of claim 1, wherein the at least one item is identified by a marker in or on the item.

    11. The method of claim 1, wherein the at least one item is identified utilizing artificial intelligence without utilizing a marker.

    12. The method of claim 1, wherein the displaying step comprises displaying the at least one item on the modified image with a user display coupled to the machine.

    13. The method of claim 1, wherein the displaying step comprises displaying the at least one item on the modified image to a remote user display from a remote machine.

    14. The method of claim 1, wherein the displaying step comprises displaying the at least one item on the modified image with a personal computing device that wirelessly communicates with the machine.

    15. The method of claim 1, wherein the at least one item comprises a clothing item.

    16. The method of claim 1, wherein the machine utilizes mixed reality to provide the modified image.

    17. The method of claim 1, wherein the at least one item comprises makeup.

    18. The method of claim 1, wherein the at least one item comprises one or more of shoes, jewelry, a purse, glasses, contacts, a vehicle, exercise equipment, a technology product, a household item, real estate, a bicycle, a skin care product, or artificial nails.

    19. A method for selling an item, comprising: providing an image of a user to a machine; manipulating the image to provide a modified image of the user having enhanced physical features; identifying at least one item through a user input; and displaying the at least one item on the modified image having the enhanced physical features of the user.

    20. The method of claim 19, further wherein the enhanced physical features are one or more of modifications to the user's frame, complexion, eye, hair color, hair style, smile, teeth color, and teeth alignment and the at least one item comprises one or more of shoes, a purse, glasses, contacts, a vehicle, exercise equipment, a technology product, a household item, real estate, a bicycle, a skin care product, or artificial nails.

    Description

    DESCRIPTION OF THE DRAWINGS

    [0032] The above-mentioned aspects of the present disclosure and the manner of obtaining them will become more apparent and the disclosure itself will be better understood by reference to the following description of the embodiments of the disclosure, taken in conjunction with the accompanying drawings, wherein:

    [0033] FIG. 1 is an exemplary flow chart of the present disclosure; and

    [0034] FIG. 2 is a schematic representation of components of the present disclosure.

    [0035] Other features and advantages of the present invention will become apparent from the following more detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.

    DETAILED DESCRIPTION

    [0036] Illustrative embodiments of the invention are described below. The following explanation provides specific details for a thorough understanding of and enabling description for these embodiments. One skilled in the art will understand that the invention may be practiced without such details. In other instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments.

    [0037] Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “above,” “below” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. When the claims use the word “or” in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.

    [0038] Referring to FIG. 2, one exemplary schematic machine 200 is illustrated. The machine 200 may have a controller 204 that controls the processing of information provided to, and sent from, the controller 204. The controller 204 may have one or more processors and access to a memory unit. In one example, the controller 204 is part of a computing system with inputs and outputs that can execute the methods discussed herein.

    [0039] The controller 204 may implement an artificial intelligence (“AI”) protocol 202 as part of this disclosure. The AI protocol 202 may utilize machine learning and historical user data to automatically generate recommendations and modifications for the present disclosure. Further, the controller 204 may also utilize mixed reality 218 to provide outputs to a user that show a modified real-time image or a previously uploaded image with altered image data.

    [0040] The controller 204 may also communicate with a camera 206, database 208, optical motion capturing system 210, eye color detection system 214, user input 220, screen 222, holographic display 224, mixed reality headset 226, augmented reality glasses 228, and personal computing device 230 among other things. The controller 204 may communicate with these devices through any known wired or wireless protocol. In one aspect of this disclosure, one or more of these components may be part of the same physical hardware component. Alternatively, different components discussed herein may be separate hardware components that communicate with the controller 204. Regardless, the controller 204 may communicate with one or more of the devices and systems discussed herein to implement the teachings of this disclosure.

    [0041] In one aspect of this disclosure, the machine 200 has the camera 206 coupled thereto. The camera 206 can take video or photographic images of the user or other surroundings to be further processed by the controller 204. In one aspect of this disclosure, photographs and videos taken by the camera 206 may be stored in the database 208 to be selectively processed by the controller 204 at a later time. The camera 206 may also provide photographic or video data to be processed as part of the optical motion capturing system 210, the vision optics technology 212, the eye color detection system 214, or the teeth detection system 216. In other words, the controller 204 may selectively use information provided from the camera 206 to implement one or more of the systems or technologies discussed herein 210, 212, 214, 216.

    [0042] The controller 204 may also communicate with a user input 220. The user input 220 may be a part of the machine 200 that allows a user to input data, such as a keyboard, touchscreen, or any other user input device. The user input may be from devices or displays such as the augmented reality glasses 228, mixed reality headset 226, holographic display 224, and/or screen 222. Alternatively, the user input 220 may be part of an application that can be sent to the controller 204 from a personal computing device 230 such as a smart phone, tablet, personal computer, or any known device commonly used for personal data management. Alternatively, the controller 204 may receive the user's input 220 automatically utilizing the camera 206 and artificial intelligence 202.

    [0043] The machine 200 may also have the screen 222 coupled thereto as part of a single hardware component. Alternatively, the screen 222 may be located remotely from the remaining components of the machine 200. In one aspect of this disclosure, the screen 222 may be displayed on the personal computing device 230. Alternatively, the screen 222 may be positioned at any strategic location separate from the remaining components of the machine 200.

    [0044] Referring now to FIG. 1, an exemplary flow chart 100 of the present disclosure is illustrated. The flow chart 100 may initiate in box 102 by utilizing a machine 200 having artificial intelligence 202 or the like, implemented by a controller 204 having a memory unit and one or more processors, to capture a user's image and detect data points on the user's frame. The artificial intelligence 202 may have access to a camera 206 or the like, or real time images of the user may be gathered from a database 208 or manually or automatically uploaded to the artificial intelligence 202 for further analysis. In one example, Apple's ARKit or a similar program could be used for body tracking and motion capture. The body tracking and motion capture may use data points that track the joints of a human skeleton. In one non-limiting example, a marker-based (or markerless) feedback optical motion capturing system 210 may extract the user's skeleton frame from the user's image. The image may be a still image captured by the camera 206, a still image uploaded to the machine 200, a frame from a live video stream, or an image from a database of similar images. The user's skeleton may be extracted using any method known in the art; some non-exclusive examples include the OpenPose engine and Kinect-based markerless systems. However, any known system that can analyze an image is considered herein.
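    The data-point idea in box 102 can be sketched as follows. This is a minimal illustration that assumes joint positions have already been produced by a tracker such as ARKit or OpenPose; the joint names and coordinates below are invented for the example and are not part of any tracker's actual output format.

```python
import math

# Sketch: derive a simple skeleton "frame" from tracked joint positions.
# A real tracker (e.g. ARKit body tracking or OpenPose) would supply the
# joint coordinates for each captured frame; these values are illustrative.

SKELETON_EDGES = [
    ("head", "neck"), ("neck", "left_shoulder"), ("neck", "right_shoulder"),
    ("neck", "hip"), ("hip", "left_knee"), ("hip", "right_knee"),
]

def limb_lengths(joints):
    """Return the length of each skeleton edge from 2-D joint positions."""
    lengths = {}
    for a, b in SKELETON_EDGES:
        (xa, ya), (xb, yb) = joints[a], joints[b]
        lengths[(a, b)] = math.hypot(xb - xa, yb - ya)
    return lengths

joints = {
    "head": (0.0, 1.8), "neck": (0.0, 1.5),
    "left_shoulder": (-0.2, 1.5), "right_shoulder": (0.2, 1.5),
    "hip": (0.0, 1.0), "left_knee": (-0.1, 0.5), "right_knee": (0.1, 0.5),
}
print(limb_lengths(joints))
```

    A per-frame structure like this is what later steps (image manipulation in box 104 and item anchoring in box 106) would consume.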

    [0045] In one aspect of this disclosure, skin color and/or tone (or “complexion”) may also be detected in box 102. In one non-exclusive example, the user's complexion may be detected utilizing Vision Optics Technology 212 to detect and analyze details of the user's complexion and skin tone. One non-exclusive example of Vision Optics Technology 212 is Foundation Finder by Maybelline. However, any known software capable of identifying a user's complexion is considered herein.
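    A toy version of complexion detection can be sketched as a nearest-palette match on an averaged skin sample. The palette names, palette values, and sample pixels below are all invented for illustration; a product such as a foundation-matching tool would use a calibrated color model rather than this simplification.

```python
# Sketch: classify the average color of sampled skin pixels against a small
# illustrative palette. Palette entries and samples are made-up examples.

PALETTE = {
    "fair": (236, 210, 190),
    "medium": (198, 160, 130),
    "deep": (120, 80, 60),
}

def average_rgb(pixels):
    """Mean RGB of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def closest_tone(pixels):
    """Return the palette name with the smallest squared-distance to the mean."""
    avg = average_rgb(pixels)
    return min(PALETTE,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(avg, PALETTE[name])))

sample = [(200, 162, 128), (196, 158, 132), (198, 161, 129)]
print(closest_tone(sample))  # "medium" for this illustrative sample
```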

    [0046] Also part of box 102 may be an eye color detection system 214. The eye color detection system 214 may automatically extract the iris region of the user based on the image. The extracted iris region may be further analyzed and a color classification may be performed. The color classification may utilize a Gaussian Mixture Model evaluated on a database such as the UBIRIS.v2 database. The classified eye color may be used as a soft biometric by the artificial intelligence 202 to identify another aspect of the user. See, as one non-exclusive example, “On the reliability of eye color as a soft biometric trait” by Antitza Dantcheva, Jean-Luc Dugelay, and Nesli Erdogmus, the contents of which are hereby incorporated herein by reference.
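    The classification step can be illustrated with a much simpler stand-in for the Gaussian Mixture Model: a nearest-centroid rule on the hue of an averaged iris pixel. The hue centroids here are rough illustrative values, not fitted parameters; a real system would fit a GMM over iris-region pixels.

```python
import colorsys

# Sketch: nearest-centroid stand-in for the eye-color classifier.
# Hue centroids (on colorsys's 0..1 hue scale) are illustrative only.
HUE_CENTROIDS = {"blue": 0.58, "green": 0.33, "brown": 0.07}

def iris_hue(rgb):
    """Hue (0..1) of an RGB triple in the 0..255 range."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

def classify_eye_color(iris_rgb):
    """Assign the iris color whose hue centroid is closest."""
    h = iris_hue(iris_rgb)
    return min(HUE_CENTROIDS, key=lambda c: abs(HUE_CENTROIDS[c] - h))

print(classify_eye_color((70, 110, 180)))  # a blue-ish sample
```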

    [0047] Also part of box 102 may be a teeth whitening and alignment detection system 216. The teeth detection system 216 may automatically extract the teeth of the user based on the real time image. The extracted teeth region may be further analyzed and an enhancement in brightness or alignment may be performed. The teeth enhancement may utilize a program like Fotor's editing tools to enhance the color of the teeth. The artificial intelligence 202 may utilize any one or more of these traits to further analyze the user.
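    The brightness enhancement can be sketched as a clamped per-channel scaling of the extracted region's pixels. The pixel values and factor are illustrative; an editor like the Fotor-style tools mentioned above would operate on a segmented region of the live image rather than a small pixel list.

```python
# Sketch: brighten an extracted teeth region by a fixed factor, clamping
# each channel to the 0-255 range. Values are illustrative.

def brighten(pixels, factor=1.2):
    """Scale each RGB channel of each pixel, clamped to 255."""
    return [tuple(min(255, round(c * factor)) for c in px) for px in pixels]

teeth_region = [(180, 175, 160), (220, 215, 200)]
print(brighten(teeth_region))
```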

    [0048] In box 104, the frame, complexion, eye, hair color and style, smile, and/or teeth data (collectively, “Physical Features Data”) may then be processed through an artificial intelligence application 202 in real time to manipulate the Physical Features Data. The artificial intelligence application 202 may manipulate the body contour of the user's real time image and/or one or more other elements of the Physical Features Data to create Enhanced Physical Features, wherein modifications are made to the Physical Features Data. In other words, the camera 206 may provide the user's real time image as input to the artificial intelligence 202, which may manipulate the user's image to change any physical trait of the user via the Physical Features Data. The artificial intelligence 202 and mixed reality application 218 may be any known image manipulation software that is capable of altering the image as discussed herein.
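    The box 104 step can be sketched as applying a set of per-feature enhancement functions to the Physical Features Data. The feature names, numeric scales, and adjustments below are invented for the example; the disclosure does not specify a data layout.

```python
# Sketch: represent the "Physical Features Data" as a dict and apply
# per-feature modifications to produce the Enhanced Physical Features.
# Feature names and value scales are illustrative.

def enhance(features, modifications):
    """Return a copy of `features` with each matching modification applied."""
    enhanced = dict(features)
    for name, fn in modifications.items():
        if name in enhanced:
            enhanced[name] = fn(enhanced[name])
    return enhanced

features = {"teeth_shade": 6, "eye_clarity": 0.7, "smile_width": 1.0}
mods = {
    "teeth_shade": lambda s: max(1, s - 2),      # whiter teeth
    "eye_clarity": lambda c: min(1.0, c + 0.2),  # clearer eyes
}
print(enhance(features, mods))
```

    Keeping the original features dict untouched lets the unmodified real time image remain available alongside the enhanced one.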

    [0049] In box 106, markers are identified on the user. The markers are associated with various personal items that the user is interfacing with and may be anchored to the user's new real time digital frame or image established in box 104. In another embodiment, artificial intelligence 202 and mixed reality 218 may be implemented to allow markerless products to automatically anchor in real time to the modified user's image from box 104. The anchoring may be achieved utilizing a program like Microsoft's spatial awareness and surface magnetism features, which make an object snap to a surface. In these embodiments, the updated user's image from box 104 may provide real time Enhanced Physical Features to allow the artificial intelligence 202 and mixed reality 218 to map the Enhanced Physical Features of the user's real time image and display it accordingly in box 108.
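    A greatly simplified version of the snapping behavior can be sketched as moving an item's position to the closest tracked skeleton joint. This is only an analogy to surface-magnetism-style anchoring; the joint names and coordinates are illustrative.

```python
import math

# Sketch: anchor an item to the nearest tracked joint, a simplified
# stand-in for surface-magnetism-style snapping in mixed reality.
# Joint names and 2-D positions are illustrative.

def anchor_item(item_pos, joints):
    """Snap an item's 2-D position to the closest skeleton joint."""
    name = min(joints, key=lambda j: math.dist(item_pos, joints[j]))
    return name, joints[name]

joints = {"head": (0.0, 1.8), "neck": (0.0, 1.5), "hip": (0.0, 1.0)}
print(anchor_item((0.05, 1.75), joints))  # snaps to the "head" joint
```

    In a live system this snap would be re-evaluated every frame so the item follows the user's modified image in real time.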

    [0050] In box 108, the user may utilize a user input to select an item to be displayed on a screen 222. The screen 222 may be a holographic display 224. In one embodiment contemplated herein, the item identified by the real time user or the artificial intelligence 202 may be displayed through the holographic display 224. In another example, the item identified by the user or the artificial intelligence 202 may be seen on a mixed reality headset 226, such as Facebook Oculus or Microsoft HoloLens, to produce a real world holographic display. In yet another embodiment, augmented reality glasses 228 (also known as “smart glasses”) such as Google Glass and Snap Spectacles may be used to see the real time enhanced image with the selected item. In another example, the item identified by the user or the artificial intelligence 202 may be displayed on a personal computing device 230 such as a smart phone, tablet, desktop computer, laptop, or television.
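    The choice among these output devices can be sketched as a simple priority-based dispatch. The device identifiers and the preference order are invented for the example; the disclosure does not prescribe any particular fallback order.

```python
# Sketch: route the composited image to whichever display the user has
# connected, preferring more immersive devices. Names are illustrative.

def choose_display(available):
    """Pick the first available display from an illustrative priority list."""
    for device in ("mixed_reality_headset", "smart_glasses",
                   "holographic_display", "personal_device", "screen"):
        if device in available:
            return device
    raise ValueError("no display available")

print(choose_display({"screen", "personal_device"}))
```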

    [0051] In one aspect of this disclosure, the item is adjusted to the modified real time user's image from box 104, which may contain the Enhanced Physical Features. The modified user's image may be updated in real time. Accordingly, in box 108 the item may be displayed on the modified user's image in real time to illustrate the item on or around the user with the Enhanced Physical Features. In one aspect of this disclosure, the method discussed herein may create a more pleasant shopping experience for the user along with providing a method for the retailer to increase sales by allowing the user to see the item as it would appear on a modified real time user image.

    [0052] While a particular form of the invention has been illustrated and described, it will be apparent that various modifications can be made without departing from the spirit and scope of the invention. For example, the system may be adapted to be used by a group of people, such as a yoga or exercise class. Alternately, the system may be adapted for use by people who are not exercising on an exercise machine or shopping online. For example, mental health patients might use the system to assist in positive self-imagery, such as smile, self-esteem, and autonomy exercises. In another example, a cosmetic surgeon or beauty treatment center may want to show clients what they would look like in clothes in real time during an office visit, after body contouring procedures. Accordingly, it is not intended that the invention be limited, except as by the appended claims.

    [0053] Particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the invention.

    [0054] The above detailed description of the embodiments of the invention is not intended to be exhaustive or to limit the invention to the precise form disclosed above or to the particular field of usage mentioned in this disclosure. While specific embodiments of, and examples for, the invention are described above for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. In addition, the teachings of the invention provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments.

    [0055] All of the above patents and applications and other references, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further embodiments of the invention.

    [0056] Changes can be made to the invention in light of the above “Detailed Description.” While the above description details certain embodiments of the invention and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Therefore, implementation details may vary considerably while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated.

    [0057] While certain aspects of the invention are presented below in certain claim forms, the inventor contemplates the various aspects of the invention in any number of claim forms. Accordingly, the inventor reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the invention.