System and method that combines computer hardware device sensor readings and a camera to provide an unencumbered augmented reality experience, enabling real-world objects to be transferred into any digital space, with context and with contextual relationships.
20170228929 · 2017-08-10
Inventors
CPC classification
A63F2300/00
HUMAN NECESSITIES
International classification
G06T19/00
PHYSICS
Abstract
Fragmented Reality provides an unencumbered full immersion augmented/virtual reality with object transfer from real world to digital.
Utilizing a combination of the digital compass, gyroscope, accelerometer, infrared, and GPS, this software detects exactly where the user and their camera are in real space and translates that position to digital space, providing a merging of the real world and the digital world. Further, it adds the ability to move real objects into the digital world using object and image detection and other heuristics.
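The sensor fusion described above can be sketched in miniature. The following is a hypothetical illustration only, not the patented implementation: a complementary filter blends gyroscope and accelerometer readings into a pitch estimate, which is paired with a compass heading and GPS fix to place the camera in world space. The function names, the `0.98` blend factor, and the sample values are all illustrative assumptions.

```python
import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def fuse_pitch(prev_pitch, gyro_rate, accel_y, accel_z, dt):
    """Blend the integrated gyro rate with the accelerometer's gravity vector."""
    gyro_pitch = prev_pitch + gyro_rate * dt      # responsive, but drifts
    accel_pitch = math.atan2(accel_y, accel_z)    # noisy, but drift-free
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

def camera_pose(lat, lon, heading_deg, pitch_rad):
    """Combine a GPS fix and compass heading with the fused pitch."""
    return {"lat": lat, "lon": lon,
            "heading": heading_deg % 360.0,  # normalize the compass reading
            "pitch": pitch_rad}

# Example: device held level while slowly tilting upward.
pitch = 0.0
for _ in range(10):
    pitch = fuse_pitch(pitch, gyro_rate=0.1, accel_y=0.0, accel_z=1.0, dt=0.02)
pose = camera_pose(47.6, -122.3, heading_deg=365.0, pitch_rad=pitch)
```

A production system would fuse all axes (typically as a quaternion) and weight GPS against dead reckoning, but the same blend-fast-and-slow-sensors idea applies.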
Claims
1. A system for defining an augmented reality capability for a mobile phone or tablet device, said system comprising: a) a portable camera comprising a display and having the ability to show the current real-world environment via the display; b) a mobile phone or tablet device comprising a computer processor and having the ability to show images, drawings, and models via the display; c) a software program executed by said computer processor for managing the display of said images, drawings, and models via the display; d) a set of controls whereby the user can interact with the software program; e) digital images acquired by the camera based upon a user-selected view of a particular location; wherein the computer processor, via execution of the software program: i) receives from a user of the system a request for a particular image from the camera view; ii) delivers the image to a cloud service component, which iii) receives the image and uses image detection to determine what the image is, then iv) delivers the image as a digital 3D model, v) or, if the image is not known by the cloud service, searches public-domain models, finds one, compiles it, and delivers it back to the mobile phone or tablet device to be vi) rendered so that the digital 3D model and the real-world environment as displayed by the portable camera are aligned; vii) displays the digital 3D model with a view of the current real-world environment; viii) displays an adjusted digital artifact in response to an adjustment by the user of the view of the current real-world environment as displayed by the portable camera; ix) adjusts lighting projected onto the 3D object depending upon location and time of day; x) applies physics to the object as it relates to the scene; and xi) plays animations and particle effects when available.
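The request/lookup/fallback flow recited in steps ii) through v) of claim 1 can be sketched as follows. This is an illustrative sketch only: the catalog contents, the `classify` stand-in, and the "compiled" suffix are hypothetical placeholders for image detection, a cloud model catalog, and engine compilation.

```python
KNOWN_MODELS = {"chair": "chair.fbx"}    # assumed cloud-service catalog
PUBLIC_DOMAIN = {"lamp": "lamp_pd.obj"}  # assumed public-domain repository

def classify(image_bytes):
    """Stand-in for image detection; a real system would run a vision model."""
    return image_bytes.decode()          # pretend the label is the payload

def compile_model(source_asset):
    """Stand-in for compiling a found asset for the device's 3D engine."""
    return source_asset.replace(".obj", ".compiled")

def resolve_model(image_bytes):
    label = classify(image_bytes)        # step iii): determine what it is
    if label in KNOWN_MODELS:            # step iv): known to the cloud service
        return KNOWN_MODELS[label]
    if label in PUBLIC_DOMAIN:           # step v): public-domain fallback
        return compile_model(PUBLIC_DOMAIN[label])
    return None                          # nothing available to render

model = resolve_model(b"lamp")           # falls through to the public-domain path
```

The key design point in the claim is the two-tier lookup: the cloud service's own catalog is authoritative, and the public-domain search is a compile-on-demand fallback.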
2. The system of claim 1, wherein said digital image comprises: a) a cropped image of a digital picture viewed through the camera, b) cropped using object detection algorithms.
3. The system of claim 1, wherein said digital 3D model: a) is related to the particular location; and b) allows some portion or portions of the view of the current real-world environment to remain visible.
4. The system of claim 1, wherein said digital 3D model comprises one or more of the following characteristics: a) it obscures or partly obscures portions of the view of the current real-world environment with content from the artifact; b) it is rotatable, resizable, or repositionable in response to changes in the view of the current real-world environment; c) it has the physical characteristics (hull and mass) that allow it to further interact with the real world and other digital models; d) it is lit by the environment based upon inputs from location, time of day, and weather patterns; e) it plays animations if the model contains them; and f) it produces particle effects when available or when placed near enough geographically to another digital 3D model.
5. The system of claim 1, wherein said digital 3D model comprises an asset in a common industry format (e.g., FBX, OBJ) that is compiled to be drawn by 3D software engines.
6. The system of claim 1, wherein said digital artifact comprises a digitized 3-dimensional model associated with the particular location.
7. The system of claim 1, wherein the computer processor, via execution of the software program, displays the digital artifact superimposed on at least a portion of the view of a current real-world environment displayed by the portable phone or tablet device.
8. The system of claim 1, wherein the adjustment by the user of the view of the current real-world environment comprises moving closer to or further from a particular location.
9. The system of claim 1, wherein the adjustment by the user of the view of the current real-world environment comprises changing the altitude or azimuth of the view of the current real-world environment.
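The time-of-day lighting adjustment recited in claim 1, step ix) can be sketched crudely. This is an illustrative assumption, not the claimed method: a sinusoidal day arc with fixed 6:00/18:00 sunrise and sunset stands in for a real location-aware solar-position calculation.

```python
import math

def sun_elevation_deg(hour):
    """Approximate solar elevation: peaks at noon, below the horizon at night."""
    return 90.0 * math.sin(math.pi * (hour - 6.0) / 12.0)

def light_intensity(hour):
    """Scale the directional light by elevation; clamp to zero after sunset."""
    return max(0.0, sun_elevation_deg(hour) / 90.0)

noon = light_intensity(12.0)   # full daylight
night = light_intensity(0.0)   # no direct sun
```

A renderer would feed this intensity (and a corresponding sun direction derived from the device's location and heading) into its directional light so the 3D model's shading matches the real scene.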
Description
BRIEF DESCRIPTION OF THE DRAWINGS
CONCLUSION
[0041] The disclosed embodiments are illustrative, not restrictive. While specific configurations of the technology have been described, it is understood that the present invention can be applied to a wide variety of technology categories. There are many alternative ways of implementing the invention.
[0042] Fragmented Reality has many applications beyond basic apps and games. A car salesman could use it to project the inside of an engine for a customer. An advertising agency (for example, one working for Coca-Cola) could position certain events, animations, or objects around the globe (for example, a large dancing Coke bottle in the middle of a football field).
[0043] Fragmented Reality is a software component which is used to enhance existing applications.
[0044] Because Fragmented Reality is a component, it can be used in any piece of software, including but not limited to games, maps, CAD, advertising, medical/surgery, and presentation software.
[0045] Fragmented Reality applies, in real time, near-field depth perception as well as far-field surface, altitude, and other geographic data. It provides object detection and transfer through specialized image detection, search, and 3D model association, and object-to-object awareness with related actions (either physics or particle/visual effects).
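The object-to-object awareness described above can be sketched as a geographic proximity check: two placed models trigger a related action (here, a particle effect) when they are near enough. The 25-metre threshold, the dictionary shape of a placed model, and the sample coordinates are illustrative assumptions; the haversine great-circle formula is standard.

```python
import math

EARTH_RADIUS_M = 6371000.0
NEAR_ENOUGH_M = 25.0  # assumed proximity threshold

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def related_action(model_a, model_b):
    """Fire a particle effect only when the two models are close enough."""
    d = haversine_m(model_a["lat"], model_a["lon"], model_b["lat"], model_b["lon"])
    return "particles" if d <= NEAR_ENOUGH_M else None

bottle  = {"lat": 47.5950, "lon": -122.3316}
can     = {"lat": 47.5951, "lon": -122.3316}  # roughly 11 m north
far_can = {"lat": 47.7000, "lon": -122.3316}  # roughly 11.7 km north
```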
[0046] The movement of the user and/or camera is grounded by NASA altitude measurements, which are used at runtime to create a heightmap, with optional NASA imagery for top-down views.
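The runtime heightmap construction might look like the following sketch. This is a minimal illustration under stated assumptions: altitude samples (such as those derived from NASA elevation data) are averaged into a small grid, and a placed object can be snapped to the ground height of its cell. The 4x4 grid size and the sample values are illustrative.

```python
GRID = 4  # 4x4 heightmap cells over the local area (illustrative)

def build_heightmap(samples, min_x, min_y, size):
    """Average (x, y, altitude) samples into a GRID x GRID heightmap."""
    sums = [[0.0] * GRID for _ in range(GRID)]
    counts = [[0] * GRID for _ in range(GRID)]
    for x, y, alt in samples:
        i = min(int((x - min_x) / size * GRID), GRID - 1)
        j = min(int((y - min_y) / size * GRID), GRID - 1)
        sums[j][i] += alt
        counts[j][i] += 1
    # Cells with no samples default to 0.0; a real system would interpolate.
    return [[sums[j][i] / counts[j][i] if counts[j][i] else 0.0
             for i in range(GRID)] for j in range(GRID)]

def ground_height(heightmap, x, y, min_x, min_y, size):
    """Look up the ground altitude beneath a world position."""
    i = min(int((x - min_x) / size * GRID), GRID - 1)
    j = min(int((y - min_y) / size * GRID), GRID - 1)
    return heightmap[j][i]

samples = [(5.0, 5.0, 120.0), (95.0, 95.0, 140.0)]  # (x, y, altitude in m)
hm = build_heightmap(samples, 0.0, 0.0, 100.0)
```

Snapping a model's vertical position to `ground_height` at its (x, y) is what lets gravity and collision behave plausibly against real terrain.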
[0047] This grounding allows realistic physics models to be applied and respected by the Fragmented Reality component. Fragmented Reality also leverages real-world, real-time data from publicly available feeds to augment a user's space with additional characteristics, including but not limited to local architecture, traffic incidents, and current events.