METHODS AND APPARATUSES FOR PRINTING THREE DIMENSIONAL IMAGES
20170072645 · 2017-03-16
CPC classification
B33Y10/00
PERFORMING OPERATIONS; TRANSPORTING
B29C64/386
PERFORMING OPERATIONS; TRANSPORTING
B33Y30/00
PERFORMING OPERATIONS; TRANSPORTING
B29L2031/772
PERFORMING OPERATIONS; TRANSPORTING
H04N21/4117
ELECTRICITY
B33Y50/00
PERFORMING OPERATIONS; TRANSPORTING
G06T17/20
PHYSICS
B29C67/00
PERFORMING OPERATIONS; TRANSPORTING
G06K15/1849
PHYSICS
G05B19/4099
PHYSICS
G05B2219/49019
PHYSICS
B33Y50/02
PERFORMING OPERATIONS; TRANSPORTING
International classification
B29C67/00
PERFORMING OPERATIONS; TRANSPORTING
B33Y30/00
PERFORMING OPERATIONS; TRANSPORTING
G05B19/4099
PHYSICS
B33Y10/00
PERFORMING OPERATIONS; TRANSPORTING
G06T17/20
PHYSICS
B33Y50/02
PERFORMING OPERATIONS; TRANSPORTING
Abstract
Systems and methods for printing a 3D object on a three-dimensional (3D) printer are described. The methods semi-automatically or automatically delineate an item in an image, receive a 3D model of the item, match the item to the 3D model, and send the matched 3D model to a 3D printer.
Claims
1. (canceled)
2. A computer-implemented method of three-dimensional (3D) printing of a 3D object of interest, the method comprising: obtaining, using a processor, a 3D wire-frame model of the object of interest; breaking the 3D wire-frame model into 3D components using a processor; and generating, using a processor, a 3D model of the object of interest by mapping information from a portion of an image that depicts the object of interest to the 3D components of the 3D wire-frame model.
3. The computer-implemented method of 3D printing of claim 2, further comprising transmitting the 3D model to a 3D printer.
4. The computer-implemented method of 3D printing of claim 3, further comprising printing the 3D model of the object of interest as a raised-contoured surface.
5. The computer-implemented method of 3D printing of claim 3, further comprising printing the 3D model of the object of interest as a freestanding figurine on a platform.
6. The computer-implemented method of 3D printing of claim 3, further comprising printing the 3D model on the 3D printer, wherein the 3D printer is located at a remote location.
7. The computer-implemented method of 3D printing of claim 6, further comprising shipping the printed 3D model to a user.
8. The computer-implemented method of 3D printing of claim 2, wherein the 3D model includes surface color information.
9. The computer-implemented method of 3D printing of claim 2, wherein the 3D model includes surface pose information.
10. The computer-implemented method of 3D printing of claim 2, wherein said obtaining, using a processor, a 3D wire-frame model of an object of interest comprises generating the 3D wire-frame model from at least one pair of stereoscopic images.
11. The computer-implemented method of 3D printing of claim 2, wherein said obtaining, using a processor, a 3D wire-frame model of an object of interest comprises selecting the 3D wire-frame model from a database of 3D wire-frame models.
12. The computer-implemented method of 3D printing of claim 2, wherein the 3D wire-frame model includes shape information of the object of interest, and wherein the method further comprises determining the position of the 3D components with respect to video frames depicting the object of interest.
13. The computer-implemented method of 3D printing of claim 2, wherein the 3D wire-frame model includes one or more constraints of texture, color, or lighting.
14. The method of claim 2, further comprising transmitting the mapped information to a 3D printer.
15. A system for three-dimensional (3D) printing, comprising: at least one processor configured to: obtain a 3D wire-frame model of an object of interest; break the 3D wire-frame model into 3D components; and generate a 3D model of the object of interest by mapping information from a portion of an image that depicts the object of interest to the 3D components of the 3D wire-frame model.
16. The system for 3D printing of claim 15, wherein the at least one processor is configured to calculate the wire-frame 3D model using at least one pair of stereoscopic images.
17. The system for 3D printing of claim 15, wherein the mapped information includes surface color information.
18. The system for 3D printing of claim 15, wherein the mapped information includes surface texture information.
19. The system for 3D printing of claim 15, further comprising a feedback interface configured to receive user input, wherein the at least one processor is further configured to select the 3D wire-frame model from a database of 3D wire-frame models based on the user input received through the feedback interface.
20. A non-transitory, computer readable storage medium having instructions stored thereon that cause an apparatus to perform a method comprising: obtaining, using a processor, a 3D wire-frame model of an object of interest; breaking the 3D wire-frame model into 3D components using a processor; and generating, using a processor, a 3D model of the object of interest by mapping information from a portion of an image that depicts the object of interest to the 3D components of the 3D wire-frame model.
21. The non-transitory, computer readable storage medium of claim 20, wherein the method further comprises receiving a user input through a feedback interface, and selecting the 3D wire-frame model from a database of 3D wire-frame models based on the received user input.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] A more complete appreciation of embodiments of the invention, and many of the attendant advantages thereof, will be readily apparent as the same becomes better understood by reference to the following detailed description when considered in conjunction with the accompanying drawings, in which like reference symbols indicate the same or similar components.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
[0026] The following detailed description is directed to certain specific embodiments. However, other embodiments may be used and some elements can be embodied in a multitude of different ways.
[0027] As an initial matter, the terminology used to describe the example embodiments is set out below as precisely as it can be expressed, because several of the concepts herein are novel and well-known terms for them may not yet exist. Moreover, the description should not be interpreted as limited to only the technologies and concepts described herein; it is intended to convey concepts relating to example embodiments of the present invention, not to restrict the invention to the examples described.
[0028] User interface enabled video devices (UIEVDs) are electronic devices that can display video and allow users to point to and click on a portion or portions of a displayed video or picture. Examples of such devices include a TV, an IPOD, and an IPHONE.
[0029] A three-dimensional model (3D model) is a mathematical and/or numerical representation of a 3D object. An example of a 3D model format is described in the DXF Reference Guide for AutoCAD 2009, available on the AutoDesk website (http://usa.autodesk.com). A wire-frame 3D model contains information relating to the item of interest, which can include the object itself, secondary items in the scene, and background attributes. A 3D model can also include information relating to the surface color of the object as well as information relating to the pose of the object to be printed. Information such as surface texture, illumination properties, and geometry topology can optionally be included as well. Such a 3D model is referred to herein as a full 3D model.
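For purposes of illustration only, the difference in information content between a wire-frame 3D model and a full 3D model might be captured by the following Python sketch; the class and field names are editorial assumptions and do not appear in the disclosure:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    Vertex = Tuple[float, float, float]    # x, y, z coordinates
    Triangle = Tuple[int, int, int]        # indices into the vertex list
    RGB = Tuple[int, int, int]

    @dataclass
    class WireFrame3DModel:
        """Shape-only representation: geometry of the item of interest,
        optionally with secondary items and background attributes."""
        vertices: List[Vertex]
        triangles: List[Triangle]

    @dataclass
    class Full3DModel:
        """Wire-frame shape plus the appearance data needed for printing."""
        shape: WireFrame3DModel
        face_colors: List[RGB]                    # surface color, per triangle
        pose: Tuple[float, float, float]          # orientation of the printed pose
        surface_texture: Optional[bytes] = None   # optional texture image
        illumination: Optional[dict] = None       # optional lighting properties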
[0031] In the embodiment illustrated in FIG. 1, a user views video on a TV 101 and can select an item of interest to be printed on a 3D printer 105; a workstation 103 can serve as a host device for the printer.
[0032] In some embodiments, the 3D printer 105 is co-located with the TV 101 and the user such that the user can simply remove a printed item from the 3D printer. In other embodiments, the 3D printer 105 is located at a place remote from the user. Once the item is printed, the user may pick up the printed 3D item at the remote location, or have the 3D item shipped or transferred to the user's location. For example, a relatively inexpensive 3D printer 105 may be located in a user's home; alternatively, a high-end 3D printer 105 may be located at a printing kiosk or other business that provides high quality 3D printing services for numerous customers/users.
[0034] TV 101 also includes an object model processor 203 configured to calculate a printable 3D model with color and pose information based on the received wire-frame 3D model, video frames (including the main subject and background), and user selection information. A multiple frame abstraction processor 205 is configured to calculate the 3D model based on the captured video frames. In some embodiments, the multiple frame abstraction processor 205 calculates the 3D model based solely on the captured video frames. An object abstraction processor 207 is configured to delineate the item of interest in the video frame(s) that will be used to generate the printable 3D model. The results from the object abstraction processor 207 can be provided (directly or indirectly) to both the object model processor 203 and the multiple frame abstraction processor 205. An object formation processor 209 integrates the results from processors 203 and 205 into a format readable by most common 3D printers. The generated 3D model with color and pose information is then sent to the 3D printer 105. In some cases, the printer can be attached to a host device, such as the workstation 103 in FIG. 1.
[0035] In particular, the object formation processor 209 generates a 3D model for printing the object of interest as a raised-contoured surface, a 3D model for printing the object of interest as a freestanding figurine on a platform, and/or a 3D model for printing a person depicted in the plurality of video frames. The object formation processor 209 can also break the 3D wire-frame model into 3D components, determine the position of the 3D components with respect to the plurality of video frames representing the delineated object, and color the 3D components with the color information from the plurality of video frames representing the delineated object.
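As a structural sketch only, the division of labor among processors 203, 205, 207, and 209 described above might be expressed as the following Python pipeline; every function name, signature, and placeholder body is an assumption made for illustration, not the disclosed implementation:

    import numpy as np

    def object_abstraction_207(frames, selection):
        """Delineate the item of interest: crop each video frame to the
        user-selected region (x, y, width, height)."""
        x, y, w, h = selection
        return [f[y:y + h, x:x + w] for f in frames]

    def multiple_frame_abstraction_205(object_frames):
        """Calculate a 3D estimate from the captured frames alone
        (placeholder: merely bundles the views)."""
        return {"source": "frames", "views": object_frames}

    def object_model_203(wire_frame, object_frames):
        """Attach color and pose information from the delineated frames to
        the received wire-frame 3D model (placeholder: mean frame colors)."""
        colors = [f.mean(axis=(0, 1)) for f in object_frames]
        return {"source": "wire-frame", "shape": wire_frame, "colors": colors}

    def object_formation_209(model_a, model_b):
        """Integrate both results into a single printer-readable structure."""
        return {"merged": [model_a, model_b], "format": "color-mesh"}

    # Toy run: two blank 480x640 RGB frames; the user selects a 100x100 region.
    frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(2)]
    cut = object_abstraction_207(frames, (270, 190, 100, 100))
    printable = object_formation_209(
        object_model_203({"vertices": []}, cut),
        multiple_frame_abstraction_205(cut))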
[0036] Using the various example components described above in connection with FIGS. 1 and 2, an example process for printing a 3D object of interest is now described.
[0037] After the user selects a portion of the framed image, at step 303 the example embodiment allows the user to select a wire-frame 3D model of the player. Such a wire-frame 3D model of a particular player may include a 3D wire-frame skeleton, generic skins, texture maps, attributes (e.g., predetermined colors), and lighting constraints. Moreover, the model may encode physical limitations of the item of interest (e.g., its physical flexibility). In an example embodiment of the present invention, a user is allowed to access a database with numerous 3D models of various individuals. The user can then match the image that he/she selected to the 3D model of the same individual from the database.
[0038] In one embodiment, access to the database of wire-frame 3D models can be free of charge, or it can be a fee-based service. If a fee-based service is provided, the system checks, before the requested 3D model is downloaded, whether the download can be provided. In other words, in some embodiments, the data processing functionality determines whether it is legal to download a 3D model (step 305) (for example, whether usage permissions have previously been granted). If it is legal, the selected 3D model can be downloaded (step 307), and 3D copyright information can also be downloaded (step 309). To conform to intellectual property requirements that may be associated with the printed product, the 3D copyright and/or trademark information can be downloaded and displayed as part of the printed product; the 3D copyright material may be added to the printable 3D product to ensure compliance with trademark restrictions. In parallel or in sequence, the video frame that contains the user-selected portion is captured (step 313) and then reformatted (step 315) into a format suited to frame matching. The reformatted image data is then matched with the downloaded 3D model, and the result is sent to the 3D printer 105 (step 319) or stored to be printed later. For example, a user may store a number of 3D objects and, at a later time, print one or more of them on a high-end 3D printer at a location remote from the user, for example, at a company or 3D printing kiosk that uses a high-end 3D printer to print a variety of 3D objects commercially. The user may then pick up the printed objects at the remote location or have them shipped to the user. It should be noted that the image can be compensated for in-scene motion of the object of interest, for camera motion, and/or for color imbalances.
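The step numbering above (303 through 319) suggests a straightforward control flow. The following Python sketch mirrors that flow under assumed interfaces; the ModelDB class, the function names, and the no-op "reformatting" are illustrative inventions, not the patented implementation:

    import numpy as np

    class ModelDB:
        """Toy stand-in for the wire-frame model database / fee-based service."""
        def __init__(self, models, licensed):
            self.models, self.licensed = models, licensed
        def download_permitted(self, model_id):          # step 305: rights check
            return model_id in self.licensed
        def download(self, model_id):                    # step 307: model download
            return self.models[model_id]
        def copyright_notice(self, model_id):            # step 309: IP marking
            return f"(c) rights holder of {model_id}"

    def print_selected_object(selection, frames, db, send):
        """Assumed end-to-end flow for steps 303 through 319."""
        model_id = selection["model_id"]                 # step 303: model chosen
        if not db.download_permitted(model_id):          # step 305
            raise PermissionError("3D model download not licensed")
        wire_frame = db.download(model_id)               # step 307
        notice = db.copyright_notice(model_id)           # step 309
        frame = frames[selection["frame_index"]]         # step 313: capture frame
        prepared = frame.astype(np.float32) / 255.0      # step 315: toy reformat
        full_model = {"shape": wire_frame,               # match image to model
                      "image": prepared,
                      "embedded_notice": notice}
        send(full_model)                                 # step 319 (or store)
        return full_model

    # Toy usage: one blank frame, one licensed model; "send" just collects output.
    db = ModelDB({"player_23": {"vertices": []}}, licensed={"player_23"})
    sent = []
    print_selected_object({"model_id": "player_23", "frame_index": 0},
                          [np.zeros((480, 640, 3), dtype=np.uint8)], db, sent.append)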
[0040] Depending on the number of frames available and the variety of their perspectives, different levels of 3D models can be generated. For instance, multiple images taken of a 3D item of interest can be used to create a 3D model of the item. Provided that all of the surface area of the item of interest can be imaged, a complete 3D model of the item can be derived. Depending on the geometry of the item of interest, a valid representation can be obtained from a small number of video frames (e.g., six). A multi-lighted, multi-angle 3D model generation system developed by Adam Baumberg at the Canon Research Centre Europe faithfully restored a complete 3D model from 20-25 frames. The system is described in "Blending Images for Texturing 3D Models," Adam Baumberg, Canon Research Centre Europe, which is incorporated by reference herein in its entirety. Additionally, U.S. Pat. No. 6,856,314, "Method and system for 3D reconstruction of multiple views with altering search path and occlusion modeling," describes an algorithm (for example, at column 4, line 55 through column 5, line 10, and in the referenced figures) for performing full 3D model reconstruction from a limited number of images; it is incorporated by reference herein in its entirety.
[0041] With a complete 3D model, a fully developed 3D object can be printed using a 3D printer. However, the present invention also contemplates creating other types of models that are less than a complete 3D model, depending upon the availability of relevant information and/or the user's preference. For instance, a 3D model that represents a raised flat surface can be generated from two stereoscopic images. The raised surface can be stepped up relative to the surrounding surface and outlined by the contour of an object selected by the user. As an example, if the user selects a person in the video scene, the contour of that person in the image would be raised, using multiple views to create the 3D perspective. This type of 3D model is referred to as a 2.5-dimensional model within this document for the sake of convenience.
[0042] In another example, two stereoscopic images can be used to generate a 3D model that creates a raised relief of the object of interest. An example of such a 3D model is a relief map. This type of 3D model is referred to as a 3D relief model within this document for the sake of convenience. A difference between the two is that the raised surface of a 2.5-dimensional model is flat no matter what the object of interest actually looks like, whereas the raised surface of a 3D relief model shows a three-dimensional surface.
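The distinction between the two model types can be made concrete with a height-map sketch. In the following Python illustration, the inputs (a delineation mask and a stereo disparity map) are assumed to be available from earlier steps; a 2.5-dimensional model raises the silhouette to one constant height, while a 3D relief model modulates the height by disparity:

    import numpy as np

    def height_map_2_5d(mask, step_height=2.0):
        """2.5-dimensional model: the object's silhouette becomes a flat,
        stepped-up plateau relative to the surrounding surface."""
        return mask.astype(np.float32) * step_height

    def height_map_relief(mask, disparity, max_height=2.0):
        """3D relief model: height varies across the silhouette with stereo
        disparity (nearer surface -> larger disparity -> taller relief)."""
        d = disparity.astype(np.float32)
        d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)  # to [0, 1]
        return mask.astype(np.float32) * d * max_height

    # Toy inputs: a circular delineation mask and a left-to-right disparity ramp.
    yy, xx = np.mgrid[0:100, 0:100]
    mask = (yy - 50) ** 2 + (xx - 50) ** 2 < 40 ** 2
    flat_model = height_map_2_5d(mask)          # constant-height plateau
    relief_model = height_map_relief(mask, xx)  # height follows disparity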
[0048] All three publications identified above are incorporated by reference herein in their entirety. Other delineation methods and algorithms may also be used.
[0049] The delineated object from the image can then be combined with (or mapped to) a wire-frame 3D model selected by the user to form a full 3D model that includes information relating to the surface colors and pose of the object to be printed. The 3D wire-frame model is combined with the captured image in a multi-step process. The following list describes the steps in some embodiments of the process (a sketch of the PCA alignment step follows the list):

[0050] Break the wire-frame 3D model into 3D wire-frame components that can be matched directly with the captured video frames. For example, the 3D wire-frame components of a sports figure can be the head, arms, torso, legs, and feet.

[0051] Determine the position of each 3D component in the scenes of the captured video frames:

[0052] For a given 3D wire-frame component, determine the constraints imposed by other 3D wire-frame components already placed in the scene. In the example of a sports figure, if the torso is placed first, the search area and viable search locations (for example, position and orientation) for the head are limited to a small area. Once the head is placed, the arms, legs, and feet can also be placed.

[0053] For a given 3D wire-frame component, create multiple simulated 2D images of the 3D wire-frame model at various orientations (e.g., viewpoints) as constrained by the previous step. The simulated 2D images can be created using a generic illumination model in order to robustly match the created image with the captured image. In other examples, illumination models that match the actual scene (for example, a rainy day, dusk, or a bright sunny day with shadows) can also be used.

[0054] For a given 3D component, determine its position and orientation using Principal Component Analysis (PCA). Principal component analysis (the Karhunen-Loeve transform) is a linear transformation of a sample in n-dimensional space that makes the new coordinate axes uncorrelated. Using PCA, the individual 3D wire-frame component, projected into a simulated 2D image, is aligned with the 2D captured images extracted from the captured frames. An approximation to the Karhunen-Loeve transform is discussed in "Fast Approximate Karhunen-Loeve Transform with Applications to Digital Image Coding," Leu-Shing Lan and Irving Reed, University of Southern California, Proc. SPIE Vol. 2094, 1993, which is incorporated by reference in its entirety.

[0055] Optimize a global connection distance metric that ensures all the 3D wire-frame components reconnect to the original 3D wire-frame model at the proper locations. If the model does not reconnect properly, as specified by a user threshold, the process can be re-iterated by discarding the largest incorrect correspondence and recalculating the system of simulated 2D images and 3D wire-frame components.
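The PCA alignment of paragraph [0054] can be illustrated in miniature for 2D point sets. In the Python sketch below, each component's principal axes are estimated from the covariance eigenvectors and the simulated projection is rotated onto the captured one; the helper names are assumptions, and a real alignment step must also resolve PCA's inherent sign ambiguity:

    import numpy as np

    def principal_axes(points):
        """Mean and principal axes (covariance eigenvectors, major axis in
        column 0) of an (N, 2) point set -- the PCA step of [0054]."""
        mean = points.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov((points - mean).T))  # ascending order
        return mean, vecs[:, ::-1]                              # major axis first

    def align_by_pca(simulated, captured):
        """Rotate and translate the simulated 2D component so its principal
        axes coincide with those of the captured, delineated component
        (up to the 180-degree sign ambiguity of PCA)."""
        m_s, ax_s = principal_axes(simulated)
        m_c, ax_c = principal_axes(captured)
        rotation = ax_c @ ax_s.T      # maps simulated axes onto captured axes
        return (simulated - m_s) @ rotation.T + m_c

    # Toy check: a rotated, shifted copy of an elongated point cloud is
    # brought back into register with the original.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(200, 2)) * [3.0, 1.0]
    theta = np.deg2rad(30)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    aligned = align_by_pca(pts, pts @ rot.T + [5.0, 2.0])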
[0056] Finally, the image may be textured onto (for example, filled in with the colors of) the full 3D model. The 3D wire-frame model is simplified into a triangular mesh. Since the position and orientation of the 3D wire-frame model relative to the 2D captured frame(s) were calculated in the steps described above, the delineated image is projected onto the triangular mesh of the 3D wire-frame model. Three visibility states can be considered when performing the mapping from the 2D captured image onto the 3D wire-frame model (a sketch of this classification follows the list):

[0057] Visible: If a triangle of the 3D wire-frame model is completely visible in the camera view, the whole triangle can be textured based on the color information of the delineated image.

[0058] Hidden: If a triangle is completely hidden in the camera view, the triangle can be rendered with zero intensity or filled in with information from another part of the model (for example, the opposite side) based on the user's preferences.

[0059] Partially visible: The triangle may be subdivided into smaller triangles that are each hidden or visible; the appropriate action from the above steps can then be taken and the model textured.
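The three-way visibility handling of paragraphs [0057]-[0059] amounts to classifying each mesh triangle and choosing a fill rule. The Python sketch below assumes a precomputed per-vertex visibility test and a caller-supplied color sampler, which is far cruder than true projective texturing:

    def texture_triangles(triangles, vertex_visible, sample_color, hidden_fill):
        """Assign a color per triangle by visibility state: fully visible ->
        sample from the delineated image [0057]; fully hidden -> zero
        intensity or mirrored fill [0058]; mixed -> subdivide later [0059]."""
        colors, to_subdivide = [], []
        for index, tri in enumerate(triangles):
            vis = [vertex_visible[v] for v in tri]
            if all(vis):
                colors.append(sample_color(tri))
            elif not any(vis):
                colors.append(hidden_fill)
            else:
                colors.append(None)          # resolved after subdivision
                to_subdivide.append(index)
        return colors, to_subdivide

    # Toy usage: four vertices, two triangles, vertex 3 hidden from the camera.
    tris = [(0, 1, 2), (1, 2, 3)]
    visible = [True, True, True, False]
    colors, redo = texture_triangles(
        tris, visible,
        sample_color=lambda tri: (200, 150, 120),   # stand-in image sample
        hidden_fill=(0, 0, 0))                      # zero intensity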
[0060] The resulting 3D model is textured with color and shadow and displayed in the appropriate pose; this is referred to as the full 3D model. At this point, the object of interest is ready to be printed. Note that the mapping can be achieved (and/or improved) if multiple frames or multiple cameras at different angles are used during the coloring step. In other words, using multiple frames or multiple cameras at different angles, both the front and the back of an object to be printed can be colored in, assuming that the frames captured the front and back of the object.
[0062] In any of the methods and processes specifically described above, one or more steps may be added, or a described step deleted, without departing from at least one of the aspects of the invention. Those of ordinary skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative logical blocks, components, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, the functionality of devices described herein as separate processors may be combined into an embodiment using fewer processors, or a single processor, unless otherwise described.
[0063] Those of ordinary skill would further appreciate that the various illustrative logical blocks, modules, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, firmware, computer software, middleware, microcode, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed methods.
[0064] The steps of a method or algorithm described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). The ASIC may reside in a wireless modem. In the alternative, the processor and the storage medium may reside as discrete components in the wireless modem.
[0065] Various modifications to these example embodiments may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other examples without departing from the spirit or scope of the novel aspects described herein. Thus, the scope of the disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.