INFORMATION PROCESSING APPARATUS, SIMULATOR RESULT DISPLAY METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
20180012395 · 2018-01-11
CPC classification
G06T19/20
PHYSICS
Abstract
An information processing apparatus is disclosed. A processor selects cross-section shape information and texture information corresponding to a view direction from a memory. The memory stores the cross-section shape information representing a cross-section shape and the texture information representing a texture of a cross-section for each of cross-sections in a vicinity of a line segment pertinent to a phenomenon portion. The processor generates visualization data used to visualize the line segment in a three dimensional image by using the cross-section shape information and the texture information being selected and displays the line segment based on the visualization data on a display part.
Claims
1. An information processing apparatus, comprising: a memory; and a processor coupled to the memory and the processor configured to select cross-section shape information and texture information corresponding to a view direction from the memory, the memory storing the cross-section shape information representing a cross-section shape and the texture information representing a texture of a cross-section for each of cross-sections in a vicinity of a line segment pertinent to a phenomenon portion; and generate visualization data used to visualize the line segment in a three dimensional image by using the cross-section shape information and the texture information being selected, and display the line segment based on the visualization data on a display part.
2. The information processing apparatus as claimed in claim 1, wherein the processor is further configured to create the cross-section information by connecting a first plane and a second plane at a line including a center of the line segment, in which the first plane and the second plane intersect each other, the first plane including the center of the line segment and one edge of the line segment, the second plane including the center of the line segment and another edge of the line segment.
3. The information processing apparatus as claimed in claim 2, wherein the processor creates the cross-section shape information depending on each of angles by rotating one of the first plane and the second plane with respect to a line passing through the center defined as an axis at every predetermined angle.
4. The information processing apparatus as claimed in claim 1, wherein the processor is further configured to set a plurality of cut surfaces equally dividing a rectangle region in each of perpendicular directions with respect to the rectangle region of three dimensions, the rectangle region including a cross-section in the vicinity; and create the cross-section shape information by forming a curved surface by using apexes of the plurality of cut surfaces.
5. The information processing apparatus as claimed in claim 4, wherein the curved surface is created for each of regions equally divided by the cut surfaces at each of surfaces of the rectangular region of the three dimensions; and each of lines equally dividing the surfaces is formed as a part of the curved surface, and is shared with the curved surface created in another region.
6. A simulator result display method by a computer, the method comprising: selecting cross-section shape information and texture information corresponding to a view direction from a memory, the memory storing the cross-section shape information representing a cross-section shape and the texture information representing a texture of a cross-section for each of cross-sections in a vicinity of a line segment pertinent to a phenomenon portion; generating visualization data used to visualize the line segment in a three dimensional image by using the cross-section shape information and the texture information being selected; and displaying the line segment based on the visualization data on a display part.
7. A non-transitory computer-readable recording medium storing therein a simulator result display program that causes a computer to execute a process comprising: selecting cross-section shape information and texture information corresponding to a view direction from a memory, the memory storing the cross-section shape information representing a cross-section shape and the texture information representing a texture of a cross-section for each of cross-sections in a vicinity of a line segment pertinent to a phenomenon portion; generating visualization data used to visualize the line segment in a three dimensional image by using the cross-section shape information and the texture information being selected; and displaying the line segment based on the visualization data on a display part.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0035] For the above described technologies, reduction of a data size is an important issue in order to realize bird's-eye visualization. In general, thinning of grid information is performed. In that case, however, the display does not preserve the analysis accuracy of the large-scale simulation itself.
[0036] Accordingly, the embodiment aims to properly extract data sufficient for a visualization process of a simulation result.
[0037] In the following, a preferred embodiment of the present invention will be described with reference to the accompanying drawings.
[0038] First, a center line 3 of a phenomenon region 1 representing a phenomenon is acquired from the scalar field data 2, nodal points I.sub.0, I.sub.1, . . . are applied on the center line 3, and a vector such as v(I.sub.0−I.sub.1) (1a) is calculated at each of the nodal points I.sub.k (k=0, 1, . . . ). Torsion information is generated by using the respective vectors acquired for the nodal points I.sub.k, and a curved surface 1c is visualized by using the generated torsion information. An example of the three dimensional visualization will be depicted in
[0043] As phenomena to be visualized, unshaped phenomena such as pressure, air flow, and the like are considered. However, the embodiment is not limited to these phenomena, and is able to visualize various phenomenon regions 1 in three dimensions.
[0044] The three dimensional visualization in the embodiment is conducted by an information processing apparatus 100 as illustrated in
[0045] The CPU 11 corresponds to a processor that controls the information processing apparatus 100 in accordance with a program stored in the main storage device 12. A Random Access Memory (RAM), a Read Only Memory (ROM), and the like may be used as the main storage device 12 to store or temporarily store the program executed by the CPU 11, data for a process conducted by the CPU 11, data acquired in the process conducted by the CPU 11, and the like.
[0046] A Hard Disk Drive (HDD) or the like may be used as the auxiliary storage device 13, and stores various sets of data such as programs for performing various processes and the like. A part of the program stored in the auxiliary storage device 13 is loaded to the main storage device 12, and various processes are performed and realized by the CPU 11.
[0047] The input device 14 includes a mouse, a keyboard, and the like, and is used by a user to input various information items for the process conducted by the information processing apparatus 100. The display device 15 displays various information items under control of the CPU 11. The input device 14 and the display device 15 may be integrated as a user interface such as a touch panel or the like. The communication I/F 17 conducts communications through a network; the communications are not limited to either wireless or wired.
[0048] The program realizing the processes conducted by the information processing apparatus 100 may be provided to the information processing apparatus 100 by a recording medium 19 such as a Compact Disc Read-Only Memory (CD-ROM) or the like, for instance.
[0049] The drive device 18 interfaces between the recording medium 19 (the CD-ROM or the like) set into the drive device 18 and the information processing apparatus 100.
[0050] Also, the program, which realizes the various processes according to the embodiment, is stored in the recording medium 19. The program stored in the recording medium 19 is installed into the information processing apparatus 100 through the drive device 18, and becomes executable by the information processing apparatus 100.
[0051] The recording medium 19 storing the program is not limited to the CD-ROM. The recording medium 19 may be any type of a recording medium, which is a non-transitory tangible computer-readable medium including a data structure. The recording medium 19 may be a portable recording medium such as a Digital Versatile Disc (DVD), a Universal Serial Bus (USB) memory, or the like, or a semiconductor memory such as a flash memory.
[0053] Also, a storage part 130, which corresponds to the main storage device 12 and the auxiliary storage device 13, stores vector field data 51, λ2 distribution data 52 of the scalar field, cross-sectional shape information 53, texture information 54, visualization data 55, camera position information 59, and the like.
[0054] The space interpolation part 41 includes the entire computational space 7 in a rectangular region and conducts a space interpolation process.
[0055] When the vector field data 51 is data of the structural grid, the space interpolation process is not conducted. When the vector field data 51 is data of the unstructured grid, the space interpolation part 41 conducts the space interpolation process.
[0056] The phenomenon region extraction part 42 uses a λ2 method of Jing et al., and extracts the phenomenon region 1 (which may be a vortex region or a vascular region). The λ2 distribution data 52 of the scalar field acquired by the λ2 method are specified by a boundary 4.
[0057] The center line extraction part 43 extracts the center line 3 of a region extracted by the phenomenon region extraction part 42.
[0058] The cross-section formation part 44 conducts a cross-section formation process with respect to the region extracted based on the center line 3. The cross-sectional shape information 53 is stored in the storage part 130. The cross-sectional shape information 53 includes information pertinent to multiple cross-sectional shapes.
[0059] The cross-section formation part 44 in the embodiment conducts a first cross-section formation method or a second cross-section formation method described below, depending on a selection of the user.
[0060] First Cross-Section Formation Method
[0061] For each of the ends of the center line 3, multiple planes including the end and the region center 5 are created, and the cross-section formation process is conducted.
[0062] Second Cross-Section Formation Method
[0063] Maximum and minimum space information is defined for the extracted region. The curved surface is generated by using center point information acquired from the space information, and the cross-section formation process is conducted. The space information represents a three-dimensional rectangular region such as a cube, a rectangular solid, or the like. Hereinafter, it is simply called “rectangular region”.
[0064] The texture generation part 45 uses the cross-sectional shape information 53 and generates a texture depending on each of the cross-sectional shapes. The texture information 54 is output and stored in the storage part 130. The texture information 54 represents, for example, reflected light of a band surface for a light source arranged at a predetermined position in a three dimensional space.
[0065] The rendering part 46 includes a selection part for selecting the cross-sectional shape information 53 and the texture information 54 depending on the camera position information 59 from the storage part 130, and a display part for generating the visualization data 55 in the storage part 130 and displaying the visualization data 55 on the display device 15.
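As an illustration of the selection conducted by the rendering part 46, the sketch below picks, from hypothetically stored formation angles, the cross-section orientation closest to the camera azimuth derived from the camera position information 59. The function name, the angle-keyed storage layout, and the 180° periodicity of the planes are assumptions for illustration, not the embodiment's actual data format.

```python
import math

def select_cross_section(camera_dir, stored_angles_deg):
    """Pick the stored cross-section angle closest to the camera azimuth.

    camera_dir: (x, y, z) view direction.
    stored_angles_deg: angles at which cross-section shape/texture information
    was precomputed (hypothetical storage layout).
    """
    x, y, _ = camera_dir
    azimuth = math.degrees(math.atan2(y, x)) % 180.0  # a plane repeats every 180 degrees
    return min(stored_angles_deg,
               key=lambda a: min(abs(a - azimuth), 180.0 - abs(a - azimuth)))

# A camera looking along +y (azimuth 90 degrees) selects the 90-degree cross-section.
angles = [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
print(select_cross_section((0.0, 1.0, 0.0), angles))
```

At display time, only the shape and texture stored for the selected angle need to be loaded, which is consistent with the data-reduction goal of the embodiment.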
[0067] For each of the extracted regions, the first cross-section formation method or the second cross-section formation method is conducted. The first cross-section formation method and the second cross-section formation method will be described.
[0069] The cross-section formation process is conducted by setting a pair of planes: a first plane and a second plane. The first plane and the second plane share one edge including the region center 5 with each other. The first plane includes one end point of the center line 3 and the region center 5. The second plane includes another end point of the center line 3 and the region center 5. By a different angle θ, a plurality of pairs of planes sharing the edge including the region center 5 may be created. In this example, θ=90° is applied. However, any angle θ is applicable.
[0071] A curved line 8 is defined by using a central point of each of edges of the rectangular region 6, and the curved surface is generated by a plurality of defined curved lines 8.
[0073] The phenomenon region extraction part 42 generates the λ2 distribution data 52 by calculating a λ2 value distribution of the computational space 7, and extracts the phenomenon region 1 (step S72). By extracting the phenomenon region 1, at least the boundary 4 is extracted. Then, the center line extraction part 43 extracts the boundary 4 and the center line 3 from the computational space 7 based on the λ2 distribution data 52 (step S73).
[0074] Next, the cross-section formation part 44 conducts the cross-section formation process based on the boundary 4 and the center line 3, which are extracted by the center line extraction part 43, by the first or second cross-section formation method indicated by the user (step S74). The cross-sectional shape information 53 is output.
[0075] Next, the texture generation part 45 generates the texture for each of the cross-sectional shapes with respect to each of the regions specified by the boundary 4 based on the cross-sectional shape information 53, and generates the texture information 54 representing the texture (step S75).
[0076] The rendering part 46 creates the visualization data 55 based on the camera position information 59 by using the cross-sectional shape information 53 and the texture information 54 (step S76). Based on the visualization data 55, a three dimensional image or motion picture is displayed on the display device 15.
[0077] The space interpolation process by the space interpolation part 41 will be described below. The space interpolation part 41 receives as input the vector field data 51 for the structural grid, the unstructured grid, or a mix thereof.
[0079] The space interpolation part 41 includes the entire computational space 7 in the rectangular region, and conducts the space interpolation process by the structural grid. The space interpolation process may be conducted in accordance with a procedure described below.
[0080] 1) For the unstructured grid, a histogram is automatically acquired based on a length of the edge. For example, if (min, max)=(1.0e−3, 1.0e+2), the range is divided into 10 bins in logarithm.
[0081] 2) A count value (which may be “a”) in a first histogram range and a maximum count value (which may be “b”) are introduced into the following formula,
[0082] As a data structure, both the structural grid and the unstructured grid may be input.
[0083] In the space interpolation process of data for the unstructured grid, the interpolation is conducted by a division number indicated by the user, by using a method for generating the three dimensional structural grid with a tetrahedral element (Non-Patent Document 2). The extraction of the phenomenon region 1 depends little on the interpolation accuracy.
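The logarithmic edge-length histogram of step 1) above can be sketched as follows. The bin count of 10 matches the (min, max)=(1.0e−3, 1.0e+2) example; the function name and the flat list of edge lengths as input are illustrative assumptions.

```python
import math

def log_histogram(edge_lengths, bins=10):
    """Count unstructured-grid edge lengths into equal-width bins on a log10 scale."""
    lo = math.log10(min(edge_lengths))
    hi = math.log10(max(edge_lengths))
    width = (hi - lo) / bins
    counts = [0] * bins
    for e in edge_lengths:
        # clamp the maximum length into the last bin
        i = min(int((math.log10(e) - lo) / width), bins - 1)
        counts[i] += 1
    return counts

# Edge lengths spanning (min, max) = (1.0e-3, 1.0e+2), as in the example above.
edges = [1.0e-3, 1.0e-2, 1.0e-2, 1.0e+0, 1.0e+2]
print(log_histogram(edges))
```

The resulting counts per bin correspond to the count values ("a" and "b") used in step 2).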
[0084] The phenomenon region extraction part 42 divides the computational space into the phenomenon region 1 and a region other than the phenomenon region 1. An example of using the λ2 method of Jing et al. will be described below. However, the extraction of the phenomenon region 1 is not limited to the λ2 method. In a case of using the λ2 method, the value to output is a λ2 value (a scalar value).
[0085] 1) A velocity vector (u, v, w) is arranged on the nodal points of the structural grid. In a case of the unstructured grid, the interpolation is conducted for the nodal points on the structural grid.
[0086] 2) A slope is calculated on the nodal point i from the velocity vector (u.sub.i, v.sub.i, w.sub.i), and a velocity gradient matrix J is created, in which the (j, k) component is the partial derivative of the j-th velocity component with respect to the k-th coordinate (J.sub.jk=∂u.sub.j/∂x.sub.k).
[0087] 3) Then,
A=S.sup.2+Q.sup.2
is calculated. Here, S is represented by
S=½(J+J.sup.T) Rate-of-strain tensor [Formula 3]
and Q is represented by
Q=½(J−J.sup.T) Rate-of-rotation tensor [Formula 4]
[0088] 4) The eigenvalues of the matrix A are calculated.
[0089] 5) λ2 is selected. That is, the second value is selected from the eigenvalues λ1, λ2, and λ3 sorted in descending order (λ1≥λ2≥λ3).
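Steps 1) through 5) can be sketched end to end for a single nodal point. In the sketch below the velocity gradient matrix J is supplied directly instead of being computed from grid slopes, and the eigenvalues come from the closed-form solution for symmetric 3×3 matrices; both are simplifications for illustration.

```python
import math

def sym3_eigvals(a):
    """Eigenvalues (ascending) of a symmetric 3x3 matrix, in closed form."""
    p1 = a[0][1]**2 + a[0][2]**2 + a[1][2]**2
    if p1 == 0.0:  # the matrix is already diagonal
        return sorted(a[i][i] for i in range(3))
    q = (a[0][0] + a[1][1] + a[2][2]) / 3.0
    p = math.sqrt((sum((a[i][i] - q)**2 for i in range(3)) + 2.0 * p1) / 6.0)
    b = [[(a[i][j] - (q if i == j else 0.0)) / p for j in range(3)] for i in range(3)]
    det_b = (b[0][0]*(b[1][1]*b[2][2] - b[1][2]*b[2][1])
             - b[0][1]*(b[1][0]*b[2][2] - b[1][2]*b[2][0])
             + b[0][2]*(b[1][0]*b[2][1] - b[1][1]*b[2][0]))
    phi = math.acos(max(-1.0, min(1.0, det_b / 2.0))) / 3.0
    e1 = q + 2.0 * p * math.cos(phi)                          # largest eigenvalue
    e3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)    # smallest eigenvalue
    return sorted([e1, 3.0 * q - e1 - e3, e3])

def matmul3(x, y):
    """3x3 matrix product."""
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def lambda2(J):
    """lambda2 for a 3x3 velocity gradient matrix J; lambda2 < 0 marks a vortex."""
    S = [[0.5 * (J[i][j] + J[j][i]) for j in range(3)] for i in range(3)]  # rate of strain
    Q = [[0.5 * (J[i][j] - J[j][i]) for j in range(3)] for i in range(3)]  # rate of rotation
    S2, Q2 = matmul3(S, S), matmul3(Q, Q)
    A = [[S2[i][j] + Q2[i][j] for j in range(3)] for i in range(3)]        # A = S^2 + Q^2
    return sym3_eigvals(A)[1]  # the middle (second) eigenvalue

# Solid-body rotation (u, v, w) = (-y, x, 0): lambda2 = -1 < 0 flags a vortex.
J_rot = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(lambda2(J_rot))
```

A pure shear such as (u, v, w) = (y, 0, 0) yields λ2 = 0, so no vortex region is extracted, which is the intended discrimination of the λ2 method.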
[0090] A center line extraction process by the center line extraction part 43 is realized by an existing technology. As an example, a method by Motoi Kinishi et al. may be used. Then, the center line 3 is extracted.
[0091] Next, the cross-section formation process and a texture generation process will be described. The cross-section formation process by the cross-section formation part 44 is performed by using the first cross-section formation method or the second cross-section formation method.
[0096] Based on the quadrilateral 9.sub.a11 and the quadrilateral 9.sub.a12, a pair of two planes 9a is created so as to share the line including the region center 5 as the edge. By changing the angle θ and creating the plurality of pairs of the planes 9a, n cross-sections are created.
[0097] In a simple manner, a maximum value and a minimum value are acquired in each of the axis directions of the center line 3, the axis with the longest distance is selected, and the half distance is used to set a center position. The n cross-sections are created by rotating one of two planes by the angle θ with respect to an axis connecting two points. One plane (which may be the quadrilateral 9.sub.a11) shares the center position and the edge 5a, and another plane (which may be the quadrilateral 9.sub.a12) shares the center position and the edge 5b.
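The rotation of one plane about the axis connecting the two points, at every angle θ, reduces to rotating the plane's corner points about that axis. A minimal sketch using Rodrigues' rotation formula follows; the axis, corner coordinates, and θ=60° are illustrative assumptions.

```python
import math

def rotate_about_axis(point, axis, theta_deg):
    """Rotate a 3D point about a unit axis through the origin (Rodrigues' formula)."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    ux, uy, uz = axis
    px, py, pz = point
    # cross = axis x point, d = axis . point
    cx, cy, cz = uy*pz - uz*py, uz*px - ux*pz, ux*py - uy*px
    d = ux*px + uy*py + uz*pz
    return (px*c + cx*s + ux*d*(1.0 - c),
            py*c + cy*s + uy*d*(1.0 - c),
            pz*c + cz*s + uz*d*(1.0 - c))

# One corner of a plane rotated at every theta = 60 degrees about the z axis
# gives the n = 180/60 = 3 cross-section orientations (assumed axis and corner).
axis = (0.0, 0.0, 1.0)
corner = (1.0, 0.0, 0.5)
planes = [rotate_about_axis(corner, axis, k * 60.0) for k in range(3)]
```

Repeating this for each corner of the quadrilateral yields the full set of n rotated cross-section planes.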
[0104] Accordingly, in a case of the angle θ=60°, as depicted in
[0107] Based on a distance between the point e.sub.1 and the point e.sub.2, a maximum coordinate axis is selected from the values of x, y, and z. When the y value indicates the greatest value, a plane D.sub.1 is defined at the position (y.sub.e2−y.sub.e1)/2 to halve the rectangular region 6.
[0110] Moreover, a quadratic curve is defined to connect a point p.sub.3 and the point q.sub.1, and a quadratic curve is defined to connect a point p.sub.4 and the point q.sub.2. Then, a curved surface 9b is created by the quadratic curve connecting the point p.sub.3 and the point q.sub.1, the quadratic curve connecting the point p.sub.4 and the point q.sub.2, a line connecting the point p.sub.3 and the point p.sub.4, and a line connecting the point q.sub.1 and the point q.sub.2.
[0111] Furthermore, a curved surface 9b is created, in which a quadratic curve is defined to connect the point p.sub.3 and the point q.sub.3, and a quadratic curve is defined to connect the point p.sub.4 and the point q.sub.4.
[0112] Among four created curved surfaces 9b as described above, adjacent curved surfaces 9b share a line acquired based on the middle point information with each other.
[0114] In the same manner, a curved surface 9b is created, in which a quadratic curve is defined to connect the point r.sub.1 and the point q.sub.4, and a quadratic curve is defined to connect the point r.sub.2 and the point q.sub.3.
[0115] Moreover, a curved surface 9b is created, in which a quadratic curve is defined to connect the point r.sub.3 and the point q.sub.3, and a quadratic curve is defined to connect the point r.sub.4 and the point q.sub.1. Furthermore, a curved surface 9b is created, in which a quadratic curve is defined to connect the point r.sub.3 and the point q.sub.4, and a quadratic curve is defined to connect the point r.sub.4 and the point q.sub.2. In the second cross-section formation method, the cross-section is represented by the pair of two curved surfaces 9b.
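A quadratic curve connecting two middle points, as used above, can be sketched as a quadratic Bézier curve. The specific endpoints and the choice of control point below are illustrative assumptions; the embodiment derives its curves from the rectangular region's middle point information.

```python
def quad_bezier(p0, ctrl, p1, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u*u*a + 2.0*u*t*b + t*t*c for a, b, c in zip(p0, ctrl, p1))

def sample_curve(p0, ctrl, p1, n=8):
    """Sample n points along the curve; adjacent curved surfaces 9b can share them."""
    return [quad_bezier(p0, ctrl, p1, k / (n - 1)) for k in range(n)]

# A curve from one middle point to another, bending toward an assumed control point.
pts = sample_curve((0.0, 0.0, 0.0), (0.5, 1.0, 0.0), (1.0, 0.0, 0.0))
```

Because adjacent curved surfaces share the same boundary curve, sampling each curve once and reusing the samples keeps the shared edges watertight.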
[0116] Display examples of the visualization process in the embodiment will be described with reference to
[0118] Input data are as follows:
[0119] simulation result
[0120] coordinates of the nodal points, element information, physical values at elements
[0121] color map data
[0122] parameters of a user indication
[0123] the first cross-section formation method or the second cross-section formation method.
[0124] In the embodiment, a data amount for the visualization process is represented as follows:
n>>m×2×5+texture data amount
where n denotes an original three dimensional polygon number; it is assumed that the polygon number used on each of the cross-sections is 2, the cross-section number (an average) per object is 5, and the object number is m. In the visualization process in the embodiment, data corresponding to a region of the plane 9a or the curved surface 9b, which corresponds to a minimum region including the phenomenon region 1, are extracted. Hence, it is possible to reduce the data amount used for the visualization process with respect to a phenomenon in the three dimensions. Compared with an existing technology that does not use the visualization process of the embodiment, the embodiment is able to reduce the data amount to 1/100 at maximum.
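The estimate n >> m×2×5 + texture data amount can be checked with a toy calculation; the object count, texture size, and original polygon number below are illustrative figures, not measured values from the embodiment.

```python
def visualization_data_amount(num_objects, polygons_per_cross_section=2,
                              cross_sections_per_object=5, texture_amount=0):
    """Polygon-count estimate m x 2 x 5 plus the texture data amount."""
    return (num_objects * polygons_per_cross_section * cross_sections_per_object
            + texture_amount)

# Hypothetical case: m = 1000 extracted objects against n = 10,000,000 original polygons.
n_original = 10_000_000
reduced = visualization_data_amount(1000, texture_amount=50_000)
assert n_original > reduced  # n >> m x 2 x 5 + texture data amount
```

With these illustrative figures the visualization data are orders of magnitude smaller than the original polygon count, consistent with the reduction the embodiment targets.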
[0125] Accordingly, in the visualization process of a simulation result using three dimensional large scale data of the structural grid and/or the unstructured grid in the embodiment, a data reduction is realized.
[0126] As described above, according to the embodiment, it is possible to properly extract sufficient data for the visualization process of the simulation result.
[0127] All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.