SYSTEM AND METHOD FOR SMART WINDSHIELD IN VEHICLES
20230158865 · 2023-05-25
Inventors
CPC classification
International classification
Abstract
The present teaching relates to approaches for dynamic light blocking in a moving vehicle. Sensor data from sensors deployed on a vehicle are received that capture information exterior and interior around the vehicle. The presence of a person in the vehicle is detected based on interior sensor data while a light source exterior to the vehicle is detected based on exterior sensor data. A portion of a window of the vehicle through which light from the light source shines on the person is determined and an appropriate level of shade is applied on the portion of the window to reduce the amount of light shining on the person.
Claims
1. A method implemented on at least one processor, a memory, and a communication platform for dynamic light blocking, comprising: receiving sensor data from sensors deployed on a vehicle, wherein the sensor data capture information exterior and interior around the vehicle; detecting presence of a person in the vehicle based on interior sensor data; detecting a light source exterior to the vehicle based on exterior sensor data; determining a portion of a window of the vehicle through which light from the light source shines on the person; and applying a level of shade on the portion of the window to reduce the amount of light shining on the person.
2. The method of claim 1, wherein the presence of the person is identified via a first region of interest represented using a first set of three dimensional (3D) coordinates in a coordinate system; the light source is identified via a second region of interest represented using a second set of 3D coordinates in the coordinate system; and the coordinate system is configured so that the window is approximately aligned on one axis so that 3D coordinate of any point on the window has a zero or near zero value with respect to the axis.
3. The method of claim 2, wherein the first region of interest is determined based on at least one of: a default profile specifying a designated part of a person to be protected from the light emitted by the light source; a personal profile associated with the person specifying a preferred part of the person to be protected from the light emitted by the light source.
4. The method of claim 2, wherein the window includes a plurality of sections, each of which occupies an identifiable area in the coordinate system; and the portion of the window is defined by one or more of the plurality of sections of the window dynamically identified based on the first and the second regions of interest.
5. The method of claim 4, wherein each of the plurality of sections of the window can be individually controlled to provide different levels of shade; and each level of the different levels of shade is achieved by applying a level of tint.
6. The method of claim 2, wherein the step of determining the portion comprises: establishing rays of light connecting the first and the second sets of 3D coordinates and representing how the light from the light source shines on the first region of interest; for each of the rays connecting one of the first set of 3D coordinates and one of the second set of 3D coordinates, identifying a point on the ray that intersects with the window on the axis, locating an intersecting section from the plurality of sections of the window in which the point falls; and generating the portion of the window based on the intersecting sections identified based on the rays.
7. The method of claim 6, wherein the step of establishing the rays of light comprises: identifying a first subset of the first set of 3D coordinates on boundary of the first region of interest; identifying a second subset of the second set of 3D coordinates on boundary of the second region of interest, wherein the first and the second subsets include a same number of 3D coordinates; for each 3D coordinate from the first subset, identifying a corresponding 3D coordinate from the second subset, and forming a ray of light.
8. Machine readable non-transitory medium having information recorded thereon for dynamic light blocking, wherein the information, once read by the machine, causes the machine to perform the following steps: receiving sensor data from sensors deployed on a vehicle, wherein the sensor data capture information exterior and interior around the vehicle; detecting presence of a person in the vehicle based on interior sensor data; detecting a light source exterior to the vehicle based on exterior sensor data; determining a portion of a window of the vehicle through which light from the light source shines on the person; and applying a level of shade on the portion of the window to reduce the amount of light shining on the person.
9. The medium of claim 8, wherein the presence of the person is identified via a first region of interest represented using a first set of three dimensional (3D) coordinates in a coordinate system; the light source is identified via a second region of interest represented using a second set of 3D coordinates in the coordinate system; and the coordinate system is configured so that the window is approximately aligned on one axis so that 3D coordinate of any point on the window has a zero or near zero value with respect to the axis.
10. The medium of claim 9, wherein the first region of interest is determined based on at least one of: a default profile specifying a designated part of a person to be protected from the light emitted by the light source; a personal profile associated with the person specifying a preferred part of the person to be protected from the light emitted by the light source.
11. The medium of claim 9, wherein the window includes a plurality of sections, each of which occupies an identifiable area in the coordinate system; and the portion of the window is defined by one or more of the plurality of sections of the window dynamically identified based on the first and the second regions of interest.
12. The medium of claim 11, wherein each of the plurality of sections of the window can be individually controlled to provide different levels of shade; and each level of the different levels of shade is achieved by applying a level of tint.
13. The medium of claim 9, wherein the step of determining the portion comprises: establishing rays of light connecting the first and the second sets of 3D coordinates and representing how the light from the light source shines on the first region of interest; for each of the rays connecting one of the first set of 3D coordinates and one of the second set of 3D coordinates, identifying a point on the ray that intersects with the window on the axis, locating an intersecting section from the plurality of sections of the window in which the point falls; and generating the portion of the window based on the intersecting sections identified based on the rays.
14. The medium of claim 13, wherein the step of establishing the rays of light comprises: identifying a first subset of the first set of 3D coordinates on boundary of the first region of interest; identifying a second subset of the second set of 3D coordinates on boundary of the second region of interest, wherein the first and the second subsets include a same number of 3D coordinates; for each 3D coordinate from the first subset, identifying a corresponding 3D coordinate from the second subset, and forming a ray of light.
15. A system for dynamic light blocking, comprising: a face/light source relationship detector implemented by a processor and configured for: receiving sensor data from sensors deployed on a vehicle, wherein the sensor data capture information exterior and interior around the vehicle, detecting presence of a person in the vehicle based on interior sensor data, and detecting a light source exterior to the vehicle based on exterior sensor data; a window section determiner implemented by a processor and configured for determining a portion of a window of the vehicle through which light from the light source shines on the person; and a dynamic tint application controller implemented by a processor and configured for applying a level of shade on the portion of the window to reduce the amount of light shining on the person.
16. The system of claim 15, wherein the presence of the person is identified via a first region of interest represented using a first set of three dimensional (3D) coordinates in a coordinate system; the light source is identified via a second region of interest represented using a second set of 3D coordinates in the coordinate system; and the coordinate system is configured so that the window is approximately aligned on one axis so that the 3D coordinate of any point on the window has a zero or near zero value with respect to the axis.
17. The system of claim 16, wherein the first region of interest is determined based on at least one of: a default profile specifying a designated part of a person to be protected from the light emitted by the light source; a personal profile associated with the person specifying a preferred part of the person to be protected from the light emitted by the light source.
18. The system of claim 16, wherein the window includes a plurality of sections, each of which occupies an identifiable area in the coordinate system; and the portion of the window is defined by one or more of the plurality of sections of the window dynamically identified based on the first and the second regions of interest, wherein each of the plurality of sections of the window can be individually controlled to provide different levels of shade, and each level of the different levels of shade is achieved by applying a level of tint.
19. The system of claim 16, wherein the window section determiner determines the portion by: establishing rays of light connecting the first and the second sets of 3D coordinates and representing how the light from the light source shines on the first region of interest; for each of the rays connecting one of the first set of 3D coordinates and one of the second set of 3D coordinates, identifying a point on the ray that intersects with the window on the axis, locating an intersecting section from the plurality of sections of the window in which the point falls; and generating the portion of the window based on the intersecting sections identified based on the rays.
20. The system of claim 19, wherein the rays of light are identified by: identifying a first subset of the first set of 3D coordinates on boundary of the first region of interest; identifying a second subset of the second set of 3D coordinates on boundary of the second region of interest, wherein the first and the second subsets include a same number of 3D coordinates; for each 3D coordinate from the first subset, identifying a corresponding 3D coordinate from the second subset, and forming a ray of light.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The methods, systems and/or programming described herein are further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
[0014]
[0015]
[0016]
[0017]
[0018]
[0019]
[0020]
[0021]
[0022]
[0023]
[0024]
[0025]
[0026]
DETAILED DESCRIPTION
[0027] In the following detailed description, numerous specific details are set forth by way of examples in order to facilitate a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well-known methods, procedures, components, and/or systems have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
[0028] The present teaching aims to address the deficiencies of the current state of the art in windshields for blocking unwanted light in moving vehicles. According to the present teaching, a window is made with a plurality of sections, each of which may be individually configured to provide a certain level of tint so that the amount of light from a source that is allowed to pass through the section can be controlled based on need. The source of light may be natural or artificial. The level of tint to be applied to block the light may be configured based on preferences specifying how much light is desired. The level of tint to be provided may also be determined based on the strength of light observed, which may be estimated by different means. For example, for sun light observed during the daytime, the strength of the light shining on the window may be estimated based on sensor information (e.g., visual information or heat information) or alternatively the time of the day. If it involves light from an artificial source (e.g., light at a construction site), the strength of the light may be determined from sensor information or information communicated to the vehicle (e.g., such information may be broadcast to passing vehicles).
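The mapping from observed light strength to a tint level described above can be sketched as follows. This is a minimal illustration; the thresholds, the four-level scale, and the preference scaling factor are assumptions for the sketch, not values from the present teaching.

```python
# Illustrative sketch: map a normalized light strength to a discrete tint
# level, optionally scaled by a per-person preference factor. All numeric
# thresholds here are assumed for illustration.

def tint_level(strength: float, preference: float = 1.0) -> int:
    """Map a light strength in [0, 1], scaled by a preference factor,
    to a tint level 0 (clear) through 3 (heavy)."""
    effective = min(strength * preference, 1.0)
    if effective < 0.25:
        return 0  # no tint needed
    elif effective < 0.5:
        return 1  # light tint
    elif effective < 0.75:
        return 2  # medium tint
    return 3      # heavy tint

print(tint_level(0.6))        # moderate sun -> medium tint
print(tint_level(0.6, 1.5))   # light-sensitive preference -> heavier tint
```

A real controller would replace the hard-coded thresholds with the shade level control parameters discussed later in the description.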
[0029] Sections of the window to be selected to provide a needed level of tint may be determined dynamically based on each specific situation. For example, a direction of the sun may be dynamically determined with respect to a person who desires to block the sun light so that sections that are between the sun and the person may be identified. By connecting the direction of the source of the light shining on the window and the direction of a person in the vehicle, it may be determined which sections of the window are on the intersecting course and which sections nearby the intersecting course may also be tinted to provide adequate coverage. Then a certain level of tint to be applied may also be estimated based on, e.g., a personalized configuration specifying a comfort level on the light.
[0030] With relevant sections on the window identified and a level of tint determined to satisfy the specified comfort level, the tint is then applied to the identified sections by controlling each identified section to be tinted accordingly so that the amount of the light shining through the sections of the window provides the person the desired level of comfort. The level of tint for each person may vary and may be configured based on the preference of the person in a personalized manner. In a dynamic situation, each person may be recognized via, e.g., face recognition or other means, so that an appropriate configuration associated with the recognized person may be invoked to implement personalized light shielding.
[0031]
[0032]
[0033]
[0034] Similarly, passenger shield 3 260-2 for the passenger is determined based on what is needed to block the light shining on passenger 200-2 with respect to the direction of the sun 140-3. As can be seen, due to the direction of the sun, the passenger shield 3 260-2 is mostly located on the side of the front window right in front of the driver 220-1. Given that, the driver shield 260-1 and the passenger shield 3 260-2 may have substantial overlap and, in some embodiments, the two shields determined for two different people in the vehicle (if they both are present) may be merged into one that can be used to block the sun light 140-3 on the left side of the vehicle for both people in the vehicle. This is illustrated in
[0035]
[0036] As discussed herein, a window according to the present teaching comprises a plurality of sections, each of which is individually activatable to be a part of a shield, implemented by increasing the tint level. Any section may be selected if it is on the intercept paths of rays connecting the sun and the part of the body of a person for whom the sun light is to be blocked. Such selected sections together form a shield dynamically based on need. Different shields as illustrated in
[0037]
[0038]
[0039] While a dynamic shield may be determined and formed based on the detected presence of a person or people in the vehicle, what is needed may also be determined based on configurations specified. For example, each person who may be in the vehicle may have a profile with an indication, e.g., whether he/she prefers to block the sun light. Some people may like to block the sun light and some may not. Such individualized preferences may be specified in each person’s profile. When the presence is detected, the person on each seat may be recognized, e.g., via face recognition, and the appropriate profile may be invoked so that the preferences specified may be observed. For instance, suppose both a driver 300 and a passenger 340 are present in the vehicle. If the profile of the driver 300 specifies to have the sun light blocked when needed and the profile of the passenger 340 indicates not to block any sun light, then even though both people are present in the vehicle, the shield formed according to both people’s profiles will be for only the driver (same as shield 330 in
[0040]
[0041] To enable estimations of directions of the sun and the person, the configuration shown in
[0042] Transformation matrices may be provided by calibrating with respect to the three coordinate systems (i.e., X1-Y1-Z1, X2-Y2-Z2, and X3-Y3-Z3), enabling transformation or mapping of a coordinate in one coordinate system to a transformed coordinate in another coordinate system. Through such transformation, a coordinate representing a person (or face thereof) detected within the coordinate system of camera 420 (X3-Y3-Z3, not shown) may be transformed to a coordinate in the window’s coordinate system X1-Y1-Z1. Similarly, the coordinate of the sun 440 detected in the coordinate system X2-Y2-Z2 of camera 430 may also be transformed into a coordinate in the window’s coordinate system X1-Y1-Z1. Once the coordinates of the sun 440 and the person 400 are mapped, via transformation, to the X1-Y1-Z1 coordinate system of the window 410, the lines connecting the transformed coordinate of the sun and the transformed coordinate of the person (or face thereof) can be established in the coordinate system X1-Y1-Z1, and the points on such lines that intersect with the window at x1 = 0 (where the window is) can be determined. The sections of the window within which any of such intersection points falls may be selected for application of tint.
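The transformation-and-intersection step above can be sketched as follows. The 4x4 homogeneous matrices stand in for real calibration results (identity matrices are used here purely for brevity), and the point values and function names are hypothetical.

```python
import numpy as np

# Sketch of mapping camera-frame points into the window frame X1-Y1-Z1
# and intersecting the sun-to-person ray with the window plane x1 = 0.
# Transforms and coordinates are illustrative assumptions.

def to_window_frame(point_xyz, transform_4x4):
    """Map a 3D point from a camera coordinate system into the window
    coordinate system via a homogeneous transformation matrix."""
    p = np.append(point_xyz, 1.0)
    return (transform_4x4 @ p)[:3]

def window_intersection(p_person, p_sun):
    """Point where the ray from the sun to the person crosses x1 = 0.
    Assumes the person has x1 < 0 (inside) and the sun x1 > 0 (outside)."""
    d = p_person - p_sun
    t = -p_sun[0] / d[0]      # ray parameter at which x1 becomes 0
    return p_sun + t * d      # yields (0, y1, z1) on the window plane

# Assumed calibrated transforms (identity for brevity):
T_interior = np.eye(4)        # camera-420 frame -> window frame
T_exterior = np.eye(4)        # camera-430 frame -> window frame
person = to_window_frame(np.array([-1.0, 0.2, 0.5]), T_interior)
sun = to_window_frame(np.array([1000.0, 300.0, 400.0]), T_exterior)
print(window_intersection(person, sun))  # x1 component is ~0
```

The resulting (y1, z1) coordinates are what get tested against the section layout of the window.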
[0043] To determine a 3D coordinate of an object (e.g., a person’s face in the vehicle) in a coordinate system of a camera based on a 2D location in an image acquired by the camera, a depth measure needs to be estimated based on, e.g., either a depth sensor or stereo using multiple cameras. An object of interest detected from a 2D image may be represented by a 2D image coordinate, which may be determined based on, e.g., the centroid of a region of interest (ROI) in the image where the object of interest is detected. When there is another view of the same scene captured by another stereo camera, a corresponding 2D image coordinate in the other view may be detected to represent the same object. A discrepancy or displacement between the 2D image coordinate and the corresponding 2D image coordinate can be used to estimate the depth of the object in the 3D space. Alternatively, a depth sensor may be deployed in the vehicle to observe the same scene. When calibrated properly, a region from the depth map of the depth sensor that corresponds to the ROI in the image (representing a person’s face) may be identified and the depth measures in that region of the depth map may be used to estimate the depth of the person.
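The stereo option above relies on the classic pinhole relation between displacement (disparity) and depth. A minimal sketch follows, with illustrative focal length and baseline values that are not from the present teaching:

```python
# Sketch of depth-from-disparity for the stereo alternative: Z = f * B / d.
# Focal length and baseline values below are assumed for illustration.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters, and d the
    horizontal displacement of the same point between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A face centroid displaced 60 px between views, f = 800 px, B = 0.09 m:
print(depth_from_disparity(800, 0.09, 60))  # 1.2 meters
```

A depth sensor, as the paragraph notes, would replace this computation with direct readings from the depth map region matching the face ROI.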
[0044] In an alternative embodiment, depth information may not be estimated but configured according to the setting of the present teaching. For example, the sun is known to be very far away, so its depth may be set to a very large value. On the other hand, depending on the vehicle type (e.g., a car, a truck, a boat, etc.), the depth of the person detected behind the front window 410 may be set within a certain range. For instance, for a car, the distance from the camera 420 to the person sitting in the front row may be set to 3 feet, while for a bus, that distance may be set larger. Based on such set depths for the person detected and the sun, the 3D coordinates in their respective coordinate systems may be computed accordingly based on the corresponding 2D image coordinates and the calibration parameters of their respective cameras. In this alternative embodiment, deploying a single camera on either side of the window 410 is sufficient, so that the computational process to estimate the 3D coordinates of the objects of interest in respective coordinate systems is more efficient.
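The fixed-depth alternative can be sketched as a back-projection through an assumed pinhole camera model: given a configured depth, a 2D image coordinate is inverted into a 3D camera-frame coordinate. The intrinsic parameters and pixel coordinates below are hypothetical.

```python
# Sketch of the fixed-depth alternative: back-project a 2D image point to
# a 3D camera-frame point at a configured depth. Intrinsics are assumed.

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float):
    """Invert the pinhole projection at a configured depth:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

# Face centroid at pixel (700, 420), with depth set to ~3 feet (0.91 m)
# as for a car's front row, and assumed intrinsics:
x, y, z = backproject(700, 420, 0.91, fx=800, fy=800, cx=640, cy=480)
print(x, y, z)
```

With the sun, the same inversion applies but with the configured very large depth, which effectively turns its 3D coordinate into a direction.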
[0045] A region of interest may be defined according to the present teaching as the area of a person to be protected from the sun light. In some embodiments, such a region of interest may be defined as the face of the person sitting in a vehicle, as shown in
[0046] In some embodiments, to determine the intersection points on the window, 2D image coordinates of some points (e.g., on the boundary of a bounding box enclosing the object) of the sun may be converted into corresponding transformed coordinates in the coordinate system X1-Y1-Z1. Similarly, 2D image coordinates of some points of a region of interest of the person in the vehicle (e.g., on the boundary of a bounding box enclosing a region of interest related to the person) may be converted into corresponding transformed coordinates in the coordinate system of the window 410. This creates two clusters of 3D points in X1-Y1-Z1. One cluster has all 3D points with transformed coordinates converted from some points of 2D image points of the sun that have positive x coordinate values, indicating that they are outside of window 410. Another cluster has all 3D points in their transformed coordinates converted from some 2D image points of the person that have negative x values, indicating that they are inside the window 410.
[0047] To determine window sections to be used to apply tint for appropriately blocking the sun light for the person, the intersection points of lines (each connecting a 3D point from the sun and a 3D point of the person) at x = 0 (where the window is located) can be identified. As what is important is a range for the spatial coverage, only a portion of such lines may be used to select sections. If a person has a preferred spatial range for sun light blocking, e.g., only the face area or the face plus neck area, this preferred spatial range may be specified in a profile for the person. A preferred range may be individualized for each person, and a profile so specified may be translated into a bounding box around the person that is dimensioned according to the profile. This is illustrated in
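Selecting sections from the intersection points can be sketched by bucketing each (y, z) point at x = 0 into a grid of window sections. The uniform grid, the section dimensions, and the point values below are assumptions for illustration.

```python
# Sketch: bucket window-plane intersection points into (row, col) section
# indices. A uniform grid with assumed section size is used here; a real
# window could have an irregular section layout.

def sections_to_tint(intersections, section_h=0.1, section_w=0.1):
    """Given (y, z) intersection points on the window plane, return the
    set of (row, col) section indices they fall into, using a uniform
    grid with the given section height/width (meters)."""
    selected = set()
    for y, z in intersections:
        selected.add((int(z // section_h), int(y // section_w)))
    return selected

# Intersection points clustered around a person's face area:
points = [(0.42, 0.56), (0.46, 0.58), (0.51, 0.61)]
print(sections_to_tint(points))
```

The returned index set is what the tint controller would activate, possibly padded with neighboring sections for adequate coverage as described earlier.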
[0048] If the specified preferred range is for face area, the dimension of the face may be estimated based on, e.g., camera data when the person is detected. In
[0049]
[0050]
[0051] In some situations, such as what is shown in
[0052]
[0053] Once the person and the light source and their spatial relationship are detected, the intersection section determiner 540 estimates, at 525, the rays of light connecting the points from the light source and the person and determines the intersection sections on the relevant window(s), as discussed herein. In the situations as depicted in
[0054] For example, in a scenario such as depicted in
[0055] In order to determine the scope of coverage as well as the level of protection to be applied, there may be multiple considerations, including the preference of the person detected and the strength of the light detected. The preference of the person may be specified in a profile stored in a storage 595. For instance, the preference of a person may include a specification of a spatial range of the person to be protected from the light (e.g., eyes only, face only, face plus neck, or face plus shoulder, etc.) and the level of protection (e.g., slight, medium, or heavy). Such preference is to be considered in the process of determining both the sections of the window(s) to be tinted and the level of tint to be applied.
[0056] To detect the strength of the light, the face/light source relationship detector 510 may also estimate, at 535, the strength of the light based on sensor data. In some embodiments, the level of shade needed may also be determined based on other factors. For example, the shade needed depends not only on the detected strength of the light but also on, e.g., the preference of the person detected. Such a personalized preference profile may be stored in storage 595 and is used by the shade level determiner 550 to estimate the level of shade needed for the person. Additional information that may be relevant to the determination of the shade level needed may include some control parameters, e.g., stored in a shade level control parameter storage 590, that specify the parameters needed to realize a level of shade desired given a level of strength of the light observed. The shade level determiner 550 thus determines, at 545, the level of shade needed based on the detected strength of light (from 510) and the parameters specified for shade level control with respect to the detected strength of light.
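The combination of detected light strength, personal preference, and shade level control parameters might be sketched as a simple threshold table. The table values, preference names, and level scale below are assumptions for illustration, not parameters from the present teaching.

```python
# Sketch of a shade level determiner: per-preference control parameters as
# (strength threshold, shade level) pairs. All values are assumed.

SHADE_CONTROL = {
    "slight": [(0.3, 1), (0.7, 2)],
    "medium": [(0.2, 1), (0.5, 2), (0.8, 3)],
    "heavy":  [(0.1, 2), (0.4, 3), (0.7, 4)],
}

def shade_level(strength: float, preference: str = "medium") -> int:
    """Return the highest shade level whose strength threshold the
    detected light strength meets or exceeds (0 = no shade)."""
    level = 0
    for threshold, lvl in SHADE_CONTROL[preference]:
        if strength >= threshold:
            level = lvl
    return level

print(shade_level(0.6, "slight"))  # 1
print(shade_level(0.6, "heavy"))   # 3
```

In the described system, the table would come from the shade level control parameter storage 590 and the preference key from the person's profile in storage 595.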
[0057] With the sections on the window(s) selected (by 540) based on rays of light between the person and the light source in a spatial protection range (preferred protection range), as well as the estimated shade level needed to meet the desired level of protection of the person given the strength of the light detected, the dynamic tint application controller 570 applies, at 555, the necessary level of tint to the selected sections on the window(s) to block the light over the preferred protection range of the person. The application of the tint on the sections may be performed based on the shade control parameters from 590 and the configuration of the sections to be tinted. For example, the sections comprising the relevant window(s) may be electrically wired in a certain manner that will affect how to apply the tint. In some embodiments, sections of a window may be electrically connected in a row-based fashion, in which case selection of needed sections may be performed by selectively activating the desired rows for applying the tint. In some embodiments, sections on relevant window(s) may be electrically connected in a column-based fashion, so that selection of needed sections for applying tint may be through selectively activating certain columns.
[0058]
[0059] In this illustrated embodiment, the face/light source relationship detector 510 comprises a human face detector 600 and a light source detector 630 for detecting the presence of a person (or people) or a light source. The human face detector 600 is configured to detect the presence of a person based on his/her face. In some embodiments, the human face detector 600 may also be configured to detect the identity of the person present in the vehicle (not shown in
[0060] Detection of a human face or a light source has been widely studied in the literature. Some of such techniques may utilize artificial intelligence technologies such as machine learning that produces trained models to facilitate detection of certain objects. For example, the human face detector 600 may perform the detection based on some face location models 610 that are trained, e.g., via learning, to recognize the presence of human faces from, e.g., images. Similarly, depending on the type of light source to be detected, the light source detector 630 may invoke an appropriate model to facilitate the detection. For instance, if it is to detect sun light from, e.g., images acquired by cameras installed outside of the front window 220, models previously trained for detecting the presence of sun light in images may be used for the detection.
[0061] With detected objects of interest, coordinates of such detected objects may be determined with respect to their respective coordinate systems. For doing so, the face/light source relationship detector 510 includes a face surrounding location estimator 640, an eye location estimator 650, and a light source location estimator 660. The face surrounding location estimator 640 and the eye location estimator 650 may be invoked to identify bounding boxes according to a specified protection profile for the person detected. For instance, the protection profile may specify to protect only the eyes, the face, the face plus neck, etc. The profile is accessed from the protection profile storage 595 and used to determine the type of bounding box to be estimated. The face surrounding location estimator 640 identifies the bounding box related to the face of the person according to the protection profile of the person. The eye location estimator 650 is invoked when only the eye area is to be protected according to the protection profile. The light source location estimator 660 is invoked to identify a bounding box for the light source detected from the image.
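The profile-driven choice of bounding box might be sketched as follows. The region proportions and profile names are hypothetical; a real estimator would derive the regions from the detection models rather than fixed fractions of the face box.

```python
# Sketch: derive a protection bounding box from a detected face box
# according to the person's protection profile. Boxes are (x, y, w, h)
# in pixels; the expansion fractions below are assumed values.

def protection_bbox(face_box, profile: str):
    """Derive the protection bounding box from a detected face box
    according to the person's protection profile."""
    x, y, w, h = face_box
    if profile == "eyes":
        # upper-middle band of the face box
        return (x, y + int(0.2 * h), w, int(0.25 * h))
    if profile == "face_plus_neck":
        # extend the face box downward by half its height
        return (x, y, w, int(1.5 * h))
    return face_box  # default: face only

print(protection_bbox((100, 80, 60, 80), "eyes"))  # (100, 96, 60, 20)
```

The resulting box is what the coordinate converters described next would map into the window coordinate system.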
[0062] With the bounding boxes identified for different objects of interest and represented by coordinates in the respective coordinate systems of the sensors, such coordinates need to be converted to coordinates in the coordinate system(s) of the relevant window(s). To do so, the face/light source relationship detector 510 further includes a protection BBox coordinate converter 640, an eye BBox coordinate converter 670, and a light source BBox coordinate converter 680. Each conversion is performed with respect to the transformation parameters stored in the coordinate system conversion configuration 690. The protection BBox coordinate converter 640 outputs face-based protection BBox coordinates expressed with respect to the coordinate system(s) of the relevant window(s). The eye BBox coordinate converter 670 outputs eye BBox coordinates expressed with respect to the coordinate system(s) of the relevant window(s). The light source BBox coordinate converter 680 outputs light source BBox coordinates expressed with respect to the coordinate system(s) of the relevant window(s).
[0063]
[0064] With the coordinates of different objects of interest computed with respect to their respective coordinate systems, the protection BBox coordinate converter 640 converts, at 645, the coordinates of the bounding box for a protection area with respect to the coordinate system of the interior sensor(s) to those in the coordinate system of the window(s). Similarly, the eye BBox coordinate converter 670 converts the coordinates of the bounding box for the eye area with respect to the coordinate system of the interior sensor(s) to those in the coordinate system of the window(s). The light source BBox coordinate converter 680 converts, at 655, the coordinates of the bounding box for the light source with respect to the coordinate system of the exterior sensor(s) to those in the coordinate system of the window(s). In this way, the coordinates of all the relevant bounding boxes are now represented in the coordinate system of the relevant window(s) to enable generation of rays of light between the protection area (face, eyes, or more) and the light source. As discussed herein with respect to
[0065]
[0066] To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. The hardware elements, operating systems and programming languages of such computers are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith to adapt those technologies to appropriate settings as described herein. A computer with user interface elements may be used to implement a personal computer (PC) or other type of workstation or terminal device, although a computer may also act as a server if appropriately programmed. It is believed that those skilled in the art are familiar with the structure, programming, and general operation of such computer equipment and as a result the drawings should be self-explanatory.
[0068] Computer 800, for example, includes COM ports 850 connected to and from a network connected thereto to facilitate data communications. Computer 800 also includes a central processing unit (CPU) 820, in the form of one or more processors, for executing program instructions. The exemplary computer platform includes an internal communication bus 810, program storage and data storage of different forms (e.g., disk 870, read only memory (ROM) 830, or random-access memory (RAM) 840), for various data files to be processed and/or communicated by computer 800, as well as possibly program instructions to be executed by CPU 820. Computer 800 also includes an I/O component 860, supporting input/output flows between the computer and other components therein such as user interface elements 880. Computer 800 may also receive programming and data via network communications.
[0069] Hence, aspects of the methods of dynamic light blocking and/or other processes, as outlined above, may be embodied in programming. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. Tangible non-transitory “storage” type media include any or all of the memory or other storage for the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide storage at any time for the software programming.
[0070] All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, in connection with information analytics and management. Thus, another type of media that may bear the software elements includes optical, electrical, and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, also may be considered as media bearing the software. As used herein, unless restricted to tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
[0071] Hence, a machine-readable medium may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, which may be used to implement the system or any of its components as shown in the drawings. Volatile storage media include dynamic memory, such as a main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire and fiber optics, including the wires that form a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a physical processor for execution.
[0072] Those skilled in the art will recognize that the present teachings are amenable to a variety of modifications and/or enhancements. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution—e.g., an installation on an existing server. In addition, the techniques as disclosed herein may be implemented as firmware, a firmware/software combination, a firmware/hardware combination, or a hardware/firmware/software combination.
[0073] While the foregoing has described what are considered to constitute the present teachings and/or other examples, it is understood that various modifications may be made thereto and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings.