Medical device for improving environmental perception for blind or visually-impaired users

11684517 · 2023-06-27


    Abstract

    A device for improving environmental perception for blind or visually impaired users, including a set of mechanical actuators intended to be in contact with the skin of a user, at least one digital camera designed to acquire a current digital image of an environment facing the user, a processing circuit connected to the camera for receiving pixel signals from the acquired digital image and converting at least one portion of the pixel signals into control signals, each of which powers a mechanical actuator of the set of actuators, an eye-tracking module for tracking each eye of the user to identify a gaze direction of the user. The processing circuit then selects, in the environment filmed by the camera, an area of acquired current image which is a function of the gaze direction and converts the pixel signals of said area into control signals, each of which powers an actuator of the set to stimulate the user's skin.

    Claims

    1. A device for improving environmental perception for blind or visually-impaired users, the device comprising: at least one set of mechanical actuators intended to be in contact with the skin of a user, at least one digital camera arranged for acquiring a current digital image of an environment facing the user, a processing circuit connected to the camera for receiving signals from pixels of the acquired digital image and converting at least a part of the pixel signals into control signals each supplying a mechanical actuator of the set of actuators, wherein the device furthermore comprises at least one eye-tracking module for at least one eye of the user for identifying a direction of gaze of the user, wherein the processing circuit is arranged for selecting in the environment filmed by the camera an area of acquired current image which depends on the direction of gaze, and converting the signals from pixels of said area into control signals each supplying an actuator of the set for stimulating the skin of the user, and wherein the processing circuit is further arranged for selecting within an acquired current image a limited area of the current image which depends on the direction of gaze, and converting the signals from pixels of said limited area into control signals each supplying an actuator of the set for stimulating the skin of the user.

    2. The device as claimed in claim 1, comprising: two sets of mechanical actuators intended to be in contact with the skin of a user, and two eye-tracking modules for the respective eyes of the user, for defining two areas in the current image and converting the signals from pixels of each of the two areas into two respective sets of control signals supplying the two respective sets of actuators.

    3. The device as claimed in claim 1, wherein the camera and the eye-tracking module, at least, are mounted onto a common mechanical support, the eye-tracking module being oriented toward one eye of the user, the camera being oriented so as to acquire a current digital image of an environment facing the user.

    4. The device as claimed in claim 1, wherein at least one connection between the processing circuit and the actuators is a wireless link for the transmission of the control signals for the actuators.

    5. The device as claimed in claim 1, wherein each actuator is arranged for stimulating the skin of the user according to an intensity that is a function of the control signal received, and the control signals are each functions of an intensity of color of pixel.

    6. The device as claimed in claim 5, wherein each actuator is mobile in vibration for stimulating the skin of the user by cyclical vibrations, the frequency of the vibrations of an actuator being determined by a type of color of pixel.

    7. The device as claimed in claim 1, wherein the image area is of general shape corresponding to a shape of visual field of the eye tracked by the eye-tracking module, and the actuators of the set of actuators are distributed according to a two-dimensional shape corresponding to said shape of visual field.

    8. A method implemented by a processing circuit of a device as claimed in claim 1, wherein the method comprises: receive signals from pixels of the acquired digital image, receive a measurement by eye-tracking of a direction of gaze of the user, determine a bounding in the acquired image of an area corresponding to the direction of the gaze, and select the signals from pixels of said area, convert the signals from pixels of said area into control signals each intended to supply one actuator of the set of mechanical actuators, the number of pixels in said area corresponding to a number of actuators that the set of mechanical actuators comprises, transmit the respective control signals to the actuators of said set.

    9. A non-transitory computer storage, storing instructions of a computer program causing the implementation of the method as claimed in claim 8, when said instructions are executed by a processing circuit of a device for improving environmental perception.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    (1) Furthermore, other advantages and features of the invention will become apparent upon reading the description of exemplary embodiments presented hereinafter, and upon examining the appended drawings, in which:

    (2) FIG. 1 shows an individual wearing a medical device for improving environmental perception, subject of the invention, according to a first exemplary embodiment;

    (3) FIG. 2 is a perspective view of a part of the device subject of the invention with reference to FIG. 1;

    (4) FIG. 3 corresponds to a flow diagram of the general algorithm of the method subject of the invention with reference to FIG. 1;

    (5) FIG. 4 illustrates schematically a processing circuit notably of the device shown in FIG. 1;

    (6) FIG. 5 shows an individual wearing a medical device for improving environmental perception, according to a second exemplary embodiment;

    (7) FIG. 6 shows an individual 5, wearer of a device in the sense of the invention, observing an environment facing them; and

    (8) FIG. 7 illustrates schematically one of the sets of mechanical actuators.

    DETAILED DESCRIPTION

    (9) FIG. 1 shows a device for improving environmental perception, typically of the space in front of a user. The device is worn by a blind or partially-sighted user 5, this device comprising a mechanical support 1 integrating a wide-angle camera 6 and an eye-tracking module 7. More precisely, in the case where both eyes of the user are still mobile, eye-tracking being then possible on each eye, the device may comprise, as illustrated in FIG. 1, two wide-angle cameras 6, each for filming the environment in front of one eye of the user, and two eye-tracking modules 7, each for tracking the movements of one eye.

    (10) Typically, the two cameras, notably wide-angle, are installed in fixed respective positions on the support 1, and the areas selected in the wide-angle images partially overlap in order to provide a stereoscopic effect, allowing a 3D perception. For this purpose, the device furthermore comprises two sets of mechanical actuators 3 spatially separated as illustrated in FIG. 1.

    (11) The cameras 6 and tracking modules 7 are connected via wired link to a processing circuit 2 driving the two sets of mechanical actuators 3.

    (12) In the example shown, the mechanical support 1 is placed on the head of the user 5, whereas the two sets of mechanical actuators 3 are positioned on the torso of the user 5. The movements of the eyes of the user 5 are measured by means of the eye-tracking modules 7 in real time. Furthermore, here by way of example, an area in the acquired images is selected which corresponds to the current direction of the gaze of the user. Alternatively, cameras (not necessarily wide-angle) may be mounted on respective pivoting axes and the cameras oriented in the direction of the current gaze of the user. In yet another alternative, a single wide-angle camera may be provided but with an eye-tracking for each eye, and two areas selected in the environment filmed, the stereoscopic effect being again provided by a partial overlap of these two areas.

    (13) In the embodiment where the environment is filmed by one or more cameras of the “wide-angle” type, the focal distance of such cameras is chosen so as to cover in the acquired images what the visual field of the user 5 would correspond to if they were able-bodied.

    (14) The movements of the eyes of the user 5 are measured simultaneously with the shooting of the environment by the wide-angle cameras 6. More precisely, as illustrated in FIG. 6, the measurements carried out by the eye-tracking modules 7 are analyzed by the processing circuit 2 in order to select, according to the measurement of the direction D of the gaze (for each eye), the appropriate area Z of pixels of the current image of the environment E which has just been acquired by the wide-angle camera 6.
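    The selection of the area Z from the gaze direction D described in paragraph (14) can be sketched as follows. This is an illustrative helper, not taken from the patent: the function name, the linear angle-to-pixel mapping, and the default field-of-view and area-size values are all assumptions; a real implementation would use the camera's calibrated projection model.

    ```python
    import numpy as np

    def select_gaze_area(image, gaze_yaw_deg, gaze_pitch_deg,
                         camera_fov_deg=(120.0, 90.0), area_size=(32, 32)):
        """Select the pixel area Z of a wide-angle image corresponding to
        the measured gaze direction D (hypothetical helper).

        Gaze angles, centered on the camera's optical axis, are mapped
        linearly onto the image plane for the sake of the sketch.
        """
        h, w = image.shape[:2]
        # Map the gaze angles to a pixel center in the acquired image.
        cx = int((gaze_yaw_deg / camera_fov_deg[0] + 0.5) * w)
        cy = int((-gaze_pitch_deg / camera_fov_deg[1] + 0.5) * h)
        ah, aw = area_size
        # Clamp so that the selected window stays inside the image.
        x0 = min(max(cx - aw // 2, 0), w - aw)
        y0 = min(max(cy - ah // 2, 0), h - ah)
        return image[y0:y0 + ah, x0:x0 + aw]
    ```

    The window size would in practice be chosen so that its pixel count matches the number of actuators in the set, as required by claim 8.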

    (15) The measurements carried out may be sent to the processing circuit 2 by means of a wireless link. Indeed, a wireless link in this application allows less restriction and a greater freedom of movement for the user 5, the processing circuit 2 and the mechanical actuators 3 being located on the torso (or the back typically) of the user 5.

    (16) The processing circuit 2 processes the received data and sends the instructions to the mechanical actuators AM of the sets 3.

    (17) The mechanical actuators 3 positioned on the bust stimulate the skin of the user 5 according to the instructions received from the processing circuit 2. They “reproduce” on the skin the images modified after the processing of the data in the processing circuit 2.

    (18) In practice, the processing circuit 2 converts a pixel intensity (gray level for example) for each pixel into a mechanical intensity of stimulation by an actuator. A particular frequency of vibration may be associated with each color of pixel (for example a low frequency for red and a higher frequency for blue). For example, an increasing frequency of vibration may also be assigned to the various colors from red to violet, in the colors of the visible spectrum.
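    The pixel-to-actuator conversion of paragraph (18) can be sketched as below. The function name and the particular frequency values are assumptions for illustration; the patent only requires that the amplitude follow the pixel intensity and that the frequency increase from red to violet across the visible spectrum.

    ```python
    # Assumed vibration frequency bands (Hz), increasing from red to
    # violet as suggested in the description (values are illustrative).
    COLOR_FREQS_HZ = {"red": 50, "orange": 80, "yellow": 110,
                      "green": 140, "blue": 170, "violet": 200}

    def pixel_to_control(gray_level, color_name, max_amplitude=1.0):
        """Convert one pixel (gray level 0..255 plus a dominant color)
        into an (amplitude, frequency) pair driving one actuator."""
        # Amplitude of stimulation is proportional to pixel intensity.
        amplitude = max_amplitude * gray_level / 255.0
        # Frequency of vibration is determined by the type of color.
        frequency = COLOR_FREQS_HZ[color_name]
        return amplitude, frequency
    ```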

    (19) FIG. 2 shows a perspective view of the mechanical support 1, with a face 9 and two branches 8 held on the face 9, for example joined to the latter via a hinged link (not shown). Two eye-tracking modules 7 are oriented toward the eyes, typically for filming the pupils and tracking their movements (often by shape recognition implemented by a computer module on the images thus acquired) and, from there, determining a direction of the gaze of the user. The two wide-angle cameras 6, on the other hand, are oriented toward the environment (along the physiological axes of the eyes of the subject).

    (20) FIG. 3 shows a flow diagram summarizing possible steps of a method implemented by the processing circuit 2. Such a method may comprise, at the step S1, the acquisition of a current image by a camera 6. At the step S2, an eye-tracking is implemented in order to determine a current direction of the gaze of the user (based on measurements of the direction of gaze of the user 5 by the eye-tracking module or modules 7). The visual field is thus made coherent with the orientation of the gaze. At the step S3, depending on this current direction of the gaze, an area Z corresponding to the direction of the gaze D is selected in the acquired image of the environment E (as described hereinabove with reference to FIG. 6). More particularly, the signals from pixels of this area Z are selected in order to convert them, at the step S4, into control signals sp for the mechanical actuators AM (sets of actuators 3 such as illustrated in FIG. 7). These may be piezoelectric elements or MEMS as described hereinabove, controlled by electrical signals sp so as to drive these elements in vibration: at an amplitude of vibration that is a function of the pixel intensity (gray level, for example), and at a frequency of vibration determined by the type of color of pixel (with three frequency levels for blue, green and red, for example).

    (21) At the step S5, these control signals are subsequently sent to the mechanical actuators AM.

    (22) It goes without saying that these steps S1 to S5 are carried out for each eye (and hence each current camera image), and successively for all the images successively acquired, in real time.
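    The loop of steps S1 to S5 described above can be sketched as a single per-frame routine. This is an illustrative structure, not the patent's implementation: `select_area` and `pixel_to_signal` are assumed callbacks, and the actuators are modeled as callables accepting one control signal each.

    ```python
    def process_frame(image, gaze_direction, actuators,
                      select_area, pixel_to_signal):
        """One iteration of steps S1-S5 for one eye (sketch only).

        `actuators` is a flat list whose length equals the number of
        pixels in the selected area, per claim 8.
        """
        # S3: bound the area Z of the image matching the gaze direction D.
        area = select_area(image, gaze_direction)
        # S4: one control signal per pixel, one pixel per actuator.
        signals = [pixel_to_signal(px) for row in area for px in row]
        assert len(signals) == len(actuators)
        # S5: transmit the control signals to the mechanical actuators.
        for actuator, sp in zip(actuators, signals):
            actuator(sp)
        return signals
    ```

    The same routine would be run once per eye and per acquired image, in real time, as paragraph (22) indicates.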

    (23) The two vibrational images of the two mechanical actuators 3 on the skin are intended to be as close as possible to those that the retinas would have captured if the subject had been able-bodied. Such an embodiment allows the surrounding space to be re-taught to the individual 5.

    (24) FIG. 4 shows the device in a schematic form comprising the wide-angle cameras 6 and the eye-tracking modules 7, together with the processing circuit 2 here comprising: an input interface IN for receiving the signals from the equipment 6 (pixels of current image) and 7 (measurement of direction of the gaze), a memory MEM for storing, at least temporarily, data corresponding to these signals, together with instructions of a computer program in the sense of the invention, for the implementation of the method described hereinabove with reference to FIG. 3, a processor PROC capable of cooperating with the input interface IN and with the memory MEM for reading and executing the instructions of the computer program and thus delivering control signals for the actuators AM, via an output interface OUT connected to the sets of actuators 3.

    (25) FIG. 5 shows a device for improving the spatial perception of an environment, worn by a user 5 on the head. The device comprises a mechanical support 1 extending on either side of the head and two sets of mechanical actuators 3 positioned on the forehead of the individual 5. The mechanical support 1 comprises a face 9 and two branches 8 held on the face as illustrated in FIG. 2. In contrast to the device illustrated in FIG. 1, the processing circuit 2 is integrated into the mechanical support 1. The mechanical support 1 takes the form of a headset allowing complete freedom of movement for the user 5. A short wired link between the mechanical support 1 and the mechanical actuators 3 may therefore be provided in this embodiment: it does not limit the user 5 in their movements, since the entire device is on the head of the user.

    (26) It goes without saying that the present invention is not limited to the exemplary embodiments described hereinabove, and it may be extended to other variants.

    (27) Thus, for example, a frequency of vibration specific to a color has been described hereinabove. As a variant, for a black and white image, a frequency of vibration may be chosen that varies as a function of the gray level of each pixel.

    (28) Previously, a distribution of the actuators of the set of actuators has been defined according to a two-dimensional shape corresponding to the shape of visual field that the selected area of image takes, with in particular a number of actuators in each set corresponding to the number of pixels in the selected area of image. Nevertheless, as a variant, it is possible to progressively render actuators of the set active in increasing number (starting from the center of the set of actuators up to the entirety of the actuators of the set) as the user becomes accustomed to the device. As yet another variant, it is possible to average the amplitudes of vibration (or other stimuli) of n actuators from amongst the N actuators of the set, then to subsequently refine the definition of perception by making n decrease, again as the user gets accustomed to the device. It goes without saying that, in order to expedite this adaptation, it is also possible to correct (by computer) and thus reduce initial oculomotor aberrations, in order to attenuate saccadic eye movements for example.
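    The averaging variant of paragraph (28) can be sketched as follows: amplitudes are averaged over blocks of n neighboring actuators, giving a coarse rendering that is refined by decreasing n as the user becomes accustomed to the device. The function name and the block-wise grouping are assumptions made for illustration.

    ```python
    def coarsen_amplitudes(amplitudes, n):
        """Average vibration amplitudes over blocks of n actuators
        (habituation variant, sketch only). With n == 1 the full
        resolution of perception is restored."""
        out = []
        for i in range(0, len(amplitudes), n):
            block = amplitudes[i:i + n]
            mean = sum(block) / len(block)
            # Every actuator of the block vibrates at the block average.
            out.extend([mean] * len(block))
        return out
    ```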