MONITORING EXERCISE ACTIVITY IN A GYM ENVIRONMENT
20230145451 · 2023-05-11
Inventors
CPC classification
G06V10/44
PHYSICS
A61B5/0077
HUMAN NECESSITIES
A61B5/744
HUMAN NECESSITIES
A61B5/1123
HUMAN NECESSITIES
A63B2024/0071
HUMAN NECESSITIES
A63B2071/065
HUMAN NECESSITIES
A63B2024/0068
HUMAN NECESSITIES
G06V40/23
PHYSICS
A63B24/0062
HUMAN NECESSITIES
A63B2024/0096
HUMAN NECESSITIES
A63B71/0622
HUMAN NECESSITIES
International classification
A63B24/00
HUMAN NECESSITIES
Abstract
A system for monitoring exercise activity in a gym environment includes an image-based monitoring arrangement, a storage device, and a replay device. The monitoring arrangement generates a time series of 3D representations of an individual, with each 3D representation defining a pose of the individual and comprising 3D positions of a plurality of predefined feature points of the individual. The storage device receives and stores an exercise sequence, which comprises at least part of the time series of 3D representations and corresponds to an exercise performed by the individual. The replay device retrieves the exercise sequence from the storage device, renders an animation of the individual when performing the exercise based on the exercise sequence, and provides the animation for display. The system enables a user to review the exercise at any desirable viewing angle and/or magnification.
Claims
1. A system for monitoring exercise activity in a gym environment, said system comprising: an image-based monitoring arrangement configured to generate a time series of 3D representations of an individual in the gym environment, wherein each of the 3D representations defines a pose of the individual and comprises positions of a plurality of predefined feature points of the individual in a 3D coordinate system; a storage device configured to receive and store at least part of the time series of 3D representations, said at least part of the time series corresponding to an exercise performed by the individual, and a replay device configured to retrieve said at least part of the time series from the storage device, render an animation of the individual when performing the exercise based on said at least part of the time series, and provide the animation for display, wherein the replay device is configured to process said at least part of the time series to evaluate performance of the exercise performed by the individual and combine a performance indicator representing the performance with the animation, wherein the replay device is further configured to process said at least part of the time series to classify the exercise performed by the individual into an exercise category among a plurality of exercise categories, and to evaluate the performance based on the exercise category.
2. The system of claim 1, wherein the replay device is further configured to determine a nominal movement pattern for the exercise performed by the individual based on the exercise category, and to evaluate the performance by comparing said at least part of the time series to the nominal movement pattern.
3. The system of claim 2, wherein the nominal movement pattern defines nominal trajectories of the positions of the predefined feature points, and wherein the replay device is configured to detect and evaluate one or more deviations in time and/or space between the positions of the predefined feature points in said at least part of the time series and the positions of the predefined feature points in the nominal trajectories.
4. The system of claim 3, wherein the replay device is configured to combine the performance indicator with the animation to provide a visual indication of the one or more deviations in the animation.
5. The system of claim 2, which is further configured to obtain, for the individual, a body category among a plurality of body categories, and wherein the replay device is configured to determine the nominal movement pattern based also on the body category.
6. The system of claim 1, wherein the replay device comprises a trained machine learning-based model which is configured to classify an incoming time series of 3D representations into one or more of the plurality of exercise categories.
7. The system of claim 6, wherein the trained machine learning-based model is configured to generate a probability value for each of the one or more of the plurality of exercise categories, and wherein the replay device is configured to at least partly evaluate the performance based on the probability value.
8. The system of claim 1, wherein the replay device is configured to render the animation in response to a display command entered into the system by a user.
9. The system of claim 8, wherein the display command defines at least one viewing angle onto the individual to be included in the animation and/or a magnification ratio of the individual in the animation.
10. The system of claim 1, wherein the replay device is configured to render the animation as an avatar which is spatially matched to the 3D representations in said at least part of the time series.
11. The system of claim 10, which is configured to obtain one or more physiological properties of the individual, wherein the replay device is configured to render the avatar to exhibit the one or more physiological properties.
12. The system of claim 11, wherein the one or more physiological properties comprises at least one of weight, height, gender, or body category.
13. The system of claim 1, wherein the image-based monitoring arrangement or the storage device is configured to process the time series for detection of a start of the exercise, and wherein the storage device is operable to store an increased number of 3D representations per unit time upon the detection of the start of the exercise.
14. The system of claim 1, wherein the predefined feature points correspond to joints and/or extremities of the individual.
15. The system of claim 1, wherein the system comprises one or more feedback devices operable to display the animation to the individual and/or to a trainer.
16. The system of claim 1, wherein the system comprises two or more feedback devices which are communicatively connected to the replay device.
17. A replay device for use in the system according to claim 1.
18. A replay device for use in a system for monitoring exercise activity in a gym environment, wherein the replay device is configured to: retrieve, from a storage device, at least part of a time series of 3D representations of an individual in the gym environment, said at least part of the time series corresponding to an exercise performed by the individual, each of the 3D representations defining a pose of the individual and comprising positions of a plurality of predefined feature points of the individual in a 3D coordinate system, the time series being generated by an image-based monitoring arrangement in the gym environment; render an animation of the individual when performing the exercise based on said at least part of the time series; and provide the animation for display.
19. A method of monitoring exercise activity in a gym environment, said method comprising: operating an image-based monitoring arrangement to generate a time series of 3D representations of an individual in the gym environment, wherein each of the 3D representations defines a pose of the individual and comprises positions of a plurality of predefined feature points of the individual in a 3D coordinate system; storing at least part of the time series of 3D representations in a storage device, said at least part of the time series corresponding to an exercise performed by the individual; retrieving, from the storage device, said at least part of the time series; rendering, by a replay device, an animation of the individual when performing the exercise based on said at least part of the time series; and providing, by the replay device, the animation for display.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0010]
[0011]
[0012]
[0013]
[0014]
[0015]
[0016]
[0017]
[0018]
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0019] Embodiments will now be described more fully hereinafter with reference to the accompanying schematic drawings, in which some, but not all, embodiments are shown. Indeed, the subject of the present disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure may satisfy applicable legal requirements.
[0020] Also, it will be understood that, where possible, any of the advantages, features, functions, devices, and/or operational aspects of any of the embodiments described and/or contemplated herein may be included in any of the other embodiments described and/or contemplated herein, and/or vice versa. In addition, where possible, any terms expressed in the singular form herein are meant to also include the plural form and/or vice versa, unless explicitly stated otherwise. As used herein, “at least one” shall mean “one or more” and these phrases are intended to be interchangeable. Accordingly, the terms “a” and/or “an” shall mean “at least one” or “one or more”, even though the phrase “one or more” or “at least one” is also used herein. As used herein, except where the context requires otherwise owing to express language or necessary implication, the word “comprise” or variations such as “comprises” or “comprising” is used in an inclusive sense, that is, to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments.
[0021] As used herein, the terms “multiple”, “plural” and “plurality” are intended to imply provision of two or more elements, whereas the term a “set” of elements is intended to imply a provision of one or more elements. The term “and/or” includes any and all combinations of one or more of the associated listed elements.
[0022] It will furthermore be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
[0023] Well-known functions or constructions may not be described in detail for brevity and/or clarity. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
[0024] Like numerals refer to like elements throughout.
[0025] Before describing embodiments in more detail, a few definitions will be given.
[0026] As used herein, “keypoint” has its conventional meaning in the field of computer vision and is also known as an interest point. A keypoint is a spatial location or point in an image that defines what is interesting or what stands out in the image and may be defined to be invariant to image rotation, shrinkage, translation, distortion, etc. More generally, a keypoint may be denoted a “reference point” or “feature point” on an object to be detected in the image, with the reference point having a predefined placement on the object. For humans, keypoints may identify joints and/or extremities. Keypoints may be detected by use of any existing feature detection algorithm(s), for example image processing techniques that are operable to detect one or more of edges, corners, blobs, ridges, etc. in digital images. Non-limiting examples of feature detection algorithms comprise SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Feature), FAST (Features from Accelerated Segment Test), SUSAN (Smallest Univalue Segment Assimilating Nucleus), Harris affine region detector, and ORB (Oriented FAST and Rotated BRIEF). Further information about conventional keypoint detectors is found in the article “Local invariant feature detectors: a survey”, by Tuytelaars et al., published in Found. Trends. Comput. Graph. Vis. 3(3), 177-280 (2007). Further examples of feature detection algorithms are found in the articles “Simple Baselines for Human Pose Estimation and Tracking”, by Xiao et al., published at ECCV 2018, and “Deep High-Resolution Representation Learning for Human Pose Estimation”, by Sun et al., published at CVPR 2019. Correspondingly, objects may be detected in images by use of any existing object detection algorithm(s).
Non-limiting examples include various machine learning-based approaches or deep learning-based approaches, such as Viola-Jones object detection framework, SIFT, HOG (Histogram of Oriented Gradients), Region Proposals (RCNN, Fast-RCNN, Faster-RCNN), SSD (Single Shot MultiBox Detector), You Only Look Once (YOLO, YOLO9000, YOLOv3), and RefineDet (Single-Shot Refinement Neural Network for Object Detection).
[0027] As used herein, “pose” defines the posture of a human object and comprises a collection of positions which may represent keypoints. The positions may be two-dimensional (2D) positions, for example in an image coordinate system, resulting in a 2D pose, or three-dimensional (3D) positions, for example in a scene coordinate system, resulting in a 3D pose. A pose for a human object is also referred to as a “skeleton” herein.
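By way of non-limiting illustration, a 3D pose and a time series of 3D poses may be represented by a simple data structure along the following lines (Python sketch; the names Pose3D, feature_points, etc. are assumptions made for illustration and are not terms of the disclosure):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# A 3D position of one predefined feature point (keypoint) in a scene coordinate system.
Position3D = Tuple[float, float, float]

@dataclass
class Pose3D:
    """One 3D representation: the pose of an individual at a single time instant."""
    timestamp: float                       # seconds
    feature_points: Dict[str, Position3D]  # e.g. "left_knee" -> (x, y, z)

# A time series of 3D representations (the "3DM" of the description) is then
# simply an ordered list of such poses.
TimeSeries3D = List[Pose3D]

pose = Pose3D(timestamp=0.0, feature_points={"left_knee": (0.1, 0.4, 0.9)})
series: TimeSeries3D = [pose]
```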
[0028] As used herein, “gym environment” refers to a place, indoors or outdoors, at which individuals perform physical activities by use of machines, free weights, body weight or other equipment, individually or in groups, for the purpose of exercise or training.
[0029] As used herein, “image-based monitoring system” refers to a system that comprises one or more imaging devices and is configured to generate a time sequence of 3D poses of an individual based on images taken by the one or more imaging devices. The imaging device(s) may be responsive to radiation in any wavelength range, for example visible radiation and/or IR radiation. The system may be configured to generate the respective 3D pose based on 2D images taken by two or more imaging devices arranged at different viewing angles onto the individual, or based on image(s) taken by one or more imaging devices capable of depth-sensing, for example as implemented by Microsoft Kinect™, or any combination thereof.
[0030] As used herein, “animation” refers to a plurality of animated images which, when displayed in sequence, appear as moving images. The animated images may be generated by computer graphics.
[0031] As used herein, “avatar” refers to an animated representation of a human individual. The avatar may represent the skeleton and/or the skeletal muscles of the individual. The avatar may be generic and represent all or a group of individuals. Alternatively, the avatar may be customizable to mimic the visual appearance of a specific individual or group of individuals.
[0032] Embodiments relate to monitoring of physical activities performed by individuals or users in a gym environment and are based on the insight that it is possible to both enhance and simplify the review of such activities by use of 3D representations of the users. For example, a time series of 3D representations of a user may be generated and stored during training and later retrieved for presentation, by the user or someone else, at any desirable viewing angle and/or magnification. Thus, embodiments not only enable the user to review performance but also improve remote coaching, by empowering a trainer at a location remote from the gym to thoroughly review how the activity is performed by the user. A time series of 3D representations provides significantly more relevant data about an activity compared to ordinary 2D video footage. The time series of 3D representations may also be algorithmically analyzed to generate one or more objective performance indicators for presentation, for example in terms of a quantified metric or identification of deviation(s) from proper form. The performance indicator(s) may guide the individual towards better implementation of an activity, to thereby increase gains and/or reduce the risk of injury. Further, by being objectively determined, the performance indicator(s) may be useful in officiating, also outside the field of gym environments, for example as a tool for referees or umpires in sport competitions such as weightlifting, CrossFit, martial arts, fencing, etc.
[0033]
[0034]
[0035] Any technique for 3D reconstruction and generation of the 3DM may be implemented within the context of the present disclosure. An example will be briefly described with reference to
[0036]
[0037]
[0038] The above-mentioned 3D motion data, 3DM, may be generated by processing plural streams of images taken from different viewing angles to generate corresponding streams of 2D poses, and merging the streams of 2D poses into a stream of 3D poses by triangulation.
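As a non-limiting sketch, the triangulation of one keypoint from two views may be implemented by the direct linear transform (DLT), assuming calibrated 3x4 projection matrices P1 and P2 for the two imaging devices (all names are illustrative):

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Triangulate one keypoint from two views by the direct linear transform.

    P1, P2: 3x4 camera projection matrices.
    uv1, uv2: (u, v) coordinates of the same keypoint in each view.
    Returns the 3D position of the keypoint in the scene coordinate system.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous 3D point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Applying this per keypoint and per time instant to the streams of 2D poses yields the stream of 3D poses.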
[0039]
[0040] In some embodiments, as indicated by dot-dashed lines 30, the feedback device 40 is not part of the replay device 30. For example, the feedback device 40 may be included in a local electronic device at the location of the individual, i.e. in the gym environment, or at another location, for example at the location of a trainer. The feedback device 40 comprises a display and may also comprise an input interface, such as a keyboard, keypad, computer mouse, one or more touch buttons, voice control, touch screen, etc. The electronic device may be a smartphone, tablet computer, laptop, desktop computer, a TV, a monitor, etc. The replay device 30 is physically separated from the local electronic device and may, for example, be located on a server at the gym or on a computer system resource in the cloud.
[0041] In other embodiments, as indicated by dot-dashed lines 30′, the feedback device 40 is part of the replay device, which thus may be incorporated into the above-mentioned local electronic device.
[0042] In the example of
[0043]
[0044] In step 401, the image-based monitoring arrangement 10 is operated to generate a time series of 3D representations of an individual in the gym environment. The time series corresponds to 3DM in
[0045] In step 403, at least part of the time series, 3DM, is stored in the storage device 31. The at least part of the 3DM corresponds to an exercise performed by the individual and is also denoted “exercise sequence” herein.
[0046] As shown, step 403 may be preceded by a step 402, in which the 3DM generated by the monitoring arrangement 10 is processed for detection of the start of an exercise. Step 402 may be performed by the monitoring arrangement 10. In one example, the arrangement 10 is configured to output the 3DM only when it has detected the start of an exercise, to thereby minimize data traffic between the arrangement 10 and the storage device 31, as well as reduce the required storage capacity of the storage device 31. In another example, the arrangement 10 provides a signal indicating the start of an exercise to the storage device 31, which is thereby caused to store the 3DM associated with the signal. Alternatively, step 402 is performed by the storage device 31. In one example, the storage device 31 is configured to only store the incoming 3DM when the start of an exercise has been detected. In another example, the storage device 31 is configured to operate at a first storage rate by default and, upon detection of the start of an exercise, operate at a second storage rate, which is higher than the first storage rate. The storage rate denotes the number of 3D poses stored per unit time. Common to all examples, irrespective of where the start of the exercise is detected, is that the storage device 31 is operable to selectively increase the storage rate, from either zero or the first storage rate, upon the detection of the start. Correspondingly, step 402 may comprise selectively decreasing the storage rate upon detection that the exercise is completed.
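The storage-rate behavior described above may be sketched as follows (Python; a toy model in which the storage rate is emulated by keeping every N-th incoming pose, and in which the start/end signals are assumed to come from the monitoring arrangement or from a detector in the storage device itself — all names and values are illustrative):

```python
class ExerciseAwareStore:
    """Sketch of a storage device (31) that raises its storage rate while an
    exercise is in progress, per steps 402-403."""

    def __init__(self, idle_keep_every=10, active_keep_every=1):
        self.idle_keep_every = idle_keep_every      # first (low) storage rate
        self.active_keep_every = active_keep_every  # second (high) storage rate
        self.exercise_active = False
        self._counter = 0
        self.stored = []

    def on_exercise_start(self):
        self.exercise_active = True    # selectively increase the storage rate

    def on_exercise_end(self):
        self.exercise_active = False   # selectively decrease the storage rate

    def receive(self, pose):
        """Store every N-th incoming 3D pose, N depending on exercise state."""
        keep_every = (self.active_keep_every if self.exercise_active
                      else self.idle_keep_every)
        if self._counter % keep_every == 0:
            self.stored.append(pose)
        self._counter += 1
```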
[0047] In step 404, the exercise sequence stored in step 403 is retrieved. In the example of
[0048] In step 406, the replay device 30, 30′ renders an animation of the individual performing the exercise based on the exercise sequence. The animation is a replay of the poses attained by the individual during the exercise. In the example of
[0049] In step 407, the animation is provided for display, for example as animation data AD from the rendering device 32 to the feedback device 40 in
[0050] The animation data AD may be provided in different formats. In some embodiments, AD comprises a self-contained computer file with an interactive 360° animation, which allows a user to pan and zoom the rendered individual independent of the rendering device 32, by the user operating the computer file to select various viewing angles and/or magnifications for the replay and/or to stop the replay and zoom in on selected details. For example, such a computer file may be provided to a feedback device 40 that is physically separate from the replay device (cf. 30 in
[0051] In some embodiments, the method and system described herein enable a trainer, who may be at a remote location, to review the exercise performed by the individual in the gym environment by commanding replay of the animation on a feedback device at the remote location. The replay may be performed in (near) real time with the individual's execution of the exercise, or at a separate time point. In some embodiments, the system comprises plural feedback devices, which are communicatively connected to the replay device (30 in
[0052] As shown in
[0053]
[0054] In step 601, the exercise sequence is processed for detection of an exercise category among a plurality of exercise categories. Step 601 results in a classification of the exercise sequence. The exercise category may be determined with any desired granularity or level of detail. In one example, the exercise category may indicate the type of equipment used in the exercise, such as machine, barbell, dumbbell, kettlebell, body weight, cables, etc. In another example, the exercise category may identify a general type of exercise, such as deadlift, fly, overhead press, etc. In yet another example, the exercise category may identify a specific exercise implementation, such as butterfly machine fly, inclined bench dumbbell fly, cable crossover fly, dumbbell shoulder fly, etc. Step 601 may comprise operating any available human activity recognition algorithm on the exercise sequence. In the example of
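The classification of step 601 may, by way of non-limiting illustration, be sketched as follows (Python; a toy stand-in for a trained model, comparing an illustrative feature vector of the exercise sequence against per-category templates and returning probability-like values via a softmax — the names and the feature choice are assumptions, not the trained machine learning-based model of the disclosure):

```python
import numpy as np

def classify_exercise(sequence, templates):
    """Toy classifier producing an exercise category EC and probability values P.

    sequence:  array of shape (T, K, 3) - T poses, K feature points, 3D positions.
    templates: dict mapping exercise category -> reference feature vector.
    """
    # Illustrative feature: per-keypoint range of motion along each axis.
    features = (sequence.max(axis=0) - sequence.min(axis=0)).ravel()

    categories = list(templates)
    dists = np.array([np.linalg.norm(features - templates[c]) for c in categories])
    scores = np.exp(-dists)
    probs = scores / scores.sum()          # probability value P per category
    best = categories[int(np.argmax(probs))]  # exercise category EC
    return best, dict(zip(categories, probs))
```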
[0055] In step 602, a body category for the individual is obtained. The body category represents the body build and/or one or more physical traits of the individual. The body category may be obtained in many different ways. For example, the body category may be prestored for the individual in a database, determined by processing images acquired by the monitoring system 10, determined by processing one or more exercise sequences, entered into the system by the individual, etc. The body category may indicate one or more of the body shape of the individual (for example, “short”, “medium height”, “tall”, “slim”, “medium weight”, “heavy”), the skeletal framework of the individual, or the fitness level of the individual, for example in terms of aerobic or cardiovascular endurance, muscular strength, muscular endurance, flexibility, or body composition. In some embodiments, body shape is given in terms of standardized body types, such as endomorph, mesomorph or ectomorph, for example by so-called somatotyping.
[0056] In step 603, the exercise sequence from step 404 (
[0057] In step 604, which may be part of step 406, the performance indicator(s) determined in step 603 are combined with the animation, which may be done in different ways. In one example, a performance indicator may be presented in plain text. In another example, a performance indicator may be presented as a value of a performance metric calculated in step 603, such as an overall grading of the performance. As shown, step 604 may comprise a step 604A, in which the above-mentioned deviations are indicated visually (highlighted) in the animation to provide corrective feedback, for example by use of contrasting colors or patterns, by encircling deviations, by locally magnifying areas of deviations, etc. The deviation(s) may also be quantified to the user, for example by including ideal 3D poses in the animation of the exercise sequence, and/or by including an ideal trajectory of one or more keypoints in the animation.
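The deviation detection underlying steps 603 and 604A may be sketched as follows (Python; assumes, for illustration, that the exercise sequence has already been time-aligned and resampled to the nominal trajectories, so only spatial deviations remain — the function name and tolerance are assumptions):

```python
import numpy as np

def find_deviations(sequence, nominal, tol=0.05):
    """Compare keypoint positions in the exercise sequence against the nominal
    movement pattern (NMP) and flag where the deviation exceeds a tolerance.

    sequence, nominal: arrays of shape (T, K, 3).
    Returns (frame, keypoint) pairs, e.g. for highlighting in the animation.
    """
    dev = np.linalg.norm(sequence - nominal, axis=2)  # (T, K) distances
    frames, keypoints = np.nonzero(dev > tol)
    return list(zip(frames.tolist(), keypoints.tolist()))
```

The returned pairs identify where in the animation a visual indication (contrasting color, encircling, local magnification, etc.) may be applied.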
[0058]
[0059] The evaluation device 33 further comprises a calculation unit 33B, which is configured to perform step 603 based on 3DM and [EC,P]. As understood from step 603, the calculation unit 33B may be configured to obtain NMP based on EC. If the model 33A provides more than one exercise category, the calculation unit 33B may obtain NMP for the EC with the largest probability value. In the illustrated example, the calculation unit 33B also has access to the body category, BC, of the individual, and is configured to obtain the NMP to match both EC and BC. In accordance with step 603, the calculation unit 33B may process the exercise sequence in relation to NMP to determine the performance data PD.
[0060] However, in a simplified embodiment, the calculation unit 33B is configured to determine the PD from the probability value(s) P only. If the largest (or the only) probability value P is above a probability threshold, the calculation unit 33B may generate PD to indicate that the individual's performance is acceptable. Otherwise, PD may be generated to indicate that the individual has failed to perform the exercise. The rationale for the simplified embodiment is that the (largest) probability value is generated by the model 33A to reflect how well the exercise sequence matches an exercise category. If the probability is low, it is likely that the exercise is performed with poor form.
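The simplified embodiment reduces to a thresholding of the classifier output (Python sketch; the threshold value and the textual form of PD are illustrative assumptions):

```python
def performance_from_probability(probs, threshold=0.8):
    """Derive the performance data PD from the probability values P alone.

    probs: dict mapping exercise category -> probability value P.
    Returns a PD string indicating acceptable or failed performance.
    """
    best_p = max(probs.values())  # largest (or only) probability value
    return "acceptable" if best_p > threshold else "failed"
```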
[0061] The structures and methods disclosed herein may be implemented by hardware or a combination of software and hardware. In some embodiments, such hardware comprises one or more software-controlled computer resources.
[0062] While the subject of the present disclosure has been described in connection with what is presently considered to be the most practical embodiments, it is to be understood that the subject of the present disclosure is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and the scope of the appended claims.
[0063] Further, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, parallel processing may be advantageous.
[0064] In the following, clauses are recited to summarize some aspects and embodiments as disclosed in the foregoing.
[0065] C1. A system for monitoring exercise activity in a gym environment, said system comprising: an image-based monitoring arrangement (10) configured to generate a time series of 3D representations (203) of an individual in the gym environment, wherein each of the 3D representations (203) defines a pose of the individual and comprises positions of a plurality of predefined feature points (204) of the individual in a 3D coordinate system (202); a storage device (31) configured to receive and store at least part of the time series of 3D representations (203), said at least part of the time series corresponding to an exercise performed by the individual, and a replay device (30; 30′) configured to retrieve said at least part of the time series from the storage device (31), render an animation of the individual when performing the exercise based on said at least part of the time series, and provide the animation for display.
[0066] C2. The system of C1, wherein the replay device (30; 30′) is configured to process said at least part of the time series to evaluate performance of the exercise performed by the individual and combine a performance indicator (81, 82) representing the performance with the animation.
[0067] C3. The system of C2, wherein the replay device (30; 30′) is further configured to process said at least part of the time series to classify the exercise performed by the individual into an exercise category (EC) among a plurality of exercise categories, and to evaluate the performance based on the exercise category (EC).
[0068] C4. The system of C3, wherein the replay device (30; 30′) is further configured to determine a nominal movement pattern (NMP) for the exercise performed by the individual based on the exercise category (EC), and to evaluate the performance by comparing said at least part of the time series to the nominal movement pattern (NMP).
[0069] C5. The system of C4, wherein the nominal movement pattern (NMP) defines nominal trajectories of the positions of the predefined feature points (204), and wherein the replay device (30; 30′) is configured to detect and evaluate one or more deviations in time and/or space between the positions of the predefined feature points (204) in said at least part of the time series and the positions of the predefined feature points (204) in the nominal trajectories.
[0070] C6. The system of C5, wherein the replay device (30, 30′) is configured to combine the performance indicator (81, 82) with the animation to provide a visual indication of the one or more deviations in the animation.
[0071] C7. The system of any one of C4-C6, which is further configured to obtain, for the individual, a body category (BC) among a plurality of body categories, and wherein the replay device (30; 30′) is configured to determine the nominal movement pattern (NMP) based also on the body category (BC).
[0072] C8. The system of any one of C3-C7, wherein the replay device (30; 30′) comprises a trained machine learning-based model (33A) which is configured to classify an incoming time series of 3D representations into one or more of the plurality of exercise categories.
[0073] C9. The system of C8, wherein the trained machine learning-based model (33A) is configured to generate a probability value (P) for each of the one or more of the plurality of exercise categories, and wherein the replay device (30; 30′) is configured to at least partly evaluate the performance based on the probability value (P).
[0074] C10. The system of any preceding clause, wherein the replay device (30; 30′) is configured to render the animation in response to a display command (DC) entered into the system by a user.
[0075] C11. The system of C10, wherein the display command (DC) defines at least one viewing angle onto the individual to be included in the animation and/or a magnification ratio of the individual in the animation.
[0076] C12. The system of any preceding clause, wherein the replay device (30; 30′) is configured to render the animation as an avatar (1′) which is spatially matched to the 3D representations in said at least part of the time series.
[0077] C13. The system of C12, which is configured to obtain one or more physiological properties of the individual, wherein the replay device (30; 30′) is configured to render the avatar (1′) to exhibit the one or more physiological properties.
[0078] C14. The system of C13, wherein the one or more physiological properties comprises at least one of weight, height, gender, or body category.
[0079] C15. The system of any preceding clause, wherein the image-based monitoring arrangement (10) or the storage device (31) is configured to process the time series for detection of a start of the exercise, and wherein the storage device (31) is operable to store an increased number of 3D representations per unit time upon the detection of the start of the exercise.
[0080] C16. The system of any preceding clause, wherein the predefined feature points (204) correspond to joints and/or extremities of the individual.
[0081] C17. The system of any preceding clause, wherein the system comprises one or more feedback devices (40) operable to display the animation to the individual and/or to a trainer.
[0082] C18. The system of any preceding clause, wherein the system comprises two or more feedback devices (40) which are communicatively connected to the replay device (30).
[0083] C19. A replay device for use in the system according to any one of C1-C18.
[0084] C20. A replay device for use in a system for monitoring exercise activity in a gym environment, wherein the replay device is configured to: retrieve, from a storage device (31), at least part of a time series of 3D representations (203) of an individual in the gym environment, said at least part of the time series corresponding to an exercise performed by the individual, each of the 3D representations (203) defining a pose of the individual and comprising positions of a plurality of predefined feature points (204) of the individual in a 3D coordinate system (202), the time series being generated by an image-based monitoring arrangement (10) in the gym environment; render an animation of the individual when performing the exercise based on said at least part of the time series; and provide the animation for display.
[0085] C21. A method of monitoring exercise activity in a gym environment, said method comprising: operating (401) an image-based monitoring arrangement to generate a time series of 3D representations of an individual in the gym environment, wherein each of the 3D representations defines a pose of the individual and comprises positions of a plurality of predefined feature points of the individual in a 3D coordinate system; storing (403) at least part of the time series of 3D representations in a storage device (31), said at least part of the time series corresponding to an exercise performed by the individual; retrieving (404), from the storage device, said at least part of the time series; rendering (406), by a replay device, an animation of the individual when performing the exercise based on said at least part of the time series; and providing (407), by the replay device, the animation for display.