GROUND ENGAGING TOOL MONITORING SYSTEM

20220307234 · 2022-09-29

Abstract

A monitoring system and method for a tool of working equipment, preferably a ground engaging tool comprising wear members such as an excavator bucket, the system/method including one or more sensors mounted on the working equipment and directed towards the tool and a processor configured to: receive data relating to the tool from the one or more sensors, generate a three dimensional representation of at least a portion of the tool using the received data, compare the generated three dimensional representation with a previously generated three dimensional representation, and identify one or more of wear and loss of at least a portion of the tool, preferably a wear member portion, using the comparison of the generated three dimensional representation with the previously generated three dimensional representation.

Claims

1. A monitoring system for a tool of working equipment, the system including: one or more sensors mounted on the working equipment and directed towards the tool; and a processor configured to: receive data relating to the tool from the one or more sensors; generate a three dimensional representation of at least a portion of the tool using the received data; compare the generated three dimensional representation with a previously generated three dimensional representation; and identify one or more of wear and loss of at least a portion of the tool using the comparison of the generated three dimensional representation with the previously generated three dimensional representation.

2. The monitoring system of claim 1, wherein the tool is a ground engaging tool with replaceable wear parts.

3. The monitoring system of claim 1, wherein the one or more sensors comprise at least one sensor able to obtain data representative of a three dimensional surface shape of the tool.

4. The monitoring system of claim 3, wherein the one or more sensors comprise a multi-layered, time of flight, scanning laser range finder sensor.

5. The monitoring system of claim 3, wherein the one or more sensors comprise a two dimensional sensor providing sufficient two dimensional data to infer the three dimensional surface shape of the tool.

6. The monitoring system of claim 1, wherein the generation of a three dimensional representation of at least a portion of the tool using data received from the one or more sensors comprises the processor being configured to assemble a plurality of two dimensional scans taken over a time period to generate the three dimensional representation.

7. The monitoring system of claim 6, wherein the processor is configured to assemble the plurality of two dimensional scans taken over a time period to generate the three dimensional representation using motion estimate data.

8. The monitoring system of claim 1, wherein the processor is configured to combine data from sensors with different sensing modalities, fidelity, and/or noise characteristics to generate the three dimensional representation.

9. The monitoring system of claim 8, wherein the processor is configured to combine the data using a combinatorial algorithm.

10. The monitoring system of claim 1, wherein one or more of the sensors are located on the working equipment such that they have line of sight of the tool.

11. The monitoring system of claim 10, wherein the one or more of the sensors located on the working equipment are mounted on a movable arm of the working equipment.

12. The monitoring system of claim 1, wherein the processor is configured to generate a three dimensional representation of at least a portion of the tool by combining the received data from the one or more sensors with a motion estimate.

13. The monitoring system of claim 12, wherein the motion estimate is derived from sensor data as the tool moves through a field of view of one or more of the sensors.

14. The monitoring system of claim 1, wherein the processor is further configured to pre-process the received data prior to generating a three dimensional representation.

15. The monitoring system of claim 14, wherein the pre-processing comprises range-gating.

16. The monitoring system of claim 14, wherein the pre-processing comprises interlacing multiple sensor scans.

17. The monitoring system of claim 14, wherein the pre-processing comprises estimating when the tool is within a field of view of the one or more sensors.

18. The monitoring system of claim 17, wherein the estimating comprises identifying whether the sensor data indicates that, at selected points, the tool, or portions thereof, is identified as being present or absent.

19. The monitoring system of claim 17, wherein the estimating is based on a state machine.

20. The monitoring system of claim 19, wherein the state machine uses heuristics to identify conditions for spatial distribution of three dimensional points corresponding to each state of the state machine.

21. The monitoring system of claim 20, wherein the processor is configured to generate the three dimensional representation by aligning multiple three dimensional models by co-locating the three dimensional models in a common frame of reference.

22. The monitoring system of claim 21, wherein the aligning comprises using an Iterative Closest Point (ICP) or Normal Distributions Transform (NDT) process.

23. The monitoring system of claim 21, wherein the aligning comprises determining a homographical transformation matrix.

24. The monitoring system of claim 1, wherein the processor is further configured to convert the generated three dimensional representation to two dimensional range data.

25. The monitoring system of claim 24, wherein the processor is further configured to compare the generated three dimensional representation with a previously generated three dimensional representation by comparing the two dimensional range data.

26. The monitoring system of claim 24, wherein the processor is further configured to identify one or more of wear and loss of at least a portion of the tool by analysing a comparison of two dimensional images that include the two dimensional range data.

27. The monitoring system of claim 26, wherein the analysing comprises creating a difference image divided into separate regions that correspond to areas of interest of the tool.

28. The monitoring system of claim 27, wherein the difference image is divided into separate regions based upon a predetermined geometric model of the tool and/or edge-detection analysis.

29. The monitoring system of claim 26, wherein the analysing comprises measuring changes in the difference image in each region by quantifying pixels and/or applying an image convolution process.

30. The monitoring system of claim 26, wherein the analysing comprises noise rejection using an image mask that prevents analysis of portions of the image deemed to be irrelevant.

31. The monitoring system of claim 1, wherein the processor is configured to output an indication of wear or loss of the portion of the tool.

32. The monitoring system of claim 1, further comprising a vehicle identification system including one or more sensors to establish vehicle identification of an associated vehicle when loss of at least a portion of the tool is identified.

33. The monitoring system of claim 1, wherein the processor is further configured to record and/or transmit global navigation satellite system (GNSS) co-ordinates when loss of at least a portion of the tool is identified.

34-72. (canceled)

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0074] By way of example only, preferred embodiments of the invention will be described more fully hereinafter with reference to the accompanying figures, wherein:

[0075] FIG. 1 illustrates a wear member monitoring system for a ground engaging tool;

[0076] FIG. 2 illustrates example sensor data of a ground engaging tool;

[0077] FIG. 3 illustrates an example three dimensional representation of a ground engaging tool generated from sensor data;

[0078] FIG. 4 illustrates a visual representation of an example comparison of a three dimensional representation of a ground engaging tool with a previously generated three dimensional representation;

[0079] FIG. 5 illustrates a thermal image of the ground engaging tool of FIG. 4; and

[0080] FIG. 6 illustrates a diagrammatic representation of an example wear member monitoring system.

DETAILED DESCRIPTION OF THE DRAWINGS

[0081] FIG. 1 illustrates a tool monitoring system 10 for a ground engaging tool 20 of working equipment in the form of an excavator 30. It should be appreciated that the invention could apply to other types of vehicles or working equipment. The illustrated excavator 30 is a crawler type excavator 30. However, it should be appreciated that the excavator 30 may be other types of excavators having a ground engaging tool 20 including, for example, wheel loaders, hydraulic shovels, electric rope shovels, dragline buckets, backhoes, underground boggers, bucket wheel reclaimers, and the like. Although the illustrated tool is a ground engaging tool 20, it should also be appreciated that the invention could apply to other types of tools, particularly those with replaceable wear parts, such as construction tools, manufacturing tools, processing tools, or the like.

[0082] The excavator 30 of FIG. 1 has a movable arm 40 including a boom 42 and stick 44. One or more sensors 50 are mounted on the movable arm 40, more particularly on the stick 44 of the movable arm, so as to have at least a portion of the ground engaging tool 20 in their field of view 52. Depending on angles of articulation of the movable arm, the ground engaging tool 20 may not always be within a field of view 52 of the sensor 50, but preferably the sensors are positioned and directed towards the ground engaging tool 20 in such a manner that the ground engaging tool 20 moves through their field of view 52 during usual working operations such as, for example, during a dumping operation.

[0083] The sensor 50 is in communication with a processor 60, which is preferably located on the excavator 30, even more preferably in the cab 70 of the excavator. The processor 60 could, however, be located remotely, with data from the sensor 50 being transmitted off-vehicle to a remote location. Alternatively, the processor 60 could be located on the excavator 30 with processed information, such as findings or alerts, being transmitted to a remote location for remote monitoring and assessment.

[0084] The sensor 50 is preferably configured to collect data representing a three dimensional model of the current state of the ground engaging tool 20 such as, for example, a point cloud, probability cloud, surface model or the like. In a preferred embodiment, the sensor 50 is a multi-layered, time-of-flight, scanning laser range finder sensor (such as, for example, a SICK LD MRS-8000 sensor). It should be appreciated, however, that alternate sensors that could be used include, but are not limited to, stereo vision systems (both in the visual spectrum or any other spectrum, such as infrared thermal cameras), structured lighting based three dimensional ranging systems (not time of flight), radar, ultrasonic sensors, and those that may infer structure based on passive or indirect means, such as detecting a radiation pattern produced or modified by the ground engaging tool 20 (e.g. MRI or passive acoustic analysis).

[0085] In preferred forms the sensor 50 is a single three dimensional sensor, but it should also be appreciated that one or more non-three dimensional sensors could be employed such as, for example, one or more two dimensional sensors, or three dimensional sensors with a limited field of view that create a complete three dimensional model over a short time interval. Examples of such configurations include, but are not limited to, monocular structure-from-motion sensing systems (including solutions based on event cameras), or a pair of two dimensional scanning laser range finders oriented at angles (preferably approximately 90°) to each other, so that one sensor obtains a motion estimate by tracking the ground engaging tool whilst the other collects time-varying two dimensional scans of the ground engaging tool that are then assembled into a three dimensional model via the motion estimate data.

[0086] The entire area of interest of a ground engaging tool 20 may not be captured by a single scan or frame of the sensor 50, but sensor data may be combined with motion estimates derived from the sensor data as the ground engaging tool 20 moves through the field of view 52 of the sensor 50 to generate the three dimensional model.

[0087] Depending on the sensor 50 location, there will likely be a band of distance ranging data that can be considered acceptable; data closer to, or further away from, the sensor 50 than this band can be safely discarded. Such data may come from dust, dirt, a dig-face or other items that are not relevant portions of the ground engaging tool 20. The processor 60 may, therefore, pre-process received sensor data by range-gating, which can significantly reduce the amount of data requiring comprehensive processing.
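The range-gating step described above can be sketched as follows. This is an illustrative fragment only; the function name, the NumPy point-cloud representation, and the gate limits are assumptions, not part of the specification:

```python
import numpy as np

def range_gate(points, min_range, max_range):
    """Discard points whose distance from the sensor falls outside the
    acceptable band (e.g. dust close to the lens, a dig-face far away).

    points: (N, 3) array of x, y, z coordinates in the sensor frame.
    """
    ranges = np.linalg.norm(points, axis=1)
    keep = (ranges >= min_range) & (ranges <= max_range)
    return points[keep]
```

Because the filter runs before any model building, it cheaply removes the bulk of irrelevant returns prior to comprehensive processing.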

[0088] Other sensor specific pre-processing steps may also be performed. For example, for a SICK LD MRS-8000 sensor 50, multiple scans can be interlaced to present data with a wider vertical field of view for analysis. Similarly, noise rejection based on point clustering can be undertaken for this specific sensor. Other pre-processing steps may be utilised depending on the type, and in some cases even brand, of the sensor 50 being employed.

[0089] If relevant portions of the ground engaging tool 20 to be monitored by the system 10 are not continuously visible, then a determination of when to start and stop collecting data from the sensor 50 for analysis by the processor 60 should be made. The start point of relevant data collection may be referred to as a ‘trigger’ point or event. The trigger point may be identified by examining each frame or scan of sensor data and determining the ratio of points that are in a location where the ground engaging tool 20 is expected to be present to those where the ground engaging tool 20 is expected to be absent. This ratio may be compared against a pre-determined threshold value, preferably based on known geometry of the ground engaging tool 20 or a state machine.

[0090] For example, in an application where a bucket of a shovel is being considered, a measurement across the ground engaging tool 20 from one side of the bucket to the other could be used to very simply split the sensor data into areas where a lot of data is expected (e.g. where wear members in the form of teeth are located) and areas where less data is expected (e.g. where wear members in the form of shrouds are located). The ratio of these points could be determined through a relatively simple division operation, or through a more complex routine such as, for example, a Fuzzy-Logic ‘AND’ operation via an algebraic product. This value can then be compared against a threshold derived from the number of teeth, the number of shrouds, and the expected field of view of the sensor in a frame where the ground engaging tool 20 is visible.
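The simple ratio-based trigger described above might be sketched as follows, assuming each return is summarised by its lateral co-ordinate across the bucket and that the dense (teeth) and sparse (shroud) bands are known from the tool geometry. All names and band values are illustrative assumptions:

```python
import numpy as np

def trigger_reached(points_y, tooth_bands, shroud_bands, threshold):
    """Count returns falling in lateral bands where teeth (dense data)
    and shrouds (sparse data) are expected, then compare the ratio
    against a pre-determined threshold to detect the trigger point.

    points_y: 1-D array of lateral coordinates of sensor returns.
    tooth_bands / shroud_bands: lists of (lo, hi) intervals.
    """
    def count_in(bands):
        return sum(int(np.sum((points_y >= lo) & (points_y < hi)))
                   for lo, hi in bands)

    teeth = count_in(tooth_bands)
    shrouds = count_in(shroud_bands)
    if shrouds == 0:
        return teeth > 0  # avoid division by zero when no shroud returns
    return (teeth / shrouds) >= threshold
```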

[0091] A state machine, on the other hand, may comprise the following states: wear members not visible, wear members partially visible, wear members fully visible, wear members partially beyond the field of view of the sensor, and/or wear members fully outside the field of view of the sensor. State detection may be based on heuristics that identify the conditions for spatial distribution of three dimensional points corresponding to each state. The estimating may also be supplemented by a rejection mechanism that rejects data indicating that the wear members may still be obstructed, such as by being engaged in a dig face or obscured by material that is identified to not be of interest. This rejection mechanism may check for empty data around the known tool dimensions. The rejection mechanism may also check for the approximate shape (for example, planar, spherical, ellipsoidal) of the tool via examination of the results of a principal components analysis of three dimensional points.
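The five states above could be classified heuristically, for instance from the fraction of expected tool points currently inside the field of view plus a flag for whether the tool is entering or leaving. The heuristic, the 0.95 cut-off and all names below are illustrative assumptions, not the specified detection logic:

```python
from enum import Enum, auto

class ToolState(Enum):
    NOT_VISIBLE = auto()
    PARTIALLY_VISIBLE = auto()
    FULLY_VISIBLE = auto()
    PARTIALLY_BEYOND = auto()
    FULLY_OUTSIDE = auto()

def classify_state(frac_in_view, entering):
    """Heuristic state classification: frac_in_view is the fraction of
    expected tool points inside the sensor field of view; entering says
    whether the tool is moving into (True) or out of (False) view."""
    if frac_in_view == 0.0:
        return ToolState.NOT_VISIBLE if entering else ToolState.FULLY_OUTSIDE
    if frac_in_view < 0.95:
        return (ToolState.PARTIALLY_VISIBLE if entering
                else ToolState.PARTIALLY_BEYOND)
    return ToolState.FULLY_VISIBLE
```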

[0092] FIG. 2 illustrates example sensor 50 data 100 from a single scanning laser range finder scan frame of a ground engaging tool 20 portion of a shovel (not shown) once the trigger point has been reached. The data 100 includes clearly identifiable wear members in the form of teeth 110 and shrouds 120. Although not readily apparent from FIG. 2, each point contains range information, relative to the sensor 50, such that there is sufficient data to generate a three dimensional representation of the ground engaging tool 20.

[0093] To improve reliability, a buffer (preferably circular) of sensor data prior to determination of the trigger point is stored and subsequent analysis may be performed on scans in the buffer, unless a particular scan is discarded for lacking integrity (e.g. insufficient data points, inability to track key features used for three dimensional model creation, etc.).
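The circular buffer with a per-scan integrity check might be sketched like this; the class, capacity and minimum-point check are illustrative (a real integrity check could also reject scans where key features cannot be tracked):

```python
from collections import deque

class ScanBuffer:
    """Circular buffer of recent scans. Analysis runs on the buffered
    scans once the trigger point is detected, skipping any scan that
    fails a simple integrity check (here: too few points)."""

    def __init__(self, capacity, min_points):
        self.scans = deque(maxlen=capacity)  # oldest scan drops off automatically
        self.min_points = min_points

    def push(self, scan):
        self.scans.append(scan)

    def usable_scans(self):
        return [s for s in self.scans if len(s) >= self.min_points]
```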

[0094] Whilst not essential, multiple sets of sensor 50 data are preferably combined over relatively short time intervals to create a more effective three dimensional representation of the ground engaging tool 20. For most sensor 50 locations, sensing modalities, and applications of this technology, it can be expected that the ground engaging tool 20 is not likely to be visible all the time and that the data from the sensor will be subject to variances in quality due to, for example, signal noise, temporary occlusions (such as, for example, dust or material being excavated) and the current weather conditions (such as, for example, fog or rain). Accordingly, it is desirable to combine multiple sets of data from the sensor 50 over a predetermined time interval to create a better representation of the ground engaging tool 20 than provided by a single set of data. In a preferred implementation sensor data over a single dump motion is used. This is determined by the size of the buffer and trigger event, and the ability of motion tracking processing to retain a motion tracking ‘lock’ on the ground engaging tool 20. If no new data is received over a predetermined period a processing event may be triggered.

[0095] To combine multiple sets of sensor data into a single model, a sensing modality appropriate three dimensional voxelised representation may be used. In a preferred form with a scanning laser range finder, a spherical frame based voxelisation that encodes a ray-tracing like description may be used. Multiple points from different scans that fall into the same voxel are merged into a single point via some appropriate statistical model (such as, for example, the median). The resolution of the model can be determined from the desired fidelity of the wear measurement or loss output and the capabilities of the sensor 50 in use. Alternatively, data may be combined directly to a two dimensional gridded representation, with similar statistical merging of data from multiple scans. A statistical model, such as a cumulative mean, may be employed. The values could also be improved via other statistical models, such as the application of a univariate Kalman filter.
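The spherical voxelisation with median merging could be sketched as below: points are converted to spherical co-ordinates, bucketed into (azimuth, elevation, range) voxels, and each voxel's points are replaced by their median. The function name and resolution parameters are assumptions for illustration:

```python
import numpy as np

def voxel_median_merge(points, az_res, el_res, r_res):
    """Merge points from multiple scans into one model via a spherical
    voxel grid; points sharing a voxel collapse to their median."""
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    # integer voxel key per point
    keys = np.stack([np.floor(az / az_res),
                     np.floor(el / el_res),
                     np.floor(r / r_res)], axis=1).astype(int)
    merged = {}
    for k, p in zip(map(tuple, keys), points):
        merged.setdefault(k, []).append(p)
    return np.array([np.median(ps, axis=0) for ps in merged.values()])
```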

[0096] FIG. 3 illustrates an example three dimensional representation 200 of a ground engaging tool 20 created by combining multiple sets of three dimensional sensor data measured over a single dump motion in a spherical co-ordinate voxelisation. The representation 200 includes clearly identifiable wear members in the form of teeth 210 and shrouds 220. Once such a three dimensional representation 200 has been generated it may be compared with a previously generated three dimensional representation. The current three dimensional representation 200 is also preferably stored so that it can be used as a previously generated three dimensional representation in future such comparisons.

[0097] Over time multiple three dimensional representations of the ground engaging tool 20 are collected during operation. Depending on sensor 50 mounting arrangements, these may be collected in a common frame by virtue of the relative arrangement of the sensor 50 and ground engaging tool 20. Otherwise, they may be in different spatial reference frames or otherwise not readily co-located for comparison. In such cases the collected three dimensional representations 200 of the ground engaging tool 20 are preferably transformed to be co-located in a single reference frame to ensure the processor 60 can perform an accurate comparison.

[0098] A variety of approaches could be used to align multiple three dimensional representations 200 to be co-located in a common frame. A preferred approach is to use a reference three dimensional representation 200 and to align all other three dimensional representations 200 to that reference. This reference three dimensional representation may simply be the first representation generated during or after commissioning or any other representation generated at any stage of the process, providing that it is used in a consistent manner.

[0099] Alignment is preferably performed using an Iterative Closest Point (ICP) process. The ICP process preferably has constraints with respect to expected degrees of freedom. For example, a hydraulic face shovel bucket can only translate in two dimensions and rotate about a single axis relative to a sensor 50 mounted on a stick 44. Another example of a suitable alignment algorithm would be the computation of a homographical transformation matrix based on matching keypoints between the reference representation and intermediate representations in a two dimensional image space, supplemented with an appropriate colour normalisation step for range alignment or rotation about the unconstrained axis.
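The ICP alignment can be illustrated with a bare-bones, unconstrained sketch: brute-force nearest-neighbour matching followed by an SVD (Kabsch) rigid-transform solve, iterated. A production implementation would add the degree-of-freedom constraints mentioned above and an outlier-robust matching step; everything here is an illustrative assumption:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst,
    assuming row-for-row correspondences (Kabsch / SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Bare-bones Iterative Closest Point: match each source point to
    its nearest target point, solve the rigid transform, repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```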

[0100] Another example of a suitable alignment algorithm is a Normal Distributions Transform (NDT) process. A further option is to perform the alignment in two stages: firstly, applying a gross alignment mechanism, such as a rotation aligning a pre-determined coordinate system with the point cloud's principal axes as determined by principal component analysis (PCA), together with a translation based on centroids, boundaries, statistical or other geometric features; and secondly, applying a fine alignment mechanism, such as the ICP or NDT process.
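The gross alignment stage might be sketched as below, rotating one cloud's PCA axes onto the other's and matching centroids. Note this sketch ignores the sign ambiguity of PCA axes (a real implementation would resolve it, e.g. by testing candidate flips), and all names are illustrative:

```python
import numpy as np

def pca_gross_align(points, reference):
    """Gross alignment: rotate so the cloud's principal axes (via SVD
    of the centred points) line up with the reference cloud's principal
    axes, then translate centroids together. A fine alignment stage
    (ICP or NDT) would follow."""
    def axes(p):
        centred = p - p.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return vt  # rows are principal axes

    R = axes(reference).T @ axes(points)
    return (points - points.mean(axis=0)) @ R.T + reference.mean(axis=0)
```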

[0101] Once aligned (if necessary) the processor 60 compares a recently generated three dimensional representation with at least one previously generated three dimensional representation. The comparison is primarily to detect changes in the ground engaging tool 20 over a predetermined time period. A large change in the three dimensional ground engaging tool 20 representation 200 over a relatively short period of time such as, for example, between dump motions, is indicative of a ground engaging tool 20 loss or breakage event. A smaller change over a longer period of time is indicative of abrasive wear to the ground engaging tool 20.

[0102] In a preferred form, the three dimensional representations 200 are converted to a two dimensional range image using image processing techniques such as, for example, superimposing a Cartesian frame over the three dimensional model and projecting range measurements from the model onto a pair of the Cartesian axes and colouring each pixel by the range value according to the third, orthogonal axis. Further filtering operations, such as opening-closing or dilate-erode, can be applied to the image to fill holes due to occlusions or otherwise generally improve the quality of the image. Combining images over appropriate time periods to further reduce transient noise effects can also be used such as, for example, a moving average of images over a window of a few minutes or removing pixels that are only observed in a small number of images.
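The 3D-to-2D projection could be sketched as follows, assuming the model is projected onto the x-y plane with the z co-ordinate as the range value; when several points land in one pixel this sketch keeps the nearest (minimum z), which is one of several reasonable conventions. Names and limits are illustrative:

```python
import numpy as np

def to_range_image(points, x_lim, y_lim, shape):
    """Project a 3D model onto a 2D grid: each pixel takes the range
    value (z) of the points falling into it; empty pixels stay NaN."""
    img = np.full(shape, np.nan)
    xi = ((points[:, 0] - x_lim[0]) / (x_lim[1] - x_lim[0]) * shape[0]).astype(int)
    yi = ((points[:, 1] - y_lim[0]) / (y_lim[1] - y_lim[0]) * shape[1]).astype(int)
    ok = (xi >= 0) & (xi < shape[0]) & (yi >= 0) & (yi < shape[1])
    for i, j, z in zip(xi[ok], yi[ok], points[ok, 2]):
        img[i, j] = z if np.isnan(img[i, j]) else min(img[i, j], z)
    return img
```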

[0103] The processor 60 then compares images over varying time-bases. By performing an image subtraction operation, for example, differences between images are highlighted. These comparisons can be performed to detect wear and loss events over time periods appropriate to the item of interest. For example, by comparing the current image to the most recent previous image, large changes in the state of the ground engaging tool 20 are highlighted and are indicative of a ground engaging tool 20 loss or breakage event. By comparing an image created over a moderate moving average window such as, for example, a few dump events to a similar image taken the previous day or even earlier, the wear of the ground engaging tool 20 should be evident in both the depth (colour) of the difference image and as a change in the border of the ground engaging tool 20 features in the difference image.
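The image subtraction and short-time-base loss check might be sketched as below. The NaN handling (treating pixels missing from either image as zero difference) and the two threshold parameters are illustrative assumptions:

```python
import numpy as np

def difference_image(current, previous):
    """Highlight changes between two aligned range images; pixels that
    are NaN (no data) in either image contribute zero difference."""
    invalid = np.isnan(current) | np.isnan(previous)
    return np.where(invalid, 0.0, previous - current)

def loss_event(diff, depth_threshold, area_threshold):
    """Flag a loss/breakage event when enough pixels change by more
    than the depth threshold between consecutive images."""
    return int(np.sum(np.abs(diff) > depth_threshold)) >= area_threshold
```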

[0104] FIG. 4 illustrates a visual representation of an example comparison 300 of a three dimensional representation of a ground engaging tool with a previously generated three dimensional representation in two dimensional image format. The representation difference image 300 includes clearly identifiable wear members in the form of teeth 310 and shrouds 320. In this example the time period between representations being compared is relatively long showing both wear in the form of relatively small dark regions 330 around the perimeter of the wear members and tooth tip loss in the form of a relatively larger dark block 340 of one of the teeth 310. Wear in depth is also visible. FIG. 5 illustrates a thermal photographic image of the ground engaging tool 20 of FIG. 4 showing the tooth tip loss 340 highlighted in the difference image 300.

[0105] After comparison the processor 60 is configured to identify wear and/or loss of the ground engaging tool 20. The process for such identification may be selected to suit the ground engaging tool 20 being monitored and the type of determinations required. In a preferred form, where the determination is to identify wear and/or loss of at least a portion of the ground engaging tool 20 as illustrated, an image convolution process with an appropriately scaled and weighted kernel may be applied to the difference image. Alternatively, a pixel row counting algorithm may be applied to the difference image.

[0106] Application of the convolutional filter to the difference image is performed using a square kernel with linearly increasing weights from the border to the centre. The kernel size is chosen to be about the same size as that of the object that can reasonably be expected to be lost, or a little larger, when converted to pixels via scaling and resolution of the difference image. Examination of the magnitude of the result of the convolution operation is used to identify a loss or magnitude of wear and to locate the area of wear or loss. This magnitude is compared against a predetermined threshold or an adaptive threshold. An example predetermined threshold may be based on a fraction of the maximum possible result in the event of a large loss event. This can be tuned by hand to change the sensitivity of the detection. Example adaptive thresholds may be based upon comparing the value over time and looking for changes in value that would indicate a statistical outlier, or a machine learning approach whereby a threshold value is determined via operator feedback regarding the accuracy of detections.
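The kernel construction and convolution described above can be sketched as follows; the pyramid-profile kernel matches the "linearly increasing weights from border to centre" description, while the normalisation and valid-mode convolution are illustrative choices:

```python
import numpy as np

def pyramid_kernel(size):
    """Square kernel whose weights increase linearly from the border to
    the centre (a pyramid profile), normalised to sum to one."""
    i = np.arange(size)
    d = np.minimum(i, size - 1 - i)          # distance from nearest border
    k = np.minimum.outer(d, d) + 1.0
    return k / k.sum()

def convolve_response(diff, kernel):
    """Valid-mode 2D convolution; the peak response locates the most
    changed region of the difference image."""
    kh, kw = kernel.shape
    H, W = diff.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(diff[r:r + kh, c:c + kw] * kernel)
    return out
```

With a normalised kernel the response approaches 1.0 when a kernel-sized region has fully changed, which makes a "fraction of the maximum possible result" threshold straightforward to express.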

[0107] The image used for comparison is preferably divided into vertical regions corresponding to expected locations of teeth and shrouds based on a predetermined geometric model of the ground engaging tool 20. Such sectioning is preferably performed in an automated manner, for example via the use of an edge-detection algorithm and inspection for substantially vertical line features.

[0108] For each vertical region, starting from the edge of the image closest to the tooth tips and iterating row by row towards the base of the teeth, each contiguous line of pixel values in the difference image that indicates an absence in the more recent model by comparison to the earlier model is preferably counted. The number of missing rows for each region is compared against either a predetermined threshold or an adaptive threshold.

[0109] An example predetermined threshold may be based upon a difference in length between a fully worn ground engaging tool 20 element when mounted on a digging implement, and that of the digging implement or mounting hardware without the ground engaging tool 20 element, converted to pixels via scaling and resolution of the difference image. Example adaptive thresholds may be based upon comparing the value over time and looking for changes in value that would indicate a statistical outlier, or a learning approach whereby a threshold value is determined via operator feedback regarding the accuracy of detections.
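The per-region row counting and threshold test might be sketched as below, assuming an "absent" boolean mask derived from the difference image with row 0 at the tooth-tip edge; the region representation (name to column slice) is an illustrative assumption:

```python
import numpy as np

def missing_rows_per_region(absent_mask, region_slices):
    """For each vertical region (a column slice), iterate row by row
    from the tooth-tip edge (row 0) towards the tooth base and count
    contiguous rows that are entirely flagged absent."""
    counts = {}
    for name, cols in region_slices.items():
        n = 0
        for row in absent_mask[:, cols]:
            if row.all():
                n += 1
            else:
                break          # contiguous run from the tip has ended
        counts[name] = n
    return counts

def tooth_lost(counts, threshold_rows):
    """A region whose missing-row count meets the threshold indicates a
    lost or fully worn wear member."""
    return {name: n >= threshold_rows for name, n in counts.items()}
```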

[0110] Once wear or loss is detected the processor 60 outputs an indication of the wear or loss. This output may take a number of forms, but preferably an alert is linked to the output to provide a notification of the identified wear or loss event. The notification is preferably provided to at least an operator of the working equipment 30.

[0111] In addition to simply providing a notification of wear or loss, the processor 60 is preferably able to identify and output an indication of any other useful identified characteristics such as, for example, an identification of abnormal wear occurring (e.g. faster than expected wear of at least a portion of the ground engaging tool 20). This may be performed by comparing pixels or groups of pixels, including moving and overlapping windows of pixels, in difference images constructed over varying time-bases to some predetermined baseline results indicative of acceptable wear rates. The acceptable wear rates may be either for the entire ground engaging tool 20 or for specific portions of the ground engaging tool 20 such as, for example one or more specific wear members.

[0112] Further useful notifications may be generated from a multi-variate and spatial correlation with data from other systems. For example, wear rate may be correlated to one or more of the type of material being excavated, the time of day, the operator of the machine, etc. Such notifications may be useful for interpretive analysis, such as providing an assistive input to a system for automatically detecting changes in the properties of the excavated material.

[0113] The output may include an alert. For example, an alert may be output when a ground engaging tool 20 loss event is detected. A user interface may be provided. The alert may be presented to an operator of the working equipment 30 on such a user interface. The output may also be provided to other systems of the working equipment including, for example, control systems of the working equipment 30. Examples of how alerts may be provided to the operator include, but are not limited to, one or more of an audio alert, a visual alert, and a haptic feedback alert. The alerts preferably distinguish between wear and loss events. The output also preferably informs an operator which portion of a ground engaging tool 20 is identified as being lost or worn. Such information is preferably also available via an application programming interface and/or digital output for consumption by other systems, such as a control system or remote operations.

[0114] A vehicle identification system may be provided, preferably a Radio Frequency Identification (RFID) based system, a Global Positioning System (GPS) based system, or a fleet management system (FMS) which may use a mix of methods such as, for example, a truck-based FMS. With such a vehicle identification system an associated vehicle, such as a haulage vehicle receiving material from the working equipment 30, can be identified. When a ground engaging tool 20 loss event occurs the vehicle identification system may be utilised to identify a vehicle that most likely contains the lost portion of the ground engaging tool 20.

[0115] The processor 60 may also be configured to provide historical tracking. Such historical tracking may allow an operator to view difference image information, three dimensional models, data or images from the sensors 50 themselves (if applicable, depending on the sensing modality) and/or images from adjacent sensors (such as a thermal camera) to assist the operator in identifying the current state of the ground engaging tool 20 and/or historical state changes. Such historical tracking may be utilised to review a loss event, whereby manual historical review could be used to supplement any delays in detection by the system. For example, a lower false alarm rate may be achieved by increasing an averaging window and comparison periods, at the expense of a possibly larger delay between a ground engaging tool 20 loss event actually occurring and being identified by the system.
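The trade-off described above between false alarm rate and detection delay can be sketched as a windowed-average detector. This is an illustrative assumption of one possible approach, not the disclosed implementation; the function name, signal values and threshold are hypothetical:

```python
# Hedged sketch of the averaging-window trade-off of paragraph [0115]:
# a larger window suppresses spurious spikes (fewer false alarms) at the
# cost of a longer delay before a genuine loss event is flagged.

def detect_loss(diff_signal, window, threshold):
    """Return the first index at which the windowed average of the
    per-scan difference signal exceeds the loss threshold, or None."""
    for i in range(window - 1, len(diff_signal)):
        avg = sum(diff_signal[i - window + 1 : i + 1]) / window
        if avg > threshold:
            return i
    return None

# Toy signal: a spurious sensor spike at t=2, then a real loss at t=6.
signal = [0.0, 0.1, 5.0, 0.1, 0.0, 0.2, 6.0, 6.1, 6.2, 6.0]

print(detect_loss(signal, window=1, threshold=3.0))  # fires on the spike (false alarm)
print(detect_loss(signal, window=3, threshold=3.0))  # ignores the spike, fires after the real loss
```

With a window of 1 the detector alarms on the transient spike; widening the window to 3 rejects it but reports the genuine loss a few scans later, matching the delay described above.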

[0116] The processor may also be configured to transmit data. The data is preferably transmitted to a remote location. Such data may include one or more of three dimensional representations of the ground engaging tool 20, difference image information, and alerts. Such data is preferably sent from the working equipment 30 to a remote server or cloud environment for additional processing, analysis, tracking and/or reporting. The remote server or cloud environment may supplement a local processor of the working equipment or even carry out some of the processing instead of a local processor on the working equipment. Examples of some metrics which could be derived from such information include wear rates and ground engaging tool 20 life estimation. Such information could be supplemented with information from other sources such as, for example, specific dig energy and economic constraints or variables, ground engaging tool part prices to allow for recommendations on ground engaging tool 20 change-out periods, or tied-in as an input to other systems such as, for example, automated ground engaging tool 20 ordering systems. The processor may be configured to receive an input from an operator of the working equipment indicating that a wear or loss event has occurred. Upon receiving such an input, analysis may be conducted and/or a notification sent remotely. The notification may be used to alert maintenance personnel to attend to the working equipment. Such an ‘on demand’ approach may mean the tool of the working equipment can have larger maintenance inspection intervals.

[0117] In addition to wear or loss being determined, the processor may be configured to determine when a wear member is replaced by looking for positive, rather than negative, changes in the difference image. Such information can be used to determine wear part life and/or replacement patterns. An analysis of the difference image may also be utilised to recognise the shape of the wear part. This may be used to identify the wear part in use to determine operator preferences and/or activities. A suitability analysis may be conducted in which wear and/or loss characteristics of identified wear members can be determined. Recommendations of specific replacement wear members can be provided after such a suitability analysis determination.
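A minimal sketch of distinguishing negative changes (wear or loss) from positive changes (wear member replacement) in a difference image might look as follows. The net-sum test, threshold and labels are illustrative assumptions rather than the patented method:

```python
# Hedged sketch of paragraph [0117]: classifying the sign of the change
# between two scans of the same tool region. Height values are toy data.

def classify_change(prev_heights, curr_heights, threshold=1.0):
    """Compare two flattened range images; positive net change suggests
    a wear member was replaced, negative suggests wear or loss."""
    net = sum(c - p for p, c in zip(prev_heights, curr_heights))
    if net > threshold:
        return "replaced"      # material added: new wear member fitted
    if net < -threshold:
        return "worn_or_lost"  # material removed
    return "unchanged"

worn = classify_change([10.0, 10.0, 10.0], [9.0, 8.5, 9.2])
new_part = classify_change([6.0, 6.5, 6.2], [10.0, 10.0, 10.0])
print(worn, new_part)
```

Logging each "replaced" classification over time would give the replacement-pattern and wear-part-life data the paragraph describes.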

[0118] FIG. 6 illustrates a diagrammatic representation of an example of a wear member monitoring system having sensors 500, a processor 600 and an output 700. The processor 600 is configured to: receive data relating to the ground engaging tool 20 from the one or more sensors at step 610; generate a three dimensional representation of at least a portion of the ground engaging tool using the received data at step 620; compare the generated three dimensional representation with a previously generated three dimensional representation 630 at step 640; identify one or more of wear and loss of at least a portion of the ground engaging tool using the comparison of the generated three dimensional representation with the previously generated three dimensional representation at step 650; and, when wear or loss of at least a portion of the ground engaging tool is identified, output 700 an indication of that wear or loss at step 660.
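The steps of FIG. 6 can be sketched as a single monitoring cycle. The sensor reader, model builder and loss threshold below are hypothetical placeholders standing in for the disclosed components, not the implementation itself:

```python
# Hedged sketch of the FIG. 6 processing loop (steps 610-660).
# A "model" here is simplified to a flat list of surface heights.

def monitoring_cycle(read_sensors, build_model, previous_model, notify,
                     loss_threshold=-2.0):
    data = read_sensors()                       # step 610: receive sensor data
    model = build_model(data)                   # step 620: generate 3D representation
    diff = [c - p for c, p in zip(model, previous_model)]  # step 640: compare with prior model (630)
    loss = any(d < loss_threshold for d in diff)           # step 650: identify wear/loss
    if loss:
        notify(diff)                            # step 660: output an indication
    return model, loss

# Toy usage: one tooth drops from height 10.0 to 7.0 between scans.
alerts = []
_, loss_detected = monitoring_cycle(
    read_sensors=lambda: None,
    build_model=lambda _data: [7.0, 9.0, 9.0],
    previous_model=[10.0, 9.0, 9.0],
    notify=alerts.append,
)
print(loss_detected)
```

Running the cycle periodically, feeding each returned model back in as the previous model, mirrors the repeated comparison the figure describes.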

[0119] Advantageously, the invention provides a monitoring system 10, and associated method, for identifying lost and worn wear members of a ground engaging tool 20. This can increase productivity as digging with damaged ground engaging tools 20, such as those having worn or detached wear members, is inherently less effective. Furthermore, identifying when a ground engaging tool 20 has a loss event allows for quick recovery of the loss avoiding other potential problems on a worksite such as damage to downstream equipment.

[0120] The monitoring system 10 also allows for a preventative maintenance regime such that wear members of a ground engaging tool 20 can be monitored and replaced when they reach a predetermined worn state in order to avoid unscheduled downtime.

[0121] Preferably, a three dimensional representation of the ground engaging tool 20 can be created and registered in a consistent frame of reference. Advantageously, the algorithm can be independent of the sensing modality used to create the representation. Noise can be reduced in the three dimensional representation by combining multiple models over short time intervals. Two three dimensional representations collected at different points in time can be compared via a relatively computationally modest subtraction operation (such as, for example, range image subtraction) to highlight differences in the state of the ground engaging tool 20 over the period between collection of the respective sets of data. Repeating this over varying time scales can be used to detect different scales of wear and to obtain different levels of responsiveness to gross changes (such as, for example, a loss event).
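The range image subtraction referred to above can be illustrated with toy data. The 2×2 "range images" and plain per-pixel subtraction below are simplifying assumptions made for illustration:

```python
# Hedged sketch of the subtraction operation of paragraph [0121]:
# per-pixel differencing of two registered range images. Comparing over
# a short interval reveals gradual wear (small negative differences),
# while a loss event appears as a large localised drop.

def subtract(img_a, img_b):
    """Per-pixel difference (img_b - img_a) of two equal-size range images."""
    return [[b - a for a, b in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

hour_ago = [[10.0, 10.00], [10.0, 10.00]]
now_worn = [[ 9.5,  9.75], [ 9.5,  9.75]]  # slow wear: small negative diffs
now_lost = [[ 9.5,  0.00], [ 9.5,  9.75]]  # one tooth missing: one large drop

print(subtract(hour_ago, now_worn))
print(subtract(hour_ago, now_lost))
```

Applying the same subtraction across longer baselines (e.g. shift-to-shift rather than scan-to-scan) is what allows the different scales of wear mentioned above to be resolved.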

[0122] It should be appreciated that the system and method can be applied to any ground engaging tool 20 such as, for example, monitoring wear parts on the buckets of backhoes, face shovels, wheel loaders and bucket wheel excavators, and on drilling rigs.

[0123] Depending on implementation, minimal, if any, prior knowledge of the ground engaging tool 20 geometry is required. Output can be in a relatively simple format (such as, for example, a range image of differences) that can be interrogated via standard image processing techniques to obtain a large amount of knowledge of the state of the ground engaging tool 20 compared to, for example, comparatively rudimentary linear measurements of tooth length. The output can also be readily combined with other data sources to significantly increase the utility of the measurement for deeper insights into the ground engaging tool 20. The system and method are reliable, having low, and typically easily tuneable, false alarm rates. The output can also be in a format readily suitable for operator alert and off-board monitoring and/or dashboarding.

[0124] In this specification, adjectives such as first and second, left and right, top and bottom, and the like may be used solely to distinguish one element or action from another element or action without necessarily requiring or implying any actual such relationship or order. Where the context permits, reference to an integer or a component or step (or the like) is not to be interpreted as being limited to only one of that integer, component, or step, but rather could be one or more of that integer, component, or step etc.

[0125] The above description of various embodiments of the present invention is provided for purposes of description to one of ordinary skill in the related art. It is not intended to be exhaustive or to limit the invention to a single disclosed embodiment. As mentioned above, numerous alternatives and variations to the present invention will be apparent to those skilled in the art in view of the above teaching. Accordingly, while some alternative embodiments have been discussed specifically, other embodiments will be apparent or relatively easily developed by those of ordinary skill in the art. The invention is intended to embrace all alternatives, modifications, and variations of the present invention that have been discussed herein, and other embodiments that fall within the spirit and scope of the above described invention.

[0126] In this specification, the terms ‘comprises’, ‘comprising’, ‘includes’, ‘including’, or similar terms are intended to mean a non-exclusive inclusion, such that a method, system or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.