Device for measuring a slaughter animal body object

10420350 · 2019-09-24

Abstract

A device for measuring a slaughter animal body object includes an image camera having an image-camera recording region, a depth camera having a depth-camera recording region, and an evaluation unit. The cameras are positioned in relation to one another by a positioning device in such a manner that the camera recording regions of the image camera and of the depth camera overlap in a common camera recording region at least in certain sections. The evaluation unit identifies measurement points in the common camera recording region and determines the distances thereof from one another.

Claims

1. A device for measuring a slaughter animal body object, the device comprising: an image camera having an image-camera recording range for optically recording a section of a surface of a slaughter animal body object and for recording light intensity values (g) of image points and area coordinates (x, y) of the image points, the light intensity values (g) and the area coordinates (x, y) being provided as light intensity value data on the surface of the slaughter animal body object for transfer purposes; an evaluator connected to said image camera, said evaluator registering the light intensity value data provided by the image camera and identifying measurement points on the surface of the slaughter animal body object from the light intensity value data; a depth camera having a depth-camera recording range for optically recording the section of the surface of the slaughter animal body object and for recording space coordinates (x, y, z) of image points within the depth-camera recording range, the space coordinates (x, y, z) including area coordinates (x, y) and a depth value (z), and the space coordinates (x, y, z) being provided as space coordinate data for transfer purposes; a positioner for positioning said depth camera relative to said image camera, said positioner positioning said image camera and said depth camera in relation to one another to have the depth-camera recording range and the image-camera recording range overlap, at least in certain sections, in a common camera recording range; said depth camera being connected with said evaluator, said evaluator registering the space coordinates (x, y, z), said evaluator assigning the light intensity value data (x, y, g) and the space coordinate data (x, y, z) to one another on the basis of matching area coordinates (x, y), the assigned light intensity value data and space coordinate data being provided as four-item data tuples, the four-item data tuples including the area coordinates (x, y), the depth value (z) and the light intensity value (g), said evaluator identifying, based on computational detection and selection of different tissue sections using the differences in light intensity value, a first measurement point and a second measurement point on the surface of the slaughter animal body object from the light intensity value data of image points provided by the image camera, and said evaluator determining, on the basis of the space coordinate data (x, y, z) of a first four-item data tuple of a first image point of the first measurement point and on the basis of the space coordinate data (x, y, z) of a second four-item data tuple of a second image point of the second measurement point, a spatial distance between the first measurement point and the second measurement point.

2. The device according to claim 1, wherein the slaughter animal body object is a slaughter animal body half which has a cutting side, the section of the surface is the surface of the cutting side, and the surface of the cutting side of the slaughter animal body half is optically captured both by the image camera recording range and the depth camera recording range.

3. The device according to claim 1, wherein said image camera is a chromaticity camera, the light intensity values are registered separately according to color channels, and the light intensity values are provided separately according to color channels in the light intensity value data.

4. The device according to claim 1, wherein the depth value from the individually determined space coordinates (x, y, z) is used for identifying a measurement point at the section of the surface of the slaughter animal body object.

5. The device according to claim 1, wherein the depth value from the individually determined space coordinates (x, y, z) of a plurality of points on the section of the surface of the slaughter animal body object is used for determining a model ideal surface, and the spatial distance of identified measurement points is determined on the basis of the depth value (z) that corresponds to a model ideal surface shape.

6. The device according to claim 1, wherein the depth camera is a TOF (time-of-flight) camera.

Description

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

(1) In the following, the invention is explained in more detail by way of an exemplary embodiment with reference to the figures, in which:

(2) FIG. 1 is a schematic drawing of the device; and

(3) FIG. 2 is a schematic drawing with an ideal surface.

DESCRIPTION OF THE INVENTION

(4) This embodiment is a device for measuring a slaughter animal body object in the form of a slaughter animal body half 1.

(5) A device according to the invention for measuring a slaughter animal body half 1 comprises an image camera 2 and a depth camera 4.

(6) The image camera 2 is an RGB camera and has an image-recording range with a recording angle .sub.RGB.

(7) Within the image recording range, a cutting-side surface of the slaughter animal body half 1, here illustrated by the plane axis g.sub.SKH of the cutting-side surface, can be at least partially recorded by the image camera 2.

(8) Within the image recording range, the image camera 2 can additionally record light intensity values (g) of image points and their area coordinates (x, y) on the cutting-side surface of the slaughter animal body half 1.

(9) The recorded light intensity values and area coordinates are combined into light intensity value data (x, y, g) and provided for transfer purposes by the image camera 2.

(10) According to the invention, the light intensity value data are transferred to an evaluation unit 3 which is connected to the image camera 2 and registers and further processes the transferred light intensity value data.

(11) The depth camera 4 provided according to the invention is designed as a TOF (time-of-flight) camera and has a depth-camera recording range with a recording angle .sub.D.

(12) Within the depth camera recording range, the cutting-side surface of the slaughter animal body half 1 can also be at least partially recorded.

(13) The depth camera 4 can simultaneously record space coordinates of image points on the cutting-side surface of the slaughter animal body half 1, and the space coordinates always consist of the area coordinates (x, y) and a depth value (z).

(14) The space coordinates are provided by the depth camera 4 as space coordinate data (x, y, z) and are also transferred to the evaluation unit 3 which is also connected to the depth camera 4.

(15) In the invention, the image camera 2 and the depth camera 4 are positioned relative to each other by a positioning device 5 in such a way that the image camera recording range and the depth camera recording range overlap at least in certain sections in a common recording range that is as large as possible.

(16) The evaluation unit 3 of the present invention is capable of using the light intensity value data of the image camera 2 for identifying and defining discrete measurement points P.sub.1, P.sub.2 on the cutting-side surface of the slaughter animal body half 1.

(17) In this way, object detection of defined areas on the cutting-side surface of the slaughter animal body half 1 becomes possible: for example, image points with high light intensity values are assigned to fat tissue segments and image points with low light intensity values are assigned to meat tissue segments. On the basis of the differing light intensity values, a concrete differentiation between high-intensity and low-intensity image points, and thus between fat and meat tissue sections, can then be carried out automatically.

(18) The measurement points P.sub.1 and P.sub.2 are subsequently determined on the basis of this information so that they mark, for example, the outer edges of a fat tissue section.
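The intensity-based tissue differentiation and edge marking described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the 8-bit intensity range and the threshold value 128 are assumptions of this example.

```python
# Illustrative sketch (not from the patent): classify image points into fat
# and meat tissue by a simple intensity threshold, then mark the outer edges
# of the fat section in one scan line as measurement points.

def classify_tissue(points, threshold=128):
    """points: iterable of (x, y, g) light intensity value data.
    Returns a dict mapping 'fat'/'meat' to lists of (x, y) coordinates."""
    tissue = {"fat": [], "meat": []}
    for x, y, g in points:
        # High light intensity -> fat tissue; low -> meat tissue.
        tissue["fat" if g >= threshold else "meat"].append((x, y))
    return tissue

def fat_edges(row_points, threshold=128):
    """row_points: (x, g) pairs along one scan line. Returns the x positions
    of the outer edges of the fat section, or None if no fat is present."""
    xs = [x for x, g in row_points if g >= threshold]
    return (min(xs), max(xs)) if xs else None

edges = fat_edges([(0, 40), (1, 200), (2, 210), (3, 30)])
# edges == (1, 2): the fat section spans image columns 1 to 2
```

The two edge positions correspond to the role of the measurement points P.sub.1 and P.sub.2 in the embodiment.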

(19) Furthermore, in the invention the evaluation unit 3 can assign the light intensity value data provided by the image camera 2 and the space coordinate data provided by the depth camera 4 to each other via matching area coordinates and thus determine the appropriate depth value for each measurement point P.sub.1, P.sub.2.

(20) In addition, the evaluation unit 3 makes it possible to combine the light intensity value data and the space coordinates into data tuples, so that one data tuple can always be assigned to each measurement point P.sub.1, P.sub.2; it is particularly advantageous that the spatial distance between the measurement points can then be determined on the basis of their data tuples.
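The assignment of intensity and depth data via matching area coordinates, and the distance determination from the resulting tuples, can be sketched as follows. This is a minimal illustration; the exact-match dictionary join and the sample values are assumptions of this example.

```python
import math

# Illustrative sketch (not the patent's implementation): join the image
# camera's light intensity value data (x, y, g) with the depth camera's
# space coordinate data (x, y, z) on matching area coordinates, yielding
# four-item data tuples (x, y, z, g), then compute a spatial distance.

def merge_to_tuples(intensity_data, depth_data):
    """Return {(x, y): (x, y, z, g)} for all points present in both sets."""
    depth_by_xy = {(x, y): z for x, y, z in depth_data}
    return {(x, y): (x, y, depth_by_xy[(x, y)], g)
            for x, y, g in intensity_data if (x, y) in depth_by_xy}

def spatial_distance(t1, t2):
    """Euclidean distance between two four-item tuples (x, y, z, g);
    the light intensity value g is not used for the distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1[:3], t2[:3])))

tuples = merge_to_tuples([(0, 0, 210), (3, 4, 55)],
                         [(0, 0, 0.0), (3, 4, 12.0)])
d = spatial_distance(tuples[(0, 0)], tuples[(3, 4)])
# d == 13.0 (a 3-4-12 box diagonal: sqrt(9 + 16 + 144) = 13)
```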

(21) Thus, a particular technical advantage is provided by the measurement of the cutting-side surface of the slaughter animal body half 1 and by an object detection of relevant areas in the surface, for which the otherwise usual two-dimensional area information is complemented by the depth value to enable a three-dimensional object detection on the cutting-side surface of the slaughter animal body half.

(22) In an inventive embodiment, the image camera 2 and the depth camera 4 are positioned in relation to each other in such a way that the corresponding recording ranges of the cameras overlap in a common recording range, at least in certain sections.

(23) The image points are recorded in the common recording range in real time, which means that no or only a slight relative movement of the slaughter animal body half 1 relative to the device occurs between the recording of the specific image point by the image camera 2 and the recording of the same image point by the depth camera 4.

(24) The image camera 2 and the depth camera 4 are positioned in the device according to the invention such that the measurement standard n.sub.RGB of the image camera and the measurement standard n.sub.D of the depth camera are as parallel to each other as possible, and the distance d between the cameras is chosen such that a sufficiently large common recording range is provided.
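Under the assumption of exactly parallel camera axes, the size of the common recording range can be estimated as follows. This is an illustrative sketch only; the working distance, the baseline d and the half-angles are invented example values, since the patent does not state the recording angles numerically.

```python
import math

# Illustrative sketch (assumed parameters, not from the patent): with both
# camera axes parallel and separated by a baseline d, the common recording
# range at working distance z is the intersection of the two fields of view.

def common_range(z, d, half_angle_rgb, half_angle_d):
    """Return (left, right) limits of the common recording range at depth z.
    Camera axes intersect x = 0 (image camera) and x = d (depth camera)."""
    rgb = (-z * math.tan(half_angle_rgb), z * math.tan(half_angle_rgb))
    dep = (d - z * math.tan(half_angle_d), d + z * math.tan(half_angle_d))
    left, right = max(rgb[0], dep[0]), min(rgb[1], dep[1])
    return (left, right) if left < right else None

# At z = 2 m, baseline d = 0.2 m, both half-angles 30 degrees:
r = common_range(2.0, 0.2, math.radians(30), math.radians(30))
# r spans roughly from -0.955 m to 1.155 m
```

The sketch also shows why a small baseline d keeps the common recording range nearly as large as each individual field of view.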

(25) During the measurement procedure, the slaughter animal body half 1 is passed along the device on a movement axis g.sub.t by a transport unit, here designed as a tube track (not shown).

(26) A particular advantage of including the individual depth values is that it is not necessary to align the slaughter animal body half 1 precisely relative to the device during the measurement.

(27) In fact, it is sufficient if the cutting-side surface of the slaughter animal body half 1 faces the image camera 2 and the depth camera 4 so that the relevant measurement points P.sub.1, P.sub.2 can be clearly identified and a sufficiently high resolution of image points is provided.

(28) Compared with solutions known so far, the device according to this invention therefore offers the technological advantages that a very exact measurement of the cutting-side surface of the slaughter animal body half 1 and an exact object detection of relevant surface areas, such as fat, meat or bone tissue, can be carried out automatically and that, simultaneously, by including the depth values, possible measurement irregularities caused by an imprecise positioning of the slaughter animal body half 1 or by an existing unevenness of the cutting-side surface of the slaughter animal body half 1 can be compensated.

(29) FIG. 2 shows a further, particularly advantageous embodiment of the invention; for simplicity, the positioning device and the movement axis of the slaughter animal body half 1 are not illustrated again.

(30) The slaughter animal body half 1 shown in FIG. 2 has a real surface shape that deviates from a model ideal shape, which in this embodiment is assumed to be a plane. The deviation of the real surface shape is illustrated by the position of the first measurement point P.sub.1: the determined measurement point P.sub.1 does not lie on the model ideal shape of the cutting-side surface, illustrated by the plane axis g.sub.SKH in FIG. 2.

(31) Due to this deviation of the determined measurement point P.sub.1 from the idealized cutting-side plane, a distance between the measurement points computed on the basis of the real surface shape would deviate accordingly.

(32) In order to reduce the inaccuracy caused by the measurement points deviating from the ideal plane, the further embodiment shown in FIG. 2 is designed such that several representative auxiliary points, here H.sub.1 to H.sub.3, are determined on the cutting-side surface of the slaughter animal body half 1 in a first step. In a next step, an idealized cutting-side plane, illustrated by the straight line g.sub.SKH, is defined on the basis of these auxiliary points H.sub.1 to H.sub.3.

(33) Afterwards, the deviating measurement point P.sub.1 is projected onto the idealized cutting-side plane, yielding the projected measurement point P.sub.1. The projected measurement point is assigned the z-value of the idealized cutting-side plane at the corresponding area coordinates.

(34) A distance for further use in a line segment and/or area measurement can now be determined between the projected measurement point P.sub.1 and a further determined measurement point P.sub.2, thus achieving a higher accuracy.
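The construction of the idealized cutting-side plane from the auxiliary points and the projection of a deviating measurement point onto it can be sketched as follows. This is an illustrative example; the use of an orthogonal projection and the sample coordinates are assumptions of this sketch.

```python
# Illustrative sketch (assumptions: numeric 3-D coordinates, three
# non-collinear auxiliary points H1..H3): define the idealized plane
# through the auxiliary points and project a measurement point onto it.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def project_onto_plane(p, h1, h2, h3):
    """Orthogonally project point p onto the plane through h1, h2, h3."""
    n = cross(sub(h2, h1), sub(h3, h1))          # plane normal vector
    norm2 = n[0]**2 + n[1]**2 + n[2]**2
    # Signed offset of p from the plane along n, then shift p back by it.
    offset = sum(ni * wi for ni, wi in zip(n, sub(p, h1))) / norm2
    return tuple(pi - offset * ni for pi, ni in zip(p, n))

# Auxiliary points spanning the plane z = 0; P1 deviates by z = 2.
p1_proj = project_onto_plane((1.0, 1.0, 2.0),
                             (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# p1_proj == (1.0, 1.0, 0.0)
```

The projected point keeps its area coordinates and receives the z-value of the idealized plane, as described for the embodiment above.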

LIST OF REFERENCE NUMERALS

(35)
1 slaughter animal body half
2 image camera
3 evaluation unit
4 depth camera
n.sub.RGB measurement standard of image camera
n.sub.D measurement standard of depth camera
n.sub.C measurement standard of slaughter animal body half
g.sub.SKH plane axis of slaughter animal body half
g.sub.t movement axis of slaughter animal body half
g.sub.n projection axis of first measurement point
.sub.RGB recording angle of image camera
.sub.D recording angle of depth camera
.sub.C,RGB angle between slaughter animal body half and image camera
P.sub.1 first measurement point
P.sub.2 second measurement point
P.sub.1 projected first measurement point on ideal surface
H.sub.1 first auxiliary point
H.sub.2 second auxiliary point
H.sub.3 third auxiliary point