Contaminant Detection Sensitivity Improves

Metal detectors on food processing lines are calibrated according to the size of the food product and its moisture content

Food industry specialist Phil Brown explains the challenges of detecting metal contaminants in dairy products

Because the food industry relies heavily on automated processing machinery, maintenance tools and mixing equipment, metal remains the most prevalent contamination risk. Metal detectors are used to reduce this risk, but several factors affect the chance of successful detection.

A typical example is equipment calibrated to detect a stainless-steel sphere measuring 2mm in diameter. While it may identify and reject this contaminant, the machine may fail to detect a stainless-steel wire that is slightly smaller in diameter but longer than 2mm, because elongated shapes introduce orientation effects.
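
As a rough illustration of why this happens, the Python sketch below compares the metal volume of the 2mm calibration sphere with that of a hypothetical 1.5mm-diameter, 10mm-long wire; the wire's dimensions are illustrative assumptions, not figures from this article.

```python
import math

def sphere_volume(diameter_mm: float) -> float:
    """Volume of a sphere in cubic millimetres."""
    r = diameter_mm / 2
    return (4 / 3) * math.pi * r ** 3

def wire_volume(diameter_mm: float, length_mm: float) -> float:
    """Volume of a cylindrical wire in cubic millimetres."""
    r = diameter_mm / 2
    return math.pi * r ** 2 * length_mm

# Calibration sphere from the example above: 2mm stainless steel.
sphere = sphere_volume(2.0)

# Hypothetical wire: thinner than the sphere, but much longer.
wire = wire_volume(1.5, 10.0)

print(f"2.0mm sphere: {sphere:.1f} mm^3")     # ~4.2 mm^3
print(f"1.5mm x 10mm wire: {wire:.1f} mm^3")  # ~17.7 mm^3

# The wire carries roughly four times more metal than the sphere, yet
# when it travels end-on, aligned with the conveyor, its signal can
# still fall below the reject threshold: the orientation effect.
```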

It can be easier to detect stainless-steel and non-ferrous wires when they pass through the aperture sideways or upright, rather than in alignment with the conveyor. This is due to the magnetic permeability of the metal, which is much lower for stainless steel than for ferrous metals.

In this respect, reducing the aperture size relative to the product size is a simple and effective way to increase metal detector sensitivity. Sensitivity is expressed as the smallest detectable spherical contaminant travelling through the geometric centre of the aperture. The centre of the aperture is always the least sensitive position, so if a contaminant is detectable in this position, it will be detected even more easily closer to the aperture walls.
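
A minimal sketch of this relationship follows, assuming, purely for illustration since the article gives no formula, that the smallest detectable sphere at the aperture centre scales linearly with aperture height; the reference values are invented.

```python
def centre_sensitivity_mm(aperture_height_mm: float,
                          reference_height_mm: float = 125.0,
                          reference_sphere_mm: float = 2.0) -> float:
    """Estimate the smallest detectable sphere at the aperture centre,
    scaled linearly from an assumed reference calibration."""
    return reference_sphere_mm * (aperture_height_mm / reference_height_mm)

# In this toy model, shrinking the aperture towards the product size
# shrinks the detectable sphere size in proportion.
for height_mm in (125.0, 100.0, 75.0):
    print(f"{height_mm:5.1f}mm aperture -> "
          f"~{centre_sensitivity_mm(height_mm):.1f}mm sphere at centre")
```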

Sensitivity Signatures

Dairy product applications are typically wet and conductive, which presents an additional challenge for metal detectors. Cheese, for example, with its high moisture content combined with salt, can be highly conductive and produce a signal as if metal were present.

To identify a metal contaminant within such conductive products, a metal detector must remove or reduce this product effect, which is what single-pass calibration is meant to do. However, the detector's underlying operating frequency determines how effectively a calibration can eliminate the product effect. With single-frequency metal detectors running 'wet' products, there is often a trade-off between ferrous and stainless-steel performance depending on the selected frequency: typically, higher frequencies improve stainless-steel detection relative to ferrous detection. The best approach is to find a frequency that balances the lowest product effect against detection of the target contaminants.
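
The toy model below illustrates that trade-off. The scoring curves are invented stand-ins, not manufacturer data: stainless-steel sensitivity is assumed to rise with frequency while ferrous sensitivity and product-effect rejection fall.

```python
import math

def stainless_score(f_khz: float) -> float:
    return math.sqrt(f_khz / 800)  # assumed: improves with frequency

def ferrous_score(f_khz: float) -> float:
    return 1 - f_khz / 1000        # assumed: degrades with frequency

def product_effect(f_khz: float) -> float:
    return f_khz / 1000            # assumed: 'wet' products react more at high f

def balance(f_khz: float) -> float:
    """Net score: reward detection of both metal types, penalise product effect."""
    return stainless_score(f_khz) + ferrous_score(f_khz) - product_effect(f_khz)

candidate_frequencies_khz = [50, 100, 300, 500, 800]
best = max(candidate_frequencies_khz, key=balance)
print(f"Balanced single frequency in this toy model: {best} kHz")
```

In this invented example the balance lands at a low-to-mid frequency; on a real line the optimum depends on the product, the coil design and the target metals.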

Using simultaneous multi-frequency technology is the most reliable way to remove product effect without compromising the sensitivity of a metal detector.

Testing Detectors

The standard technique for measuring the sensitivity of metal detectors in food inspection is to use metal test spheres. Yet metal contaminants typically enter the production line as flat flakes, shards, swarf or thin wires rather than globular shapes. So why test using spheres?

The main rationale is that spheres provide machinery suppliers and food processors with a comparative sensitivity control. A sphere does not exhibit an orientation effect and will always produce the same signal when passed through the same position in a metal detector's aperture.

The food metal detection industry follows general sphere-size guidelines. For example, a wet block of cheese approximately 75mm high currently carries sphere-size parameters of 2.0mm for ferrous metals, 2.5mm for non-ferrous metals and 3.5mm for stainless steel. However, these levels are not one-size-fits-all, as the product effect can vary greatly between, for example, different types of cheese.
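
Captured as a simple lookup, using only the guideline figures quoted above for that 75mm cheese block:

```python
# Guideline test-sphere sizes from the article for a wet cheese block
# approximately 75mm high. These are general industry guidelines, not
# fixed limits: product effect varies between cheese types.
TEST_SPHERE_MM = {
    "ferrous": 2.0,
    "non-ferrous": 2.5,
    "stainless steel": 3.5,
}

for metal, sphere_mm in TEST_SPHERE_MM.items():
    print(f"{metal}: {sphere_mm}mm test sphere")
```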

Jonathan Newell