As cities build upwards to accommodate growing populations, the safety of deep excavation, the process of digging large foundation pits to anchor high-rise buildings, has become a significant challenge in the construction industry. These pits must withstand shifting earth, changes in groundwater pressure, and the loads imposed by heavy machinery, while remaining stable enough to protect workers and nearby structures. Failures at this stage can trigger collapses, flooding, or structural damage.
Work in the International Journal of Critical Infrastructures discusses an AI (artificial intelligence) system designed to improve safety monitoring at deep foundation pit support sites. The system aims to identify abnormal behaviour, such as unsafe actions, improper equipment use, or entry into restricted zones without protective gear, in close to real time so that warnings can be sounded promptly.
Construction sites have traditionally relied on manual supervision and earlier generations of automated monitoring. But these approaches often struggle to detect unsafe behaviour quickly and accurately. Many systems record high false acceptance rates, meaning they mistakenly classify dangerous actions as safe. Others process video feeds too slowly to intervene effectively in rapidly changing environments.
The new system combines several advanced AI techniques to address those weaknesses. It begins by extracting key frames from surveillance footage using the fractional Fourier transform, a mathematical method that generalises the conventional Fourier transform and can analyse a signal at intermediate points between the time and frequency domains. By identifying the most informative frames rather than scanning every second of video, the system reduces computational load while still retaining critical information.
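The paper does not publish its key-frame algorithm, but the general idea can be sketched as follows: score each frame by how much its spectrum differs from that of the previous frame, and keep the most novel ones. This toy version uses the ordinary 2-D FFT, which is the special case of the fractional Fourier transform at rotation angle π/2; the novelty criterion and all parameter choices here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def frame_spectrum(frame: np.ndarray) -> np.ndarray:
    """Normalised magnitude spectrum of one greyscale frame (plain 2-D FFT,
    the alpha = pi/2 special case of the fractional Fourier transform)."""
    spec = np.abs(np.fft.fft2(frame))
    return spec / (spec.sum() + 1e-12)   # normalise so frames are comparable

def select_key_frames(frames: np.ndarray, k: int) -> list:
    """Keep the k frames whose spectra differ most from their predecessor,
    returned in temporal order (a simple spectral-novelty criterion)."""
    specs = [frame_spectrum(f) for f in frames]
    novelty = [0.0] + [np.abs(specs[i] - specs[i - 1]).sum()
                       for i in range(1, len(frames))]
    ranked = sorted(range(len(frames)), key=lambda i: novelty[i], reverse=True)
    return sorted(ranked[:k])

# synthetic "footage": 30 noise frames with a content change at frame 15
rng = np.random.default_rng(0)
video = rng.random((30, 32, 32))
video[15:] += np.sin(np.linspace(0.0, 8.0, 32))   # scene change
keys = select_key_frames(video, k=5)
```

Only these five frames would then be passed to the heavier recognition stages, which is where the computational saving comes from.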
The system then uses a spatiotemporal graph convolutional network, a form of deep learning that analyses data across both space and time. The spatial analysis examines how workers and machinery are positioned relative to one another, while the temporal analysis tracks how movements change over time. Unlike conventional image-recognition models that treat frames in isolation, this approach captures sequences of actions and interactions, which is vital for working out what is happening moment to moment on the construction site.
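To make the two halves of that idea concrete, here is a minimal single-layer sketch: a spatial step that mixes the features of connected graph nodes (workers, machines) at each time step via a normalised adjacency matrix, followed by a temporal convolution along each node's feature sequence. The layer sizes, kernel, and adjacency are invented for illustration; the paper's actual architecture is not specified at this level of detail.

```python
import numpy as np

def normalise_adjacency(A: np.ndarray) -> np.ndarray:
    """Symmetrically normalised adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the usual graph-convolution propagation rule."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def st_gcn_layer(X, A, W_spatial, temporal_kernel):
    """One spatiotemporal graph-convolution layer.
    X: (T, N, C) features for N graph nodes over T time steps.
    Spatial step: mix features of connected nodes at each time step.
    Temporal step: convolve each node's features along the time axis."""
    A_norm = normalise_adjacency(A)
    H = np.maximum(0.0, np.einsum('ij,tjc,cd->tid', A_norm, X, W_spatial))
    K = len(temporal_kernel)
    pad = K // 2
    Hp = np.pad(H, ((pad, pad), (0, 0), (0, 0)))   # same-length padding in time
    return sum(temporal_kernel[k] * Hp[k:k + H.shape[0]] for k in range(K))

rng = np.random.default_rng(1)
T, N, C, D = 10, 4, 3, 8            # 10 time steps, 4 nodes, 3 -> 8 channels
A = np.array([[0, 1, 1, 0],          # who is near whom on site (hypothetical)
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.standard_normal((T, N, C))
out = st_gcn_layer(X, A, rng.standard_normal((C, D)), np.array([0.25, 0.5, 0.25]))
```

Stacking several such layers lets the network relate, say, a worker's trajectory to a nearby excavator's movement over the preceding seconds.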
The final step is to use a hybrid model that combines a convolutional neural network (CNN) with a so-called long short-term memory network (LSTM). The CNN can recognise visual features such as body posture or equipment shape. The LSTM can detect patterns in sequences of data. Working together, those two tools allow the system to determine not only what is happening in a single frame, but whether a series of movements constitutes a safety violation.
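The division of labour between the two components can be sketched in a toy end-to-end pass: a "CNN" stage reduces each frame to a small feature vector, and an LSTM cell carries a hidden state across frames so the final decision reflects the whole sequence. Everything here (kernel counts, hidden size, random weights, the single-output classifier) is a hypothetical stand-in for the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_features(frame, kernels):
    """Toy CNN stage: valid 2-D cross-correlation with each kernel,
    ReLU, then global average pooling -> one scalar per kernel."""
    kh, kw = kernels.shape[1:]
    H, W = frame.shape
    feats = []
    for k in kernels:
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * k)
        feats.append(np.maximum(out, 0.0).mean())
    return np.array(feats)

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    z = Wx @ x + Wh @ h + b
    H = h.size
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

F, Hdim = 4, 8                                   # CNN channels, LSTM hidden size
kernels = rng.standard_normal((F, 3, 3))
Wx = 0.1 * rng.standard_normal((4 * Hdim, F))
Wh = 0.1 * rng.standard_normal((4 * Hdim, Hdim))
b = np.zeros(4 * Hdim)
w_out = 0.1 * rng.standard_normal(Hdim)

video = rng.random((6, 16, 16))                  # 6 key frames of 16x16 footage
h, c = np.zeros(Hdim), np.zeros(Hdim)
for frame in video:
    h, c = lstm_step(conv_features(frame, kernels), h, c, Wx, Wh, b)
violation_prob = sigmoid(w_out @ h)              # probability sequence is unsafe
```

The point of the hybrid is visible in the loop: the CNN sees only one frame at a time, while the LSTM's hidden state is what lets a series of individually innocuous postures add up to a flagged violation.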
In tests on active deep excavation sites, the researchers report a minimum false acceptance rate of 2.43 per cent and a peak abnormal behaviour recognition accuracy of 99.12 per cent. Processing time was as low as 0.19 seconds per analysis cycle, allowing near real-time monitoring.
Qi, W. (2026) ‘An adaptive recognition of abnormal behaviour in deep excavation support construction site of high-rise buildings’, Int. J. Critical Infrastructures, Vol. 22, No. 7, pp.1–17.