Griffith University researchers design AI video surveillance system to detect social distancing violations


Researchers at Griffith University have developed an AI video surveillance system that monitors social distancing violations at an airport without compromising privacy. Instead of storing sensitive footage on a central system, the team kept all image processing within a local network of cameras.

According to Professor Dian Tjondronegoro of Griffith Business School, data privacy is currently one of the most serious concerns with this technology, because the system must monitor people's activities at all times to be effective.

The case study was carried out at Gold Coast Airport, which before the COVID-19 outbreak had 6.5 million passengers per year, with nearly 17,000 passengers on site daily. The airport has hundreds of cameras covering 290,000 square meters with several stores and more than 40 check-in points.

The research team tested several cutting-edge algorithms, light enough for local computation, on nine cameras. The test was carried out in three related case studies: Automatic Crowd Counting, Automatic People Detection, and Social Distancing Violation Detection, to find the best balance between performance, accuracy, and reliability.

Their goal was to develop a system capable of real-time analysis with the ability to detect and notify airport personnel of social distancing violations.

Three cameras were used for the automatic detection of social distancing violations, covering the waiting area, the check-in area and the food court. Two people were tasked with comparing the live video feeds against the results of the AI analysis to check whether the people marked in red were actually in violation.
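The flagging step described above can be sketched in a few lines: given estimated ground-plane positions for each detected person, flag any pair standing closer than a distancing threshold. This is a minimal illustration, not the study's actual pipeline; the 1.5 m threshold and the input format are assumptions for the sketch.

```python
from itertools import combinations
from math import hypot

MIN_DISTANCE_M = 1.5  # assumed threshold; the study does not state one

def find_violations(positions, min_distance=MIN_DISTANCE_M):
    """positions: list of (x, y) ground coordinates in metres.
    Returns index pairs of people standing too close together."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(positions), 2):
        # Euclidean distance on the ground plane between the two people.
        if hypot(a[0] - b[0], a[1] - b[1]) < min_distance:
            violations.append((i, j))
    return violations

people = [(0.0, 0.0), (1.0, 0.5), (5.0, 5.0)]
print(find_violations(people))  # the first two people are ~1.12 m apart
```

In a deployed system, the same check would run per frame on positions produced by the people-detection model, and flagged pairs would be the ones rendered in red for human review.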

The researchers found that camera angle dramatically affects the AI's ability to detect and track people's movement in a public space, and they therefore recommend tilting cameras between 45 and 60 degrees.

According to Professor Tjondronegoro, the design of their AI-enabled system is flexible enough to allow humans to double-check the results, reducing data bias and improving the transparency of how the system works.

The system can be expanded in the future by adding new cameras, and it can be adapted to other purposes, such as preventing overcrowding or detecting security breaches in public places. The study demonstrates that responsible AI design can inform future developments of this technology.
