THE STUDY OF PUBLIC CONCERN REGARDING THE USE OF ARTIFICIAL INTELLIGENCE IN PUBLIC SURVEILLANCE AND SECURITY: A CASE STUDY OF THE BTS AND MRT MASS TRANSIT SYSTEMS
Abstract
This study aims to (1) assess the level of public concern regarding the use of Artificial Intelligence (AI) for surveillance and security in the BTS and MRT mass transit systems, (2) analyze the factors influencing public concern about AI surveillance, (3) evaluate public opinions on the use of AI for analyzing and processing passenger behavior in mass transit systems, and (4) propose guidelines for developing AI-based surveillance and security systems that align with public needs and privacy expectations. This quantitative study employed a survey as the primary data collection tool. The sample consisted of 500 BTS and MRT passengers selected through stratified sampling. Data were analyzed using descriptive statistics (means and standard deviations), together with correlation analysis and hypothesis testing, to examine the factors influencing public concern.
The findings indicate a high level of public concern, particularly regarding privacy and data security, with mean concern scores ranging from 3.8 to 4.2. Perceived privacy risk was positively correlated with concern (r = 0.72, p < 0.01), whereas perceived AI benefits (r = -0.45, p = 0.05) and transparency in data management (r = -0.60, p = 0.02) were negatively correlated with concern. In addition, 85% of respondents agreed that agencies should transparently disclose their data management and privacy protection policies. The study suggests that AI-based surveillance measures should align with public expectations by emphasizing communication of AI benefits, transparent data management, and adequate privacy safeguards in order to build public trust in AI surveillance and security systems.
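The reported r values are Pearson correlation coefficients between survey scales. As a minimal illustration only (not the study's actual analysis code), the sketch below computes Pearson's r from hypothetical 5-point Likert responses; the variable names and data are invented for demonstration.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Sum of cross-deviations (numerator) and deviation norms (denominator)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point Likert responses from six respondents
perceived_risk = [4, 5, 3, 4, 5, 2]   # perceived privacy risk
concern        = [4, 5, 3, 3, 5, 2]   # level of concern

r = pearson_r(perceived_risk, concern)
print(round(r, 2))
```

In practice a study of this kind would use a statistical package (e.g. `scipy.stats.pearsonr`, which also returns a p-value for significance testing) rather than a hand-rolled function; the sketch only makes the computation behind an r value concrete.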