Empirical model for analysis of the dynamics of algorithmization (artificial intelligence technology) in the field of security by the example of the USA
https://doi.org/10.31249/poln/2021.03.04
Abstract
How does the state security system evolve under the influence of artificial intelligence technology? To answer this question, we propose an empirical model. The model evaluates the state security system (using the USA as an example) through a security consistency parameter, which estimates how the state perceives threats (an indicator of threats) and whether the state has the capabilities needed to counter them (an indicator of capabilities) with respect to artificial intelligence technology. The model (together with the conceptualization of artificial intelligence technology in the context of the security domain) provides evidence of how security transformations occur. It serves as a tool for studying the corresponding changes and for assessing the state security system. A limitation of the study should be noted: we do not consider direct military applications of automation and algorithms (artificial intelligence technology). The empirical model is validated using the case of the USA (eight time intervals are analyzed: 1999, 2002, 2006, 2010, 2012, 2015, 2017, 2019). As the technology develops, the state's "interest" and its definition of threats grow, and the capabilities of artificial intelligence technology expand rapidly (coinciding with the years of greatest progress in computing power and the introduction of new algorithms); since 2012 the dynamic has been linear, as new "discoveries" have contributed to an evolutionary rather than "revolutionary" growth trajectory. The developed model is scalable. This feature may be useful in empirical security studies: within the model, artificial intelligence technology can be replaced with other types of digital technologies (for example, big data, cloud computing, or 5G connectivity), so that empirical models of security consistency under the impact of other technologies can be developed. The proposed approach also makes it possible to undertake cross-country comparisons with respect to specific types of digital technologies and their interactions with the security domain.
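The abstract does not publish the scoring formula behind the security consistency parameter, so the following Python sketch is purely illustrative: the `security_consistency` function, the normalization of both indicators to [0, 1], and all numeric inputs are assumptions introduced here to show one way the threat/capability comparison could be operationalized across the eight time intervals.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the indicator scales, the consistency
# function, and the numeric inputs below are assumptions, not the
# authors' published measurements.

@dataclass
class Observation:
    year: int
    threats: float        # indicator of threats, assumed normalized to [0, 1]
    capabilities: float   # indicator of capabilities, assumed normalized to [0, 1]

def security_consistency(obs: Observation) -> float:
    """One possible reading of 'consistency': how closely perceived
    threats are matched by capabilities to counter them (1.0 = perfect match)."""
    return 1.0 - abs(obs.threats - obs.capabilities)

# Fabricated inputs for the eight intervals analyzed in the paper,
# shaped to mimic the evolutionary trajectory the abstract describes.
series = [Observation(y, t, c) for y, t, c in [
    (1999, 0.10, 0.05), (2002, 0.20, 0.10), (2006, 0.30, 0.25),
    (2010, 0.40, 0.35), (2012, 0.50, 0.50), (2015, 0.65, 0.60),
    (2017, 0.75, 0.75), (2019, 0.85, 0.80),
]]

for obs in series:
    print(obs.year, round(security_consistency(obs), 2))
```

Under these assumptions, a value of 1.0 means perceived threats are fully matched by counter-capabilities; substituting indicators for another digital technology (big data, cloud computing, 5G) would reuse the same structure, which is the scalability the abstract claims.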
About the Authors
A. V. Turobov
Russian Federation
Moscow
M. G. Mironyuk
Russian Federation
Moscow