How a Machine Learns and Fails – A Grammar of Error for Artificial Intelligence

Authors

  • Matteo Pasquinelli, University of Arts and Design of Karlsruhe
  • Emilio Cafassi, Facultad de Ciencias Sociales, Universidad de Buenos Aires (UBA)
  • Carolina Monti, CONICET-UNLP
  • Hernán Peckaitis, FSOC-UBA-e-TCS-Umai
  • Graciana Zarauza, CONICET-e-TCS-Umai-UNLP

DOI:

https://doi.org/10.24215/23143924e054

Keywords:

artificial intelligence, machine learning, algorithmic bias, statistical error, training data

Abstract

Working at the convergence of the humanities and computer science, this text aims to outline a general grammar of machine learning and to provide a systematic overview of its limits, approximations, biases, errors, fallacies and vulnerabilities. The conventional term Artificial Intelligence is retained even though, technically speaking, it would be more accurate to speak of machine learning or computational statistics; those terms, however, would hold little appeal for companies, universities and the art market. The text reviews the limitations that affect AI as a mathematical and cultural technique, highlighting the role of error in the definition of intelligence in general. Machine learning is described as consisting of three parts: training dataset, statistical algorithm and model application (as classification or prediction), and three types of bias are distinguished: world bias, data bias and algorithmic bias. It is argued that the logical limits of statistical models produce or amplify bias (which is often already present in the training datasets) and cause errors of classification and prediction. Moreover, the degree of information compression performed by the statistical models used in machine learning entails a loss of information that translates into a loss of social and cultural diversity. In short, the main effect of machine learning on society as a whole is cultural and social normalization. A degree of mythologization and social bias surrounds its mathematical constructs: Artificial Intelligence has inaugurated the era of statistical science fiction.
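
The three-part grammar named in the abstract (training dataset, statistical algorithm, model application) can be made concrete with a minimal sketch. The following Python example is purely illustrative and not taken from the article: the toy dataset, the labels and the nearest-centroid classifier are invented assumptions. It shows how a skewed training set is compressed into a statistical model (here, one centroid per class, a deliberate loss of information) whose predictions then reproduce that skew.

```python
# Minimal sketch of the three-part grammar of machine learning:
# (1) training dataset, (2) statistical algorithm, (3) model application.
# The data are hypothetical and deliberately skewed, to show how a bias
# in the training set is compressed into the model and reappears at
# prediction time.

# (1) Training dataset: feature vectors with labels. Class "b" is
# under-represented, a stand-in for a world/data bias.
training_data = [
    ((1.0, 1.2), "a"), ((0.9, 1.1), "a"), ((1.1, 0.8), "a"),
    ((1.2, 1.0), "a"), ((0.8, 0.9), "a"),
    ((3.0, 3.1), "b"),  # only one example of class "b"
]

# (2) Statistical algorithm: compress each class into a single centroid.
# This is the information loss the abstract refers to: all intra-class
# diversity disappears into one average point.
def fit(dataset):
    sums, counts = {}, {}
    for (x, y), label in dataset:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

# (3) Model application: classify a new point by its nearest centroid.
def predict(model, point):
    def dist2(centroid):
        return (point[0] - centroid[0]) ** 2 + (point[1] - centroid[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

model = fit(training_data)
# A borderline point falls to the well-sampled class: with a single
# example, the model's picture of "b" is too thin to claim it.
print(predict(model, (2.0, 2.0)))
```

Running the sketch prints "a": the under-represented class loses the borderline case, a toy version of the normalization effect the abstract describes.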

Author Biography

Matteo Pasquinelli, University of Arts and Design of Karlsruhe

Professor of media philosophy at the University of Arts and Design of Karlsruhe (Germany), where he coordinates KIM, the research group on artificial intelligence and media philosophy. His research focuses on the intersection of the cognitive sciences, the digital economy and artificial intelligence. He edited the anthology Alleys of Your Mind: Augmented Intelligence and Its Traumas (Meson Press) and, with Vladan Joler, the visual essay "The Nooscope Manifested: AI as Instrument of Knowledge Extractivism" (nooscope.ai). He is currently about to publish a monograph on the history of AI titled The Eye of the Master: A Labour Theory of Artificial Intelligence.

References

Bates, D.W. (2002). Enlightenment Aberrations: Error and Revolution in France. Ithaca, NY: Cornell University Press.

Bolukbasi, T. et al. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. arXiv preprint. Available at: arxiv.org/abs/1607.06520

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Crawford, K. (December 2017). The Trouble with Bias. Keynote lecture at the Annual Conference on Neural Information Processing Systems (NIPS). Video available at: https://www.youtube.com/watch?v=fMym_BKWQzk

Davison, J. (June 2018). No, Machine Learning is not just glorified Statistics. Medium. Retrieved March 21, 2019, from https://towardsdatascience.com/no-machine-learning-is-not-just-glorified-statistics-26d3952234e3

Deleuze, G. and Guattari, F. (1983). Anti-Oedipus. Minneapolis: University of Minnesota Press.

Eco, U. (1986). Semiotics and the Philosophy of Language. Bloomington: Indiana University Press.

Eco, U. (2000). Apocalypse Postponed. Bloomington, IN: Indiana University Press.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.

Foucault, M. (2005). The Order of Things: An Archaeology of the Human Sciences (2nd reprinted ed.). London: Routledge.

Gitelman, L. (ed.) (2013). Raw Data is an Oxymoron. Cambridge, MA: MIT Press.

Guattari, F. (2013). Schizoanalytic Cartographies. London: Continuum.

Illusory Correlation. (March 2019). In Wikipedia: http://en.wikipedia.org/wiki/Illusory_correlation [Accessed March 21, 2019]

Ingold, D. and Soper, S. (April 2016). Amazon Doesn’t Consider the Race of Its Customers. Should It? Bloomberg. Retrieved March 21, 2019, from: www.bloomberg.com/graphics/2016-amazon-same-day

Katz, Y. (2017). Manufacturing an artificial intelligence revolution (SSRN Scholarly Paper ID 3078224). Social Science Research Network. http://dx.doi.org/10.2139/ssrn.3078224

LeCun, Y. et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1 (4), pp. 541–551. https://doi.org/10.1162/neco.1989.1.4.541

LeCun, Y. (July 2018). Learning World Models: the Next Step towards AI. Keynote lecture at the International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden.

Leibniz, G. W. (1951). Preface to the General Science (1677). In P. P. Wiener (Ed.), Leibniz: Selections. New York: Scribner.

Li, G. et al. (2017). Understanding error propagation in deep learning neural network (DNN) accelerators and applications. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. ACM.

Lipton, Z.C. (2016). The Mythos of Model Interpretability. arXiv preprint. Available at: https://arxiv.org/abs/1606.03490

McQuillan, D. (June 2018). Manifesto on Algorithmic Humanitarianism. Presented at the symposium Reimagining Digital Humanitarianism, Goldsmiths, University of London. Available at: https://osf.io/preprints/socarxiv/ypd2s/download

McQuillan, D. (2018). People’s Councils for Ethical Machine Learning. Social Media + Society, 4(2). https://doi.org/10.1177/2056305118768303

Moretti, F. (2013). Distant Reading. London: Verso Books.

Murgia, M. (April 2019). Who’s using your face? The ugly truth about facial recognition. Financial Times. Available at: https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

O’Neil, C. (2016). Weapons of Math Destruction. New York: Broadway Books.

Pasquinelli, M. (2015). Anomaly Detection: The Mathematization of the Abnormal in the Metadata Society. Paper presented at Transmediale. Available at: www.academia.edu/10369819 [Accessed March 21, 2019].

Pasquinelli, M. (2017). Arcana Mathematica Imperii: The Evolution of Western Computational Norms. In M. Hlavajova et al. (Eds.), Former West. Cambridge, MA: MIT Press, pp. 281–293.

Vincent, J. (October 2018). Christie’s sells its first AI portrait for $432,500, beating estimates of $10,000. The Verge. Retrieved March 21, 2019, from: www.theverge.com/2018/10/25/18023266

Virilio, P. (1994). The Vision Machine. London/Bloomington: British Film Institute/Indiana University Press.

Vogl, J. (2007). Becoming Media: Galileo’s Telescope. Grey Room, 29, pp. 14–25. https://doi.org/10.1162/grey.2007.1.29.14

Published

2022-07-13

How to Cite

Pasquinelli, M., Cafassi, E., Monti, C., Peckaitis, H., & Zarauza, G. (2022). How a Machine Learns and Fails – A Grammar of Error for Artificial Intelligence. Hipertextos, 10(17), 13–29. https://doi.org/10.24215/23143924e054

Section

Traducciones