What is the problem with artificial intelligence? It is neither artificial nor intelligent.
Evgeny Morozov is a writer and scholar who has been critical of the hype surrounding artificial intelligence (AI). He argues that the term “artificial intelligence” is misleading because it suggests that machines possess a kind of human-like intelligence, which is not actually the case.
Morozov points out that much of what we call “AI” is actually a collection of algorithms and statistical models that are programmed to perform specific tasks. These algorithms are not capable of true intelligence or understanding in the way that humans are.
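This point can be made concrete with a toy sketch: a one-nearest-neighbour classifier labels new inputs purely by their similarity to stored examples, with no notion of what the labels mean. Everything below (the feature vectors, the labels, the function name) is invented for illustration, not drawn from Morozov's argument.

```python
# A toy 1-nearest-neighbour "classifier": pure pattern-matching.
# It knows nothing about sentiment; it only measures distance to
# previously seen (features, label) pairs. Features here are made-up
# 2-D vectors, e.g. (exclamation-mark count, word count).

def nearest_neighbour(train, x):
    """Return the label of the stored example closest to x."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, best_label = min(train, key=lambda pair: dist(pair[0], x))
    return best_label

train = [((3, 5), "positive"), ((0, 12), "negative"), ((2, 4), "positive")]

print(nearest_neighbour(train, (1, 11)))  # closest to (0, 12) -> "negative"
```

The system produces plausible-looking answers, yet the entire "intelligence" is a distance calculation over examples supplied by its human creators, which is exactly the kind of statistical machinery Morozov has in mind.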
Furthermore, Morozov argues that the term “artificial” is also misleading, because it implies that these systems are somehow separate from human influence. In reality, AI is created and shaped by human programmers, and reflects the biases and assumptions of its creators.
Overall, Morozov’s critique highlights the need for a more nuanced understanding of what AI is and what it can do. While AI has the potential to be a powerful tool, it is important to recognize its limitations and the ways in which it is shaped by human values and biases.
The call for a moratorium on the development of AI systems by Elon Musk and Steve Wozniak is part of a broader debate about the potential risks and benefits of AI. While AI has the potential to transform many areas of society, there are concerns that it could also have unintended consequences, such as the displacement of jobs, the exacerbation of economic inequality, and the development of AI systems, such as ChatGPT, that are biased or unreliable.
The signatories argue that a temporary pause in the development of AI systems would provide an opportunity to address these concerns and put in place safeguards to ensure that AI is developed in a responsible and ethical manner. They suggest that rigorous safety protocols, including auditing and testing of AI systems, are needed to ensure that they are safe and reliable.
It is worth noting that the call for a moratorium on AI development is not without controversy. Some argue that such a pause would stifle innovation and delay the potential benefits of AI. Others argue that the risks of AI can be addressed through other means, such as regulation and oversight.
Overall, the debate about AI and its potential risks and benefits is ongoing, and it is likely that we will continue to see calls for a cautious and responsible approach to AI development in the years to come.
Machines can know about the past, present, and future, but they are unable to feel them.
While there is ongoing debate about the term “artificial intelligence” and its limitations, retiring the label altogether may not be the most practical solution. The term has become widely used and recognized in both academia and industry, and it is unlikely that it will disappear from public discourse anytime soon.
Instead, it may be more productive to have a more nuanced discussion about what the term “artificial intelligence” actually means and what its limitations are. This could involve exploring alternative terms, such as “machine learning” or “algorithmic decision-making”, that may be more accurate and less misleading.
At the same time, it is important to recognize that the term “artificial intelligence” has helped to popularize and stimulate interest in this field, and that there are many valuable and important applications of AI. While we should be mindful of the limitations and potential risks of AI, we should also continue to support research and development in this field, and work to ensure that it is developed in a responsible and ethical manner.
It is true that the early funding and development of AI was heavily influenced by military imperatives, and this has had a lasting impact on how we understand and approach the field. The emphasis on pattern-matching and other capabilities that could be used in military contexts has shaped the development of AI and its applications in many different domains.
However, it is important to recognize that AI has evolved and expanded beyond its military origins, and that there are many important and valuable applications of AI that are not related to warfare or national security. For example, AI is increasingly being used in healthcare, finance, transportation, and other industries to improve efficiency, productivity, and quality of life.
At the same time, it is important to remain mindful of the potential risks and unintended consequences of AI, and to ensure that it is developed and used in a responsible and ethical manner. This includes addressing issues such as bias and discrimination in AI systems, ensuring that AI is developed with transparency and accountability, and fostering public dialogue and engagement about the implications of AI for society as a whole.
This makes a valid point. Intelligence is not just about pattern-matching; it also involves the ability to draw generalizations, to make connections, and to understand the broader context of a situation.
In the case of Duchamp’s Fountain, we see an example of how a change in perspective and context can transform our understanding of an object. Duchamp’s work challenges traditional notions of what art is and what it can be, and it demonstrates the power of creative thinking and the ability to draw new and unexpected connections.
This kind of generalization is also important in the development of AI. While pattern-matching is a valuable and important capability, AI systems must also be able to understand context, draw connections between different data points, and make generalizations that go beyond the specific examples they have been trained on. This is important for ensuring that AI is able to adapt and generalize to new situations, and to address complex problems that may not fit neatly into predefined categories.
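The gap between pattern-matching and genuine generalization can be illustrated with a minimal sketch: a straight-line model fitted (by ordinary least squares) to quadratic data on a narrow range tracks that range reasonably well, but fails badly far outside it. The data, ranges, and function names below are all synthetic assumptions chosen for illustration.

```python
# Fit a straight line by ordinary least squares to points sampled from
# y = x^2 on the narrow range x in [0, 2], then query the fitted line
# far outside that range. Synthetic data, for illustration only.

xs = [i / 10 for i in range(21)]          # training inputs: 0.0 .. 2.0
ys = [x * x for x in xs]                  # true relationship: y = x^2

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """The model's 'generalization': the learned pattern extended blindly."""
    return slope * x + intercept

print(predict(1.0))    # inside the training range: a fair approximation of 1.0
print(predict(10.0))   # far outside: roughly 19.4, nowhere near the true 100.0
```

The model has matched the local pattern perfectly well, yet it has learned nothing about the underlying relationship, which is why adapting to genuinely new situations requires more than fitting the examples seen so far.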
Ultimately, the development of AI requires a range of different capabilities, including pattern-matching, generalization, creativity, and the ability to understand and navigate complex systems. By recognizing and embracing these different aspects of intelligence, we can help to ensure that AI is developed in a responsible and effective manner, and that it benefits society as a whole.
You make some insightful points about the limitations of AI in terms of its ability to understand and engage with the emotional and contextual dimensions of human intelligence. While AI systems are capable of powerful pattern-matching and prediction, they lack the capacity for creativity, emotion, and the kind of bi-logic that is central to human thinking.
As you note, this has significant implications for how we think about the role of AI in society. If we continue to view AI as a replacement for human intelligence, we risk overlooking the essential qualities that make us human and the ways in which our thinking and decision-making are shaped by emotion, context, and experience.
Instead, it is important to approach AI as a tool that can augment human intelligence, rather than replace it. By recognizing the limitations of AI and leveraging its strengths in combination with human intelligence, we can develop more effective and responsible applications of this technology.
Moreover, as you suggest, it is important to be cautious about how we use language to describe AI, and to avoid misleading labels such as “artificial intelligence” that may obscure the true nature of this technology. By adopting more accurate and nuanced language, we can help to promote a more informed and critical dialogue about the role of AI in society, and ensure that we are leveraging this technology in ways that are aligned with our values and goals.