With the onset of AI and other technological advances, the need to stop and consider whether we should do things is ever more apparent. This summer, I had the opportunity to virtually attend a panel discussion on AI and Wonder. One of the questions posed was: how do we reconcile the convenience of AI with our innate need to think critically and wonder? Is the cost of this technology the loss of our critical thinking?
James DeMarco, from Microsoft's AI department, gave a poignant response: "If everyone can answer the question from ChatGPT then we are asking the wrong questions." In other words, we are asking our students only for information, not insight. He argued that each technological advancement gives us the opportunity to ask harder questions and to create insight rather than merely gather information. These insights hold the key to resolving the social issues of the present and the future.
Nicol Turner Lee, a sociologist with the Brookings Institution, pointed out that "the structural reality [is] that we're not training humanities students as computer scientists nor are we training computer scientists with some type of social sciences or humanities background … technology is being built by people who don't necessarily have that same type of empathy and care" that comes from the humanities. Her point was that the developers monopolizing the conversation are focused on advancing the product and are not necessarily interested in broader concerns about how the software is used.
Their solution: the humanities need to be at the table for every conversation involving technology, specifically to ask what it can do and what we should do with it. To some, these conversations seem pointless and unnecessary. But when we look at technological advances, we are forced to reflect on how a boon for social progress can easily become a new tool for war or oppression. As a society, we must stop and reflect on AI and construct moral guardrails for its use; we must re-engage our students with the question of whether we SHOULD do something.
Philosophy and ethics are not esoteric conversations without application. The answers have very real consequences for human lives and for our interactions with the world around us. We cannot teach the technology without also teaching the moral framework. Agreed-upon definitions of a "good" agent and a "nefarious" one must be used to design the guardrails of the technology itself.
The panel served as a wonderful reminder of why I designed my philosophy classes to be interdisciplinary. When we look at art, we can trace the transition from two-dimensional to three-dimensional representation as it relates to Platonic versus Aristotelian metaphysics. When we read creation myths, we can examine how cosmologies and definitions of good and evil changed across cultures and time, and how those changes shaped social orders, moral principles, and legal systems. When students stop and genuinely reflect on human events and history as answers to a particular question, they engage with philosophical thought as it materialized in physical reality.