Case Studies

AI regulations and policies are continuing to evolve as people adapt to the use of AI. Let’s look at some real-life examples.

Education

For students and educators, generative AI’s capacity for writing, problem-solving, and research has upended the goals and evaluation methods of our education system. For instance, GPT-4 has generated college-level essays that earned passing grades in a range of Harvard courses with minimal prompting (Yglesias 2023). Educational institutions reacted with a range of policies and adaptations: first to protect the existing educational environment, and later to adapt to what generative AI can do.

In the first few months after ChatGPT was released, many schools and universities restricted its use. The two largest public school systems in the United States, New York City Public Schools and Los Angeles Public Schools, banned ChatGPT from all school work, declaring that any use of it counted as plagiarism (Singer 2023b). Many universities followed with similar policies. However, educators soon realized that most students continued to use generative AI for assignments despite the bans (Terry 2023; Roberts 2023). Furthermore, efforts to enforce the bans, such as running AI detection software or blocking AI tools on school networks, created disparities among students. Teachers noticed that AI detection software was biased against the writing of non-native English speakers (Roberts 2023), and children from wealthy families could still access AI through personal smartphones or computers (Singer 2023b).

With these lessons, some educational systems have started to embrace the role of AI in students’ lives and are developing various less restrictive policies. New York City Public Schools and Los Angeles Public Schools quietly rolled back their bans, as did many universities (Singer 2023b). Groups of educators have come together to publish guidelines and resources on how to teach with AI, such as the Mississippi AI Institute, MIT’s Daily-AI curriculum, and Gettysburg College’s Center for Creative Teaching and Learning.

Each educational institution and classroom is adapting to AI differently. The Mississippi AI Institute suggested some common questions to consider (Donahue 2023):

  • How are we inviting students to demonstrate their knowledge, and is writing the only (or the best) way to do that? For instance, some universities have encouraged the use of in-class assignments, handwritten papers, group work, and oral exams (Huang 2023).

  • What are our (new) assignment goals? And (how) might generative AI help or hinder students in reaching those goals? Some educators want to use AI to help students get over early brainstorming hurdles so that they can focus on deeper critical-thinking problems (Roberts 2023). Many educators have started to develop AI literacy and “critical computing” curricula to teach students how to use AI effectively and critically (Singer 2023a).

  • If we’re asking students to do something that AI can do with equal facility, is it still worth asking students to do? And if so, why? Educators will need to think about which aspects of their lesson goals may be automated in the future, and which critical and creative skills students still need to hone.

  • If we think students will use AI to circumvent learning, why would they want to do that? How can we create conditions that motivate students to learn for themselves? Educators have started to teach young students about the limits of AI creativity and the kinds of bias embedded in AI models, which has led students to think more critically about their use of AI (Singer 2023a).

  • What structural conditions would need to change in order for AI to empower, rather than threaten, teachers and learners? How can we create those conditions? Some teachers have started to actively learn how their students use AI, and are using AI to assist with writing their teaching curricula (Singer 2023b).

Healthcare

Health care is an example of an industry where the speed of technology development has led to gaps in regulation, and the US recently released an Executive Order calling for healthcare-specific AI policies.

The U.S. Food and Drug Administration (FDA) regulates AI-enabled medical devices and software used in disease prevention, diagnosis, and treatment. However, there are serious concerns about the adequacy of current regulation, and many other AI-enabled technologies that may have clinical applications fall outside the scope of FDA regulation (Habib and Gross 2023; American Medical Association 2023). Other federal agencies, such as the Department of Health and Human Services’ Office for Civil Rights, play important roles in overseeing some aspects of AI use in health care, but their authority is limited. Additionally, existing federal and state laws and regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), affect the use and development of AI. This patchwork of federal and state authority and existing laws has led the American Medical Association (AMA) to advocate for a “whole government” approach to implement a comprehensive set of policies ensuring that “the benefits of AI in health care are maximized while potential harms are minimized” (Industry News 2023).

The AMA and health care leaders have highlighted the importance of specialized expertise in the oversight and adoption of AI products in health care delivery and operations. For example, Dr. Nigam Shah and colleagues call for the medical community to take the lead in defining how LLMs are trained and developed:

By not asking how the intended medical use can shape the training of LLMs and the chatbots or other applications they power, technology companies are deciding what is right for medicine (Shah, Entwistle, and Pfeffer 2023).

The medical community should actively shape the development of AI-enabled technologies by advocating for clinically informed standards for the training of AI, and for the evaluation of the value of AI in real-world health care settings. At an institutional level, specialized clinical expertise is required to create policies that align AI adoption with standards for health care delivery. In-depth knowledge of the U.S. health insurance system is also required to understand how the complexity and lack of standardization in this landscape may affect AI adoption in clinical operations (Schulman 2023). In summary, health care leaders and the medical community need to play an active role in the development of new AI regulations and policy.

References

American Medical Association. 2023. “AMA Issues New Principles for AI Development, Deployment & Use.” American Medical Association. https://www.ama-assn.org/press-center/press-releases/ama-issues-new-principles-ai-development-deployment-use.
Donahue, E. 2023. “AI Should Revolutionize Teaching, but Not in the Way You Think.” Mississippi AI Institute. https://blog.mississippi.ai/ai-should-revolutionize-teaching-but-not-in-the-way-you-think.
Habib, Anand R., and Cary P. Gross. 2023. “FDA Regulations of AI-Driven Clinical Decision Support Devices Fall Short.” JAMA Internal Medicine 183 (12): 1401–2. https://doi.org/10.1001/jamainternmed.2023.5006.
Huang, K. 2023. “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach.” The New York Times. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html.
Industry News. 2023. “AMA Issues New Principles for AI Development, Deployment & Use.” HealthcareNOWradio.com. https://www.healthcarenowradio.com/ama-issues-new-principles-for-ai-development-deployment-use/.
Roberts, M. 2023. “AI Is Forcing Teachers to Confront an Existential Question.” The Washington Post. https://www.washingtonpost.com/opinions/2023/12/12/ai-chatgpt-universities-learning/.
Shah, Nigam H., David Entwistle, and Michael A. Pfeffer. 2023. “Creation and Adoption of Large Language Models in Medicine.” JAMA 330 (9): 866–69. https://doi.org/10.1001/jama.2023.14217.
Singer, N. 2023a. “At This School, Computer Science Class Now Includes Critiquing Chatbots.” The New York Times. https://www.nytimes.com/2023/02/06/technology/chatgpt-schools-teachers-ai-ethics.html.
———. 2023b. “Despite Cheating Fears, Schools Repeal ChatGPT Bans.” The New York Times. https://www.nytimes.com/2023/08/24/business/schools-chatgpt-chatbot-bans.html.
Terry, O. 2023. “I’m a Student: You Have No Idea How Much We’re Using ChatGPT.” The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt.
Yglesias, Matthew. 2023. “ChatGPT Goes to Harvard.” Slow Boring. https://www.slowboring.com/p/chatgpt-goes-to-harvard.