Societal Impact

AI has the potential to dramatically influence society. It is our responsibility to proactively think about which uses and impacts we consider useful and appropriate and which we consider harmful and inappropriate.

Disclaimer: The thoughts and ideas presented in this course are not a substitute for legal or ethical advice; they are only meant to give you a starting point for gathering information about AI policy and regulations to consider.

Guidelines for Responsible Development and Use of AI

There are currently several guidelines for the responsible use and development of AI.

As this is an emerging technology, more guidelines will be developed and existing ones updated as the technology evolves. By the time you read this, more guidelines and updates are likely to be available, so it is important to be aware of the current ethical guidelines and regulations for your respective field.


Major Ethical Considerations

In this chapter we will discuss some of the major ethical considerations, in terms of possible societal consequences, for the use or development of AI tools:

  1. Intentional and Inadvertent Harm - Data and technology intended to serve one purpose may be reused by others for unintended purposes. How do we prevent intentional harm?
  2. Replacing Humans - AI tools can help humans, but they are not a replacement. Humans are still much better at generalizing their knowledge to other contexts (Sinz et al. (2019)). Studies also suggest that humans value content and objects created by humans more than those created by AI when it relates to abstract thought or unique work (Bellaiche et al. (2023), Granulo, Fuchs, and Puntoni (2021)).
  3. Inappropriate Use and Lack of Oversight - There are situations in which using AI might not be appropriate now or in the future. A lack of human monitoring and oversight can result in harm.
  4. Bias Perpetuation and Disparities - AI models are built on data and code that were created by biased humans, thus bias can be further perpetuated by using AI tools. In some cases bias can even be exaggerated. This combined with differences in access may exacerbate disparities.
  5. Security and Privacy Issues - Data for AI systems should be collected in an ethical manner that is mindful of the rights of the individuals the data comes from. Data around usage of those tools should also be collected in an ethical manner. Commercial tool usage with proprietary or private data, code, text, images or other files may result in leaked data not only to the developers of the commercial tool, but potentially also to other users.
  6. Climate Impact - As we continue to use more and more data and computing power, we need to be ever more mindful of how we generate the electricity to store and perform our computations.
  7. Transparency - Being transparent, where possible, about which AI tools you use helps others better understand how you made decisions or created any content derived from AI, as well as the possible sources that the AI tools might have used when helping you. It may also help with future, currently unknown issues related to the use of these tools.

Keep in mind that some fields, organizations, and societies have guidelines or requirements for using AI, such as the International Society for Computational Biology's policy on the use of large language models. Be aware of the requirements and guidelines for your field.

Note that this is an incomplete list; additional ethical concerns will become apparent as we continue to use these new technologies. We highly suggest that users of these tools be careful to learn more about the specific tools they are interested in and to be transparent about the use of these tools, so that as new ethical issues emerge, we will be better prepared to understand the implications.

Intentional and Inadvertent Harm

AI tools need to be developed with safeguards and continually audited to ensure that the AI system is not responsive to harmful requests by users. With additional usage and updates, AI tools can adapt and thus continual auditing is required.

Of course using AI to help you perform a harmful action would result in intentional harm. This may sound like an obvious and easy issue to avoid, at least by those with good intent. However, the consequences may be much further reaching than might be first anticipated.

Perhaps you or your company develop an AI tool that helps to identify individuals that might especially benefit from a product or service that you offer. This in and of itself is likely not harmful. However, the data you have used, the data that you may have collected, and the tool that you have created, all could be used for other malicious reasons, such as targeting specific groups of people for advertisements when they are vulnerable.

Therefore it is critical that we be considerate of the downstream consequences of what we create and what might happen if that technology or data was used for other purposes.


Tips for avoiding inadvertent harm

For decision makers about AI use:

  • Consider how the content or decisions generated by an AI tool might be used by others.
  • Continually audit how the AI tools that you are using are performing.
  • Do not implement changes to systems or make important decisions using AI tools without human oversight.

For decision makers about AI development:

  • Consider how newly developed AI tools might be used by others.
  • Continually audit AI tools to look for unexpected and potentially harmful or biased behavior.
  • Be transparent with users about the limitations of the tool and the data used to train the tool.
  • Caution potential users about any negative consequences of use.

Replacing Humans

While AI systems are useful, they do not replace human strengths. While AI systems are good at synthesizing lots of data, humans remain far superior at generalizing concepts to new contexts (Sinz et al. (2019)).

AI systems should be thought of as better computers as opposed to replacements for humans.


While there are some contexts in which human labor has already been replaced by robotics and AI, studies show that humans tend to prefer human-made goods when those goods are not strictly functional (Bellaiche et al. (2023), Granulo, Fuchs, and Puntoni (2021)). It has been proposed that there will be radical shifts in the way that humans work in many fields including health care, banking, retail, security, and more (Selenko et al. (2022)). Yet we need to implement changes gradually to allow for time to better understand the consequences and mindfully consider how such changes impact human employment and well-being.

Selenko et al. (2022) have proposed a framework for considering the impact of AI usage on human workers to promote benefit and avoid harm. It suggests considering usage in a few different ways: AI for complementing work, AI for replacing tasks, and AI for generating new tasks. It also suggests considering how such usages might reduce tedious or dangerous work, while also preserving work-related benefits such as self-esteem, belonging, and perceived meaningfulness. See Selenko et al. (2022) for the full article.

Example 1 AI might become much more prominent in the field of journalism, where it may help deliver news more rapidly, deliver news from dangerous locations, and possibly even create content that is less biased, politically or otherwise, if the models are specifically trained to be objective (Latar (2015)). Yet greater usage of AI in journalism also poses additional risks of misinformation, infiltration by outsiders, and a lack of human values if the usage lacks appropriate and sufficient human oversight.

“robot journalist story writers will have instant access to new insights and information, and their new ability to compose the story and publish it in seconds may cause human journalists to become obsolete. This is alarming, as no robot journalists can replace human journalists as the guardians of democracy and human rights.” (Latar (2015))

“This potential threat to the profession of human journalism is viewed by some optimistic journalists merely as another tool that will free them of the necessity to conduct costly and, at times, dangerous investigations. The robot journalists will provide them, so the optimists hope, with an automated draft for a story that they will edit and enrich with their in-depth analysis, their perspectives and their narrative talents.

“The more pessimistic journalists view the new robot journalists as a real threat to their livelihood and style of working and living.” (Latar (2015))

Computer science is a field that has historically lacked diversity. It is also critical that we support diverse new learners of computer science, as we will continue to need human involvement in the development and use of AI tools. This can help to ensure that more diverse perspectives are accounted for in our understanding of how these tools should be used responsibly.

Tips for supporting human contributions

For decision makers about AI use:

  • Avoid assuming that content created by AI tools must be better than that created by humans, as this is not true (Sinz et al. (2019)).
  • Recall that humans wrote the code to create these AI tools and that the data used to train these AI tools also came from humans. Many of the large commercial AI tools were trained on websites and other content from the internet.
  • Be transparent, where possible, about when you do or do not use AI tools, and give credit to the humans involved as much as possible.
  • Make decisions about using AI tools based on ethical frameworks that consider the impact on human workers.


For decision makers about AI development:

  • Be transparent about the data used to generate tools as much as possible and provide information about what humans may have been involved in the creation of the data.
  • Make decisions about creating AI tools based on ethical frameworks that consider the impact on human workers.


A new term in the medical field called AI paternalism describes the concept that doctors (and others) may trust AI over their own judgment or the experiences of the patients they treat. This has already been shown to be a problem with earlier AI systems intended to help distinguish patient groups. Not all humans will necessarily fit the expectations of the AI model if it is not very good at predicting edge cases (Hamzelou n.d.). Therefore, in all fields it is important for us to not forget our value as humans in our understanding of the world.

Inappropriate Use and Lack of Oversight

There are situations in which we may, as a society, not want an automated response. There may even be situations in which we do not want to bias our own human judgment by that of an AI system. There may be other situations where the efficiency of AI may also be considered inappropriate. While many of these topics are still under debate and AI technology continues to improve, we challenge the readers to consider such cases given what is currently possible and what may be possible in the future.

Some reasons why AI may not be appropriate for certain situations include:

  • Despite the common misconception that AI systems have clearer judgment than humans, they are in fact typically just as prone to bias and sometimes even exacerbate it (Pethig and Kroenung (2023)). Some very mindful researchers are working on these issues in specific contexts and making progress where AI may actually improve on human judgment, but generally speaking, current AI systems are typically biased, reflecting human judgment in a more limited manner based on the context in which they have been trained.
  • AI systems can behave in unexpected ways (Gichoya et al. (2022)).
  • Humans are still better than AI at generalizing what they learn for new contexts (Sinz et al. (2019)).
  • Humans can better understand the consequences of decisions from a humanity standpoint.

Some examples where it may be considered inappropriate for AI systems to be used (even with human involvement) include:

  • In the justice system to determine if someone is guilty of a crime or to determine the punishment of someone found guilty of a crime.
  • In certain warfare circumstances.

Additionally there are many contexts in which using AI without human intervention could be very problematic including:

  • Diagnosis of disease for patients - Delivering this kind of news should likely come from a human. Additionally, the stakes for errors in the AI system could be very high. What if the system occasionally works poorly for certain individuals? What if the system starts behaving strangely? What if a patient comes in with an unusual situation that the AI system can’t work well for?

Even for seemingly benign uses, if humans do not intervene, it is possible that negative consequences could occur if the system starts working poorly or unusually.

Example 2 Real-World Example

Uber drivers in India experienced issues with the facial recognition technology used to log into the app. This caused many drivers to get locked out of their accounts temporarily or permanently, reducing their capacity to work and earn a living (Bansal (2022)).

Read more about this in the MIT Technology Review article by Bansal (2022).

Tips for avoiding inappropriate uses and lack of oversight

For decision makers about AI use:

  • Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
  • Stay up-to-date on the news for how others have experienced their use of AI.
  • Stay involved in discussions about appropriate uses for AI, particularly for policy.
  • Begin using AI slowly and iteratively to allow time to determine the appropriateness of the use. Some issues will only be discovered after some experience.
  • Involve a diverse group of individuals in discussions of intended uses to better account for a variety of perspectives.
  • Seek outside expert opinion whenever you are unsure about your AI use plans.
  • Consider alternatives to AI if something doesn’t feel right.

For decision makers about AI development:

  • Be transparent with users about the potential risks that usage may cause.
  • Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
  • Stay up-to-date on the news for how others may have experienced problems using AI.
  • Stay involved in discussions about appropriate uses for AI, particularly for policy.
  • Involve a diverse group of individuals in development to better account for a variety of perspectives.
  • Seek outside expert opinion whenever you are unsure about your AI development plans.
  • Consider alternatives to AI if something doesn’t feel right.
  • Design tools with safeguards to stop users from requesting harmful or irresponsible uses.
  • Design tools with responses that ask users to be more considerate in their usage of the tool.

Bias Perpetuation and Disparities

One of the biggest concerns is the potential for AI to further perpetuate bias. AI systems are trained on data created by humans. If this data used to train the system is biased (and this includes existing code that may be written in a biased manner), the resulting content from the AI tools could also be biased. This could lead to discrimination, abuse, or neglect for certain groups of people, such as those with certain ethnic or cultural backgrounds, genders, ages, sexuality, capabilities, religions or other group affiliations.

It is well known that data and code are often biased (Belenguer 2022). The resulting output of AI tools should be evaluated for bias and modified where needed. Please be aware that because bias is intrinsic, it may be difficult to identify issues. Therefore, people with specialized training to recognize bias should be consulted. It is also vital that evaluations be made throughout the software development process of new AI tools to check for and consider potential perpetuation of bias.

Because of differences in access to technology, disparities may be further exacerbated by the usage of AI tools. Consideration and support for under-served populations will be even more necessary. For example, tools that only work well for individuals with light skin will create further challenges for those they do not serve.

Developing and scaling-up artificial intelligence-based innovations for use in low- and middle-income countries will thus require deliberate efforts to generate locally representative training data (Paul and Schaefer (2020)).

On the flip side, AI, if used wisely, has the potential to reduce health inequities by enabling the scaling of and access to expertise not yet available in some locations.

Tips for avoiding bias

For decision makers about AI use:

  • Be aware of the biases in the data that is used to train AI systems.
  • Check what data was used to train the AI tools that you use where possible. Tools that are more transparent are likely more ethically developed.
  • Check, where possible, whether the developers of the AI tools you are using were or are considerate of bias issues in their development.
  • Consider the possible outcomes of the use of content created by AI tools. Consider if the content could possibly be used in a manner that will result in discrimination.


For decision makers about AI development:

  • Check for possible biases within data used to train new AI tools.
    • Are there harmful data values? Examples could include discriminatory and false associations.
    • Are the data adequately inclusive? Examples could include a lack of data about certain ethnic or gender groups or disabled individuals, which could result in code that does not adequately consider these groups, ignores them all together, or makes false associations.
    • Are the data of high enough quality? Examples could include data that is false about certain individuals.
  • Evaluate the code for new AI tools for biases as it is developed. Check whether any of the criteria for weighting certain data values over others are rooted in bias (see the sketch after this list).
  • Continually audit the code for potentially biased responses. Potentially seek expert help.
  • Be transparent with users about potential bias risks.
  • Consider the possible outcomes of the use of content created by newly developed AI tools. Consider if the content could possibly be used in a manner that will result in discrimination.
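
As a minimal illustration of one kind of bias audit, the hedged sketch below compares a model’s positive prediction rates across groups (a simple demographic parity check). The data, column names, and threshold are hypothetical placeholders; a check like this is only a starting point, not a substitute for expert review.

```python
# A minimal sketch of one kind of bias audit: comparing a model's positive
# prediction rates across groups. Data and column names are hypothetical.
import pandas as pd

# Hypothetical model outputs: each person's group and whether the model
# predicted a positive outcome for them (e.g., approved an application).
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0,   0],
})

# Positive prediction rate per group.
rates = results.groupby("group")["prediction"].mean()
print(rates)

# Flag a possible disparity if the rates differ by more than an arbitrarily
# chosen threshold; real audits require expert judgment and context.
if rates.max() - rates.min() > 0.2:
    print("Possible disparity between groups; review the data and the model.")
```

A check like this captures only one narrow notion of fairness; as noted above, people with specialized training in recognizing bias should still be consulted.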

See Belenguer (2022) for more guidance. Classic real-world examples, such as facial recognition tools that perform poorly on individuals with darker skin, illustrate how bias in AI plays out in practice.

For further details, check out the Coursera course on building fair algorithms. We will describe more in the next section.

Security and Privacy Issues

Security and privacy are a major concern for AI usage. Here we discuss a few aspects related to this.


Use the right tool for the job

There are three kinds of commercial AI tools (Nigro (2023)):

  • Consumer tools (likely not private/secure)
  • Enterprise tools (can be secure with the right legal agreements in place)
  • Open source tools (depends on where you use them and whether you control the computers they run on)

Public commercial AI tools are often not designed to protect users from unknowingly submitting prompts that include proprietary or private information. Different AI tools have different practices in terms of how they do or do not collect data about the prompts that people submit, and in terms of whether they reuse information from prompts in responses to other users. Note that the AI system itself may not have been trained on accurate information about how prompt data is collected, so asking the AI system may not give accurate answers.

Thus if users of public AI tools, such as ChatGPT, submit prompts that include proprietary or private information, they run the risk of that information being viewable not only by the developers and maintainers of the AI tool, but potentially also by other users of that same tool.

AI can have security blind spots

Furthermore, AI tools are not always trained in a way that is particularly conscious of data security. If for example, code is written using these tools by users who are less familiar with coding security concerns, protected data or important passwords may be leaked within the code itself. AI systems may also utilize data that was actually intended to be private.
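
As a minimal illustration of the kind of leak described above, the sketch below contrasts a hardcoded credential (a pattern that can easily end up in AI-generated code and then in shared prompts or version control) with reading the secret from an environment variable. The variable name is a hypothetical placeholder.

```python
# A minimal sketch contrasting a risky pattern with a safer one.
import os

# Risky: a hardcoded secret like the commented-out line below can leak
# through shared code, prompts submitted to AI tools, or version control.
# API_KEY = "sk-EXAMPLE-DO-NOT-HARDCODE"

# Safer: read the secret from the environment (or a secrets manager),
# so the code itself never contains the value.
API_KEY = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical variable name
if API_KEY is None:
    raise RuntimeError("Set the MY_SERVICE_API_KEY environment variable.")
```

Reviewing AI-generated code for patterns like hardcoded secrets, and keeping sensitive values out of prompts altogether, reduces the chance of this kind of leak.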

Data source issues

It is also important to consider what data the responses that you get from a commercial AI tool might actually be using. Are these datasets from people who consented to their data being used in this manner? If you are generating your own tools, did people consent for their data to be used as you intend?

Data privacy is a major issue all on its own:

98% of Americans still feel they should have more control over the sharing of their data (Pearce (2021))

It is important to follow legal and ethical guidance around the collection of data and to use tools that also abide by these guidelines.

Tips for reducing security and privacy issues

For decision makers about AI use:

  • Check that no sensitive data, such as Personally Identifiable Information (PII) or proprietary information, becomes public through prompts to consumer AI systems or to systems not designed or set up with the right legal agreements in place for sensitive data.
  • Consider purchasing a license for a private AI system if needed or create your own if you wish to work with sensitive data (seek expert guidance to determine if the AI systems are secure enough).
  • Ask AI tools for help with security when using consumer tools, but do not rely on them alone. In some cases, consumer AI tools provide little guidance about who developed the tool and what data it was trained on, let alone about what happens to your prompts and whether they are collected and maintained in a secure way.
  • Promote regulation of AI tools by voting for standards where possible.

Possible Generative AI Prompt: Are there any methods that could be implemented to make this code more secure?


For decision makers about AI development:

  • Consult with an expert about data security if you want to design or use an AI tool that will regularly use private or proprietary data.
  • Be clear with users about the limitations and security risks associated with tools that you develop.
  • Promote regulation of AI tools by voting for standards where possible.

Possible Generative AI Prompt: Are there any possible data security or privacy issues associated with the plan you proposed?

Climate Impact

AI can help humans to innovate ways to improve efficiency and to devise strategies to help mitigate climate issues (Jansen et al. (2023); Cowls et al. (2023)). Importantly, this needs to be done with social justice in mind, as those with the fewest resources to deal with climate issues are often also the most likely to be impacted (Jansen et al. (2023); Bender et al. (2021)).

A few organizations are working on supporting the use of AI for climate crisis mitigation.

However, AI also poses a number of climate risks (Bender et al. (2021); Hulick (2021); Jansen et al. (2023); Cowls et al. (2023)):

  1. The data storage and computing resources needed for the development of AI tools could exacerbate climate challenges (Bender et al. (2021)).
  2. If not designed carefully, AI could also spread false solutions for climate crises or promote inefficient practices (Jansen et al. (2023)).
  3. Differences in access to AI technologies may exacerbate social inequities related to climate (Hulick (2021)).


Tips for reducing climate impact

For decision makers about AI use:

  • Where possible, use tools that are transparent about resource usage and that identify how they have attempted to improve efficiency.


For decision makers about AI development:

  • Modify existing models as opposed to unnecessarily creating new models from scratch where possible.
  • Avoid using models with datasets that are unnecessarily large (Bender et al. (2021)).
  • Solutions such as federated learning, where AI models are iteratively trained in multiple locations using the data at those locations instead of pooling the data into ever larger centralized datasets, can help reduce the required resources and also help preserve data privacy and security (see the sketch after this list).
  • Use emerging tools and guidelines to estimate and monitor the resource usage involved in training models (Castaño Fernández (2023)).
  • Be transparent about resources used to train models (Castaño Fernández (2023)).
  • Utilize data storage and computing options that are designed to be more environmentally conscious options, such as solar or wind power generated electricity.
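
As a minimal, hedged sketch of the federated learning idea mentioned above, the code below trains a simple linear model with federated averaging: each site runs a few gradient steps on its own local data, and only the model weights are shared and averaged. The data, model, and hyperparameters are hypothetical placeholders, not a production setup.

```python
# A minimal sketch of federated averaging: each site trains on its own local
# data and shares only model weights, which a central server averages.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=10):
    """Run a few gradient-descent steps for a linear model on one site's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three sites, each holding private data that never leaves the site.
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_weights = np.zeros(3)
for _ in range(5):  # a few rounds of federated averaging
    # Each site starts from the current global model and trains locally.
    local_weights = [local_update(global_weights, X, y) for X, y in sites]
    # The server averages the weights; the raw data is never pooled.
    global_weights = np.mean(local_weights, axis=0)

print(global_weights)
```

Because only weights are exchanged, the raw data stays where it was collected, which is what helps with both resource usage and privacy in this approach.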

Transparency

The United States Blueprint for an AI Bill of Rights states:

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

This transparency is important for people to understand how decisions are made using AI, which can be especially vital to allow people to contest decisions.

It also helps us better understand which AI systems may need to be fixed or adapted if issues arise.


Tips for being transparent

For decision makers about AI use:

  • Where possible, include the AI tool and version that you used and why, so that people can trace back where decisions or content came from.
  • Use tools that are transparent about what data was used, where possible.


For decision makers about AI development:

  • Providing information about what training data and methods were used to develop new AI models can help people better understand why a model works in a particular way.

Summary

Here is a summary of all the tips we suggested:

  • Be mindful of how content created with AI or AI tools may be used for unintended purposes.
  • Be aware that humans are still better at generalizing concepts to other contexts (Sinz et al. (2019)).
  • Always have expert humans review content created by AI and value human contributions and thoughts.
  • Carefully consider if an AI solution is appropriate for your context.
  • Be aware that AI systems are biased and their responses are likely biased. Any content generated by an AI system should be evaluated for potential bias.
  • Be aware that AI systems may behave in unexpected ways. Implement new AI solutions slowly to account for the unexpected. Test those systems and try to better understand how they work in different contexts.
  • Be aware of the security and privacy concerns for AI, be sure to use the right tool for the job and train those at your institute appropriately.
  • Consider the climate impact of your AI usage and proceed in a manner that makes efficient use of resources.
  • Be transparent about your use of AI.

Overall, we hope that awareness of these concerns and the tips we shared will help us all use AI tools more responsibly. We recognize, however, that as this is an emerging technology, more ethical issues will emerge as we continue to use these tools in new ways. Staying up-to-date on current ethical considerations will also help us all continue to use AI responsibly.

References

Bansal, Varsha. 2022. “Uber’s Facial Recognition Is Locking Indian Drivers Out of Their Accounts.” MIT Technology Review. https://www.technologyreview.com/2022/12/06/1064287/ubers-facial-recognition-is-locking-indian-drivers-out-of-their-accounts/.
Belenguer, Lorenzo. 2022. “AI Bias: Exploring Discriminatory Algorithmic Decision-Making Models and the Application of Possible Machine-Centric Solutions Adapted from the Pharmaceutical Industry.” AI and Ethics 2 (4): 771–87. https://doi.org/10.1007/s43681-022-00138-8.
Bellaiche, Lucas, Rohin Shahi, Martin Harry Turpin, Anya Ragnhildstveit, Shawn Sprockett, Nathaniel Barr, Alexander Christensen, and Paul Seli. 2023. “Humans Versus AI: Whether and Why We Prefer Human-Created Compared to AI-Created Artwork.” Cognitive Research: Principles and Implications 8 (1): 42. https://doi.org/10.1186/s41235-023-00499-6.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.
Castaño Fernández, Joel. 2023. “A Greenability Evaluation Sheet for AI-Based Systems.” Bachelor thesis, Universitat Politècnica de Catalunya. https://upcommons.upc.edu/handle/2117/393798.
Cowls, Josh, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. 2023. “The AI Gambit: Leveraging Artificial Intelligence to Combat Climate Change—Opportunities, Challenges, and Recommendations.” AI & SOCIETY 38 (1): 283–307. https://doi.org/10.1007/s00146-021-01294-x.
Gichoya, Judy Wawira, Imon Banerjee, Ananth Reddy Bhimireddy, John L. Burns, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, et al. 2022. “AI Recognition of Patient Race in Medical Imaging: A Modelling Study.” The Lancet Digital Health 4 (6): e406–14. https://doi.org/10.1016/S2589-7500(22)00063-2.
Granulo, Armin, Christoph Fuchs, and Stefano Puntoni. 2021. “Preference for Human (Vs. Robotic) Labor Is Stronger in Symbolic Consumption Contexts.” Journal of Consumer Psychology 31 (1): 72–80. https://doi.org/10.1002/jcpy.1181.
Hamzelou, Jessica. n.d. “Artificial Intelligence Is Infiltrating Health Care. We Shouldn’t Let It Make All the Decisions.” MIT Technology Review. Accessed May 8, 2023. https://www.technologyreview.com/2023/04/21/1071921/ai-is-infiltrating-health-care-we-shouldnt-let-it-make-decisions/.
Hulick, Kathryn. 2021. “Training AI to Be Really Smart Poses Risks to Climate.” https://www.snexplores.org/article/training-ai-energy-emissions-climate-risk.
Jansen, Fieke, Merve Gulmez, Becky Kazansky, Narmine Abou Bakari, Claire Fernandez, Harriet Kingaby, and Jan Tobias Mühlberg. 2023. “The Climate Crisis Is a Digital Rights Crisis: Exploring the Civil-Society Framing of Two Intersecting Disasters.” In Ninth Computing Within Limits 2023. Virtual: LIMITS. https://doi.org/10.21428/bf6fb269.b4704652.
Latar, Noam. 2015. “The Robot Journalist in the Age of Social Physics: The End of Human Journalism?” In, 65–80. https://doi.org/10.1007/978-3-319-09009-2_6.
Nigro, Pam. 2023. “AI Security Risks: Separating Hype from Reality.” Security Magazine. https://www.securitymagazine.com/articles/100219-ai-security-risks-separating-hype-from-reality.
Paul, Amy K, and Merrick Schaefer. 2020. “Safeguards for the Use of Artificial Intelligence and Machine Learning in Global Health.” Bulletin of the World Health Organization 98 (4): 282–84. https://doi.org/10.2471/BLT.19.237099.
Pearce, Guy. 2021. “Beware the Privacy Violations in Artificial Intelligence Applications.” ISACA. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2021/beware-the-privacy-violations-in-artificial-intelligence-applications.
Pethig, Florian, and Julia Kroenung. 2023. “Biased Humans, (Un)Biased Algorithms?” Journal of Business Ethics 183 (3): 637–52. https://doi.org/10.1007/s10551-022-05071-8.
Selenko, Eva, Sarah Bankins, Mindy Shoss, Joel Warburton, and Simon Lloyd D. Restubog. 2022. “Artificial Intelligence and the Future of Work: A Functional-Identity Perspective.” Current Directions in Psychological Science 31 (3): 272–79. https://doi.org/10.1177/09637214221091823.
Sinz, Fabian H., Xaq Pitkow, Jacob Reimer, Matthias Bethge, and Andreas S. Tolias. 2019. “Engineering a Less Artificial Intelligence.” Neuron 103 (6): 967–79. https://doi.org/10.1016/j.neuron.2019.08.034.