  • About this Course
    • 0.1 Specialization Sections
    • 0.2 Available course formats
  • Introduction
  • 1 VIDEO Summary of This Course
  • 2 Introduction
    • 2.1 Motivation
    • 2.2 Target Audience
    • 2.3 Curriculum
  • AI Possibilities
  • 3 Introduction to AI Possibilities
    • 3.1 Introduction
      • 3.1.1 Motivation
      • 3.1.2 Target Audience
      • 3.1.3 Curriculum Summary
      • 3.1.4 Learning Objectives
  • 4 VIDEO What Is AI
  • 5 What Is Artificial Intelligence
    • 5.1 Specific and General Intelligence
    • 5.2 Shifting Goalposts
    • 5.3 Our AI Definition
    • 5.4 What Is and Is Not AI
      • 5.4.1 Smartphones
      • 5.4.2 Calculators
      • 5.4.3 Computer Programs
      • 5.4.4 Examples of AI in the Real World
      • 5.4.5 DISCUSSION Is It AI
    • 5.5 Summary
  • 6 AI Case Studies
    • 6.1 Amazon Recommendations
    • 6.2 Financial Forecasting
      • 6.2.1 Categorizing Businesses
      • 6.2.2 Incorporating new predictors for forecasting
      • 6.2.3 Using Large Language Models to predict inflation
  • 7 VIDEO How AI Works
  • 8 How AI Works
    • 8.1 Early Warning for Skin Cancer
    • 8.2 Collecting Datapoints
      • 8.2.1 What Is Data
      • 8.2.2 Preparing the Data
    • 8.3 Understanding the Algorithm
      • 8.3.1 Testing the Algorithm
    • 8.4 Interfacing with AI
    • 8.5 Understanding the AI Spring
      • 8.5.1 Transformer Models
      • 8.5.2 Diffusion Models
    • 8.6 Summary
  • 9 VIDEO Different Types of AI
  • 10 Demystifying Types of AI
    • 10.1 Machine Learning
    • 10.2 Neural Networks
    • 10.3 Deep Learning
    • 10.4 Natural Language Processing
    • 10.5 Generative AI
    • 10.6 Large Language Model
    • 10.7 Transformer Model
    • 10.8 Variational Autoencoders (VAEs)
    • 10.9 Generative Adversarial Networks (GANs)
    • 10.10 Strengths and Weaknesses
  • 11 VIDEO Real Life Possibilities
  • 12 What Is Possible
  • 13 VIDEO What Is Possible
  • 14 VIDEO What Is NOT Possible
  • 15 Ground Rules for AI
  • 16 VIDEO Knowing the Ground Rules
  • Avoiding AI Harm
  • 17 Introduction to Avoiding AI Harm
    • 17.1 Motivation
    • 17.2 Target Audience
    • 17.3 Curriculum
    • 17.4 Learning Objectives
  • 18 Societal Impact
    • 18.1 Ethics Codes
    • 18.2 Major Ethical Considerations
    • 18.3 Intentional and Inadvertent Harm
      • 18.3.1 Tips for avoiding inadvertent harm
    • 18.4 Replacing Humans and Human Autonomy
      • 18.4.1 Tips for supporting human contributions
    • 18.5 Inappropriate Use
      • 18.5.1 Tips for avoiding inappropriate uses
    • 18.6 Bias Perpetuation and Disparities
      • 18.6.1 Tips for avoiding bias
    • 18.7 Security and Privacy Issues
      • 18.7.1 Use the right tool for the job
      • 18.7.2 AI can have security blind spots
      • 18.7.3 Data source issues
      • 18.7.4 Tips for reducing security and privacy issues
    • 18.8 Climate Impact
    • 18.9 Tips for reducing climate impact
    • 18.10 Transparency
      • 18.10.1 Tips for being transparent in your use of AI
    • 18.11 Summary
  • 19 VIDEO Effective Use of Training and Testing Data
  • 20 Effective Use of Training and Testing Data
    • 20.1 Population and sample
    • 20.2 Training data
    • 20.3 Testing data
    • 20.4 Evaluation
    • 20.5 Proper separation of training and testing data
    • 20.6 Validation
    • 20.7 Conclusions
  • 21 Algorithm Considerations
    • 21.1 Harmful or Toxic Responses
      • 21.1.1 Tips for avoiding the creation of harmful content
    • 21.2 Lack of Interpretability
      • 21.2.1 Tips for avoiding a lack of interpretability
    • 21.3 Misinformation and Faulty Responses
      • 21.3.1 Tips for reducing misinformation and faulty responses
    • 21.4 Summary
  • 22 Adherence Practices
    • 22.1 Start Slow
      • 22.1.1 Tips for starting slow
    • 22.2 Check for Allowed Use
      • 22.2.1 Tips for checking for allowed use
    • 22.3 Use Multiple AI Tools
      • 22.3.1 Tips for using multiple AI tools
    • 22.4 Educate Yourself and Others
      • 22.4.1 Tips to educate yourself and others
    • 22.5 Summary
  • 23 Consent and AI
    • 23.1 Summary
  • 24 IDARE and AI
    • 24.1 AI is biased
    • 24.2 Examples of AI Bias
      • 24.2.1 Amazon’s resume system was biased against women
      • 24.2.2 X-ray studies show AI surprises
    • 24.3 Mitigation
    • 24.4 Be extremely careful using AI for decisions
    • 24.5 More inclusive teams mean better models
    • 24.6 Access
    • 24.7 Summary
  • 25 Ethical Process
    • 25.1 Ethical Use Process
      • 25.1.1 Reflection during inception of the idea
      • 25.1.2 Reflection during use
      • 25.1.3 Reflection after use
    • 25.2 Ethical Development Process
      • 25.2.1 Reflection during inception of the idea
      • 25.2.2 Planning Reflections
      • 25.2.3 Development Reflection
      • 25.2.4 Post-development Reflection
    • 25.3 Summary
  • Determining AI Needs
  • 26 VIDEO Introduction to Determining AI Needs
  • 27 Introduction to Determining AI Needs
    • 27.1 Motivation
    • 27.2 Target Audience
    • 27.3 Curriculum
  • 28 VIDEO What Are the Components of AI
  • 29 What Are the Components of AI?
    • 29.1 Learning Objectives
    • 29.2 Intro
    • 29.3 What makes an AI model accurate?
    • 29.4 What makes an AI model efficient?
    • 29.5 Putting it together
  • 30 VIDEO Determining Your AI Needs
  • 31 Determining Your AI Needs
    • 31.1 Learning Objectives
    • 31.2 Intro
    • 31.3 Generalized Custom AI Use Cases
      • 31.3.1 Customized Knowledge
    • 31.4 Customized Security
    • 31.5 Customized Interface
      • 31.5.1 Generalized strategies for these needs
    • 31.6 The Whole Picture
      • 31.6.1 Technical expertise needs
      • 31.6.2 Funding needs
      • 31.6.3 Time needs
    • 31.7 Example project strategies
      • 31.7.1 Cogniflow example
      • 31.7.2 PrivateAI
      • 31.7.3 ChatGPT API
      • 31.7.4 Hugging Face
    • 31.8 Conclusion
  • 32 VIDEO Customized Knowledge for AI
  • 33 Customized Knowledge for AI
    • 33.1 Learning Objectives
    • 33.2 Intro
    • 33.3 Summary of possible strategies
      • 33.3.1 Prompt engineering
      • 33.3.2 Prompt tuning or “P-tuning”
      • 33.3.3 Fine Tuning
      • 33.3.4 Find a base model to start with
    • 33.4 Example strategies for Fine Tuning
  • 34 VIDEO Customized Security for AI
  • 35 Customized Security for AI
    • 35.1 Learning Objectives
    • 35.2 Intro
    • 35.3 Data security basics
    • 35.4 Secure AI solutions for protected data
    • 35.5 Data obscuring techniques
    • 35.6 Example Security Customization Strategies
      • 35.6.1 PrivateAI
      • 35.6.2 deidentify
      • 35.6.3 AWS servers + HuggingFace
    • 35.7 Always double, triple, quadruple check
  • 36 VIDEO Customized Interfaces for AI
  • 37 Customized Interfaces for AI
    • 37.1 Learning Objectives
    • 37.2 Intro
    • 37.3 General strategies for custom interfaces
    • 37.4 Examples of customized AI interface strategies
    • 37.5 Premade AI tools
    • 37.6 AI tool APIs
    • 37.7 Custom builds
  • 38 VIDEO Evaluating Your Customized AI Tool
  • 39 Evaluating Your Customized AI Tool
    • 39.1 Learning Objectives
    • 39.2 Intro
    • 39.3 Evaluating Accuracy of an AI Model
    • 39.4 Evaluating Computational Efficiency of an AI Model
    • 39.5 Evaluating Usability of an AI Model
  • AI Policy
  • 40 Introduction to AI Policy
    • 40.1 Motivation
    • 40.2 Target Audience
    • 40.3 Curriculum
  • 41 Building a Team to Guide Your AI Use
    • 41.1 Who might be on your team?
  • 42 VIDEO Building a Team to Guide Your AI Use
  • 43 AI Acts, Orders, and Policies
    • 43.1 The EU AI Act
    • 43.2 Industry-specific policies
      • 43.2.1 Mississippi AI Institute
      • 43.2.2 US Food and Drug Administration
  • 44 VIDEO AI Acts, Orders, and Policies
  • 45 Other Laws That Can Apply to AI
    • 45.1 Intellectual Property
    • 45.2 Data Privacy and Information Security
    • 45.3 Liability
    • 45.4 Who can tell you about your particular legal concerns?
  • 46 VIDEO Existing Laws That Apply to AI
  • 47 Considerations for Creating an AI Policy
    • 47.1 An AI policy alone is not enough
    • 47.2 Get lots of voices weighing in from the beginning
    • 47.3 Consider how to keep your guidance agile
    • 47.4 Make it easy for people to follow your policy through effective training
  • 48 VIDEO Considerations for Creating an AI Policy
  • About the Authors
  • 49 References

Avoiding AI Harm

Chapter 19 VIDEO Effective Use of Training and Testing Data

https://docs.google.com/presentation/d/11oUc4KvmSiQBCj8rzj9v_e5te62qBIyriFHG_hQvFZA/edit#slide=id.p
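
As a companion illustration (not part of the original slides), here is a minimal sketch of keeping training and testing data properly separated, assuming Python with the scikit-learn library available; the dataset and model below are placeholder choices for illustration only.

    # Minimal sketch: proper separation of training and testing data.
    # Assumptions: Python with scikit-learn installed; the breast cancer
    # dataset and logistic regression model are illustrative placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # A sample of labeled data drawn from some larger population.
    X, y = load_breast_cancer(return_X_y=True)

    # Split once, up front: the test set must stay untouched until evaluation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y
    )

    # Fit the algorithm (scaling included) on the training data only,
    # so no information from the test set leaks into the model.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    # Evaluate on the held-out test data the model has never seen.
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

Chapter 20 expands on each of these steps: populations and samples, training and testing data, evaluation, proper separation, and validation.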


All illustrations CC-BY.
All other materials CC-BY unless noted otherwise.