Building an AI advisory team

AI policy is a team endeavor: experts from many different fields need to come together to keep an organization current. As the person responsible for making sure your organization uses AI wisely and properly, you need to stay up-to-date on the relevant laws, regulations, and computational resources, and that is very difficult to do alone. Building a team of people who can help you confidently navigate the evolving landscape should be one of your top priorities as a leader.

Disclaimer: The thoughts and ideas presented in this course are not a substitute for legal or ethical advice; they are only meant to give you a starting point for gathering information about AI policy and regulations.

There are several roles you may need to fill, depending on your organization's AI needs and uses. Having representation from technical, policy, and social science backgrounds helps ensure a multidisciplinary, holistic approach to building and overseeing responsible AI.

This list is a starting point for deciding what roles your own team needs. It is not written in any particular order, nor is it an exhaustive list of every expert you might gather. You should consult with your legal counsel, board members, and other oversight staff to address your own specialized needs.

  1. Legal counsel who understand AI and the nuances of rapidly changing laws can advise on the regulations relevant to your organization.

  2. Policy and governance analysts can research and draft internal policies on transparency, auditability, harm mitigation, and appropriate AI uses. They can also advise and assist with compliance.

  3. Data protection officers are especially important for organizations that handle personal data. They can help implement privacy-by-design principles and ensure personally identifiable information is handled legally and securely.

  4. Ethicists are experts who can provide guidance on ethical issues and review systems for potential biases, risks, and policy compliance.

  5. Trainers and educators can create and run programs that keep all employees aware of their responsibilities when developing and using AI respectfully and in compliance with AI policies and regulations.

  6. Oversight committee members are experts who review research studies (both before a study begins and while it is ongoing). Their job is to make sure researchers are protecting the welfare, rights, and privacy of research subjects. Oversight committees like institutional review boards are especially important for organizations involved in any human research that uses AI.

  7. Technical experts understand how to design, build, and deploy AI models. They can also offer advice on the algorithms, data, and computational infrastructure your organization might need. They might be AI or machine learning experts, data scientists, DevOps engineers, cloud architects, or systems engineers.

  8. Information security architects are vital for identifying and mitigating the security risks associated with AI systems. They can provide advice on data privacy measures, security weak points, and incident response plans.

The specific roles and their required skillsets will vary depending on the size, industry, and AI maturity of your organization. Having a balanced team with both technical and strategic expertise is key to successfully implementing AI policies. Remember, effective communication and collaboration among these roles are crucial to a successful AI implementation.