AI regulatory compliance is complex

By Jacob Andra / Published July 16, 2024 
Last Updated: August 5, 2024

Many applications of enterprise artificial intelligence (AI) run up against compliance requirements, particularly in healthcare, legal, and other industries that process a lot of personal data. Unfortunately, regulatory compliance is a maze.

Federal, state, and industry-specific regulations overlap and contradict each other, creating a compliance headache.

Talbot West is here to help you with regulatory compliance issues. We don’t replace your legal team, and we don’t offer legal advice, but we do stay up to date on the shifting landscape of regulations and can provide valuable strategic oversight to help you stay on the right side of compliance issues.

Main takeaways
AI regulations seek to balance innovation with safety.
Regulatory bodies disagree about where the right balance lies.
This disagreement creates a regulatory patchwork that makes compliance difficult.
Not all uses of AI are subject to the same regulatory oversight.
Compliance is a process, not an end goal.

Many AI compliance issues are rooted in data privacy concerns that predate the rise of generative AI. Regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States apply to AI applications, particularly in how they handle personal data.

AI compliance extends far beyond data privacy, though. Here are some of the other types of regulations that governmental bodies are considering—or in some cases have already passed:

Accountability and transparency

  • Growing emphasis on explainable AI (XAI)
  • May require changes in AI development practices
  • Challenge: technical feasibility of explaining complex AI decisions

Example: the EU's Digital Services Act (DSA) requires platforms to disclose how their algorithms work and provide explanations for content moderation decisions
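
To make this kind of transparency requirement concrete, here is a minimal sketch of what a disclosure-ready decision record might look like. The schema, field names, and policy text are our own illustrative assumptions; the DSA does not prescribe a particular format.

```python
# A hedged sketch of an audit-ready record for an automated moderation
# decision. All field names and values are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModerationDecision:
    content_id: str
    action: str        # e.g., "removed", "demoted", "no_action"
    automated: bool    # True if no human reviewed the decision
    grounds: str       # policy or legal basis for the action
    explanation: str   # plain-language reason shown to the user
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

decision = ModerationDecision(
    content_id="post-1234",
    action="demoted",
    automated=True,
    grounds="platform policy on misleading health claims",
    explanation="An automated classifier flagged this post as a likely "
                "misleading health claim, so it was shown to fewer users.",
)
print(json.dumps(asdict(decision), indent=2))  # ready for an audit log
```

Records like this make it far easier to answer a regulator's "why was this decision made?" after the fact.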

Safety and risk mitigation

  • Critical for public trust in AI
  • Often employs a risk-based approach
  • Challenge: defining and measuring AI safety across diverse applications

Example: the Department of Homeland Security (DHS) has published guidelines for critical infrastructure owners and operators to mitigate AI risks. We expect these to be adopted as regulations in the near future.
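
To illustrate what a risk-based approach can look like in practice, here is a hedged sketch of a triage table that maps AI use cases to required controls. The tiers, use cases, and controls are invented for illustration and do not reproduce any regulator's actual taxonomy.

```python
# A toy risk-tier lookup: higher-risk AI use cases owe more controls.
# Tiers, examples, and controls are illustrative assumptions only.
RISK_TIERS = {
    "minimal": ["meeting scheduling", "inventory forecasting"],
    "limited": ["customer service chatbot", "marketing content generation"],
    "high": ["credit scoring", "hiring screens", "medical triage"],
}

REQUIRED_CONTROLS = {
    "minimal": ["basic logging"],
    "limited": ["user disclosure", "basic logging"],
    "high": ["bias audit", "human oversight", "impact assessment", "logging"],
}

def controls_for(use_case: str) -> list[str]:
    """Return the controls owed for a use case under this toy taxonomy."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return REQUIRED_CONTROLS[tier]
    raise ValueError(f"unclassified use case: {use_case!r}")

print(controls_for("credit scoring"))
# ['bias audit', 'human oversight', 'impact assessment', 'logging']
```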

Export controls

  • Reflects geopolitical concerns about AI
  • May impact global AI development and collaboration
  • Challenge: balancing national security with innovation

Sector-specific regulations

  • Recognizes unique risks and requirements in specific industries
  • Allows for more tailored regulatory approaches
  • Challenge: ensuring consistency across sectors while addressing specific needs

AI in critical infrastructure

  • Focuses on high-stakes applications of AI
  • Often involves stringent testing and certification requirements
  • Challenge: keeping pace with rapid AI advancements in critical sectors

The messy regulatory landscape

The US has a patchwork of regulations, current and proposed, at the federal level and in individual states. This creates a complex, shifting regulatory landscape for AI developers and companies.

This lack of uniformity presents a substantial challenge for AI companies operating across multiple jurisdictions.

For example:

  1. California has the strictest data privacy regime, anchored by the California Consumer Privacy Act (CCPA) and its amendment, the California Privacy Rights Act (CPRA). We expect California to lead the charge on AI regulations.
  2. The state of Illinois has taken proactive steps to regulate the use of AI in employment and biometric data collection. The state's Artificial Intelligence Video Interview Act requires employers to inform job applicants when AI is used to analyze video interviews and obtain their consent. This law also gives applicants the right to have their video interviews destroyed. Illinois' Biometric Information Privacy Act (BIPA) regulates the collection, use, and storage of biometric data, which is often used in AI systems.
  3. New York City's Local Law 144 mandates bias audits for AI hiring tools and candidate notification, while the state is considering a bill to examine AI use in government agencies.
  4. Vermont seems to focus on specific applications. The state has laws regulating the use of automated license plate readers and requiring disclosure of AI use in political advertisements. Vermont currently lacks comprehensive AI-specific legislation that addresses broader issues such as data privacy, algorithmic bias, or AI in employment.
  5. According to Texas Policy Research, the state of Texas has taken a relatively hands-off approach to AI regulation so far, focusing more on promoting AI development. However, a report from the House Select Committee on Artificial Intelligence & Emerging Technologies suggests that Texas lawmakers are considering a more comprehensive regulatory framework for AI, including compliance measures to address ethical concerns, public education, and legal adjustments in areas such as elections and law enforcement.

As individual states chart their own course in the realm of AI regulation, the absence of a federal mandate will make compliance difficult.

Walking the tightrope of AI advancement and safety

Governments walk a fine line with AI regulation. On one hand, they need to foster an environment where innovation can thrive. Too many restrictions, and they risk stifling creativity, slowing down progress, and potentially losing the competitive edge to countries with more relaxed rules. We've seen it before; overregulation can send talent and companies packing to greener pastures.

On the flip side, a Wild West approach isn't the answer either. Without proper guardrails, we could end up with AI systems that are unsafe, biased, or downright harmful. This could seriously erode public trust and set the whole field back years.

This balancing act is especially critical in high-stakes sectors such as healthcare, finance, and autonomous vehicles. These are areas where AI could revolutionize the way we live and work, but they're also where the risks are highest.

While most stakeholders agree on the basic premise—we need the minimum amount of legislation that preserves the public good—the devil is in the details. It's almost impossible to say where the right balance lies; some legislative bodies favor safety, while others seek to promote growth.

As a result, we're in for a crazy quilt, with a spectrum of regulatory requirements unequally applied. Expect it to continue this way until the federal government settles the matter with nationwide standards.

Rapid AI advancement outpaces slow legislation

The AI industry is experiencing a perfect storm of regulatory challenges. On one side, we have the lightning-fast pace of AI development, churning out groundbreaking technologies at an unprecedented rate. On the other, we have the slow-moving machinery of government legislation and policymaking, struggling to keep up.

Most legislators don't have the technical background to grasp the intricacies of advanced AI systems and their potential implications. By the time they've wrapped their heads around one aspect of AI technology and drafted relevant legislation, the industry has already moved three steps ahead.

As a result, the regulatory environment is perpetually playing catch-up, often with outdated or irrelevant rules that don't address current AI capabilities or challenges. For AI companies, this creates a climate of uncertainty. Long-term planning becomes a guessing game, and investment decisions are clouded by the possibility of future regulations that could dramatically alter the playing field.

To navigate this landscape, businesses need to adopt a proactive and flexible approach.

Contact Talbot West to future-proof your AI strategy against the ever-changing regulatory landscape.


Balancing profit and ethics in AI development

The AI industry is at a critical juncture where economic ambitions and ethical imperatives collide. Artificial intelligence promises to revolutionize industries and create immense value, but this potential comes hand-in-hand with a host of ethical challenges.

We're talking about issues such as algorithmic bias, where AI systems inadvertently discriminate against certain groups. Or data privacy concerns, where the vast amounts of personal information AI requires raise red flags. And let's not forget the broader societal impacts, such as job displacement due to AI-driven automation.

To navigate this landscape successfully, executives need to develop a nuanced understanding of both the business and ethical dimensions of AI.

The challenge of long-term AI impact prediction

Predicting the long-term impact of AI is like trying to forecast the weather a decade in advance: it's a complex challenge filled with unknowns. AI has the potential to completely reshape industries, transform job markets, and alter the very fabric of our society. This makes it incredibly difficult for both policymakers and business leaders to anticipate what regulations we'll need down the line.

For AI companies, this uncertainty throws a wrench into long-term planning and risk assessment. How do you plan for a future you can't reliably predict? It's equally challenging for regulators, who need to create frameworks flexible enough to evolve alongside rapidly advancing technology. For businesses, the answer might mean investing in capabilities to better anticipate trends, engaging a broad set of stakeholders in discussions about the future of AI, and building flexibility into business models.

We're not just planning for the AI landscape of today, but preparing for the myriad possibilities of tomorrow. As AI continues to permeate different aspects of society, from healthcare to finance, the stakes for getting regulatory requirements right become increasingly high. The US must find a way to protect its citizens while maintaining its global leadership in AI development.

This balancing act requires collaboration between tech industry leaders, government officials, and academic experts to create a regulatory framework that is both flexible and robust enough to keep pace with rapid technological advancements.

Is all AI implementation subject to regulations?

Not all types of AI use are subject to stringent regulations. Some applications of AI face minimal oversight, while others are heavily scrutinized. It all boils down to how much AI could impact people's rights, safety, or critical systems.

Here are some enterprise AI use cases that we expect to be relatively free from regulatory oversight:

  • Internal process optimization. AI can streamline workflows and improve efficiency without significant regulatory oversight, as it primarily affects internal operations.
  • Customer service chatbots handle basic inquiries and support tasks with minimal regulatory concern, provided they don't process sensitive personal data.
  • AI-powered content generation tools face little regulation, though companies must ensure compliance with existing advertising standards.
  • Predictive maintenance AI operates with minimal regulatory constraints, focusing on operational efficiency rather than sensitive domains.
  • AI-driven inventory management systems optimize stock levels and supply chains without triggering significant regulatory scrutiny.
  • Meeting scheduling and calendar management AI tools enhance productivity without raising major regulatory flags.
  • AI-powered energy management systems for office buildings improve sustainability without facing heavy regulation.
  • Data visualization. AI that creates reports and dashboards from company data typically avoids strict oversight, assuming it doesn't handle sensitive personal information.
  • AI-enhanced brainstorming and ideation tools support creative processes without regulatory impediments.
  • Employee training recommendation systems that suggest relevant courses based on job roles generally operate free from specific AI regulations.
  • AI-powered document search and retrieval systems, such as retrieval-augmented generation (RAG), enhance information accessibility within an organization without triggering significant regulatory concerns—so long as they don't touch one of the high-concern areas highlighted below (one safeguard is sketched after this list).
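
Because the RAG item above hinges on staying clear of sensitive data, here is a minimal sketch of one safeguard: screening documents for obvious personal identifiers before they enter the search index. The regex patterns and corpus are illustrative assumptions, and a production system would need a far more thorough PII detector.

```python
# A minimal pre-indexing PII screen for a RAG-style document store.
# Patterns and corpus are illustrative; real PII detection needs more.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def safe_to_index(doc: str) -> bool:
    """Exclude documents containing obvious personal identifiers."""
    return not any(p.search(doc) for p in PII_PATTERNS)

corpus = [
    "Q3 logistics playbook: reorder thresholds and supplier SLAs.",
    "Employee record: jane.doe@example.com, SSN 123-45-6789.",
]
index = [doc for doc in corpus if safe_to_index(doc)]
print(len(index))  # 1 -- the employee record never reaches the index
```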

Regulators often adopt a risk-based approach, focusing on applications with the highest potential for harm or societal impact. The AI applications that draw regulation tend to handle sensitive personal data, make critical decisions affecting people's opportunities or well-being, or operate in domains where errors could have severe consequences.

Here are some organizational domains and use cases where AI is (or will be) heavily regulated:

  • AI used to process and analyze customer data. We expect this to have intense oversight due to privacy concerns and potential misuse of personal information. Unauthorized access to personal data, privacy breaches, or use of customer information for undisclosed purposes will be penalized. Watch for strict data protection regulations, mandatory privacy impact assessments, and potential fines for non-compliance.
  • AI-driven employee monitoring and evaluation systems. Expect significant scrutiny due to concerns about worker privacy and the potential for unfair treatment. Excessive surveillance, unfair performance evaluations, or discrimination based on collected data are key risks. Anticipate regulations limiting the extent of employee monitoring, requirements for transparency in AI-driven evaluations, and potential legal challenges.
  • Automated decision-making systems in critical areas such as credit approval or benefit allocation. High scrutiny is likely due to the potential for bias in important decisions. Unfair or discriminatory outcomes could lead to significant harm. Expect requirements for explainable AI, human oversight in critical decisions, and the right for individuals to contest automated decisions.
  • AI in bias-sensitive processes such as hiring and promotions. Intense scrutiny is inevitable due to the potential for perpetuating or amplifying discrimination. Systematic bias against protected groups could lead to significant legal and reputational risks. Watch for mandatory bias audits (one audit metric is sketched after this list), requirements for diverse training data, and potential legal liability for discriminatory outcomes.
  • AI for financial risk assessment and fraud detection. False positives in fraud detection or biased risk assessments could cause financial hardship for individuals. Anticipate strict oversight of AI models in finance, requirements for model transparency, and regular audits.
  • AI handling sensitive data such as health information or biometrics. Very high scrutiny is certain due to the sensitive nature of the data involved. Breaches of highly sensitive personal information or unauthorized use of health data could have severe consequences. Expect stringent data protection measures, strict limitations on data use and sharing, and severe penalties for violations.
  • AI for automated content moderation. Watch for potential regulations on transparency of moderation algorithms, requirements for human oversight, and accountability measures.
  • AI in predictive maintenance and safety systems for critical infrastructure. System failures could lead to accidents, injuries, or operational disruptions. Anticipate strict safety standards for AI systems, rigorous testing requirements, and potential liability for AI-related failures.
  • AI in supply chain and vendor management. Unfair vendor selection or data breaches involving supplier information could harm businesses and individuals. Expect requirements for transparent vendor selection processes, data protection measures for supplier information, and compliance with international trade laws.
  • AI for internal audit and compliance monitoring. Failure to detect non-compliance or privacy violations in monitoring processes could lead to significant legal and financial consequences. Anticipate standards for AI use in compliance monitoring, requirements for human oversight, and potential liability for compliance failures.
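
To make the bias-audit requirement above more concrete, here is a minimal sketch of one metric auditors commonly compute: the disparate impact ratio, checked against the "four-fifths" rule of thumb from US employment-selection guidance. The outcome data is invented.

```python
# Disparate impact ratio: selection rate of the protected group divided
# by that of the reference group. Outcome data here is invented.
def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

reference_group = [1, 0, 1, 1, 0, 1, 0, 1]  # 1 = hired, 0 = rejected
protected_group = [1, 0, 0, 0, 1, 0, 0, 1]

ratio = selection_rate(protected_group) / selection_rate(reference_group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60 on this toy data

if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("potential adverse impact: escalate for human review")
```

A real bias audit, such as those required under NYC Local Law 144, goes well beyond a single ratio, but the mechanics start here.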

AI compliance frameworks and tools

An AI governance framework ensures that AI systems are developed and deployed in a manner that is ethical, transparent, and legally compliant. Here are the main components of an AI governance framework, along with some of the tools organizations use to implement each.

  1. Risk assessment identifies potential risks associated with AI systems, including operational, reputational, and regulatory risks. This proactive approach helps organizations mitigate adverse impacts before they occur. AI risk management toolkits such as AI-Risk and IBM OpenPages provide a structured approach to identifying, assessing, and mitigating potential risks associated with AI deployment.
  2. Ethical guidelines make sure that AI systems operate within a framework of fairness, justice, and respect for human rights. These guidelines maintain public trust and ensure AI technologies are used responsibly. Ethics checklists and frameworks such as IEEE’s Ethically Aligned Design and the Partnership on AI’s Framework for Fairness offer structured principles that guide the ethical development and deployment of AI systems.
  3. Data governance oversees the quality, security, and integrity of the data used in AI systems. This approach aligns with legal and ethical standards, protecting consumer data and maintaining trust. Data management platforms such as DataRobot and Alteryx maintain robust data governance. These tools ensure data is collected, stored, and used in compliance with governance policies.
  4. Bias mitigation involves implementing measures to detect and reduce biases in AI algorithms. This approach promotes fairness in AI systems and prevents discrimination against any individual or group. Bias detection software such as IBM AI Fairness 360 identifies and mitigates biases in AI models.
  5. Protecting privacy is paramount, especially given the vast amounts of data AI systems interact with. Compliance with privacy laws such as GDPR or CCPA safeguards consumer data and maintains trust. Compliance management systems such as OneTrust and TrustArc help organizations adhere to data protection regulations. These platforms provide comprehensive tools for managing and ensuring privacy compliance.
  6. Accountability means having clear guidelines that specify who is responsible when AI systems malfunction or cause harm. This approach establishes a clear chain of responsibility and guarantees that issues are promptly addressed. Accountability frameworks like those provided by OpenAI's API and the AI Now Institute’s Accountability Framework offer guidelines and support for establishing clear lines of responsibility in AI operations.
  7. Transparency and explainability build trust in AI systems. These principles require that AI systems disclose how they make decisions and the data they use, making their operations understandable to stakeholders. Model explainability tools such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) are instrumental in demystifying AI decision-making processes. These tools help stakeholders understand and trust AI outcomes by making the decision-making process transparent.
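
As a taste of how the explainability tools above work, here is a minimal sketch using SHAP on a toy model. The model and data are stand-in assumptions; a real deployment would explain the production model on real features.

```python
# A minimal SHAP example: attribute a tree model's predictions to its
# input features. Model and data are toy assumptions for illustration.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # fast, exact for tree ensembles
shap_values = explainer.shap_values(X[:10])  # shape (10, 5): per-feature
                                             # attribution for each prediction
print(shap_values[0])  # how each feature pushed prediction 0 up or down
```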

Can AI be used for compliance monitoring?

AI can assist in compliance monitoring, but it should never be more than an assistant. Compliance monitoring should always have human oversight. Here are some ways that AI helps humans monitor organizational compliance:

  • Automating data analysis and quickly identifying risks
  • Processing large volumes of information faster than humans
  • Detecting patterns and anomalies that might otherwise go unnoticed
  • Adapting to regulatory changes and staying up to date

Over-dependence on AI could reduce human vigilance, which is still crucial for understanding context and making nuanced judgments about compliance issues.
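
Here is a minimal sketch of the pattern described above: an unsupervised model flags unusual transactions, and every flag is routed to a human reviewer rather than acted on automatically. The data, model choice, and threshold are illustrative assumptions.

```python
# AI-assisted compliance monitoring: flag anomalies, let humans decide.
# Data, model, and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
amounts = rng.normal(loc=100, scale=15, size=(500, 1))  # routine payments
amounts[::100] = 10_000                                 # plant a few outliers

detector = IsolationForest(contamination=0.01, random_state=0).fit(amounts)
flags = detector.predict(amounts)  # -1 marks an anomaly

for i in np.where(flags == -1)[0]:
    print(f"transaction {i}: ${amounts[i, 0]:,.2f} -> route to human review")
```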

AI compliance outside of the US

While the US grapples with its own AI regulatory challenges, the rest of the world isn't sitting idle. Countries and regions across the globe are taking diverse approaches to AI compliance.

  • The European Union has taken a proactive approach to AI regulation. The EU's AI Act, adopted in 2024, creates a unified framework for AI governance across member states. It categorizes AI systems based on risk levels, with stricter rules for high-risk applications. The General Data Protection Regulation (GDPR) also plays a crucial role in regulating AI systems that process personal data.
  • China has encouraged AI development while also imposing strict controls. The country has introduced regulations on algorithmic recommendations and is developing a comprehensive AI governance framework. China's approach often emphasizes national security and social stability.
  • Japan has adopted a softer approach, focusing on guidelines rather than strict regulations. The country's "Society 5.0" initiative promotes AI development while emphasizing ethical considerations.
  • Canada has implemented a risk-based approach to AI regulation, similar to the EU. The country has introduced the Artificial Intelligence and Data Act (AIDA) to regulate high-impact AI systems.
  • India is in the process of developing its AI regulatory framework. The country's approach balances promoting AI innovation with addressing ethical concerns and data protection.
  • Australia has focused on developing ethical frameworks for AI use, with a particular emphasis on human rights implications. The country is working on creating more binding regulations for high-risk AI applications.

Reach out to Talbot West

Schedule a free consultation to learn how we can guide your AI initiatives. Or, check out our service offerings to see how we can assist with everything from risk assessment to compliance strategy development, ensuring your AI projects remain innovative while meeting all legal requirements.

AI compliance FAQ

How is AI changing regulatory compliance?

AI is transforming regulatory compliance by giving compliance professionals powerful new tools. Machine learning algorithms can analyze vast amounts of data in real time, spotting patterns and risks that humans might miss. This helps identify potential compliance issues early and reduces false positives, allowing compliance teams to focus on complex challenges. AI can also adapt quickly to new regulations, helping companies stay up to date in a rapidly changing landscape. While AI enhances compliance efforts, it's a tool to augment human expertise, not replace it.

What is automated regulatory compliance?

Automated regulatory compliance uses AI and machine learning to streamline compliance processes. These systems continuously monitor operations, transactions, and data for potential breaches, alerting teams in real time. They help with regulatory change management by automatically updating requirements as regulations evolve.

While automated tools offer significant efficiency gains, they still need human intervention and oversight for nuanced decisions and high-risk situations. Implementing these systems allows organizations to stay ahead of compliance obligations and strengthen their overall compliance posture.

What does the regulatory compliance process involve?

The regulatory compliance process is how organizations ensure they follow relevant laws and industry standards. It involves:

  • Identifying applicable regulations
  • Assessing current compliance
  • Developing policies
  • Training employees
  • Ongoing monitoring

For AI, this might include algorithmic impact assessments and ensuring transparency. Compliance teams work closely with legal counsel to interpret obligations and develop appropriate measures. The process is ongoing, with regular audits and reports to stakeholders. Effective compliance management software helps streamline this complex process.

A robust compliance program also involves maintaining up-to-date regulatory documents and continuously monitoring changes in compliance requirements. This proactive approach helps organizations stay ahead of regulatory changes and minimize the risk of non-compliance in the rapidly evolving AI landscape.

How is AI reshaping regulatory affairs?

AI is reshaping regulatory affairs, creating challenges and offering solutions. Its rapid advancement often outpaces traditional legal frameworks, leaving regulatory bodies struggling to keep up with innovations in data privacy and algorithmic bias. At the same time, AI provides powerful tools for compliance management. AI-powered systems enable real-time monitoring of regulatory requirements and can predict future AI-related risks, so organizations can avoid reputational damage.

In industries with complex compliance regulations, AI efficiently processes regulatory documents. It's changing interactions with regulatory bodies, potentially leading to more dynamic oversight. Organizations work with legal counsel to ensure AI systems adhere to ethical standards and emerging legal frameworks.

Why is AI difficult to regulate?

Regulating AI presents unique compliance challenges because its rapid evolution outpaces traditional legislative processes. Policymakers often struggle with a knowledge gap, leading to potential misunderstandings when crafting compliance regulations. AI's wide-ranging impact necessitates nuanced compliance frameworks that can address high-risk systems without stifling innovation.

The global nature of AI development complicates national efforts to establish consistent compliance policies. Regulators must balance fostering innovation with protecting public interests, all while grappling with the ethical considerations and potential compliance risks that AI introduces.

How does GDPR affect AI development?

GDPR compliance significantly impacts AI development and deployment. Key requirements include having a lawful basis for processing, data minimization, purpose limitation, and transparency. AI systems face unique challenges in meeting these requirements, particularly in areas of explainability, data hunger, and automated decision-making.

Best practices for GDPR-compliant AI include privacy by design, conducting data protection impact assessments, implementing robust data governance, and investing in explainable AI techniques. While GDPR adds complexity to AI development, it also drives the creation of more transparent and ethical AI systems.

Staying compliant isn't just about avoiding hefty fines—it's about building user trust and developing responsible AI.

Why is compliance testing important for AI systems?

Compliance testing promotes ethical use and mitigates risks. Many jurisdictions now have specific regulations for AI, especially in high-risk applications. Compliance testing helps verify that AI systems are fair, unbiased, and transparent. It can identify potential legal, financial, or reputational risks before they become problems. Testing also ensures AI systems handle personal data in line with privacy regulations. Regular compliance testing allows organizations to continuously improve their AI systems and adapt to changing regulatory requirements, building trust with users and stakeholders.

How can organizations protect data privacy while using AI?

To protect data privacy while leveraging AI's power, organizations should:

  • Implement data minimization by only collecting and using necessary data
  • Use anonymization and pseudonymization techniques to protect personal identifiers (a pseudonymization sketch follows this list)
  • Ensure secure data storage and implement strong access controls
  • Be transparent about how data is used and obtain proper consent
  • Incorporate privacy by design principles in AI development
  • Conduct regular privacy impact assessments and audits
  • Use strong encryption for sensitive data
  • Develop and adhere to ethical AI guidelines that prioritize privacy
  • Ensure compliance with relevant data protection regulations such as GDPR or CCPA
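
One of the bullets above, sketched in code: pseudonymizing a direct identifier with a salted hash before the data reaches an AI pipeline. The salt handling is an illustrative shortcut; a real deployment would keep the salt in a secrets manager and might prefer a keyed HMAC.

```python
# Pseudonymization sketch: replace a direct identifier with a stable,
# non-reversible token. Salt handling here is an illustrative shortcut.
import hashlib

SALT = "replace-with-a-secret-salt"  # assumption: loaded from a secure store

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable token that can't be trivially reversed."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:16]

record = {"user": "jane.doe@example.com", "purchase": "subscription"}
record["user"] = pseudonymize(record["user"])
print(record)  # the personal identifier never reaches the model
```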

Resources

  • The EU’s Digital Services Act. (n.d.). European Commission. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
  • DHS Publishes Guidelines and Report to Secure Critical Infrastructure and Weapons of Mass Destruction from AI-Related Threats | Homeland Security. (2024, April 29). U.S. Department of Homeland Security. https://www.dhs.gov/news/2024/04/29/dhs-publishes-guidelines-and-report-secure-critical-infrastructure-and-weapons-mass
  • IBM OpenPages. (n.d.). https://www.ibm.com/products/openpages
  • IEEE standard review — Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. (2017, July 1). IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/abstract/document/8058187
  • Hongo, H. (2023, January 31). PAI Submits Response to NIST’s Request for Information on AI Risk Management Framework - Partnership on AI. Partnership on AI. https://partnershiponai.org/pai-nist-framework-response/
  • DataRobot | Deliver Value from AI. (2024, June 28). DataRobot. https://www.datarobot.com/
  • AI Analytics Platform - Alteryx. (2024, July 26). Alteryx. https://www.alteryx.com/
  • AI Fairness 360. (n.d.). https://aif360.res.ibm.com/
  • OneTrust. (n.d.). https://www.onetrust.com/
  • Data Privacy Management Software & Solutions | TrustArc. (2024, July 5). TrustArc. https://trustarc.com/
  • LIME: Local Interpretable Model-Agnostic Explanations. (2022, March 31). C3 AI. https://c3.ai/glossary/data-science/lime-local-interpretable-model-agnostic-explanations/
  • Welcome to the SHAP documentation — SHAP latest documentation. (n.d.). https://shap.readthedocs.io/en/latest/
  • Darktrace | Cyber security that learns you. (n.d.). Darktrace. https://darktrace.com/
  • Bukaty, P. (2019). The California Consumer Privacy Act (CCPA). https://www.semanticscholar.org/paper/The-California-Consumer-Privacy-Act-(CCPA)-Bukaty/83b404d692d8a1f587cf4498dc86e8b3ca2c04f0
  • California Consumer Privacy Act (CCPA). (2024, March 13). State of California - Department of Justice - Office of the Attorney General. https://oag.ca.gov/privacy/ccpa
  • 820 ILCS 42/ Artificial Intelligence Video Interview Act. (n.d.). https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68
  • 740 ILCS 14/ Biometric Information Privacy Act. (n.d.). https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57
  • DCWP - Automated Employment Decision Tools (AEDT). (n.d.). https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page
  • Staff, T. (2024, June 7). AI Regulation in Texas: Legislative Findings From Recent Report. Texas Policy Research. https://www.texaspolicyresearch.com/ai-regulation-in-texas-legislative-findings-from-recent-report/
  • EU AI Act - EU Artificial Intelligence Act. (n.d.). https://www.euaiact.com/
  • General Data Protection Regulation (GDPR) – Legal Text. (2024, April 22). General Data Protection Regulation (GDPR). https://gdpr-info.eu/
  • Artificial Intelligence and Data Act. (2023, September 27). https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act

About the author

Jacob Andra is the founder of Talbot West and a co-founder of The Institute for Cognitive Hive AI, a not-for-profit organization dedicated to promoting Cognitive Hive AI (CHAI) as a superior architecture to monolithic AI models. Jacob serves on the board of 47G, a Utah-based public-private aerospace and defense consortium. He spends his time pushing the limits of what AI can accomplish, especially in high-stakes use cases. Jacob also writes and publishes extensively on the intersection of AI, enterprise, economics, and policy, covering topics such as explainability, responsible AI, gray zone warfare, and more.


About us

Talbot West bridges the gap between AI developers and the average executive who's swamped by the rapidity of change. You don't need to be up to speed with RAG, know how to write an AI corporate governance framework, or be able to explain transformer architecture. That's what Talbot West is for. 
