AI ethics: balancing progress and principles

By Jacob Andra / Published June 25, 2024 
Last Updated: July 25, 2024

We've all heard the horror stories: biased algorithms, privacy breaches, AI gone rogue. But does that mean we should shy away from artificial intelligence? Hardly. It means we need to get smarter about how we use it, with robust AI governance protocols and strong adherence to ethical principles.

9 big-picture ethical issues with AI

Beyond the scope of your organizational goals for AI implementation, there are bigger, societal issues with artificial intelligence and its implications. The World Economic Forum lists the following 9 ethical concerns. Some of these are already upon us, while others sound like science fiction:

  1. Unemployment: the impact of AI on jobs and how society will adapt to potential widespread job displacement.
  2. Inequality: how to distribute wealth created by AI-driven productivity gains.
  3. Humanity: the effects of AI on human behavior and interactions, including potential manipulation of human psychology.
  4. Artificial stupidity: guarding against mistakes and limitations in AI systems.
  5. Racist robots: eliminating bias in AI systems to ensure fairness and neutrality.
  6. Security: protecting AI systems from adversaries and malicious use.
  7. Evil genies: preventing unintended consequences from AI systems interpreting commands too literally or without full context.
  8. Singularity: maintaining control over increasingly complex and potentially superintelligent AI systems.
  9. Robot rights: considering the ethical treatment of AI as systems become more advanced and potentially sentient.

Of the above 9 ethical concerns, unemployment, inequality, the singularity, and robot rights are hard to address at the organizational level. But you can certainly safeguard against security breaches, bias, and artificial stupidity with the right protocols in place.

Bringing it down from a society-wide perspective, let’s look at the most crucial ethical dilemmas and potential risks that you can address on an organizational level.

AI ethics on an enterprise level


When implementing artificial intelligence of any sort in your organization, you want to stay connected to basic human values and ethical principles. Here are some guidelines you can use as a framework:

  1. Data privacy and consent: ensure proper collection, storage, and use of data used to train and operate AI systems, with clear consent from individuals whose data is being used.
  2. Transparency and explainability: develop AI systems that can provide clear explanations for their decisions, especially in areas that affect employees or customers.
  3. Accountability structures: establish clear lines of responsibility for AI-driven decisions and their consequences within the organization.
  4. Algorithmic fairness in internal processes: ensure AI systems used for hiring, promotion, or performance evaluation do not discriminate against protected groups.
  5. Customer protection: safeguard customers from potential harm or manipulation by AI-driven systems, particularly in marketing or service delivery.
  6. Intellectual property and data ownership: clarify ownership of AI-generated content and innovations, as well as the data used to train AI systems.
  7. Cybersecurity and system integrity: protect AI systems from tampering, hacking, or unauthorized access.
  8. Ethical use guidelines: develop clear policies on acceptable uses of AI within the organization, including restrictions on potentially harmful applications. Develop a robust AI governance framework and follow it.
  9. Stakeholder engagement: involve employees, customers, and other stakeholders in discussions about AI implementation and its ethical implications.

Top 10 principles to guide ethical AI implementation

If you follow these principles, you’ll buffer yourself against most ethical risks and pitfalls.

  1. Transparency: AI systems should be explainable, with decision-making processes understandable to users and stakeholders.
  2. Fairness and non-discrimination: AI should be designed and operated to avoid bias against groups or individuals.
  3. Privacy concerns and data protection: employ proper data management practices, including secure storage, controlled access, and regulated deletion of personal data.
  4. Accountability: establish clear lines of responsibility for the ethical implications of AI systems' use or misuse.
  5. Safety and reliability: ensure AI systems operate consistently within design parameters and do not pose threats to people's physical or mental well-being.
  6. Human agency and oversight: maintain meaningful human oversight and the ability to intervene in AI operations, especially for high-risk applications.
  7. Beneficence: consider the common good and strive for positive societal impact in AI development.
  8. Lawfulness and compliance: adhere to relevant laws and regulations throughout the AI system lifecycle.
  9. Security: protect AI systems and their data from cyber threats and unauthorized access.
  10. Continuous monitoring and improvement: conduct regular auditing and assessment of artificial intelligence systems to identify and address ethical issues or biases that may arise over time (see the audit sketch after this list).
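
To make principle 10 concrete, here's a minimal sketch of one recurring audit check, written in Python with hypothetical data. It computes per-group favorable-outcome rates and a disparate impact ratio; the 0.8 threshold follows the "four-fifths" rule of thumb from U.S. employment-selection guidance, and a real audit program would track many more metrics than this one.

```python
from collections import defaultdict

# Hypothetical audit log: (group, favorable_decision) pairs exported
# from an AI system's decisions over the review period.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(decisions):
    """Favorable-outcome rate per group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome  # True counts as 1
    return {g: favorable[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# Disparate impact ratio: lowest group rate over highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")

# The four-fifths rule is a screening heuristic, not a legal verdict:
# ratios below 0.8 warrant human investigation, per principle 10.
if ratio < 0.8:
    print("Flag for review: possible adverse impact.")
```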

Examples of AI ethics in action

Ethical dilemmas crop up across a wide range of artificial intelligence applications. In each of the following examples, human intelligence and human values counteracted the potentially unethical fallout of AI.

Healthcare

A hospital's AI-powered diagnostic tool showed bias against certain ethnic groups, leading to potential misdiagnoses.

The hospital partnered with diverse medical institutions to retrain the AI on a more representative dataset, implemented ongoing bias checks, and added a human oversight layer for high-stakes diagnoses.

Finance

A bank's AI-driven loan approval system was rejecting applications from qualified individuals in low-income neighborhoods.

The bank revised its AI model to consider alternative credit data, implemented fairness metrics, and introduced a human appeal process for rejected applications.
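
The example doesn't specify which fairness metrics the bank implemented, but one plausible choice is an equal-opportunity check: among applicants who later proved creditworthy, approval rates should be similar across segments. A minimal sketch with hypothetical back-test records:

```python
def true_positive_rate(records):
    """Approval rate among applicants who proved creditworthy."""
    qualified = [r for r in records if r["creditworthy"]]
    approved = sum(r["approved"] for r in qualified)
    return approved / len(qualified) if qualified else float("nan")

# Hypothetical back-test data, grouped by neighborhood segment.
segments = {
    "low_income": [
        {"creditworthy": True, "approved": False},
        {"creditworthy": True, "approved": True},
        {"creditworthy": False, "approved": False},
    ],
    "high_income": [
        {"creditworthy": True, "approved": True},
        {"creditworthy": True, "approved": True},
        {"creditworthy": False, "approved": True},
    ],
}

tprs = {seg: true_positive_rate(rows) for seg, rows in segments.items()}
# Equal-opportunity gap: qualified applicants should be approved at
# similar rates regardless of where they live.
gap = max(tprs.values()) - min(tprs.values())
print(tprs, f"equal-opportunity gap = {gap:.2f}")
```

A widening gap between audits would be a signal to retrain or re-weight the model, alongside the human appeal process the bank introduced.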

Retail

An e-commerce giant's AI-powered pricing algorithm was found to be engaging in unintentional price discrimination based on user location.

The company redesigned its algorithm to focus on supply-demand dynamics rather than user data, and implemented regular audits to ensure fair pricing across all demographics.

Manufacturing

An automotive company's AI-powered worker productivity monitoring system was found to be infringing on employee privacy and causing undue stress, leading to health issues and high turnover rates.

The company redesigned the system to focus on overall production efficiency rather than individual tracking, implemented clear data usage policies, allowed workers to access their own data, and introduced AI-assisted ergonomic improvements to enhance worker well-being alongside productivity.

Education

A university's AI admissions screening tool was found to favor applicants from certain high schools, potentially perpetuating educational inequality.

The university removed school-specific data from the AI's decision-making process, introduced a holistic review system combining AI recommendations with human judgment, and increased outreach to underrepresented schools.

Human resources

A large corporation's AI-driven resume screening tool was inadvertently filtering out qualified female candidates for technical positions.

The company retrained the AI using gender-neutral language, implemented regular bias audits, and introduced a diverse panel of human recruiters to review AI recommendations.
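
The example doesn't describe the company's retraining pipeline; as one illustrative (and deliberately simplistic) preprocessing step, here's a sketch that neutralizes overtly gendered terms before text reaches a screening model. Token scrubbing alone is known to be insufficient; Amazon's abandoned tool (see the FAQ below) learned gender from subtler proxies, which is why this step would be paired with the bias audits and human review the example describes.

```python
import re

# Hypothetical substitution table; a production system would need a
# far more careful, audited treatment than simple token swaps.
NEUTRAL_SUBSTITUTIONS = [
    (r"\bchairwoman\b|\bchairman\b", "chair"),
    (r"\bshe\b|\bhe\b", "they"),
    (r"\bwomen's\b|\bmen's\b", ""),
]

def neutralize(text: str) -> str:
    """Strip overtly gendered terms before feature extraction."""
    for pattern, replacement in NEUTRAL_SUBSTITUTIONS:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    # Collapse doubled spaces left behind by deletions.
    return re.sub(r"\s{2,}", " ", text).strip()

print(neutralize("Chairwoman of the women's robotics club"))
# -> "chair of the robotics club" (case handling kept crude on purpose)
```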

Legal

A law firm's AI-powered case prediction tool was showing bias towards outcomes favoring larger corporate clients.

The firm recalibrated the AI using a more balanced case history, introduced confidence intervals for predictions, and established a policy of using the AI tool as a supplement to, not a replacement for, legal expertise.

Agriculture

An agtech company's AI-driven pesticide application system was optimizing for maximum crop yield, leading to overuse of chemicals harmful to local ecosystems and potentially affecting food safety.

The company recalibrated its AI to balance yield with environmental impact and food safety standards. They incorporated data from environmental sensors and long-term ecological studies, implemented strict upper limits on chemical use, and introduced transparency features allowing consumers to trace the AI-guided growing practices for their food.

Media

A streaming service's artificial intelligence content recommendation system was creating "filter bubbles," limiting users' exposure to diverse content.

The company revised its algorithm to include "discovery" parameters, introduced user-controlled diversity settings, and began providing transparent explanations for its recommendations.
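
The streaming service's actual algorithm isn't public; as a minimal sketch of the "discovery parameter" idea, here's one way to reserve a user-controlled share of recommendation slots for items outside the personalized ranking:

```python
import random

def recommend(ranked_items, catalog, k=10, discovery=0.2, seed=None):
    """Fill k slots: top-ranked items plus out-of-profile picks.

    `discovery` is the user-controlled share of slots reserved for
    items outside the personalized ranking (0.0 = pure relevance).
    """
    rng = random.Random(seed)
    n_discover = round(k * discovery)
    picks = list(ranked_items[: k - n_discover])
    pool = [item for item in catalog if item not in picks]
    picks += rng.sample(pool, min(n_discover, len(pool)))
    return picks

ranked = [f"drama_{i}" for i in range(1, 11)]  # personalized ranking
catalog = ranked + ["documentary_1", "comedy_1", "foreign_film_1"]
print(recommend(ranked, catalog, k=5, discovery=0.4, seed=7))
```

Labeling each slot ("because you watched X" versus "outside your usual picks") would cover the transparent-explanations piece of the example.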

Public sector

A city's AI-driven traffic management system was optimizing flow in wealthy areas at the expense of poorer neighborhoods.

The city government recalibrated the system to prioritize overall transit time reduction, introduced equity audits, and created a citizen oversight committee to ensure fair implementation across all areas.

It all starts with good leadership

When leaders consistently emphasize the importance of ethical AI practices and demonstrate their commitment through actions, it sends a clear message throughout the organization that ethics isn't just a buzzword, but a fundamental aspect of how your company operates.

Leaders need to align AI ethical oversight with the company's core values. This alignment creates a coherent framework that guides all AI-related activities, from development to deployment. It's not enough to have a separate set of AI ethics guidelines; they should be an extension of what the company already stands for.

Of course, fostering an ethical AI culture requires more than just words. Leaders must be willing to allocate real resources to these initiatives. This means dedicating budget, time, and personnel to develop and implement ethical AI practices. It could involve investing in training programs, developing auditing tools, or funding research into emerging ethical challenges.

Or, it could involve working with Talbot West on a set of AI governance protocols to keep you on the right side of ethical and legal issues as you integrate artificial intelligence into your enterprise. Contact us to learn how we can help.

Future trends in AI ethics


As AI becomes more prevalent and powerful, we're seeing a rapidly evolving landscape that businesses need to navigate carefully.

Regulatory complexity

Evolving regulations and compliance requirements are at the forefront. Governments and regulatory bodies worldwide are scrambling to catch up with the rapid advancements in AI technology. Whether you think their actions are heavy-handed or too weak, we’re in for a deluge of regulation—though it won’t be evenly distributed. Some states and nations will act more strongly than others, creating a patchwork of compliance demands.

The EU's AI Act, formally adopted in 2024, categorizes AI systems by risk level and imposes stricter requirements on high-risk applications. In the U.S., there's no comprehensive federal AI regulation yet, but we're seeing movement at the state level and in specific sectors such as healthcare and finance. Companies will need to stay agile, updating their compliance strategies as these regulations take shape.

Autonomous and agentic AI

As AI systems become more autonomous and capable of making complex decisions, we'll need to grapple with questions of accountability and control. The development of AI in sensitive areas like healthcare or criminal justice will require particularly careful ethical scrutiny.

Generative AI will continue to grow in capability, and we're already seeing the rise of agentic AI: artificial intelligence systems that can make decisions and take actions on their own. As AI becomes increasingly agentic, expect ethical and moral principles to come into ever sharper focus.

Corporate reputation management

Consumers and employees alike are becoming more aware of, and concerned about, the ethical implications of AI. We've seen companies face significant backlash over perceived unethical use of AI, whether from biased hiring algorithms or privacy-invading data practices.

On the flip side, companies that demonstrate a strong commitment to ethical AI can build trust and loyalty among their stakeholders. As AI becomes more ubiquitous, a company's approach to AI ethics may become as important to its brand as its environmental or labor practices.

More internal resources allocated to AI ethics

Looking ahead, we can expect AI ethics to become an increasingly central part of corporate strategy and governance. Companies might need to create dedicated AI ethics boards or appoint chief ethics officers to navigate these complex issues.

We also expect a growing demand for AI auditing and certification processes, similar to how we see financial or environmental audits today.

Talbot West is here for the ride

We’re excited to watch artificial intelligence technology unfold at such a rapid pace, and we’re also aware of the risks and potential impacts it can have.

We believe in embracing the potential of AI while upholding societal values and universal ethical principles. If you’d like our help navigating these thorny issues, we’d love to talk.


FAQ related to AI ethics

What are some examples of AI acting unethically?

Here are three examples in which AI seemed to operate in a manner inconsistent with ethical and moral principles:

  1. Instagram's content prioritization algorithm: an investigation in 2020 found that Instagram's algorithm prioritized photos of users showing more skin, potentially impacting content creators and promoting objectification.
  2. Amazon's AI hiring tool, developed in 2014, showed bias against women in tech roles by 2015. Trained on mostly male resumes, it penalized female-associated terms. Despite efforts to fix it, Amazon couldn't ensure fairness and abandoned the project in 2018.
  3. A healthcare algorithm used healthcare costs as a proxy for health needs, unintentionally discriminating against Black patients, whose historical healthcare spending was lower due to unequal access to care. This led to underestimating their care needs compared to white patients with fewer health issues. Researchers found that predicting future health conditions, rather than costs, could reduce the bias (see the sketch below).
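
The third example comes down to label choice. Here's a toy illustration of that label swap, with hypothetical numbers:

```python
# Toy illustration (hypothetical numbers) of the label-choice fix:
# train on a direct measure of health need rather than on cost, which
# reflects unequal access to care rather than actual illness.
patients = [
    {"id": "A", "chronic_conditions": 4, "annual_cost": 3_000},  # sicker, lower spend
    {"id": "B", "chronic_conditions": 1, "annual_cost": 8_000},  # healthier, higher spend
]

# Proxy label (problematic): model learns to track historical spending.
by_cost = max(patients, key=lambda p: p["annual_cost"])
# Direct label (the fix): model learns to track illness itself.
by_need = max(patients, key=lambda p: p["chronic_conditions"])

print(f"cost label ranks patient {by_cost['id']} as neediest")  # B
print(f"need label ranks patient {by_need['id']} as neediest")  # A
```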

What are the commonly cited pillars of AI ethics?

Here are the five commonly cited pillars of AI ethics:

  1. Accountability: AI systems and their developers should be held responsible for the decisions and actions of the AI. This includes having clear lines of responsibility and oversight mechanisms.
  2. Transparency and explainability: AI systems should be understandable and their decision-making processes should be explainable to users and stakeholders. This helps build trust and allows for proper scrutiny.
  3. Fairness and non-discrimination: AI should be developed and used in ways that avoid bias and unfair discrimination against individuals or groups. This includes careful consideration of training data and regular auditing for biases.
  4. Privacy and data protection: AI systems must respect user privacy and handle personal data securely and ethically. This involves proper data management practices, including secure storage, controlled access, and regulated deletion.
  5. Beneficence: AI systems should be developed for the common good, to benefit humanity and enhance human life.

Why are people so worked up about AI? It's not just about robots taking our jobs (though that's definitely on the list). It's about fairness, privacy, and who's in control when things go wrong. It's about whether AI will make our world more equal or just amplify the inequalities we already have.

Let's break down why AI is such a hot-button issue. Here are the main existential and ethical risks critics highlight with artificial intelligence:

  1. Bias and discrimination: AI systems can perpetuate or amplify existing societal biases, often due to biased training data reflecting historical inequalities.
  2. Privacy and surveillance concerns: AI’s data requirements raise issues about personal information collection, use, and protection.
  3. Job displacement: concerns that AI automation will replace human workers, potentially leading to widespread unemployment.
  4. Lack of transparency and explainability: many AI systems operate as "black boxes," making decision-making processes difficult to understand or explain.
  5. Accountability issues: unclear responsibility when AI systems make mistakes or cause harm.
  6. Security vulnerabilities: AI systems can be susceptible to attacks or manipulation.
  7. Impact on human autonomy: concerns about maintaining human agency as AI takes on more decision-making roles.
  8. Ethical decision-making: AI may face complex moral dilemmas that are difficult to program for.
  9. Misinformation and manipulation: AI can be used to create convincing fake content or spread misinformation at scale.
  10. Killer robots: a fear that AI will get out of control and supersede humanity, potentially deciding to eliminate us.
  11. Intellectual property and creativity issues: questions about ownership, copyright, and human creativity value in AI-assisted work.

About the author

Jacob Andra is the founder of Talbot West and a co-founder of The Institute for Cognitive Hive AI, a not-for-profit organization dedicated to promoting Cognitive Hive AI (CHAI) as a superior architecture to monolithic AI models. Jacob serves on the board of 47G, a Utah-based public-private aerospace and defense consortium. He spends his time pushing the limits of what AI can accomplish, especially in high-stakes use cases. Jacob also writes and publishes extensively on the intersection of AI, enterprise, economics, and policy, covering topics such as explainability, responsible AI, gray zone warfare, and more.

