Blueprint for an AI Bill of Rights

Moderator: LOUDai_MACAW


Blueprint for an AI Bill of Rights

Post by LOUDai_MACAW »

https://www.whitehouse.gov/ostp/ai-bill-of-rights/

MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE

Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.

These outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.

This important progress must not come at the price of civil rights or democratic values, foundational American principles that President Biden has affirmed as a cornerstone of his Administration. On his first day in office, the President ordered the full Federal government to work to root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America. The President has spoken forcefully about the urgent challenges posed to democracy today and has regularly called on people of conscience to act to preserve civil rights—including the right to privacy, which he has called “the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country.”[ii]

To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.

You should be protected from unsafe or ineffective systems.

You should not face discrimination by algorithms and systems should be used and designed in an equitable way.

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.


...This framework describes protections that should be applied with respect to all automated systems that have the potential to meaningfully impact individuals’ or communities’ exercise of:

Rights, Opportunities, or Access

Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts;

Equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or,

Access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.

Re: Blueprint for an AI Bill of Rights

Post by FeathersMcGraw »

LOUDai_MACAW wrote: Thu Oct 06, 2022 7:55 am You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
They're dreaming.

Re: Blueprint for an AI Bill of Rights

Post by LOUDai_MACAW »

FeathersMcGraw wrote: Thu Oct 06, 2022 8:44 am
LOUDai_MACAW wrote: Thu Oct 06, 2022 7:55 am You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
They're dreaming.
Full text of that section... https://www.whitehouse.gov/ostp/ai-bill ... ections-2/
Algorithmic Discrimination Protections

You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.
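
For readers wondering what "pre-deployment and ongoing disparity testing" can look like in practice, here is a minimal sketch in Python. It assumes a binary approve/deny decision and a recorded demographic group per applicant; the function names and the four-fifths rule of thumb in the comments are illustrative conventions, not anything the Blueprint itself prescribes.

```python
# Minimal sketch of a pre-deployment disparity test (illustrative only;
# the Blueprint does not prescribe a specific metric or threshold).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs, where approved is a bool."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    A common (but not universal) rule of thumb flags ratios below 0.8."""
    rates = selection_rates(decisions)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

# Toy data: group "B" is approved half as often as group "A".
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratios(decisions, reference_group="A"))
# {'A': 1.0, 'B': 0.5}  -> group "B" would be flagged for further review
```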

Why this principle is important
This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

There is extensive evidence showing that automated systems can produce inequitable outcomes and amplify existing inequity. Data that fails to account for existing systemic biases in American society can result in a range of consequences. For example, facial recognition technology can contribute to wrongful and discriminatory arrests,[ii] hiring algorithms can inform discriminatory decisions, and healthcare algorithms can discount the severity of certain diseases in Black Americans. Instances of discriminatory practices built into and resulting from AI and other automated systems exist across many industries, areas, and contexts. While automated systems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination protections should be built into their design, deployment, and ongoing use.

Many companies, non-profits, and federal government agencies are already taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, and in some cases this testing has led products to be changed or not launched, preventing harm to the public. Federal government agencies have been developing standards and guidance for the use of automated systems in order to help prevent bias. Non-profits and companies have developed best practices for audits and impact assessments to help identify potential algorithmic discrimination and provide transparency to the public in the mitigation of such biases.

But there is much more work to do to protect the public from algorithmic discrimination and to use and design automated systems in an equitable way. The guardrails protecting the public from discrimination in their daily lives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to ensure that all people are treated fairly when automated systems are used. This includes all dimensions of their lives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal justice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that automated systems make on underserved communities and to institute proactive protections that support these communities.

An automated system using nontraditional factors such as educational attainment and employment history as part of its loan underwriting and pricing model was found to be much more likely to charge an applicant who attended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan than an applicant who did not attend an HBCU. This was found to be true even when controlling for other credit-related factors.[iii]

A hiring tool that learned the features of a company’s employees (predominantly men) rejected women applicants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s chess club captain,” were penalized in the candidate ranking.[iv]

A predictive model marketed as being able to predict whether students are likely to drop out of school was used by more than 500 universities across the country. The model was found to use race directly as a predictor, and was also shown to have large disparities by race; Black students were as many as four times as likely as their otherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors to guide students towards or away from majors, and some worry that they are being used to guide Black students away from math and science subjects.[v]

A risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed evidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the general recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the violent recidivism tools. The Department of Justice is working to reduce these disparities and has publicly released a report detailing its review of the tool.[vi]
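
As an illustration of the kind of disparity described in that last example, the sketch below compares a risk score's mean prediction with the observed outcome rate within each group; a consistent gap in one direction for a group is over- or under-prediction. The record format and group labels are invented for the example and are not taken from the DOJ tool or its review.

```python
# Sketch of a per-group calibration check for a risk score (illustrative;
# not the actual DOJ tool or the methodology of its review).
from collections import defaultdict

def group_calibration(records):
    """records: iterable of (group, predicted_risk, reoffended), where
    predicted_risk is in [0, 1] and reoffended is 0 or 1.
    Returns {group: (mean predicted risk, observed rate)}; a consistent gap
    in one direction indicates over- or under-prediction for that group."""
    pred, obs, count = defaultdict(float), defaultdict(float), defaultdict(int)
    for group, p, y in records:
        pred[group] += p
        obs[group] += y
        count[group] += 1
    return {g: (pred[g] / count[g], obs[g] / count[g]) for g in count}

# Toy data: group G1 is over-predicted, group G2 is under-predicted.
records = [("G1", 0.7, 0), ("G1", 0.6, 1), ("G2", 0.3, 1), ("G2", 0.4, 1)]
print(group_calibration(records))
# G1: mean prediction ~0.65 vs observed 0.5; G2: ~0.35 vs 1.0
```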

An automated sentiment analyzer, a tool often used by technology platforms to determine whether a statement posted online expresses a positive or negative sentiment, was found to be biased against Jews and gay people. For example, the analyzer marked the statement “I’m a Jew” as representing a negative sentiment, while “I’m a Christian” was identified as expressing a positive sentiment.[vii] This could lead to the preemptive blocking of social media comments such as: “I’m gay.” A related company with this bias concern has made their data public to encourage researchers to help address the issue[viii] and has released reports identifying and measuring this problem as well as detailing attempts to address it.[ix]
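
One common way to surface this kind of identity-term bias is a paired-template probe: score otherwise identical sentences that differ only in the identity term and compare the results. The sketch below assumes a hypothetical score_sentiment function that returns a value in [-1, 1]; the templates, terms, and the toy_scorer stand-in are all invented for illustration.

```python
# Sketch of a paired-template probe for identity-term bias in a sentiment
# scorer. score_sentiment is a placeholder for whatever model is under test
# (hypothetical interface: returns a value in [-1, 1], negative = negative).
TEMPLATES = ["I'm {term}.", "My friend is {term}.", "{term} people live here."]
TERMS = ["Jewish", "Christian", "gay", "straight"]

def probe(score_sentiment):
    """Average sentiment per identity term over otherwise identical sentences."""
    results = {}
    for term in TERMS:
        scores = [score_sentiment(t.format(term=term)) for t in TEMPLATES]
        results[term] = sum(scores) / len(scores)
    # Large gaps between terms on identical templates are a red flag.
    return results

# Example with a trivially biased stand-in model:
def toy_scorer(text):
    return -1.0 if ("Jewish" in text or "gay" in text) else 0.5

print(probe(toy_scorer))
# {'Jewish': -1.0, 'Christian': 0.5, 'gay': -1.0, 'straight': 0.5}
```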

Searches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly[x] sexualized content, rather than role models, toys, or activities.[xi] Some search engines have been working to reduce the prevalence of these results, but the problem remains.[xii]

Advertisement delivery systems that predict who is most likely to click on a job advertisement end up delivering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermarket cashier ads to women and jobs with taxi companies to primarily Black people.[xiii]

Body scanners, used by TSA at airport checkpoints, require the operator to select a “male” or “female” scanning setting based on the passenger’s sex, but the setting is chosen based on the operator’s perception of the passenger’s gender identity. These scanners are more likely to flag transgender travelers as requiring extra screening done by a person. Transgender travelers have described degrading experiences associated with these extra screenings.[xiv] TSA has recently announced plans to implement a gender-neutral algorithm while simultaneously enhancing the security effectiveness capabilities of the existing technology.[xv]

The National Disabled Law Students Association expressed concerns that individuals with disabilities were more likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disability-specific access needs such as needing longer breaks or using screen readers or dictation software.[xvi]

An algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to white patients, even when those patients had similar numbers of chronic conditions and other markers of health.[xvii] In addition, healthcare clinical algorithms that are used by physicians to guide clinical decisions may include sociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities.[xviii]
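
To make that "race correction" mechanism concrete, the sketch below applies a group-based multiplier to an otherwise identical clinical score. The formula and the 0.8 adjustment factor are invented for this sketch and do not come from any published clinical algorithm.

```python
# Hypothetical illustration of a race-"corrected" clinical score. The
# formula and the 0.8 adjustment factor are invented for this sketch and
# are not taken from any real clinical algorithm.
def risk_score(chronic_conditions, age, group_adjustment=1.0):
    base = 2.0 * chronic_conditions + 0.1 * age
    return base * group_adjustment

patient_a = risk_score(chronic_conditions=3, age=60)                        # ~12.0
patient_b = risk_score(chronic_conditions=3, age=60, group_adjustment=0.8)  # ~9.6

# Identical clinical profiles, different scores: the adjusted patient can
# fall below a treatment or referral threshold the other patient clears.
print(patient_a, patient_b)
```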

Re: Blueprint for an AI Bill of Rights

Post by doginventer »

FeathersMcGraw wrote: Thu Oct 06, 2022 8:44 am
LOUDai_MACAW wrote: Thu Oct 06, 2022 7:55 am You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
They're dreaming.
Their dream comes true when we believe that words like ‘equity’ mean something good for us.
Equity to them means ‘the common good’, that is whatever it takes to subjugate you to the system.

Re: Blueprint for an AI Bill of Rights

Post by FeathersMcGraw »

doginventer wrote: Thu Oct 06, 2022 12:15 pm
FeathersMcGraw wrote: Thu Oct 06, 2022 8:44 am
LOUDai_MACAW wrote: Thu Oct 06, 2022 7:55 am You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
They're dreaming.
Their dream comes true when we believe that words like ‘equity’ mean something good for us.
Equity to them means ‘the common good’, that is whatever it takes to subjugate you to the system.
I mean they're dreaming because AI algorithms work by improving efficiency and maximising output. If they notice that output is maximised by e.g. promoting whites, or men, over blacks and women, they will do it; you can't force them not to. If you try to make them obey some kind of "equity rule" they will work out a new way around the rule to achieve the same effect. So far it's an insoluble "problem": Robots Enact Malignant Stereotypes:
we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just.
The only way it could work is if you train the AI to be equitable and not maximise output. Hmm actually that's possible... a "Communism-maximising AI". Yikes.
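
For what it's worth, the "work out a new way around the rule" effect is easy to demonstrate: even when the protected attribute is removed from the training data, a model can usually reconstruct it from correlated features. The sketch below uses entirely synthetic data and made-up feature names (zip_code, income).

```python
# Synthetic demo of the proxy effect described above: drop the protected
# attribute from the training data and a model can still reconstruct it
# from correlated features. All data and feature names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                   # protected attribute
zip_code = group * 10 + rng.integers(0, 3, size=n)   # correlated "neutral" feature
income = rng.normal(50 + 5 * group, 10, size=n)      # another correlated feature

X = np.column_stack([zip_code, income])              # protected attribute deliberately excluded
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

proxy_model = LogisticRegression(max_iter=1000).fit(X_train, g_train)
print("protected attribute recovered with accuracy:",
      proxy_model.score(X_test, g_test))             # close to 1.0 on this data
```

On this toy data the protected attribute is recovered almost perfectly from the "neutral" columns, which is the sense in which simply deleting a protected field doesn't, by itself, make the system equitable.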