Bias

Make the Invisible Visible: Catalyst #BiasCorrect For International Women's Day 2021 Campaign Asks Challenging Questions

Retrieved on: 
Wednesday, March 3, 2021

NEW YORK, March 3, 2021 /PRNewswire/ -- Catalyst is observing International Women's Day (IWD) 2021 by challenging companies to join its #BiasCorrect "Make the Invisible Visible" global campaign, providing virtual backgrounds that encourage courageous conversations about unconscious bias in meetings.

Key Points: 
  • NEW YORK, March 3, 2021 /PRNewswire/ -- Catalyst is observing International Women's Day (IWD) 2021 by challenging companies to join its #BiasCorrect "Make the Invisible Visible" global campaign, providing virtual backgrounds that encourage courageous conversations about unconscious bias in meetings.
  • "Make the Invisible Visible really showcases the experiences of bias women have in remote work during the Covid-19 crisis," said Lorraine Hariton, Catalyst President and CEO.
  • Catalyst invites individuals and companies to visit the #BiasCorrect webpage on March 8, which provides resources for everyone to help understand, interrupt, and correct unconscious bias.
  • Bank of America and The Guardian Life Insurance Company of America are sponsors for this campaign.

ACM CONFERENCE SHOWCASES RESEARCH ON FAIRNESS, ACCOUNTABILITY AND TRANSPARENCY IN ALGORITHMIC SYSTEMS

Retrieved on: 
Thursday, February 25, 2021

Her work is in algorithms, particularly in data privacy, algorithmic fairness, algorithmic game theory, and online algorithms.

Key Points: 
  • Her work is in algorithms, particularly in data privacy, algorithmic fairness, algorithmic game theory, and online algorithms.
  • This year the conference is introducing plenary panel sessions to highlight key issues facing the field from disciplines and regions that have been historically underrepresented in the conference.
  • ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence.
  • ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Panther Data Solutions Launches the First Racial Bias Alert™ Tool for Enterprises

Retrieved on: 
Monday, February 22, 2021

WILLIAMSVILLE, N.Y., Feb. 22, 2021 /PRNewswire/ -- Panther Data Solutions, a new minority-owned tech company launched by Black Progress Matters' Minority-Business Incubation Program, announced today the rollout of their highly anticipated Racial Bias Alert tool.

Key Points: 
  • WILLIAMSVILLE, N.Y., Feb. 22, 2021 /PRNewswire/ -- Panther Data Solutions, a new minority-owned tech company launched by Black Progress Matters' Minority-Business Incubation Program, announced today the rollout of their highly anticipated Racial Bias Alert tool.
  • While most organizations today have racial bias response protocols and remediation strategies, Panther Data Solutions' Racial Bias Alert is the first solution to provide an immediate deterrent to racial bias in an organization.
  • Racial Bias Alert enables leading organizations to have an instant impact in eliminating racial bias in the workplace by monitoring internal communications [e.g.
  • Panther Data Solutions' Data Detect, Data Connect, Data Archive, Data Drop, Data Sentinel, and Racial Bias Alert solutions fully enable a comprehensive Data as a Resource foundation for the critical information governance that is demanded of every enterprise.

Upper Adams School District Working with Pennsylvania Human Relations Commission to Address Issues of Alleged Racism and Discrimination Shared Through Student Reports

Retrieved on: 
Monday, February 1, 2021

BIGLERVILLE, Pa., Feb. 1, 2021 /PRNewswire-PRWeb/ -- The Upper Adams School District today announced it is working with the Pennsylvania Human Relations Commission (PHRC) leadership to create a comprehensive plan to address past and present allegations of discrimination at Biglerville High School, located in the district.

Key Points: 
  • BIGLERVILLE, Pa., Feb. 1, 2021 /PRNewswire-PRWeb/ -- The Upper Adams School District today announced it is working with the Pennsylvania Human Relations Commission (PHRC) leadership to create a comprehensive plan to address past and present allegations of discrimination at Biglerville High School, located in the district.
  • PHRC will provide Unconscious Bias Training for school faculty, staff and administrators at no cost to the School District.
  • "Every student, of every background, is at the heart of our purpose and mission every day in the Upper Adams School District.
  • The School District plans to share additional updates with the public as the comprehensive plan is finalized to address the allegations of racism and discrimination.

Recruitment Platform PredictiveHire shares its ethical framework for AI

Retrieved on: 
Tuesday, January 19, 2021

The framework focuses on establishing a data-driven approach to fairness that provides an objective pathway for evaluating, challenging and enhancing fairness considerations.

Key Points: 
  • The framework focuses on establishing a data-driven approach to fairness that provides an objective pathway for evaluating, challenging and enhancing fairness considerations.
  • It includes a set of measures and guidelines to implement and maintain fairness in AI-based candidate selection tools.
  • For hiring managers and organisations, it provides assurance as well as a template for querying fairness-related metrics of AI recruitment tools.
  • PredictiveHire has become one of the most trusted mobile-first AI recruitment platforms, used by companies across Australia, India, South Africa, the UK and the US, with a candidate engaging with its unique AI chatbot, Phai, every two minutes.

Obesity Action Coalition Takes a stand against weight bias through National Public Awareness Campaign - Stop Weight Bias

Retrieved on: 
Tuesday, January 12, 2021

TAMPA, Fla., Jan. 12, 2021 /PRNewswire/ -- The Obesity Action Coalition (OAC) has launched a new national public awareness campaign, titled "Stop Weight Bias," in an effort to raise awareness about the negative impact of weight bias from childhood to adulthood and to highlight the areas of life where people affected by obesity face weight bias the most, such as healthcare, employment, education, the media and more.

Key Points: 
  • TAMPA, Fla., Jan. 12, 2021 /PRNewswire/ -- The Obesity Action Coalition (OAC) has launched a new national public awareness campaign, titled "Stop Weight Bias," in an effort to raise awareness about the negative impact of weight bias from childhood to adulthood and to highlight the areas of life where people affected by obesity face weight bias the most, such as healthcare, employment, education, the media and more.
  • The Stop Weight Bias Campaign is committed to raising awareness, putting a stop to weight bias and pushing equality forward.
  • The OAC is calling on the public to "be a part of the solution" and stop weight bias.
  • To learn more about the Stop Weight Bias Campaign and view the official public service announcement, visit www.StopWeightBias.com.

Six things to consider when using algorithms for employment decisions

Retrieved on: 
Friday, December 18, 2020

We have highlighted six key points organisations must consider before implementing algorithms for hiring purposes.

Key Points: 
  • We have highlighted six key points organisations must consider before implementing algorithms for hiring purposes.
    1. Bias and discrimination are a problem in human decision-making, so they are a problem in AI decision-making
  • So, you must assess whether AI is a necessary and proportionate solution to a problem before you start processing. This assessment should form part of your data protection impact assessment. We have written about what you need to consider when undertaking data protection impact assessments for AI in our guidance on AI and data protection.
    2. It is hard to build fairness into an algorithm
  • UK based organisations also need to remember there is no guarantee that an algorithm, designed to meet US standards, will meet UK fairness standards.
    3. The advancement of big data and machine learning algorithms is making it harder to detect bias and discrimination
  • This is an area where best practice and technical approaches continue to develop. You should monitor changes and invest time and resources to ensure you continue to follow best practice and your staff remain appropriately trained.
    4. You must consider data protection law AND equalities law when developing AI systems.
  • In several ways, data protection addresses unjust discrimination:
    • Under the fairness principle, AI systems must process personal data in ways an individual would reasonably expect.
    • The fairness principle requires any adverse impact on individuals to be justified.
    • The law aims to protect individuals’ rights and freedoms with regard to the processing of their personal data. This includes the right to privacy but also the right to non-discrimination.
    • The law states businesses must use appropriate technical and organisational measures to prevent discrimination when processing personal data for profiling and automated decision-making.
    • Organisations must undertake a data protection impact assessment when processing data in this way and ensure they build in data protection by design. These accountability mechanisms force organisations to consider how their processing might infringe on people’s rights and freedoms, including through discrimination and bias.
  • So, although both address unjust discrimination, organisations must consider their obligations under both laws separately. Compliance with one will not guarantee compliance with the other.
    5. Using solely automated decisions for private sector hiring purposes is likely to be illegal under the GDPR
  • Solely automated decision-making that has a legal or similarly significant effect is illegal under the General Data Protection Regulation (GDPR). There are three exemptions to this:
    • you have explicit consent from the individual,
    • the decision is necessary to enter into a contract, or
    • it is authorised by union or member state law.
  • However, these are unlikely to be appropriate in the case of private sector hiring. This is because:
    • consent is unlikely to be freely given due to the imbalance of power between the employer and the job candidate,
    • solely automated decision-making could almost always be replaced by a process involving a human decision-maker, so it is unlikely to be necessary to enter into a contract, and
    • the exemption allowing authorisation by union or member state law is not applicable to private business.
  • Organisations should therefore consider how they can bring a human element into an AI assisted decision-making process.
    6. Algorithms and automation can also be used to address the problems of bias and discrimination
    • Algorithms do not just impact society; society also impacts the use of algorithms.
    • This year two significant global events could lead to important changes in the use of algorithms for employment-based decision-making.
    • First, with many people losing their jobs due to the Covid-19 pandemic, more people will be applying for limited vacancies.
    • This could see employers looking at algorithms to ease the burden on HR departments.
    • The ICO has been exploring the use of algorithms and automated decision-making and the risks and opportunities they pose in an employment context.
    • Big data and machine learning (ML) algorithms are increasingly being used to make automated decisions that significantly impact many aspects of people's lives, including decisions related to employment.
    • New uses of algorithms and automation are being developed to address some of the problems of bias and discrimination in employment automation.
    • For example, algorithms can be used to detect bias and discrimination in the early stages of a system's lifecycle (see the sketch after this list).
    • So, whilst we may never be able to remove the most ingrained human biases, using automation, we can improve how we make decisions.
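
A minimal sketch of what such an early-lifecycle check might look like, assuming a simple hiring pipeline: it compares selection rates across candidate groups and flags large gaps. The field names, sample data and the 0.8 rule-of-thumb threshold are illustrative assumptions, not taken from the ICO guidance.

```python
# Minimal sketch (not from the ICO guidance): comparing selection rates across
# groups as one way to surface potential bias early in a hiring pipeline.
# The field names, sample data and the 0.8 rule-of-thumb threshold are
# assumptions chosen purely for illustration.
from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="shortlisted"):
    """Return the share of positive outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[outcome_key])
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    candidates = [
        {"gender": "female", "shortlisted": 1},
        {"gender": "female", "shortlisted": 0},
        {"gender": "female", "shortlisted": 0},
        {"gender": "male", "shortlisted": 1},
        {"gender": "male", "shortlisted": 1},
        {"gender": "male", "shortlisted": 0},
    ]
    rates = selection_rates(candidates)
    print("Selection rates:", rates)
    if disparate_impact_ratio(rates) < 0.8:  # rule-of-thumb threshold, not a legal test
        print("Selection rates differ substantially; review the process with a human in the loop.")
```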

    Conclusion

    The data protection implications of AI are embedded in the ICO’s priorities, and we have produced guidance for organisations on the use of AI and data protection and on explaining the decisions made by AI. As we continue to bring more technology experts into the organisation, our work on discriminatory outcomes resulting from the use of algorithms and personal data will continue to expand and develop...

DataRobot Introduces Bias & Fairness Testing in Latest Version of Enterprise AI Platform

Retrieved on: 
Tuesday, December 15, 2020

DataRobot, the leading enterprise AI platform, today introduced automatic Bias & Fairness Testing to identify bias in models with protected features such as gender and ethnicity, then provide guidance to resolve upstream issues and prevent bias from reoccurring in the future.

Key Points: 
  • DataRobot, the leading enterprise AI platform, today introduced automatic Bias & Fairness Testing to identify bias in models with protected features such as gender and ethnicity, then provide guidance to resolve upstream issues and prevent bias from reoccurring in the future.
  • In support of that commitment, the company today released the Bias & Fairness Testing feature to automatically identify model bias and determine its source.
  • With Bias & Fairness Testing, users can define protected dataset features and, through a guided workflow, choose the most appropriate fairness metric to fit their specific use case (see the sketch after this list).
  • Our new Bias & Fairness Testing capabilities further strengthen our customers' ability to build trustworthy, explainable AI models that generate real business value.
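
The following generic sketch is not DataRobot's API or workflow; it only illustrates, on synthetic data with hypothetical names, the kind of check such a feature automates: comparing a model's true positive rate across groups defined by a protected feature and flagging large gaps.

```python
# Generic illustration only: this is NOT DataRobot's API. It sketches the kind
# of check that automated bias & fairness testing performs, comparing a
# model's true positive rate across groups defined by a protected feature.
# The synthetic data, feature names and 0.1 tolerance are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                  # hypothetical 0/1 protected group
features = rng.normal(size=(n, 3))                 # ordinary model features
# The outcome correlates with the protected attribute, so a model trained only
# on `features` tends to recover positives unevenly across the two groups.
labels = (features[:, 0] + 0.8 * protected + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(features, labels)  # protected attribute excluded from training
predictions = model.predict(features)

def true_positive_rate(y_true, y_pred):
    """Share of actual positives that the model also predicts as positive."""
    positives = y_true == 1
    return float((y_pred[positives] == 1).mean())

tpr_by_group = {
    group: true_positive_rate(labels[protected == group], predictions[protected == group])
    for group in (0, 1)
}
print("True positive rate by group:", tpr_by_group)

# Flag the model when the groups' true positive rates diverge beyond a tolerance.
if abs(tpr_by_group[0] - tpr_by_group[1]) > 0.1:
    print("Fairness check failed: investigate upstream data and features.")
```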