AI Privacy, Protection, and a Bill of Rights

With the rapid growth of artificial intelligence (AI) applications in our personal and business lives have come increasing concerns over our ability to control the use of personal data, our agency over AI-based decisions that affect our lives, and our avenues of recourse when that agency is diminished. As a result, numerous public and private organizations have developed and issued policies and guidelines that seek to inform and protect people from the potential harm that AI might inflict.

Released in early October 2022 by the White House Office of Science and Technology Policy, the federal government’s Blueprint for an AI Bill of Rights provides high-level guidelines for the use of AI in both the private and public sectors and for the ways in which the technology should affect individuals in their personal and professional lives. The document’s conclusion describes it as “an overlapping set of backstops against potential harms.” It is essential to keep in mind, though, that these guidelines, the product of several years of work, are not binding in any way; rather, they join a number of other voluntary efforts across the U.S. and the rest of the world to adopt rules regarding transparency and ethics in AI.

Such guidelines have emerged from government agencies (e.g., the Department of Defense and the Federal Trade Commission), non-government groups, and private companies (Facebook, Google, etc.). The DoD’s publication of its AI ethical principles struck an especially strategic and positive tone, with then-Secretary of Defense Mark Esper asserting, “The adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DoD AI Strategy while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

The Blueprint for an AI Bill of Rights comprises five key principles: 

  • People should be protected from systems deemed “unsafe or ineffective.” 
  • People shouldn’t be discriminated against via algorithms, and AI-driven systems should be made and used “in an equitable way.” 
  • People should be kept safe “from abusive data practices” by safeguards built into AI systems and have control over how data about them is used. 
  • People should be aware when an automated system is in use and understand how it could affect them. 
  • People should be able to opt out of such systems “where appropriate” and get help from a person rather than a computer.

During a press briefing on the Blueprint’s release, Deputy Director of the White House Office of Science and Technology Policy Alondra Nelson said, “Much more than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies.”

A large and growing number of policies and guidelines

America is by no means alone in attempting to set citizens’ minds at ease concerning the potential uses of AI technology, as evidenced by the Global Partnership on AI (GPAI), formed in 2020 with members that include both the EU and the US. In addition, UNESCO and the OECD have established principles for the proper use of AI. For the past five years, the World Economic Forum has also been developing and communicating clear, accessible governance frameworks for AI in government and business.

With the plethora of new guidelines and policies emerging from various public and private organizations, it is unclear at this point how the new guidance will relate to, support, or complement existing directives on AI technology, such as Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government and the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework, or statements issued by agencies such as the Equal Employment Opportunity Commission (EEOC), the Department of Health and Human Services (HHS), and the Consumer Financial Protection Bureau (CFPB).

A longstanding commitment to user protection

SparkCognition maintains constant awareness of applicable AI-related policy guidelines and remains steadfastly committed to both the letter and the spirit of these policies. We make every effort to eliminate potential sources of bias and to ensure that all plausible use cases for our solutions are consistent with the fair, equitable treatment of employees, customers, and any other individuals who might be affected by their operation. Specific examples of these efforts include:

  • The ability to mask employee faces and identities when using our Visual AI Advisor product (a generic sketch of this approach appears below)
  • Protection of sensitive user data through our Endpoint Protection and other cybersecurity products
  • Use of various explainability-enhancing techniques so that users can understand the conclusions reached by our predictive maintenance and other solutions
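
To illustrate what the first of these capabilities can look like in practice, the minimal sketch below detects and blurs faces in an image using the open-source OpenCV library. It is a generic, hypothetical example, not SparkCognition’s implementation; the Haar cascade model and blur parameters are illustrative choices only.

```python
# Generic face-masking sketch using OpenCV (illustrative only; not
# SparkCognition's implementation). Requires: pip install opencv-python
import cv2

def mask_faces(frame):
    """Detect faces in a BGR image and blur them in place."""
    # Load OpenCV's bundled Haar cascade for frontal-face detection.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavy Gaussian blur
        # so the individual is no longer identifiable.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

# Example usage: mask faces in a single frame before storage or review.
# frame = cv2.imread("frame.jpg")
# cv2.imwrite("frame_masked.jpg", mask_faces(frame))
```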


To learn more about SparkCognition and our industry-leading portfolio of AI-powered solutions, visit us at https://www.sparkcognition.com/.   

