EdMyst AI Principles

Last updated: August 1, 2023

At EdMyst, we are committed to using AI in ways that are innovative and trustworthy and that respect human rights and democratic values. These principles set standards for our use of AI that are practical, flexible enough to stand the test of time, and aligned with international standards.


1. Drive inclusive growth, sustainable development, and well-being

Engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as:

  • augmenting human capabilities and enhancing creativity
  • advancing inclusion of underrepresented populations
  • reducing economic, social, gender, and other inequalities
  • protecting natural environments

In doing so, we invigorate inclusive growth, sustainable development, and well-being.


2. Human-centered values and fairness

Implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.


3. Transparency and explainability

Commit to transparency and responsible disclosure regarding our AI systems; provide meaningful information, appropriate to the context and consistent with the state of the art:

  • to foster a general understanding of AI systems
  • to make stakeholders aware of their interactions with AI systems
  • to enable those affected by an AI system to understand the outcome
  • to enable those adversely affected by an AI system to challenge its outcome, based on plain and easy-to-understand information about the factors and the logic that served as the basis for the prediction, recommendation, or decision.


4. Robustness, security, and safety

AI systems should be robust, secure, and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose an unreasonable safety risk.

Ensure traceability, including in relation to datasets, processes, and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of the art.

Apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety, and bias.


5. Accountability

Be accountable for the proper functioning of AI systems and for respecting the above principles, based on our roles, the context, and consistency with the state of the art.