As an international supplier of artificial intelligence (AI) technology, we at AIQ believe AI has the potential to revolutionise whole sectors and improve lives across the globe. However, we recognise that such technology has the capability to be used in ways that are morally wrong and therefore we also recognise the critical importance of responsible development and deployment of AI, ensuring it aligns with ethical principles and benefits all members of society.
Through this statement and our work, our objective is to foster responsible AI practices, ensuring fairness, transparency, accountability, and product safety in all aspects of AI development within AIQ, including but not limited to AI algorithms, models, data collection, training, testing, deployment, and monitoring of products. By adhering to this statement, AIQ aims to build trust with clients, stakeholders, and end-users while upholding the highest ethical standards in AI development.
In order to do this, we have identified five (5) core principles to abide by, namely:
Accountability;
Transparency;
Fairness and Bias Mitigation;
Privacy; and
Security and Product Safety.
These overarching themes are intended to guide the manner in which AIQ approaches its work, in particular the development and deployment of AI systems. In making any decisions about the use of our AI technologies, we will use the above to determine if and how we should progress.
For the purposes of this statement, to provide more context, we have set out further information about each of these principles and what AIQ is actively doing to build these into its processes.
AI should be developed in a way that does not cause harm to the living world. Critical to achieving this is every active participant taking responsibility for their role in AI development. Clear governance structures are required to ensure all those individuals are held to account for their actions, encouraging better ethical practice through peer oversight.
Our commitments:
We take full responsibility for the development, deployment, and potential impacts of our AI solutions.
We establish clear governance structures and oversight mechanisms to ensure ethical AI practices throughout our organisation (including the use of ethical impact assessments).
We continuously monitor and improve our technologies through regular audits.
We actively engage with stakeholders, including experts, civil society, and policymakers, to develop and implement ethical AI frameworks.
Whilst accountability is important, we can only be held accountable if our methodologies and models are clear and easily understood, even by novices.
Explainability refers to how an AI system arrived at a particular decision, focusing on understanding the internal reasoning of the model. Techniques used to achieve this include feature analysis, saliency maps and rule extraction, all of which help diagnose errors, identify biases, and improve model performance.
Transparency, on the other hand, focuses on the broader picture, namely the what and how of the entire system: who built the model, what is its decision-making process, and what are its limitations?
Our commitments:
We endeavour to be open about the methodologies used to build, train, and run our AI systems. The data used to train AI systems should be traceable and of known provenance.
We strive to build transparent AI models that are understandable by humans, whenever possible.
We explain the rationale behind AI decisions and provide mechanisms for users to access and comprehend how their data is used, subject always to any confidentiality obligations and commercial agreements.
We openly communicate the limitations and potential risks associated with our AI solutions.
AI should benefit everyone, not just some. Unfortunately, bias (whether conscious or otherwise) is an inevitable feature of human decision-making, from which AI models learn. Just as society strives to address its own biases, AI models should actively minimise bias in their own decision-making, regardless of the remit or sector in which the technology is deployed. We believe that AI should be inclusive, and that its development should be attainable by anyone.
Our commitments:
We prioritise the positive societal impact of AI.
We check the output of any AI system for accuracy relative to the task requirements, to ensure it does not break any of the principles in this statement.
We promote transparency in decision-making algorithms and provide mechanisms for users to challenge unfair outcomes, subject always to any confidentiality obligations and commercial agreements.
We regularly conduct algorithmic fairness audits to identify and mitigate potential biases within our AI models.
We establish mechanisms for human review and appeal of AI decisions that may be perceived as unfair or discriminatory.
We conduct awareness campaigns to educate all employees about the importance of data privacy and ethical AI.
We build ethical practice into our working practices and culture as an organisation, recognising that we all have a role to play in its success.
We strive to use diverse data sets that represent various population groups to minimise bias and promote fair outcomes for all.
We strive to develop and deploy AI models free from exploitation, bias, or discrimination based on sensitive characteristics such as race, gender, religion, or disability, or on membership of any other group.
We actively combat bias in our data, algorithms, and training processes through rigorous audits and mitigation techniques.
We promote diversity within our development teams to ensure multiple perspectives and approaches to identify and mitigate potential biases.
We employ proactive techniques to detect and prevent bias throughout the AI development lifecycle, from data collection to model training and deployment.
We strive to develop explainable AI models that allow us to understand and address potential sources of bias within their decision-making processes.
We actively stay informed about the latest research in bias detection and mitigation, continuously improving our practices and adapting to new challenges.
We train all employees on ethical practices, data privacy, and relevant regulations.
Privacy in AI is not just about legal compliance; it is about protecting individual and customer autonomy, preventing discrimination, fostering trust, mitigating security risks, and enabling responsible innovation. We respect the rights of individuals and customers to control their data, and we strive to build an AI future where privacy and progress go hand-in-hand.
Our commitments:
We uphold the highest standards of data privacy, adhering to all relevant legal and regulatory requirements to protect data from unauthorised access, loss, or disclosure.
We obtain informed consent from individuals before using their data for AI development and deployment, and we collect data only for specific and legitimate purposes.
We collect and use the minimum amount of data necessary for AI functionality, and such data will be retained only for the required period, following applicable legal requirements and organisational policies.
When engaging with third-party AI providers or collaborators, we use contractual agreements to ensure data privacy and compliance.
Ceding any sort of decision-making to a technology requires a certain level of trust. Trust requires privacy and therefore security. The protection of sensitive data will be paramount in all AI deployments, complying with relevant data protection laws and regulations. No one should have their rights to privacy abused in the creation or deployment of AI systems.
Our commitments:
We uphold the highest standards of data security, adhering to all relevant legal and regulatory requirements to protect data from unauthorised access, loss, or disclosure.
We implement robust security measures to protect sensitive data from unauthorised access, use, or disclosure, and we subject all AI solutions to rigorous risk assessments to safeguard their safety and integrity.
We prioritise rigorous testing and evaluation protocols before deploying any AI product, ensuring its stability, reliability, and minimal risk of failure or unintended consequences.
We implement safeguards to maintain human oversight and control over AI systems, preventing autonomous actions that could harm individuals or society.
We openly communicate potential risks and limitations of our AI products and provide clear instructions for safe and responsible use.
We actively engage in research and development to improve the safety and reliability of our AI products, adapting to evolving risks and threats.
AIQ has adopted an internal Ethical AI policy document (applicable to all employees, contractors and partners), setting out clear guidelines for achieving the above principles. In particular, our senior leadership team has oversight of this policy and measures our engagement against the principles set out above. Compliance with this statement will be regularly reported to stakeholders.
We recognise that AI ethics is an evolving field, and we remain committed to continuous learning and improvement. We actively engage in research, development, and collaboration to advance ethical AI practices and contribute to a responsible future for this powerful technology.
We commit to contribute to and uphold industry best practice for ethics in AI, using AI for positive activities, not for harm, helping to ensure we are creating solutions, not problems.
This statement serves as a public commitment to our ethical principles. AI is a fast-moving and rapidly changing area of technology, and we expect that we may need to revise this statement to account for as-yet unforeseen use cases or changes in international legislation. We will always be guided by the key principles listed above and will review this statement regularly to determine whether additional items should be added.
Should you have any questions regarding this statement, please do not hesitate to contact us at: [general@aiqintelligence.ae]