RESPONSIBLE AI GUIDELINES
1.
BENEFICIAL AI
We aim to build AI systems that benefit users and society. We promote advances in business operations through the adoption of ethical AI. We embrace the discussion on the societal impact of AI and the future of work. We aim to avoid AI’s negative societal and business impacts, such as misinformation, bias amplification, and promotion of societal divides. We aim at AI that is fair, inclusive, and non discriminative. We do not develop systems that are meant to be used for malicious purposes. For consequential and life-critical applications, we promote the review of our AI algorithms by our customers or experts commissioned by our customers.
2.
HUMAN-CENTRIC AI
We believe that AI systems should assist human beings in their work, supporting and enhancing decision-making processes in businesses. Our systems allow for a review of the outputs of AI algorithms. For planning applications, this usually means the results can be reviewed before being put into practice. Where real-time decision-making is involved, we allow for human monitoring and auditing. We build AI that provides tools for users rather than building fully autonomous agents. We believe that AI should increase human agency and that accountability needs to remain with human beings.
3.
ALIGNED AI
AI needs to align with human values and objectives. In the business context, this means that algorithms should follow the objectives of users and/or other stakeholders. Representing human and business objectives is an integral part of our continuous algorithm engineering. This includes ensuring that ethical objectives can be followed. With search and optimization algorithms, this means that the judgment of what a “good solution” is, can be controlled, e.g., in the form of an objective function. In machine learning, training data is analyzed for bias. Where possible and appropriate, we make use of transparent and explainable AI.
4.
PRIVACY-PRESERVING AI
AI needs to align with human values and objectives. In the business context, this means that algorithms should follow the objectives of users and/or other stakeholders. Representing human and business objectives is an integral part of our continuous algorithm engineering. This includes ensuring that ethical objectives can be followed. With search and optimization algorithms, this means that the judgment of what a “good solution” is, can be controlled, e.g., in the form of an objective function. In machine learning, training data is analyzed for bias. Where possible and appropriate, we make use of transparent and explainable AI.
5.
RELIABLE AI
Especially for consequential and life-critical applications, we are committed to delivering AI applications of high quality and reliability. This means that it must be made sure that such systems meet their desired effects. To that end, we use good software engineering practices to design, develop, and test algorithms. With machine learning algorithms, this especially includes the analysis of training data for bias. Machine learning algorithms are tested for unreasonable or other unwanted results. User interfaces make algorithmic results transparent to users. When operating AI-based software, audit trails or other software capabilities provide means of monitoring the software to remain reliable under changing conditions.
6.
SAFE AI
INFORM designs and develops AI algorithms for different markets. Their impact is usually clearly delimited to the business domain in which they operate, with clearly defined interfaces to surrounding business domains. This is normally always given for search and optimization algorithms as well as for focused AI use cases. Where Large Language Models or similar AI logic is used for which safety issues may arise, INFORM will contain the impact of such model. Where containment to the business domain is not evident from the system design, the AI system will be subjected to internal review for potential impacts, e.g., for excluding malicious API calls, code injection, jailbreaking, or other malicious practices.