Artificial Intelligence Principles
Building on our Code of Ethics and our cloud-native AI platform, US AI is fully committed to the responsible development and use of Artificial Intelligence by our company, our partners, and our customers. Whether a capability is developed by US AI or by our customers using our products, our AI is aligned to the betterment of people and the world through our four principles of Usefulness, Safety, Accountability, and Interpretability. By combining our Responsible AI Policy, Code of Ethics, Secure By Design philosophy, and Human-Centered Design techniques, our customers and the broader community can rest assured that the best-in-class capabilities we bring to support their mission- and business-critical functions meet the standards our modern world requires.
PRINCIPLE 1: USEFULNESS
Defined as the combination of utility (needed features) and usability (ease of use), usefulness serves as a key component in Responsible AI efforts to ensure that benefits are accessible and inclusive to all, risks and limitations are clearly identified and disclosed, and the performance and output of AI models are subject to regular review.
By incorporating Section 508-compliant design and feedback from diverse groups into Human-Centered Design efforts for AI features, we enable users to identify biases and provide additional feedback, ensuring that AI usage does not benefit one group to the detriment of another. This pertains not only to the outputs of responsible AI, but also to the processes used to create the AI capabilities.
The usefulness of responsible AI also requires that US AI and our customers share an understanding of the limitations and risks associated with the use of AI and of the context in which these capabilities are deployed. US AI will always disclose where data, models, or outputs may not fit a user's intended purpose or process, and we expect the same of our partners and customers.
Even if an AI capability perfectly fits the intended use case, responsible AI requires that the performance and outputs of AI models and processes are regularly reviewed. US AI will regularly conduct reviews and audits of AI models, including checks for changes in accuracy or bias, and will assist its customers in reviewing their own AI work by providing those evaluation tools. By checking a model's fitness for purpose, US AI and its partners can monitor for drift introduced into or by the model for the particular use case.
PRINCIPLE 2: SECURITY AND SAFETY
As a FedRAMP High Cloud Service Provider, US AI treats security and safety as paramount to responsible AI: delivering the benefits of AI while reducing vulnerabilities introduced by bad actors or insider mistakes, whether through the environment, the data, or the models themselves.
When using US AI products to develop and deploy responsible AI capabilities, users are provided with default security in our platform through automated scanning, self-healing architecture, and individual workspaces and tenants both for teams and for customer organizations as a whole. All work done with US AI is contained in environments managed by the user, and no connections or data exchanges will be established without user consent or initiation.
In addition to external security, responsible AI also requires that users operate safely by respecting privacy, the quality of the data used, and how that data is used or reused. US AI trains general-use AI capabilities only on publicly available data, and customer-specific AI capabilities only on the customer's data with consent. No data will be transferred outside of that customer's environment to power additional models or capabilities, and all efforts will be made to prevent misuse of others' private data.
While the final output of a model depends on the quality of the data and how it is trained, responsible AI begins with the foundational models themselves. To maximize the benefits of other responsible AI principles such as accountability and trust, US AI starts with high-quality, well-evaluated open-source models that have been vetted and engaged with by both the technology and academic AI communities. While the foundational models may be open source, US AI and its customers may develop closed-source models based on additional training and data, which will be made available only through the appropriate channels and workspaces governed by these principles.
PRINCIPLE 3: ACCOUNTABILITY AND TRANSPARENCY
As with any other technology, responsible AI requires accountability and transparency through robust oversight mechanisms including a Software Bill of Materials, governance structures, and human oversight of the AI process.
As recent years have demonstrated, the software supply chain, including that of responsible AI, has proven to be a major target for malicious activity and a source of vulnerabilities in otherwise secure systems. As part of US AI's commitment to secure software development, US AI will work with its customers to provide an AI Software Bill of Materials (SBOM) listing data sources, models applied, summary information about those models, and additional details about responsible AI processes.
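The AI SBOM contents described above can be sketched as a simple structured record. The field names and entries below are illustrative assumptions for the sake of example, not US AI's actual schema:

```python
# Illustrative sketch of an AI SBOM record covering the elements named
# above: data sources, models applied, model summaries, and responsible
# AI processes. All field names and values are hypothetical examples.
import json

ai_sbom = {
    "data_sources": [
        # Hypothetical publicly available training corpus entry
        {"name": "public-corpus-example", "license": "CC-BY-4.0"},
    ],
    "models": [
        {
            "name": "base-model-example",  # hypothetical open-source foundation model
            "version": "1.0",
            "summary": "General-purpose model vetted by the open-source community",
        },
    ],
    "responsible_ai_processes": [
        "regular accuracy and bias review",
        "human oversight checkpoints",
    ],
}

# Emit the SBOM as machine-readable JSON for customer review
print(json.dumps(ai_sbom, indent=2))
```

In practice such a record would be generated and versioned alongside each model release so that governance bodies can review changes over time.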
While the information provided in the AI SBOM is critical for accountability, there must also be a robust governance process in place to engage with that information and make decisions accordingly. At US AI, responsible AI relies on existing appropriate governance structures such as the FedRAMP Significant Change Request process rather than waiting for new frameworks to be developed. Combined with standard software and change management processes such as Change Control Boards and Steering Committees, responsible AI has governance built in from the start.
At the heart of all concerns about irresponsible AI usage is the abdication of human responsibility and oversight of AI. At US AI, there will always be a human at the helm of the governance process, and users will always be able to provide feedback to those governing the creation and use of responsible AI. US AI will monitor for the creation and use of self-modifying AI capabilities that do not require human intervention, and will require that responsible AI only assist users and human decision makers in all aspects of the AI lifecycle, including creation, operations, and shutdown.
PRINCIPLE 4: INTERPRETABILITY AND TRUST
At US AI, the relationship between the user and responsible AI capabilities is central to unlocking the potential for AI to help people. Interpretability and trust are the key principles that make this relationship work for the creators, users, and anyone else affected by or benefiting from AI capabilities.
For AI to be adopted broadly, responsible AI must be understood by all stakeholders regardless of technical ability or background, whether business users, data scientists, or members of the general public impacted by AI use. US AI supports the effort to democratize AI creation and usage by providing robust AI platforms and tools where teams can work together to create and train AI models while safely sharing important details and functions. This also includes the opportunity for the general public to provide feedback and challenge models through our Customer Service Center.
As AI use cases become more common, users' perception of the reliability of AI models and training will be key to the long-term trust in and success of responsible AI. As mentioned in our first principle of usefulness, US AI will share the performance and output review data of AI models with the creators and communities around those models, as well as with customers who rely on them for their capabilities. This includes summary information as well as internal and external audit results. Repeated evaluations and updates will build the trust that serves as the foundation not only of human relationships, but of our relationship with technology such as responsible AI.
Overall, the responsibility for using AI safely, in the proper context, free of bias, and as a conscious decision lies with us individually and collectively. As AI capabilities grow and new frameworks are introduced, human direction will be needed to confirm and make changes both to responsible AI and to the principles governing its use. US AI reserves the right to shut down or pause the development or use of any AI that is contrary to the above principles. By following these principles, US AI, our partners, and our customers can harness the power of AI to make the world a better place for everyone.