NVIDIA recently joined the US Artificial Intelligence Security Institute Consortium (AISIC), which is part of the National Institute of Standards and Technology’s efforts to promote the safe and reliable development and deployment of artificial intelligence. As a member of AISIC, NVIDIA will collaborate with NIST and other consortium members to advance the consortium’s goals and objectives.

NVIDIA has been actively involved in several development initiatives aimed at promoting the security of artificial intelligence. The company supported the Biden administration’s 2023 voluntary AI security commitments and also announced a $30 million contribution to the US National Science Foundation’s National Artificial Intelligence Research Resource pilot program. Additionally, NVIDIA developed open source software, NeMo Guardrails, designed to ensure the accuracy, appropriateness, and security of large language model responses.

Through AISIC, NIST aims to foster knowledge sharing, applied research, and evaluation activities to foster innovation in reliable artificial intelligence. AISIC members bring technical expertise in various fields related to AI management, systems development, psychometrics, among others. By participating in working groups, NVIDIA plans to use its computing resources and best practices to implement an AI risk management framework and AI model transparency. The company will also leverage several NVIDIA-developed open-source AI security tools such as red teaming tools that support AISIC’s goals.

AISIC is focused on creating tools, methodologies, standards for advancing the development and application of AI in a safe manner. With its focus on safety regulations around data privacy and ethical considerations around AI decision making. With this goal in mind AISIC aims to provide guidance on how organizations should approach data governance for their machine learning models.

In conclusion, NVIDIA’s participation in AISIC builds on its past work with stakeholders to ensure responsible development and deployment of AI technology. Through its collaboration with AISIC members such as leading AI creators academics government researchers civil society organizations etc., it hopes to advance knowledge sharing applied research evaluation activities that foster innovation in reliable artificial intelligence while leveraging its computing resources best practices implementing an AI risk management framework transparency tooling etc

By Editor

Leave a Reply