├── CHARTER.md ├── LICENSE ├── README.md ├── SECURITY.md ├── code-of-conduct.md ├── landscape ├── AI-related_threats.md ├── Contributors.md ├── Executive_Summary.md ├── External_Resources.md ├── Introduction.md ├── Landscape.md ├── OpenSSF_Projects.md ├── Personas.md ├── Provenance_and_Legal_Issues.md └── ai_ml_landscape.md ├── meeting_notes └── 2023 OpenSSF WG for AI ML Agenda & Meeting Notes.md ├── mvsr.md └── telemetry ├── AI_Chips_and_Hardware.md ├── AI_Community_and_Collaboration.md ├── AI_Regulations.md ├── AI_Research_and_Development.md ├── AI_Security_and_Telemetry.md ├── AI_Standards_and_Best_Practices.md ├── AI_and_Its_Impact.md ├── AI_and_Its_Impact_Google_Docs.md ├── Appendix.md ├── Conclusion.md ├── Introduction.md ├── Security_in_AI_Applications.md ├── Understanding_Large_Language_Models_LLMs.md ├── Understanding_Vision_Language_Action_Models.md ├── Understanding_Vision_Language_Models_VLMs.md ├── Understanding_the_AI_Supply_Chain.md └── ai_ml_telemetry.md /CHARTER.md: -------------------------------------------------------------------------------- 1 | ## Technical Charter for Open Source Security Foundation 2 | 3 | ## AI/ML Security Working Group 4 | 5 | ### Adopted 4 March, 2024 6 | 7 | This Technical Charter sets forth the responsibilities and procedures for technical contribution to, and oversight of, the OpenSSF AI/ML Security Working Group open source community, which has been established as a Working Group (the "Technical Initiative") under the Open Source Security Foundation (the “OpenSSF”). All contributors (including committers, maintainers, and other technical positions) and other participants in the Technical Initiative (collectively, “Collaborators”) must comply with the terms of this Technical Charter and the OpenSSF Charter. 8 | 9 | ## 1. Mission and Scope of the Technical Initiative 10 | 11 | We are an incubating WG is going through the initial life cycle phases to address open source software security for AIML workloads. The developing scope involves analyzing open source AIML data sets, data models and OSS code mashups used in AIML to articulate the specific security concerns and controls for the subset of OSS AIML workloads. This is important because the accelerated adoption of AIML has an OSS and security component that is not well understood, and OpenSSF can play an industry leading position. 12 | 13 | This committee interlocks with multiple groups to eliminate duplication and to 14 | serve as the central place for collating any recommendations for using AI 15 | securely ("security for AI" vision) and for using AI to improve security of 16 | other OSS software products ("AI for security" vision). 17 | 18 | Our [Mission, Vision, Strategy, and Roadmap (MVSR)](https://github.com/ossf/ai-ml-security/blob/main/mvsr.md) document lists significant interlocks and current plans. 19 | 20 | ## 2. Technical initiative roles 21 | 22 | - a. The primary points of contact are the lead and co-lead of the Technical Initiative, who are listed in the [README](https://github.com/ossf/ai-ml-security/blob/main/README.md). 23 | 24 | - b. The Technical Initiative generally will involve Collaborators and Contributors. The Technical Initiative may adopt or modify additional roles so long as the roles are documented in the Technical Initiative’s repository. Unless otherwise documented: 25 | 26 | - i. Contributors include anyone in the technical community that contributes effort, ideas, code, documentation, or other artifacts to the Technical Initiative; 27 | 28 | - ii. Collaborators are Contributors who have earned the ability to modify ("commit") text, source code, documentation or other artifacts in the Technical Initiative’s repository or direct the agenda or working activities of the Technical Initiative; and 29 | 30 | - iii. A Contributor may become a Collaborator by majority approval of the existing Collaborators. A Collaborator may be removed by majority approval of the other existing Collaborators. 31 | 32 | - iv. Maintainers are the initial Collaborators defined at the creation of the Technical Initiative. The Maintainers will determine the process for selecting future Maintainers. A Maintainer may be removed by two-thirds approval of the other existing Maintainers, or a majority of the other existing Collaborators. 33 | 34 | - d. Participation in the Technical Initiative through becoming a Contributor, Collaborator, or Maintainer is open to anyone, whether an OpenSSF member or not, so long as they abide by the terms of this Technical Charter. 35 | 36 | - e. The Technical Initiative collaboratively manages all aspects of oversight relating to the Technical Initiative, which may include: 37 | 38 | - i. coordinating the direction of the Technical Initiative; 39 | 40 | - ii. approving, organizing or removing activities and projects; 41 | 42 | - iii. establish community norms, workflows, processes, release requirements, and templates for the operation of the Technical Initiative; 43 | 44 | - iv. establish a fundraising model, and approve or modify a Technical Initiative budget, subject to OpenSSF Governing Board approval; 45 | 46 | - v. appointing representatives to work with other open source or open standards communities; 47 | 48 | - f. The Technical Initiative lead is responsible for 49 | 50 | - i. facilitating discussions, seeking consensus, and where necessary, voting on technical matters relating to the Technical Initiative; and 51 | 52 | - ii. coordinating any communications regarding the Technical Initiative. 53 | 54 | - iii. approving and implementing policies and processes for contributing (to be published in the Technical Initiative repository) and coordinating with the Linux Foundation to resolve matters or concerns that may arise as set forth in Section 6 of this Technical Charter; 55 | 56 | - g. The Technical Initiative co-lead is supporting the lead in their duties. In the absence of the lead, they can 57 | 58 | - i. facilitate discussions, seek consensus, and where necessary, vote on technical matters relating to the Technical Initiative; and 59 | 60 | - ii. coordinate any communications regarding the Technical Initiative. 61 | 62 | ## 3. Voting 63 | 64 | - a. While the Technical Initiative aims to operate as a consensus-based community, if any decision requires a vote to move the Technical Initiative forward, the Technical Initiative will vote on a one vote per member basis. 65 | 66 | - b. Quorum for Technical Initiative meetings requires at least fifty percent of Collaborators to be present. 67 | 68 | 69 | - c. Decisions by vote at a meeting require a majority vote of those in attendance, provided quorum is met. Decisions made by electronic vote without a meeting require a majority vote of Collaborators. 70 | 71 | - d. In the event a vote cannot be resolved by the Technical Initiative, the group lead may refer the matter to the TAC for assistance in reaching a resolution. 72 | 73 | ## 4. Compliance with Policies 74 | 75 | - a. This Technical Charter is subject to the OpenSSF Charter and any rules or policies established for all Technical Initiatives. 76 | 77 | - b. The Technical Initiative participants must conduct their business in a professional manner, subject to the Contributor Covenant Code of Conduct 2.0, available at [https://www.contributor-covenant.org/version/2/0/code_of_conduct](https://www.contributor-covenant.org/version/2/0/code_of_conduct/). The TSC may adopt a different code of conduct ("CoC") for the Technical Initiative, subject to approval by the TAC. 78 | 79 | - c. All Collaborators must allow open participation from any individual or organization meeting the requirements for contributing under this Technical Charter and any policies adopted for all Collaborators by the TSC, regardless of competitive interests. Put another way, the Technical Initiative community must not seek to exclude any participant based on any criteria, requirement, or reason other than those that are reasonable and applied on a non-discriminatory basis to all Collaborators in the Technical Initiative community. All activities conducted in the Technical Initiative are subject to the Linux Foundation’s Antitrust Policy, available at [https://www.linuxfoundation.org/antitrust-policy](https://www.linuxfoundation.org/antitrust-policy/). 80 | 81 | - d. The Technical Initiative will operate in a transparent, open, collaborative, and ethical manner at all times. The output of all Technical Initiative discussions, proposals, timelines, decisions, and status should be made open and easily visible to all. Any potential violations of this requirement should be reported immediately to the TAC. 82 | 83 | ## 5. Community Assets 84 | 85 | - a. The Linux Foundation will hold title to all trade or service marks used by the Technical Initiative ("Technical Initiative Trademarks"), whether based on common law or registered rights. Technical Initiative Trademarks may be transferred and assigned to LF Technical Initiatives to hold on behalf of the Technical Initiative. Any use of any Technical Initiative Trademarks by Collaborators in the Technical Initiative will be in accordance with the trademark usage policy of the Linux Foundation, available at [https://www.linuxfoundation.org/trademark-usage](https://www.linuxfoundation.org/trademark-usage/), and inure to the benefit of the Linux Foundation. 86 | 87 | - b. The Linux Foundation or Technical Initiative must own or control the repositories, social media accounts, and domain name registrations created for use by the Technical Initiative community. 88 | 89 | - c. Under no circumstances will the Linux Foundation be expected or required to undertake any action on behalf of the Technical Initiative that is inconsistent with the policies or tax-exempt status or purpose, as applicable, of the Linux Foundation. 90 | 91 | ## 6. Intellectual Property Policy 92 | 93 | - a. Collaborators acknowledge that the copyright in all new contributions will be retained by the copyright holder as independent works of authorship and that no contributor or copyright holder will be required to assign copyrights to the Technical Initiative. 94 | 95 | - b. Except as described in Section 6.c., all contributions to the Technical Initiative are subject to the following: 96 | 97 | - i. All new inbound code contributions to the Technical Initiative must be made using the Apache License, Version 2.0, available at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0) (the "Technical Initiative License"). 98 | 99 | - ii. All new inbound code contributions must also be accompanied by a Developer Certificate of Origin ([http://developercertificate.org](http://developercertificate.org)) sign-off in the source code system that is submitted through a TSC-approved contribution process which will bind the authorized contributor and, if not self-employed, their employer to the applicable license; 100 | 101 | - iii. All outbound code will be made available under the Technical Initiative License. 102 | 103 | - iv. Documentation will be received and made available by the Technical Initiative under the Creative Commons Attribution 4.0 International License, available at [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/). 104 | 105 | - v. To the extent a contribution includes or consists of data, any rights in such data shall be made available under the CDLA-Permissive 1.0 License. 106 | 107 | - vi. The Technical Initiative may seek to integrate and contribute back to other open source projects ("Upstream Projects"). In such cases, the Technical Initiative will conform to all license requirements of the Upstream Projects, including dependencies, leveraged by the Technical Initiative. Upstream Project code contributions not stored within the Technical Initiative’s main code repository will comply with the contribution process and license terms for the applicable Upstream Project. 108 | 109 | - c. The Technical Initiative may approve the use of an alternative license or licenses for inbound or outbound contributions on an exception basis. To request an exception, please describe the contribution, the alternative open source license(s), and the justification for using an alternative open source license for the Technical Initiative. License exceptions must be approved by a two-thirds vote of the entire Governing Board. 110 | 111 | - d. Contributed files should contain license information, such as SPDX short form identifiers, indicating the open source license or licenses pertaining to the file. 112 | 113 | ## 7. Amendments 114 | 115 | - a. This charter may be amended by a two-thirds vote of Collaborators and is subject to approval by the TAC. 116 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # AI/ML Security WG 2 | 3 | This is the GitHub repository of the [OpenSSF](https://openssf.org) Artificial Intelligence / Machine Learning (AI/ML) Security Working Group (WG). The OpenSSF Technical Advisory Council (TAC) approved its creation on 2023-09-05. 4 | 5 | The AI/ML Security Working group is officially a [sandbox level](https://github.com/ossf/tac/blob/main/process/working-group-lifecycle.md) working group within the OpenSSF. 6 | 7 | ## Objective 8 | 9 | This WG explores the security risks associated with Large Language Models (LLMs), Generative AI (GenAI), and other forms of artificial intelligence (AI) and machine learning (ML), and their impact on open source projects, maintainers, their security, communities, and adopters. 10 | 11 | This group in collaborative research and peer organization engagement to explore topics related to AI and security. This includes security for AI development (e.g., supply chain security) but also using AI for security. We are covering risks posed to individuals and organizations by improperly trained models, data poisoning, privacy and secret leakage, prompt injection, licensing, adversarial attacks, and any other similar risks. 12 | 13 | This group leverages prior art in the AI/ML space,draws upon both security and AI/ML experts, and pursues collaboration with other communities (such as the CNCF's AI WG, LFAI & Data, AI Alliance, MLCommons, and many others) who are also seeking to research the risks presented by AL/ML to OSS in order to provide guidance, tooling, techniques, and capabilities to support open source projects and their adopters in securely integrating, using, detecting and defending against LLMs. 14 | 15 | ## Vision 16 | 17 | We envision a world where AI developers and practitioners can easily identify and use good practices to develop products using AI in a secure way. In this world, AI can produce code that is secure and AI usage in an application would not result in downgrading security guarantees. 18 | 19 | These guarantees extend over the entire lifecycle of the model, from data collection to using the model in production applications. 20 | 21 | The AI/ML security working group wants to serve as a central place to collate any recommendation for using AI securely ("security for AI") and using AI to improve security of other products ("AI for security"). 22 | 23 | ## Scope 24 | 25 | Some areas of consideration this group explores: 26 | * **Adversarial attacks**: These attacks involve introducing small, imperceptible changes to the data input data to an AI/ML model which may cause it to misclassify or provide inaccurate outputs. Adversarial attacks can target both supervised and unsupervised learning algorithms. Models themselves may also be used to deliver or perform attacks. 27 | * **Model inversion attacks**: These attacks involve using the output of an AI/ML model to infer information about the training data used to create the model. This can be used to steal sensitive information or create a copy of the original data set. 28 | * **Poisoning attacks**: In these attacks, the attacker introduces malicious data into the training set used to train an AI/ML model. This can cause the model to make intentionally incorrect predictions or be biased towards desired outcomes. 29 | * **Evasion attacks**: These attacks involve modifying the input data to an AI/ML model to evade detection or classification. Evasion attacks can target models used for image recognition, natural language processing, and other applications. 30 | * **Data extraction attacks**: In these attacks, the attacker attempts to steal data or information from an AI/ML model by exploiting vulnerabilities in the model or its underlying infrastructure. This is sometimes termed as ‘jailbreaking’. 31 | * **Point in time data sets**: Large Language Models often lack recent context, where models have a knowledge cutoff date. A good example can be seen [here](https://twitter.com/decodebytes/status/1644063555283570701), where ChatGPT repeatedly recommends use of a deprecated library. 32 | * **Social Engineering**: AI Agents are capable of accessing the internet and communicating with humans. A recent example of this occurred where GPT-4 was able to hire humans to solve CAPTCHA. When challenged if GPT was a robot, it replied with “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” With projects such as AutoGPT, it is also possible to grant Agents access to a command line interface alongside internet access, so it's not too far a stretch to see Agents performing social engineering tasks (phishing etc) combined with orchestrated attacks launched from the CLI or via scripts coded on the fly to gain system access via known exploits. Agents such as this could be used to automate package hijacking , domain takeover attacks etc. 33 | * **Threat democratization**: AI agents will allow actors to emulate the scale of attacks previously seen with nation states. Going forward, the proverbial corner shop may need the same defenses as the pentagon. Target value needs to be reassessed. 34 | * **Accidental threats**: In the course of integrating AI for accelerating and improving software development and operations, AI models may leak secrets, open all ports on a firewall, or behave in an insecure manner as a result of improper training, tuning, or final configuration. 35 | * **Prompt injection attacks**: These attacks involve directly or indirectly injecting additional text into a prompt to influence the model’s output. As a result, it could lead to prompt leaking disclosing sensitive or confidential information. 36 | * **Membership inference attack**: Process determining if specific data was part of the model’s training dataset. It is most relevant in the context of deep learning models and used to extract sensitive or private information included in the training dataset. 37 | * **Model vulnerability management**: Identifying techniques, mechanisms, and practices to apply modern vulnerability managment identification, remediation, and management practices into the model use and model development ecosystem. 38 | * **Model integrity**: Developing mechanisms and tooling to provide secure software supply chain practices, assurances, provenance, and attestable metadata for models. 39 | 40 | Anyone is welcome to join our open discussions. 41 | 42 | ## WG Leadership 43 | 44 | ### Co-Chairs: 45 | 46 | - Jay White - GitHub [@camaleon2016](https://github.com/camaleon2016) 47 | - Mihai Maruseac - GitHub [@mihaimaruseac](https://github.com/mihaimaruseac) 48 | 49 | ## How to Participate 50 | 51 | - We have bi-weekly meetings via Zoom. To join, please see the [OpenSSF Public Calendar](https://calendar.google.com/calendar/u/0/r?cid=czYzdm9lZmhwNWk5cGZsdGI1cTY3bmdwZXNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ) 52 | - [2025 Meeting Notes for the AIML WG](https://docs.google.com/document/d/1X7lCvAHY0x7HMaCQx-7KKPjSBPQ6v02TynQpOPXnXFI/edit) 53 | - [2024 Meeting Notes for the AIML WG](https://docs.google.com/document/d/1YNP-XJ9jpTjM6ekKOBgHH8-avAws2DVKeCpn858siiQ/edit) 54 | - Informal chat is welcome on the [OpenSSF Slack channel #wg-ai-ml-security](https://openssf.slack.com/archives/C0587E513KR) (these disappear over time) 55 | - Mailing list [openssf-wg-ai-ml-security](https://lists.openssf.org/g/openssf-wg-ai-ml-security) 56 | - Drive: https://drive.google.com/drive/folders/1zCkQ_d98AMCTkCq00wuN0dFJ6SrRZzNh 57 | 58 | ## Current Work 59 | 60 | We welcome contributions, suggestions and updates to our projects. To contribute to work on GitHub, please fill in an issue or create a pull request. 61 | 62 | ### Projects: 63 | 64 | The AI/ML WG has voted to approve the following projects: 65 | 66 | | Name | Purpose | Creation issue | 67 | | ------------- | -------------------------------- | ------------------------------------------------------- | 68 | | Model signing | Cryptographic signing for models | [#10](https://github.com/ossf/ai-ml-security/issues/10) | 69 | | Security-Focused Guide for AI Code Assistant Instructions | Security of AI code assistants code generation | [#936](https://github.com/ossf/wg-best-practices-os-developers/pull/936) | 70 | 71 | More details about the projects: 72 | 73 | * Project: **Model Signing Project** 74 | * Detailed purpose: Focused on establishing signing patterns and practices through Sigstore to provide verifiable claims about the integrity and provenance of models through machine learning pipelines. It is focused on establishing a cryptographic signing specification for artificial intelligence and machine learning models, addressing challenges such as very large models that can be used separately, and the signing of multiple disparate file formats. 75 | * Mailing list: https://lists.openssf.org/g/openssf-sig-model-signing 76 | * Slack: [#sig-model-signing](https://openssf.slack.com/archives/C074GBM5VL0) 77 | * Meeting information 78 | * [Meeting Link](https://zoom-lfx.platform.linuxfoundation.org/meeting/99042564666?password=4f479771-1ddf-4345-b005-f11484c40c0d) (you must have a login to [LFX platform](https://lfx.linuxfoundation.org/) to use 79 | * Every other Wednesday 16:00 UTC Refer to the [OpenSSF calendar](https://openssf.org/getinvolved/) 80 | * [Meeting Notes](https://docs.google.com/document/d/18oAsfhfKJurH-YTUFe520CAZS3lkORX1WnZmBv4Llkc/edit) 81 | * Project: **Security-Focused Guide for AI Code Assistant Instructions** 82 | * Detailed purpose: A collaboration between the AI/ML Security and the Best Practices Working Groups to improve the security of code generated by AI code assistants by creating custom prompts or custom instructions. 83 | * Published Document: [Security-Focused Guide for AI Code Assistant Instructions](https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions) 84 | * Working Document: [Security-Focused Guide for AI Code Assistant Instructions - Working Document](https://github.com/ossf/wg-best-practices-os-developers/blob/main/docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md) 85 | 86 | ### Upcoming work 87 | 88 | This WG is currently exploring establishment of an AI Vulnerability Disclosure SIG. Please refer to the group's meeting notes for more information. 89 | 90 | See also [the MVSR document](https://github.com/ossf/ai-ml-security/blob/main/mvsr.md), which also contains other AI/ML working groups we are interlocking with. 91 | 92 | ## Licenses 93 | 94 | Unless otherwise specifically noted, software released by this working group is released under the [Apache 2.0 license](LICENSES/Apache-2.0.txt), and documentation is released under the [CC-BY-4.0 license](LICENSES/CC-BY-4.0.txt). 95 | Formal specifications would be licensed under the [Community Specification License](https://github.com/CommunitySpecification/1.0). 96 | 97 | ## Charter 98 | 99 | Like all OpenSSF Working Groups, this group reports to the [OpenSSF Technical Advisory Council (TAC)](https://github.com/ossf/tac). For more information see this Working Group [Charter](https://github.com/ossf/ai-ml-security/blob/main/doc/CHARTER.md). 100 | 101 | ## Antitrust Policy Notice 102 | 103 | Linux Foundation meetings involve participation by industry competitors, and it is the intention of the Linux Foundation to conduct all of its activities in accordance with applicable antitrust and competition laws. It is therefore extremely important that attendees adhere to meeting agendas, and be aware of, and not participate in, any activities that are prohibited under applicable US state, federal or foreign antitrust and competition laws. 104 | 105 | Examples of types of actions that are prohibited at Linux Foundation meetings and in connection with Linux Foundation activities are described in the Linux Foundation Antitrust Policy available at . If you have questions about these matters, please contact your company counsel, or if you are a member of the Linux Foundation, feel free to contact Andrew Updegrove of the firm of Gesmer Updegrove LLP, which provides legal counsel to the Linux Foundation. 106 | -------------------------------------------------------------------------------- /SECURITY.md: -------------------------------------------------------------------------------- 1 | # Security 2 | 3 | Per the 4 | [Linux Foundation Vulnerability Disclosure Policy](https://www.linuxfoundation.org/security), 5 | if you find a vulnerability in a project maintained by the OpenSSF, 6 | please report that directly to the project maintaining that code. 7 | 8 | If you've been unable to find a way to report it, 9 | or have received no response after repeated attempts, please contact the 10 | OpenSSF security contact email, security @ openssf . org. 11 | 12 | Thank you. 13 | -------------------------------------------------------------------------------- /code-of-conduct.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our 6 | community a harassment-free experience for everyone, regardless of age, body 7 | size, visible or invisible disability, ethnicity, sex characteristics, gender 8 | identity and expression, level of experience, education, socio-economic status, 9 | nationality, personal appearance, race, religion, or sexual identity 10 | and orientation. 11 | 12 | We pledge to act and interact in ways that contribute to an open, welcoming, 13 | diverse, inclusive, and healthy community. 14 | 15 | ## Our Standards 16 | 17 | Examples of behavior that contributes to a positive environment for our 18 | community include: 19 | 20 | - Demonstrating empathy and kindness toward other people 21 | - Being respectful of differing opinions, viewpoints, and experiences 22 | - Giving and gracefully accepting constructive feedback 23 | - Accepting responsibility and apologizing to those affected by our mistakes, 24 | and learning from the experience 25 | - Focusing on what is best not just for us as individuals, but for the 26 | overall community 27 | 28 | Examples of unacceptable behavior include: 29 | 30 | - The use of sexualized language or imagery, and sexual attention or 31 | advances of any kind 32 | - Trolling, insulting or derogatory comments, and personal or political attacks 33 | - Public or private harassment 34 | - Publishing others' private information, such as a physical or email 35 | address, without their explicit permission 36 | - Other conduct which could reasonably be considered inappropriate in a 37 | professional setting 38 | 39 | ## Enforcement Responsibilities 40 | 41 | Community leaders are responsible for clarifying and enforcing our standards of 42 | acceptable behavior and will take appropriate and fair corrective action in 43 | response to any behavior that they deem inappropriate, threatening, offensive, 44 | or harmful. 45 | 46 | Community leaders have the right and responsibility to remove, edit, or reject 47 | comments, commits, code, wiki edits, issues, and other contributions that are 48 | not aligned to this Code of Conduct, and will communicate reasons for moderation 49 | decisions when appropriate. 50 | 51 | ## Scope 52 | 53 | This Code of Conduct applies within all community spaces, and also applies when 54 | an individual is officially representing the community in public spaces. 55 | Examples of representing our community include using an official e-mail address, 56 | posting via an official social media account, or acting as an appointed 57 | representative at an online or offline event. 58 | 59 | ## Enforcement 60 | 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 62 | reported to the community leaders responsible for enforcement at 63 | OpenSFF. 64 | 65 | All complaints will be reviewed and investigated promptly and fairly. 66 | 67 | All community leaders are obligated to respect the privacy and security of the 68 | reporter of any incident. 69 | 70 | ## Enforcement Guidelines 71 | 72 | Community leaders will follow these Community Impact Guidelines in determining 73 | the consequences for any action they deem in violation of this Code of Conduct: 74 | 75 | ### 1. Correction 76 | 77 | **Community Impact**: Use of inappropriate language or other behavior deemed 78 | unprofessional or unwelcome in the community. 79 | 80 | **Consequence**: A private, written warning from community leaders, providing 81 | clarity around the nature of the violation and an explanation of why the 82 | behavior was inappropriate. A public apology may be requested. 83 | 84 | ### 2. Warning 85 | 86 | **Community Impact**: A violation through a single incident or series 87 | of actions. 88 | 89 | **Consequence**: A warning with consequences for continued behavior. No 90 | interaction with the people involved, including unsolicited interaction with 91 | those enforcing the Code of Conduct, for a specified period of time. This 92 | includes avoiding interactions in community spaces as well as external channels 93 | like social media. Violating these terms may lead to a temporary or 94 | permanent ban. 95 | 96 | ### 3. Temporary Ban 97 | 98 | **Community Impact**: A serious violation of community standards, including 99 | sustained inappropriate behavior. 100 | 101 | **Consequence**: A temporary ban from any sort of interaction or public 102 | communication with the community for a specified period of time. No public or 103 | private interaction with the people involved, including unsolicited interaction 104 | with those enforcing the Code of Conduct, is allowed during this period. 105 | Violating these terms may lead to a permanent ban. 106 | 107 | ### 4. Permanent Ban 108 | 109 | **Community Impact**: Demonstrating a pattern of violation of community 110 | standards, including sustained inappropriate behavior, harassment of an 111 | individual, or aggression toward or disparagement of classes of individuals. 112 | 113 | **Consequence**: A permanent ban from any sort of public interaction within 114 | the community. 115 | 116 | ## Attribution 117 | 118 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], 119 | version 2.0, available at 120 | . 121 | 122 | Community Impact Guidelines were inspired by [Mozilla's code of conduct 123 | enforcement ladder](https://github.com/mozilla/diversity). 124 | 125 | [homepage]: https://www.contributor-covenant.org 126 | 127 | For answers to common questions about this code of conduct, see the FAQ at 128 | . Translations are available at 129 | . 130 | -------------------------------------------------------------------------------- /landscape/AI-related_threats.md: -------------------------------------------------------------------------------- 1 | # AI-related_threats 2 | -------------------------------------------------------------------------------- /landscape/Contributors.md: -------------------------------------------------------------------------------- 1 | # Contributors 2 | -------------------------------------------------------------------------------- /landscape/Executive_Summary.md: -------------------------------------------------------------------------------- 1 | # Executive_Summary 2 | -------------------------------------------------------------------------------- /landscape/External_Resources.md: -------------------------------------------------------------------------------- 1 | # External_Resources 2 | -------------------------------------------------------------------------------- /landscape/Introduction.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | -------------------------------------------------------------------------------- /landscape/Landscape.md: -------------------------------------------------------------------------------- 1 | # Landscape -------------------------------------------------------------------------------- /landscape/OpenSSF_Projects.md: -------------------------------------------------------------------------------- 1 | # OpenSSF_Projects 2 | -------------------------------------------------------------------------------- /landscape/Personas.md: -------------------------------------------------------------------------------- 1 | # Personas 2 | -------------------------------------------------------------------------------- /landscape/Provenance_and_Legal_Issues.md: -------------------------------------------------------------------------------- 1 | # Provenance_&_Legal_Issues 2 | -------------------------------------------------------------------------------- /landscape/ai_ml_landscape.md: -------------------------------------------------------------------------------- 1 | # The AI/ML OSS Security Landscape 2 | 3 | This doc is currently in draft [here](https://docs.google.com/document/d/1AyivzKsERoIZcyr4XrH6CrNeUoYHhpiswThHS0XrbSU/edit#heading=h.j7gx4ey3nk3k). 4 | 5 | * [Executive_Summary](Executive_Summary.md) 6 | * [Introduction](Introduction.md) 7 | * [Landscape](Landscape.md) 8 | * [AI-related_threats](AI-related_threats.md) 9 | * [Provenance_&_Legal_Issues](Provenance_and_Legal_Issues.md) 10 | * [OpenSSF_Projects](OpenSSF_Projects.md) 11 | * [Personas](Personas.md) 12 | * [External_Resources](External_Resources.md) 13 | * [Contributors](Contributors.md) 14 | 15 | -------------------------------------------------------------------------------- /meeting_notes/2023 OpenSSF WG for AI ML Agenda & Meeting Notes.md: -------------------------------------------------------------------------------- 1 |

2023 OpenSSF WG for AI/ML Agenda and Meeting Notes

2 | 3 | 4 | **Meeting Link:** [https://zoom.us/j/94945593977](https://zoom.us/j/94945593977#success) (as per the meeting invitation) 5 | 6 | Note: You will need a login to [https://lfx.linuxfoundation.org/](https://lfx.linuxfoundation.org/) for this link to work. 7 | 8 | (In the event of IT difficulties, links dropping out etc., we still have the **old** link [here](https://zoom.us/j/94945593977)) 9 | 10 | **Time: **Every other Monday, 18:00 UTC 11 | 12 | **WG Proposal Link:** 13 | 14 | **Slack channel:** [#wg_ai_ml_security](https://openssf.slack.com/archives/C0587E513KR) 15 | 16 | **Mailing list:** ​​[https://lists.openssf.org/g/openssf-wg-ai-ml-security](https://lists.openssf.org/g/openssf-wg-ai-ml-security) 17 | 18 | **Repo:** [https://github.com/ossf/ai-ml-security](https://github.com/ossf/ai-ml-security) 19 | 20 | **Shared Drive**: [https://drive.google.com/corp/drive/folders/1zCkQ_d98AMCTkCq00wuN0dFJ6SrRZzNh](https://drive.google.com/corp/drive/folders/1zCkQ_d98AMCTkCq00wuN0dFJ6SrRZzNh) 21 | 22 | **Facilitators**: Mihai Maruseac, Jay White 23 | 24 | Please use the[ 2024 Meeting Notes](https://docs.google.com/document/d/1YNP-XJ9jpTjM6ekKOBgHH8-avAws2DVKeCpn858siiQ/edit?usp=sharing) 25 | 26 |

2023-12-11

27 | 28 | 29 |

Attendees:

30 | 31 | 32 | Put an “X” in the “Present” column if present 33 | 34 | 35 | 36 | 37 | 39 | 41 | 43 | 44 | 45 | 47 | 49 | 51 | 52 | 53 | 55 | 57 | 59 | 60 | 61 | 63 | 65 | 67 | 68 | 69 | 71 | 73 | 75 | 76 | 77 | 79 | 81 | 83 | 84 | 85 | 87 | 89 | 91 | 92 | 93 | 95 | 97 | 99 | 100 |
Present 38 | Name 40 | Organization 42 |
X 46 | Mihai Maruseac 48 | Chair, Google GOSST 50 |
X 54 | Jay White 56 | Chair, Microsoft 58 |
X 62 | Dana Wang 64 | OpenSSF 66 |
X 70 | Victor Lu 72 | Independent 74 |
X 78 | David A. Wheeler 80 | OpenSSF 82 |
X 86 | Jorge VARGAS 88 | Independent 90 |
X 94 | Yotam Perkal 96 | Rezilion 98 |
101 | 102 | 103 |

Introduction/Welcome New Friends!

104 | 105 | 106 | 107 | 108 | * 109 | 110 |

Agenda/Meeting Notes

111 | 112 | 113 | 114 | 115 | * Note: OpenSSF Day Japan & AIxCC Kickoff was last week - some may be recovering 116 | * AixCC - [https://aicyberchallenge.com/](https://aicyberchallenge.com/) - kickoff was last week 117 | * DARPA’s Artificial Intelligence Cyber Challenge (AIxCC) will bring together the best and brightest in AI and cybersecurity to defend the software on which all Americans rely. 118 | * AIxCC is excited to have Anthropic, Google, Microsoft, OpenAI, the Linux Foundation, the Open Source Security Foundation, Black Hat USA, and DEF CON as collaborators in this effort. 119 | * $ millions in prize money 120 | * More details will be coming soon 121 | * David A. Wheeler is involved, but cannot share some details at this time due to NDA. However, we want many people to compete in this competition! 122 | * Informational - OpenSSF TAC election timeline: 123 | 1. Nominations Open: NOW 124 | 2. Nominations CLOSE: Dec 15 125 | 3. Voting Starts: December 16 126 | 4. Voting Stops: December 30 127 | 5. New members seated: January 1 \ 128 | To request a ballot that we sent through [OpaVote please fill out this google form](https://forms.gle/7suYexAnPxndvX856). \ 129 | To run for the TAC: 130 | 1. [SCIR Member GB Nomination Form](https://forms.gle/ZZkC6zK3T7Ww43uC9) 131 | 2. [TAC Community Seat Self-Nomination Form](https://docs.google.com/forms/d/e/1FAIpQLSdMkN_H3zVFW7NfZzsanF5isga3PNVUQj7-8VPlVPhb2F2iYQ/viewform) 132 | * Had meeting with LF AI Data 133 | * Plan to have joint meetings, agreed on this last Thursday 134 | * Anyone have any other business to add? 135 | * : threat modeling for data, models 136 | * Start from OWASP, work with Dana 137 | * Plans for future conferences in 2024 138 | * Plans for OSS NA? 139 | * Model transparency 140 | * Panel discussion 141 | * Supply Chain track 142 | * AIs: 143 | * Mihai working on making the repo more readable 144 | * Expect PR soon 145 | * Making the Repo Human Readable: - Sal 146 | * What elements do we want to include on the repo? I’ll be borrowing some style from [this repo](https://github.com/finos/zenith) to focus it on projects/papers 147 | * OS specific recommendations doc - Sal 148 | * Identify targets for security slam with AI - Sal 149 | * Create MSVR doc - Jay 150 | * Update Charter doc from original mission statement - Dan 151 | * Try to find owners for docs - Nigel 152 | * 153 | * 154 | * Reach out to other working groups 155 | * Cloud Security Alliance AI WG 156 | * Montreal institute for Ethics in AI 157 | * We will NOT meet 2023-12-25. 158 | * What is the next step? Code / data / model 159 | * OWASP list: [https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v05.pdf](https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v05.pdf) 160 | * Henry is systematically threat modeling for data 161 | * Audrey (OWASP) looking at model 162 | * What about minimum viable security (MVP)? 163 | * We will need to define this in this group 164 | * Need to identify the problems 165 | * OWASP only covers LLM 166 | * Maybe we should get a meeting to discuss to cover the gaps? 167 | * A lot of these are research tasks, we need solutions, and in most cases we have no solutions. 168 | * David: You might look here as a starting point/ input my presentation [https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt](https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt) 169 | * Includes a simple taxonomy of threats & some discussion about what’s known 170 | * Do [Infer] the wrong thing (“Evasion”) 171 | * Technical term: Adversarial inputs / adversarial examples 172 | * Learn the wrong thing (“Poisoning”) 173 | * Technical term: data poisoning/Trojan attack 174 | * Reveal the wrong thing (“Lose Confidentiality”), Further broken down into 3 sub-categories 175 | * Extraction: the attacker extracts the parameters or structure of the model from observations of the model’s predictions [Tabassi 2019] 176 | * (Membership) inference: the attacker uses target model query results to determine if specific data points belong to the same distribution as the training dataset [Tabassi 2019] 177 | * (Model) inversion: the attacker is able to reconstruct (some) data used to train the model, including private and/or secret data [Tabassi 2019] 178 | * Mihai: There are probably teams working on it, but I’ll need to look 179 | * Jay: It’s been put on the backburner in some sense, trying to limit what the systems can do 180 | * David: That’s great when you can, e.g., “don’t give it the launch codes” - but limiting what the systems can do really limits what the systems can do 181 | * Accidents can happen, leading to big problems. Example: a folder included a lot of private information, so you could see private/secret information. Prompts can often lead to this kind of revelation. 182 | * There are AI/ML specific risks, and then there are general risks. 183 | * David: Agreed, but people want to give ML systems tools that blur the line, e.g., give LLM access to a Python interpreter. 184 | * It’s reasonable to divide model, data for training the model, and the code for running the model (system that uses the inference system). However, it gets rather mixed/muddled in implementation. 185 | * David: I prefer the taxonomy listed above - users don’t care _where_ the problem happened 186 | * Next meeting: 2024-01-08. David will present his presentation on AI/ML and security for group feedback 187 | 188 |

189 | 190 | 191 |

Attendees:

192 | 193 | 194 | Put an “X” in the “Present” column if present 195 | 196 | 197 | 198 | 199 | 201 | 203 | 205 | 206 | 207 | 209 | 211 | 213 | 214 | 215 | 217 | 219 | 221 | 222 | 223 | 225 | 227 | 229 | 230 | 231 | 233 | 235 | 237 | 238 | 239 | 241 | 243 | 245 | 246 | 247 | 249 | 251 | 253 | 254 | 255 | 257 | 259 | 261 | 262 | 263 | 265 | 267 | 269 | 270 | 271 | 273 | 275 | 277 | 278 |
Present 200 | Name 202 | Organization 204 |
x 208 | Mihai Maruseac 210 | Chair, Google GOSST 212 |
x 216 | Jay White 218 | Chair, Microsoft 220 |
x 224 | Mads R. Havmand 226 | Chainalysis 228 |
x 232 | Laurent Simon 234 | Google 236 |
x 240 | Pedro Ferracini 242 | Mercado Libre OSPO 244 |
x 248 | Aubrey King 250 | 252 |
x 256 | Jeff Borek 258 | IBM 260 |
X 264 | Dana Wang 266 | OpenSSF 268 |
X 272 | Victor Lu 274 | Independent 276 |
279 | 280 | 281 |

Apologies:

282 | 283 | 284 | 285 | 286 | * 287 | 288 |

Introduction/Welcome New Friends!

289 | 290 | 291 | 292 | 293 | * Welcome Laurent Simon, Aubrey, Dana, Mads 294 | * Aubrey from OWASP Top 10 for LLMs 295 | 296 |

Old Actions

297 | 298 | 299 | 300 | 301 | * Mihai to reach out to Maximilan Huber for licensing 302 | * Please reach out under [maximilian.huber@tngtech.com](mailto:maximilian.huber@tngtech.com) 303 | * Include Jay in the reach out emails, see what legal we can include 304 | * Mihai to reach out to Sarah for the charter and related docs 305 | * Mihai working on making the repo more readable 306 | * Expect PR soon 307 | * Making the Repo Human Readable: - Sal 308 | * What elements do we want to include on the repo? I’ll be borrowing some style from [this repo](https://github.com/finos/zenith) to focus it on projects/papers 309 | * OS specific recommendations doc - Sal 310 | * Identify targets for security slam with AI - Sal 311 | * Create MSVR doc - Jay 312 | * Update Charter doc from original mission statement - Dan 313 | * Try to find owners for docs - Nigel 314 | * 315 | * 316 | * Need more 317 | * Governance: working on charter and related docs is a priority 318 | * After this, we can sync with Montreal Institute for Ethics in AI, etc. 319 | 320 |

Topics

321 | 322 | 323 | 324 | 325 | * New meeting time, hopefully stable 326 | * [From Sarah]: 327 | * AI/ML +OSS+ Security. Need Venn diagram of "our sweet spot" to be industry leaders in OpenSSF. Otherwise, we risk not being really impactful amongst AI/ML and Security. 328 | * [1:39 PM] Definition: "venn diagram" AI/ML + OSS + Security 329 | * Improve secure development using OpenSSF tools/guides 330 | * Improve secure consumption using OpenSSF tools/guides. 331 | * Sarah and Jay chatted about this before Thanskgiving break 332 | * Model transparency 333 | * Talk at ScaleByTheBay and [GitHub SLSA Bay Area Meetup](https://resources.github.com/github-slsa-meetup-nov16/) 334 | * Need to identify industry partners to work together on standardizing what needs to be included in the provenance 335 | * MUST vs SHOULD 336 | * Datasets, pretrained models, hashing scheme 337 | * Still working on deconfliction with LF Data & AI WG 338 | * Meeting this thursday (30th Nov) 339 | * Need to update charter, refine and fill in current gaps 340 | * This is at 6AM Pacific, ask Mihai to be added to invite 341 | * Minimum security baseline 342 | * Laurent: some existing security baseline [https://mvsp.dev](https://mvsp.dev) 343 | * 344 | * 30th Nov: Jay will be doing a panel discussion at OASIS AI Panel (organized by OASIS and Cisco) 345 | * AI vulnerability disclosures 346 | * 11 AM Pacific 347 | * Need to register: [https://aisecuritysummit.org/](https://aisecuritysummit.org/) 348 | * Holiday break incoming 349 | * 11th Dec is last member of the year 350 | * We should have a 360 outline for going into January 351 | * Plans for OSS NA? 352 | * Model transparency 353 | * Panel discussion 354 | * Build up on the momentum for the next conference season 355 | * Figure ways to fully integrate with the other teams 356 | * Maybe include this in the SupplyChain track? 357 | * Telemetry paper () 358 | * Need to reach out to Sal again 359 | * Or try dusting it off and bringing up to date 360 | * Start a new doc with what we want to do in this WG and what we complement with other groups 361 | * SLSA, GUAC, Supply chain, S2C2F 362 | * Best practices 363 | * Complement Top 10 OWASP 364 | * Create guide together 365 | * End User Groups has done great job for threat modelling 366 | * Will need to be extended to include data/model parts 367 | * Need to also define what we are not touching, what is outside of scope 368 | * Risk of scope creep 369 | * OWASP meetings 370 | * [Link to meeting wiki page](https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Meetings) 371 | * Meeting recordings also posted on YouTube : [https://www.youtube.com/@LLMTop10](https://www.youtube.com/@LLMTop10) 372 | 373 |

New actions

374 | 375 | 376 | 377 | 378 | * [Jay] Reach out to Dana for MVSP / baseline 379 | * [Mihai/Jay] Check/make sure we have a Google Drive for the WG 380 | * [Victor] Template/start for the docs about threat modeling, 381 | 382 |

2023-11-8

383 | 384 | 385 |

Attendees:

386 | 387 | 388 | Put an “X” in the “Present” column if present 389 | 390 | 391 | 392 | 393 | 395 | 397 | 399 | 400 | 401 | 403 | 405 | 407 | 408 | 409 | 411 | 413 | 415 | 416 | 417 | 419 | 421 | 423 | 424 | 425 | 427 | 429 | 431 | 432 | 433 | 435 | 437 | 439 | 440 | 441 | 443 | 445 | 447 | 448 |
Present 394 | Name 396 | Organization 398 |
x 402 | Mihai Maruseac 404 | Chair, Google GOSST 406 |
x 410 | Sarah Evans 412 | Dell Technologies 414 |
x 418 | Matt Rutkowski 420 | IBM (he/him) 422 |
x 426 | Cheuk Ting Ho 428 | OpenSSF 430 |
x 434 | Maximilian Huber 436 | TNG Technology Consulting 438 |
x 442 | Eoin Wickens 444 | HiddenLayer 446 |
449 | 450 | 451 |

Apologies:

452 | 453 | 454 | 455 | 456 | * Kathleen Goeschel, Red Hat 457 | 458 |

Introduction/Welcome New Friends!

459 | 460 | 461 | 462 | 463 | * Welcome Max, Eoin, and Cheuk 464 | 465 |

Old Actions

466 | 467 | 468 | 469 | 470 | * Making the Repo Human Readable: - Sal 471 | * What elements do we want to include on the repo? I’ll be borrowing some style from [this repo](https://github.com/finos/zenith) to focus it on projects/papers 472 | * Anyone willing to be an expert for the OSI AI licence definition? 473 | * This would be an informal, internal AmA style with their legal experts. 474 | * Maximilian Huber comes from the license compliance side! 475 | * Please reach out under maximilian.huber@tngtech.com 476 | * OS specific recommendations doc - Sal 477 | * Identify targets for security slam with AI - Sal 478 | * Create MSVR doc - Jay 479 | * Update Charter doc from original mission statement - Dan 480 | * Try to find owners for docs - Nigel 481 | * 482 | * 483 | * Need more 484 | 485 |

Topics

486 | 487 | 488 | 489 | 490 | * NIST U.S. ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE [https://openssf.slack.com/archives/C0587E513KR/p1699310525828439](https://openssf.slack.com/archives/C0587E513KR/p1699310525828439) 491 | * Link to news: [https://www.commerce.gov/news/press-releases/2023/11/direction-president-biden-department-commerce-establish-us-artificial](https://www.commerce.gov/news/press-releases/2023/11/direction-president-biden-department-commerce-establish-us-artificial) 492 | * Link to LOI Submission: [https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute](https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute) 493 | * We need volunteers to take on writing the letter 494 | * Model transparency 495 | * Repo: [https://github.com/google/model-transparency](https://github.com/google/model-transparency) 496 | * Blog post: [https://security.googleblog.com/2023/10/increasing-transparency-in-ai-security.html](https://security.googleblog.com/2023/10/increasing-transparency-in-ai-security.html) 497 | * Talk at PackaginCon: [https://docs.google.com/presentation/d/1YAQ0nUE7e-liWzeVeqRGiICpQLXFnQDKlkt2b-Tb0Ag/edit?pli=1#slide=id.g2949fd23bf8_0_3072](https://docs.google.com/presentation/d/1YAQ0nUE7e-liWzeVeqRGiICpQLXFnQDKlkt2b-Tb0Ag/edit?pli=1#slide=id.g2949fd23bf8_0_3072) 498 | * Eoin has been working on a group for this, chatting with Laurent 499 | * HiddenLayer, NVidia 500 | * Standard for signing all files in a model 501 | * Extend to multiple PKIs 502 | * Working on scaling hashing and ensuring integrity of all files that compose a model 503 | * How to handle a mix of proprietary and open models? 504 | * Still working on deconfliction with LF Data & AI WG 505 | * Need to update charter, refine and fill in current gaps 506 | * SLSA Bay Area GitHub Meetup: [https://resources.github.com/github-slsa-meetup-nov16/](https://resources.github.com/github-slsa-meetup-nov16/) 507 | * Two approaches we can take: 508 | * Helping industry 509 | * Helping developers / data scientists 510 | * There’s a lot of improvement we can do here: training, tooling, documentation 511 | * E.g. improvements to Jupyter Notebook 512 | * Scorecards for ML workflows? 513 | * We need to decide if we do this work in this WG or find another one 514 | * Sarah will work with Jay to put these in the Charter 515 | * Security telemetry 516 | * Governance committee 517 | * Work with TAC 518 | * Review membership for associate level applications 519 | * There is Montreal Institute for Ethics in AI 520 | * Should sync with us 521 | * **Makes working on charter and related docs a priority** 522 | 523 |

New actions

524 | 525 | 526 | 527 | 528 | * Mihai to reach out to Max for licensing 529 | * Mihai to reach out to Sarah for the charter and related docs 530 | 531 |

2023-10-25

532 | 533 | 534 |

Attendees:

535 | 536 | 537 | Put an “X” in the “Present” column if present 538 | 539 | 540 | 541 | 542 | 544 | 546 | 548 | 549 | 550 | 552 | 554 | 556 | 557 | 558 | 560 | 562 | 564 | 565 | 566 | 568 | 570 | 572 | 573 | 574 | 576 | 578 | 580 | 581 | 582 | 584 | 586 | 588 | 589 | 590 | 592 | 594 | 596 | 597 | 598 | 600 | 602 | 604 | 605 | 606 | 608 | 610 | 612 | 613 |
Present 543 | Name 545 | Organization 547 |
X 551 | Nigel Brown 553 | Stacklok 555 |
X 559 | Pedro Ferracini 561 | Mercado Libre OSPO 563 |
X 567 | Mihai Maruseac 569 | Chair, Google GOSST 571 |
X 575 | Victor Lu 577 | Independent 579 |
X 583 | David A. Wheeler 585 | Linux Foundation 587 |
X 591 | Kathleen Goeschel 593 | Red Hat 595 |
X 599 | Enoch Kaxada 601 | 603 |
X 607 | Matt Rutkowski 609 | IBM 611 |
614 | 615 | 616 |

Introduction/Welcome New Friends!

617 | 618 | 619 | 620 | 621 | * Kathleen Goeschel, Red Hat 622 | * Enoch Kaxada - learned about OpenSSF, looking into learning more about these meetings 623 | 624 |

Old Actions

625 | 626 | 627 | 628 | 629 | * Making the Repo Human Readable: - Sal 630 | * What elements do we want to include on the repo? I’ll be borrowing some style from [this repo](https://github.com/finos/zenith) to focus it on projects/papers 631 | * Anyone willing to be an expert for the OSI AI licence definition? 632 | * This would be an informal, internal AmA style with their legal experts. 633 | * OS specific recommendations doc - Sal 634 | * Identify targets for security slam with AI - Sal 635 | * Create MSVR doc - Jay 636 | * Update Charter doc from original mission statement - Dan 637 | * Try to find owners for docs - Nigel 638 | * Need more 639 | 640 |

Topics

641 | 642 | 643 | 644 | 645 | * FYI: LF Member Summit is today, we may not have some people today 646 | * (Technical issues - audio issues on Zoom) 647 | * Still need to walk through Old actions 648 | * Need to get experts on legal for OSI AI 649 | * Need to identify owners of documentation 650 | * Any other business (AOB) 651 | 652 |

New actions

653 | 654 | 655 | 656 | 657 | * Mihai: Google has been working on doing SLSA provenance & signing with Sigstore for ML, using GitHub 658 | * [https://github.com/google/model-transparency/](https://github.com/google/model-transparency/) 659 | * Expect to announce tomorrow, 2023-10-25 660 | * David: Presentation about security and ML 661 | * [https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt](https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt) 662 | * Will be updated, workshop 663 | * Need to: 664 | * Update WG repo so that it looks properly when linking to it from the workshop 665 | * More information, documents, MSVR doc 666 | * What other topics to include in the presentation? 667 | * email to dwheeler at linuxfoundation . org 668 | * Meeting time - do we want to move it? 669 | * Let’s wait briefly to see who else joins, then consider rescheduling 670 | * Meet with LF AI 671 | * Talk to LF AI TAC on Thu Nov 30, 2023 2pm - 3pm (GMT) 15 min 672 | * OpenSSF brainstorming collaboration with LF AI & Data 673 | * Couple of slides (1 week before) 674 | * Who are we? 675 | * How can we contribute to your projects? 676 | * We need to identify this WG’s specific outputs 677 | * What’s important, what do we think we can actually accomplish? 678 | * It’s okay/necessarily to work incrementally 679 | * One option: Guidance for software developers who are using ML to generate code - what can they do to increase the likelihood of generating/deploying *secure* code? (Better prompts, how to examine for common ML mistakes, etc.) 680 | * Mihai: I have AI experience, Supply chain security. I’m interested in strengthening the AI supply chain - not just signing pipeline, but that the resulting ML model is safe 681 | * Victor Lu: Other groups seem to be focused on a consulting focus. OpenSSF is focused more on a fundamental solution perspective, that’s unique. 682 | * Kathleen: group focused on integrity of data used to train the ML; working in supply chain security at RedHat; focus on foundational models 683 | * Mihai: Google has doc on security for models - Secure AI Framework - [https://blog.google/technology/safety-security/introducing-googles-secure-ai-framework/](https://blog.google/technology/safety-security/introducing-googles-secure-ai-framework/) 684 | 685 |

2023-10-11

686 | 687 | 688 |

Attendees:

689 | 690 | 691 | Put an “X” in the “Present” column if present 692 | 693 | 694 | 695 | 696 | 698 | 700 | 702 | 703 | 704 | 706 | 708 | 710 | 711 | 712 | 714 | 716 | 718 | 719 | 720 | 722 | 724 | 726 | 727 | 728 | 730 | 732 | 734 | 735 | 736 | 738 | 740 | 742 | 743 | 744 | 746 | 748 | 750 | 751 | 752 | 754 | 756 | 758 | 759 | 760 | 762 | 764 | 766 | 767 |
Present 697 | Name 699 | Organization 701 |
X 705 | Nigel Brown 707 | Chair, stacklok 709 |
X 713 | Mihai Maruseac 715 | Google GOSST 717 |
x 721 | Christine Abernathy 723 | F5 725 |
x 729 | Sarah Evans 731 | Dell Technologies 733 |
X 737 | Dan Appelquist 739 | Snyk 741 |
x 745 | Mike Lieberman 747 | Kusari 749 |
x 753 | Dana Wang 755 | OpenSSF 757 |
x 761 | Amanda Martin 763 | Linux Foundation 765 |
768 | 769 | 770 |

Introduction

771 | 772 | 773 | 774 | 775 | * New Attendees 776 | 777 |

Old Actions

778 | 779 | 780 | 781 | 782 | * Making the Repo Human Readable: - Sal 783 | * What elements do we want to include on the repo? I’ll be borrowing some style from [this repo](https://github.com/finos/zenith) to focus it on projects/papers 784 | * Anyone willing to be an expert for the OSI AI licence definition? 785 | * This would be an informal, internal AmA style with their legal experts. 786 | * OS specific recommendations doc - Sal 787 | * Identify targets for security slam with AI - Sal 788 | * Create MSVR doc - Jay 789 | * Update Charter doc from original mission statement - Dan 790 | * Try to find owners for docs - Nigel 791 | * Need more 792 | 793 |

Topics

794 | 795 | 796 | 797 | 798 | * Voting results for co-leads 799 | * THANK YOU to everyone who was willing to be lead. We are VERY grateful! We had an awesome set of candidates, and we’re lucky to have any of them. 800 | * The winning co-leads per [https://github.com/ossf/ai-ml-security/issues/5](https://github.com/ossf/ai-ml-security/issues/5) are: 801 | * Jay White - GitHub @camaleon2016 802 | * Mihai Maruseac - GitHub @[mihaimaruseac](https://github.com/mihaimaruseac) 803 | * Started on TensorFlow, working on [SAIF](https://blog.google/technology/safety-security/introducing-googles-secure-ai-framework/), lots of overlap so it’s good to be involved here! 804 | * Welcome aboard! 805 | * Congratulations new co-leads! 806 | * HUGE thanks to Nigel Brown for successfully getting us to this point - please stick around! 807 | * We might choose to re-vote on the meeting time, the “winning” seems to cause some problems 808 | * LF AI - Meeting with Ibrahim 809 | * Alejandro is main point of contact 810 | * Talk to LF AI TAC on Thu Nov 30, 2023 2pm - 3pm (GMT) 15 min 811 | * OpenSSF brainstorming collaboration with LF AI & Data 812 | * Couple of slides (1 week before) 813 | * Who are we? 814 | * How can we contribute to your projects? 815 | * Who is to do this? 816 | * Mihai can do it 817 | * David A. Wheeler: We need to keep working on connecting OpenSSF + LF AI & Data better 818 | * MVSR progress 819 | * OpenSSF - think it’s okay 820 | * [ossf/ai-ml-security/mvsr.md](https://github.com/ossf/ai-ml-security/blob/main/mvsr.md) 821 | * Charter progress 822 | * Dan: No progress but I've got the action - will happen before next week. 823 | * Report back from section owners 824 | * 825 | * 826 | * Should we link to these draft documents from the GitHub README? 827 | * We should probably migrate these to GitHub for easier understanding of what’s changing 828 | * David A. Wheeler: Updated draft presentation on my AI/ML presentation here, comments welcome: [https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt](https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt) - email to dwheeler at linuxfoundation . org 829 | * Meeting time challenges. 830 | * Options: redo polls, alternate timezones 831 | * Old Actions 832 | * Licensing? 833 | * Not sure what ask is 834 | * Christine: I’m helping to draft OSI AI document, to be shared next week 835 | * David: At least sharing the OSI draft would be great, then we can work with OSI or raise issues or whatever 836 | * Any other business (AOB) 837 | 838 |

2023-09-27

839 | 840 | 841 |

Attendees:

842 | 843 | 844 | Put an “X” in the “Present” column if present 845 | 846 | 847 | 848 | 849 | 851 | 853 | 855 | 856 | 857 | 859 | 861 | 863 | 864 | 865 | 867 | 869 | 871 | 872 | 873 | 875 | 877 | 879 | 880 | 881 | 883 | 885 | 887 | 888 | 889 | 891 | 893 | 895 | 896 | 897 | 899 | 901 | 903 | 904 | 905 | 907 | 909 | 911 | 912 | 913 | 915 | 917 | 919 | 920 | 921 | 923 | 925 | 927 | 928 | 929 | 931 | 933 | 935 | 936 | 937 | 939 | 941 | 943 | 944 |
Present 850 | Name 852 | Organization 854 |
X 858 | Nigel Brown 860 | Chair, stacklok 862 |
X 866 | Jeff Borek 868 | IBM 870 |
X 874 | Pedro Ferracini 876 | Mercado Libre OSPO 878 |
x 882 | Christine Abernathy 884 | F5 886 |
X 890 | Munawar Hafiz 892 | OpenRefactory 894 |
X 898 | David A. Wheeler 900 | Linux Foundation 902 |
X 906 | Naasief Edross 908 | Google 910 |
X 914 | Mark Sturdevant 916 | IBM 918 |
X 922 | Csaba Zoltani 924 | Nokia 926 |
X 930 | Sal Kimmich 932 | 934 |
x 938 | Sarah Evans 940 | Dell Technologies 942 |
945 | 946 | 947 |

Apologies:

948 | 949 | 950 | 951 | 952 | * Mihai Maruseac (Google GOSST): traveling 953 | * Sanket Naik 954 | * Have a conflict at the meeting time tomorrow 955 | * Reviewed the AI Security Telemetry document and Slack channel threads. Awaiting feedback on the “purpose” section in the document to proceed further on section 1. 956 | 957 |

Introduction

958 | 959 | 960 | 961 | 962 | * New Attendees 963 | 964 |

Old Actions

965 | 966 | 967 | 968 | 969 | * OS specific recommendations doc - Sal 970 | * Identify targets for security slam with AI - Sal 971 | * Create MSVR doc - Jay 972 | * Update Charter doc from original mission statement - Dan 973 | * Try to find owners for docs - Nigel 974 | * Need more 975 | 976 |

Topics

977 | 978 | 979 | 980 | 981 | * Voting results 982 | * [https://github.com/ossf/ai-ml-security/issues/3](https://github.com/ossf/ai-ml-security/issues/3) co-leads 983 | * [https://github.com/ossf/ai-ml-security/issues/2](https://github.com/ossf/ai-ml-security/issues/2) lead 984 | * ~~Congratulations new co-leads!~~ 985 | * Unfortunately, we accidentally had two separate voting processes. Our apologies to all. It’s not clear how to “fix” this irregularity. 986 | * We decided to Re run the process, with votes due in 2 weeks. That seems the safest. 987 | * Meeting time [https://www.when2meet.com/?21348484-Fx6Gv](https://www.when2meet.com/?21348484-Fx6Gv) 988 | * 1 hour earlier seems to be favourite 989 | * Agree? 990 | * David A. Wheeler: The WG repo <[https://github.com/ossf/ai-ml-security](https://github.com/ossf/ai-ml-security)> is very sparse. Please create pull requests to turn the very sparse README, etc., so it points to existing work, explains what’s going on, etc. [https://github.com/ossf/ai-ml-security/issues/4](https://github.com/ossf/ai-ml-security/issues/4) requests auditions (it’s okay if there are links to Google documents, we just want to make sure newcomers can find things). 991 | * Stub docs in github 992 | * [Ai_ml_telemetry.md](https://github.com/ossf/ai-ml-security/blob/main/telemetry/ai_ml_telemetry.md) 993 | * [ai_ml_landscape.md](https://github.com/ossf/ai-ml-security/blob/main/landscape/ai_ml_landscape.md) 994 | * MSVR 995 | * - OpenSSF-wide MVSR 996 | * [ai-ml-security/mvsr.md](https://github.com/ossf/ai-ml-security/mvsr.md) 997 | * Charter 998 | * David’s [presentation idea](https://lists.openssf.org/g/openssf-wg-ai-ml-security/topic/idea_i_give_my_presentation/101606838?p=,,,20,0,0,0::recentpostdate/sticky,,,20,2,0,101606838,previd%3D1695767014421714791,nextid%3D1694722639795369434&previd=1695767014421714791&nextid=1694722639795369434) - any thoughts? My hope is that it’ll encourage others to join this group. 999 | * Objections? No objections heard. 1000 | * We want to improve our GitHub repo, so that when new visitors arrive they’ll learn more about the WG. 1001 | * Need to interact with other groups: Best Practices WG, SBOMs, etc. Hard to find a group that doesn’t have some interaction with AI/ML 1002 | * We need to ensure LF AI & Data foundation participates! 1003 | * We have some, want more. Let’s talk with Ibrahim Haddad (Executive Director) to make sure this happens. 1004 | * Nigel will talk with Ibrahim. 1005 | * One challenge: many participants are users, not experts & developers. 1006 | * SME = subject matter expert 1007 | * A big challenge: Having the data to analyze 1008 | * Do we have a comment on [Federal Register :: Artificial Intelligence and Copyright](https://www.federalregister.gov/documents/2023/08/30/2023-18624/artificial-intelligence-and-copyright) 1009 | * Sounds like something we should have 1010 | * Comment period that ends in 21 days. (10/18/2023) - alerting public 1011 | * Policy committee, share - Nigel 1012 | * Probably not enough time for us to create a cogent combined response. However, if someone writes a comment, please feel free to share with the group so others writing responses can consider them. 1013 | * Security Slam 1014 | * Is this related to [https://github.com/ossf/wg-securing-critical-projects](https://github.com/ossf/wg-securing-critical-projects) ? 1015 | * Report back from section owners (we’re out of time, there has been progress, we’ll pick this up next meeting) 1016 | * 1017 | * 1018 | * AOB 1019 | * Sal: We’re finalizing a definition of “open source definition of AI” at OSI, contact Sal K. 1020 | 1021 |

New actions

1022 | 1023 | 1024 | 1025 | 1026 | * Re-run the voting - David 1027 | * Making the Repo Human Readable: - Sal 1028 | * What elements do we want to include on the repo? I’ll be borrowing some style from [this repo](https://github.com/finos/zenith) to focus it on projects/papers 1029 | * Change the meeting slot - Nigel 1030 | * Requested and moved 1031 | * Anyone willing to be an expert for the OSI AI license definition? This would be an informal, internal AmA style with their legal experts. 1032 | * Can we define a clear point of contact across LF groups? 1033 | * Talk to Ibrahim - Nigel 1034 | * [Federal Register :: Artificial Intelligence and Copyright](https://www.federalregister.gov/documents/2023/08/30/2023-18624/artificial-intelligence-and-copyright) - Policy committee, share - Nigel 1035 | * Shared via Brian (couldn’t find another address) 1036 | 1037 |

2023-09-20

1038 | 1039 | 1040 |

Canceled

1041 | 1042 | 1043 | 1044 | 1045 | * Some are in Spain 1046 | 1047 |

2023-09-13

1048 | 1049 | 1050 |

Attendees:

1051 | 1052 | 1053 | 1054 | 1055 | * Nigel Brown (Chair, stacklok ) 1056 | * David Edelsohn (IBM) 1057 | * Pedro Ferracini (Mercado Libre OSPO) 1058 | * Mihai Maruseac (Google GOSST) 1059 | * Christine Abernathy (F5) 1060 | * Sanket Naik (Palosade) 1061 | * Munawar Hafiz (OpenRefactory) 1062 | * Victor Lu (Independent) 1063 | 1064 |

Introduction

1065 | 1066 | 1067 | 1068 | 1069 | * New Attendees 1070 | 1071 |

Old Actions

1072 | 1073 | 1074 | 1075 | 1076 | * OS specific recommendations doc - Sal 1077 | * Identify targets for security slam with AI - Sal 1078 | * Create MSVR doc - Jay 1079 | * Update Charter doc from original mission statement - Dan 1080 | * Check new meeting time in slack - Nigel 1081 | * Failed 1082 | * ~~Try ~~~~ ~~<- scrap this 1083 | * Here is the link to the meeting planner [https://www.when2meet.com/?21348484-Fx6Gv](https://www.when2meet.com/?21348484-Fx6Gv) 1084 | * Try to find owners for docs - Nigel 1085 | * Need more 1086 | 1087 |

Topics

1088 | 1089 | 1090 | 1091 | 1092 | * Discussion “how AI risk differ from traditional software risk” 1093 | * Victor to lead 1094 | 1095 | 1096 | 1097 | 1098 | Summary: There are many AI/ML Topics we need to work on at OpenSSF and there 1099 | 1100 | 1101 | will NOT be much overlap with work by other foundations such as LF AI 1102 | 1103 | 1104 | The content of the above document can be merged into Landscape White paper 1105 | 1106 | * Feedback from Sanket: 1107 | * Keeping in mind that the audience is OSS devs and OSS consumers, the deliverables could be: 1108 | 1. Common reference architecture / building blocks for ML that will be used across OSSF 1109 | 2. Address the threats for each block 1110 | 3. Provide best practices (in current state) to protect against threats for each block 1111 | 4. Provide guidance to keep in mind for future dev 1112 | * Define AI security terminology 1113 | * Differentiate LFAI security mandate from OpenSSF AI/ML WG 1114 | * Propose mitigation 1115 | * Report back from section owners 1116 | * 1117 | * 1118 | * Old actions 1119 | * Reminder about the vote 1120 | * [https://github.com/ossf/ai-ml-security/issues/3](https://github.com/ossf/ai-ml-security/issues/3) co-leads 1121 | * [https://github.com/ossf/ai-ml-security/issues/2](https://github.com/ossf/ai-ml-security/issues/2) lead 1122 | * Nominations by 15th September 2023 please 1123 | * Do we need more time for nominations? 1124 | * Meeting 20th - I’m unavailable 1125 | * Cancel? 1126 | * We will cancel 1127 | * ~~Does anyone else want to chair?~~ 1128 | * AOB 1129 | 1130 |

New actions

1131 | 1132 | 1133 | 1134 | 1135 | * 1136 | 1137 |

2023-09-06

1138 | 1139 | 1140 |

Attendees:

1141 | 1142 | 1143 | 1144 | 1145 | * Nigel Brown (Chair, stacklok ) 1146 | * Dan Appelquist (TAC, Snyk) 1147 | * Sanket Naik (Palosade) 1148 | * Randall T. Vasquez (LF) 1149 | * David Edelsohn (IBM) 1150 | * Sal Kimmich (EscherCloud) 1151 | * Jason Keirstead (Pobal Cyber) 1152 | * Jay White (Microsoft) 1153 | 1154 |

Apologies

1155 | 1156 | 1157 | 1158 | 1159 | * Mihai Maruseac (Google GOSST) 1160 | 1161 |

Introduction

1162 | 1163 | 1164 | 1165 | 1166 | * New Attendees 1167 | 1168 |

Old Actions

1169 | 1170 | 1171 | 1172 | 1173 | * Blog - unassigned for now 1174 | 1175 |

Topics

1176 | 1177 | 1178 | 1179 | 1180 | * Tac, Sept 5th 1181 | * We are now officially an [incubating](https://github.com/ossf/tac/blob/main/process/working-group-lifecycle.md) working group 1182 | * Still need more focus, but acknowledge this will come with time 1183 | * We need election and co-leads 1184 | * New time slot and frequency 1185 | * Various reasons 1186 | * Proposed bi-weekly on Mondays at 15:00 (UK BST) 1187 | * Alternative Thursday at 16:00 UK (11:00 US Eastern) 1188 | * Switching to github .md 1189 | * [https://github.com/ossf/ai-ml-security](https://github.com/ossf/ai-ml-security) 1190 | * Review PRs rather than manual 1191 | * Any volunteers to do this 1192 | * Landscape 1193 | * Telemetry 1194 | * Charter.md in README.md 1195 | * MVSR 1196 | * Review 1197 | * 1198 | * 1199 | * AOB 1200 | 1201 |

Notes

1202 | 1203 | 1204 | 1205 | 1206 | * Jay: there should be a co-lead. 1207 | * Dan: agree. 1208 | * Nigel: Will set up a vote. 1209 | * Jay: 1st order of business - we need to drill down on the MVSR, charter… not a 2nd or 3rd thing. We need to do that first. 1210 | * Jay: MVSR - Mission, Vision, Strategy and Roadmap 1211 | * Strategy - we do have some idea - let’s get something down on paper 1212 | * Mission and vision we have something right now 1213 | * Dan: [https://github.com/ossf/ai-ml-security](https://github.com/ossf/ai-ml-security) 1214 | * David: we need to nail down what we’re trying to achieve 1215 | * Jay: We have a mission … once we have a mission & vision then strategy gets clearer - then roadmap is according… 1216 | * David: we need have definitive purpose 1217 | * Dan: we have landscape and telemetry documents… 1218 | * Victor 1219 | * On telemetry document – I can take the lead on this section… (4) we can use slack or zoom calls to coordinate. 1220 | * Suggest people take the lead on each section… 1221 | * We discuss moving the Telemetry document over to github – with each section in its own Markdown document. 1222 | * Alternatively we keep it in Google docs… 1223 | * Telemetry document: 1224 | * Every section to have an owner. 1225 | * Changes to be presented back to main group. 1226 | * We keep it in google docs for now. 1227 | * 16 sections including the appendix: 1228 | * 1. Intro – Santek 1229 | * 2. Victor 1230 | * 3. “AI Security and Telemetry”– - Dan A. 1231 | * 4. [missing] - Victor? 1232 | * 5. (regulation) missing owner 1233 | * 6. Ai Standards & best practices - missing owner 1234 | * 1235 | * Meeting time change - proposing **15:00 London Time (BST) on Mondays**.… (bi-weekly) 1236 | * We’ll put this in the chat and give people a chance to object… 1237 | * Yotam: MITRE vulnerability risks database - can we get them to come speak to the group? [agreed] Also - 2 others presented at Defcon about open source LLMs - i will ping them as well. 1238 | * I will e-mail the MITRE person and we can find a date. 1239 | 1240 |

Old actions

1241 | 1242 | 1243 | 1244 | 1245 | * OS specific recommendations doc - Sal 1246 | * Identify targets for security slam with AI - Sal 1247 | 1248 |

Actions

1249 | 1250 | 1251 | 1252 | 1253 | * Reinstate the vote for lead and co-leads of group - Nigel 1254 | * [https://github.com/ossf/ai-ml-security/issues/3](https://github.com/ossf/ai-ml-security/issues/3) co-leads 1255 | * [https://github.com/ossf/ai-ml-security/issues/2](https://github.com/ossf/ai-ml-security/issues/2) lead 1256 | * Nominations by 15th September 2023 please 1257 | * Create MSVR doc - Jay 1258 | * Update Charter doc from original mission statement - Dan 1259 | * Check new meeting time in slack - Nigel 1260 | * Try to find owners for docs - Nigel 1261 | 1262 |

2023-08-30

1263 | 1264 | 1265 |

Attendees:

1266 | 1267 | 1268 | 1269 | 1270 | * Nigel Brown (Chair, stacklok ) 1271 | * Sarah Evans (Dell Technologies) 1272 | * Pedro Ferracini (Mercado Libre OSPO) 1273 | * David A. Wheeler (LF) 1274 | * Mark Sturdevant (IBM) 1275 | * Sanket Naik (Palosade) 1276 | * Christine Abernathy (F5) 1277 | * Jeff Borek (IBM) 1278 | * Munawar Hafiz (OpenRefactory) 1279 | 1280 |

Apologies

1281 | 1282 | 1283 | 1284 | 1285 | * Mihai Maruseac (Google GOSST) 1286 | 1287 |

Introduction

1288 | 1289 | 1290 | 1291 | 1292 | * New Attendees 1293 | 1294 |

Old Actions

1295 | 1296 | 1297 | 1298 | 1299 | * Blog - unassigned for now 1300 | 1301 |

Topics

1302 | 1303 | 1304 | 1305 | 1306 | * Tac, Sept 5th [LF AI & DATA | ML security committee + OpenSSF proposal · Issue #188 · ossf/tac · GitHub](https://github.com/ossf/tac/issues/188) 1307 | * Lots of enthusiasm, light on work 1308 | * Will report back to OpenSSF TAC, the good news is that we’re coordinating with the LF AI & Data 1309 | * We’ll see what TAC has to say 1310 | * Need more focus on OPEN SOURCE aspects of this, but need to keep context. 1311 | * Review 1312 | * Review 1313 | * AOB 1314 | 1315 |

New actions

1316 | 1317 | 1318 | 1319 | 1320 | * OS specific recommendations doc - Sal 1321 | * Identify targets for security slam with AI - Sal 1322 | * We need a change summary for the docs - Nigel 1323 | 1324 |

2023-08-23

1325 | 1326 | 1327 |

Attendees:

1328 | 1329 | 1330 | 1331 | 1332 | * Nigel Brown (Chair, stacklok ) 1333 | * Pedro Ferracini (Mercado Libre OSPO) 1334 | * Sarah Evans (Dell Technologies) 1335 | * Christine Abernathy (F5) 1336 | * Dan Appelquist (Snyk) / TAC 1337 | * Andreas Fehlner (ONNX) 1338 | * Yotam Perkal (Rezilion) 1339 | * Jason Keirstead (Cyware) 1340 | * Munawar Hafiz (OpenRefactory) 1341 | * Allen Stewart (Microsoft) 1342 | * Csaba Zoltani (Nokia) 1343 | * Jay White (Microsoft) 1344 | 1345 |

Apologies

1346 | 1347 | 1348 | 1349 | 1350 | * Mihai Maruseac (Google GOSST) 1351 | 1352 |

Introduction

1353 | 1354 | 1355 | 1356 | 1357 | * New Attendees 1358 | 1359 |

Old Actions

1360 | 1361 | 1362 | 1363 | 1364 | * Blog - unassigned for now 1365 | 1366 |

Topics

1367 | 1368 | 1369 | 1370 | 1371 | * Possibly a very short meeting - little movement on the docs 1372 | * Review 1373 | * Review 1374 | * Dan To take ownership of 1375 | * AOB 1376 | * Whitehouse - mobilisation plan re-fresh - 1377 | * DARPA contest - around AI security - open source use of AI - DARPA came to OpenSSF… [Announced](https://openssf.org/press-release/2023/08/09/openssf-to-support-darpa-on-new-ai-cyber-challenge-aixcc/) at BlackHat. DARPA to pay for 2 OpenSSF staff members who will run this competition - intentionally kept apart from other things OpenSSF is doing. 1378 | * Question about the association of this with OpenSSF - unclear on the roadmap… 1379 | * 1380 | 1381 |

New actions

1382 | 1383 | 1384 | 1385 | 1386 | * Go back to the TAC 1387 | * Contextualize the DARPA initiative - Sarah - Dan to support 1388 | * Blog post on telemetry ← Sal 1389 | 1390 |

2023-08-16

1391 | 1392 | 1393 |

Attendees:

1394 | 1395 | 1396 | 1397 | 1398 | * Nigel Brown (Chair, stacklok ) 1399 | * Pedro Ferracini (Mercado Libre OSPO) 1400 | * Mark Sturdevant (IBM) 1401 | * Csaba Zoltani (Nokia) 1402 | * Andres Orbe (Alpha-Omega) 1403 | * Sal Kimmich (GadflyAI) 1404 | * Jay White (Microsoft) 1405 | * Altaz Valani (DevSecOpsMentor.com) 1406 | * Sanket Naik (Palosade) 1407 | * Christine Abernathy (F5) 1408 | * Parag Patil (Palosade) 1409 | * Sarah Evans (Dell Technologies) 1410 | * Prachi Jadhav (Stacklok) 1411 | 1412 |

Apologies

1413 | 1414 | 1415 | 1416 | 1417 | * Mihai Maruseac (Google GOSST) 1418 | 1419 |

Introduction

1420 | 1421 | 1422 | 1423 | 1424 | * New Attendees 1425 | 1426 |

Old Actions

1427 | 1428 | 1429 | 1430 | 1431 | * Blog - unassigned for now 1432 | 1433 |

Topics

1434 | 1435 | 1436 | 1437 | 1438 | * Question: Has anyone signed an ML model? 1439 | * Up/downloaded from e.g. Hugging Face 1440 | * SPDX 3.0? Sigstore? 1441 | * Review 1442 | * Review 1443 | * AOB 1444 | * Yotam blackhat 1445 | * Hugging face vulnerability 1446 | * AI attack 1447 | * Mitre - ML vulnerabilites 1448 | 1449 |

New actions

1450 | 1451 | 1452 | 1453 | 1454 | * Collaboration Touch Point: There is a regularly meeting AI SBoMs for SPDX Every Wednesday, August 16**⋅**5:00 – 5:45pm EST: [https://zoom.us/j/92452702075](https://zoom.us/j/92452702075) 1455 | * Collaboration touch point - FINOS [https://www.finos.org/](https://www.finos.org/) 1456 | * Check selling pitch of mistral.ai - they got > 200 000 000 $ funding with selling pitch alone. It involves whitebox LLM 1457 | * Check with refact.ai if their LLM is whitebox. Their software is opensource and might be run in a docker container. Basically it’s a plugin like a Copilot 1458 | 1459 |

2023-08-09

1460 | 1461 | 1462 |

Attendees:

1463 | 1464 | 1465 | 1466 | 1467 | * Nigel Brown (Chair, stacklok ) 1468 | * Pedro Ferracini (Mercado Libre OSPO) 1469 | * Christine Abernathy (F5) 1470 | * Cheuk Ho (OpenSSF) 1471 | * Michael Scovetta (Microsoft) 1472 | * Mark Sturdevant (IBM) 1473 | * Amanda Martin (LF) 1474 | * Allen Stewart (Microsoft) 1475 | * David Edelsohn (IBM) 1476 | * Brian Knight (Microsoft) 1477 | * Altaz Valani 1478 | * Victor Lu 1479 | 1480 |

Apologies

1481 | 1482 | 1483 | 1484 | 1485 | * Mihai Maruseac (Google GOSST) 1486 | 1487 |

Introduction

1488 | 1489 | 1490 | 1491 | 1492 | * Hello Pedro, Cheuk 1493 | 1494 |

Old Actions

1495 | 1496 | 1497 | 1498 | 1499 | * Blog - unassigned for now 1500 | * Deliverable matrix in landscape doc - Sal 1501 | 1502 |

Topics

1503 | 1504 | 1505 | 1506 | 1507 | * LF/AI update 1508 | * Meeting Thursday, 4pm BST [https://lists.lfaidata.foundation/g/mlsecurity-committee/calendar](https://lists.lfaidata.foundation/g/mlsecurity-committee/calendar) 1509 | * All welcome 1510 | * Review 1511 | * Review 1512 | * AOB 1513 | 1514 |

New actions

1515 | 1516 | 1517 | 1518 | 1519 | * Create an issue (Nigel) 1520 | * [https://github.com/ossf/tac/issues/188](https://github.com/ossf/tac/issues/188) 1521 | 1522 |

2023-08-02

1523 | 1524 | 1525 |

Attendees:

1526 | 1527 | 1528 | 1529 | 1530 | * Nigel Brown (Chair, stacklok ) 1531 | * Sal Kimmich (GadflyAI) 1532 | * Christine Abernathy (F5) 1533 | * Mark Sturdevant (IBM) 1534 | * Fridolin Pokorny (Independent) 1535 | * Maya Costantini (Red Hat) 1536 | * Andreas Fehlner (ONNX, Trusted AI Committee of LF AI&Data) 1537 | * David Espejo (Union.ai) 1538 | * Allen Stewart (Microsoft) 1539 | 1540 |

Introduction

1541 | 1542 | 1543 | 1544 | 1545 | * New Attendees 1546 | 1547 |

Old Actions

1548 | 1549 | 1550 | 1551 | 1552 | * Blog - unassigned for now 1553 | * Deliverable matrix in landscape doc - Sal 1554 | * Common function for LF AI and Openssf AI ML - Nigel 1555 | * Waiting 1556 | * Get hugging face contact to Christine - Nigel 1557 | * Waiting 1558 | 1559 |

Topics

1560 | 1561 | 1562 | 1563 | 1564 | * LF/AI update 1565 | * Review 1566 | * No objections to this plan. 1567 | * Review 1568 | * Review 1569 | * Assign sections for [AI Security Telemetry ](https://docs.google.com/document/d/1J8M1F5ev9tXzMpA3dAFXqXYs2t-T10xt0_ObXnqNN04/edit?usp=sharing) 1570 | * AOB 1571 | 1572 |

New actions

1573 | 1574 | 1575 |

2023-07-26

1576 | 1577 | 1578 |

Attendees:

1579 | 1580 | 1581 | 1582 | 1583 | * Nigel Brown (Chair, stacklok ) 1584 | * Dan Appelquist (Snyk) 1585 | * Sal Kimmich (EscherCloudAI) 1586 | * Pedro Ferracini (Mercado Libre OSPO) 1587 | * Sanket Naik (Palosade) 1588 | * Mark Sturdevant (IBM) 1589 | * Christine Abernathy (F5) 1590 | * Michael Gildein (IBM) 1591 | * Mihai Maruseac (Google GOSST) 1592 | * Sarah Evans (Dell Technologies) 1593 | * Brian Behlendorf (LF) 1594 | * Victor Lu (Independent) 1595 | 1596 |

Introduction

1597 | 1598 | 1599 | 1600 | 1601 | * New Attendees 1602 | 1603 |

Old Actions

1604 | 1605 | 1606 | 1607 | 1608 | * Blog - unassigned for now 1609 | * Deliverable [matrix in content backlog](https://docs.google.com/document/d/1B8UlF-CQUN9092DIjSjDEEBJUcR-QN8q-Sno_Eq-_SU/edit?usp=sharing) - Sal 1610 | * Common function for LF AI and Openssf AI ML - Nigel 1611 | * Waiting 1612 | * Get hugging face contact to Christine - Nigel 1613 | * Waiting 1614 | 1615 |

Topics

1616 | 1617 | 1618 | 1619 | 1620 | * Meeting overhead 1621 | * Review 1622 | * AOB 1623 | * Deliverable [matix in content backlog](https://docs.google.com/document/d/1B8UlF-CQUN9092DIjSjDEEBJUcR-QN8q-Sno_Eq-_SU/edit?usp=sharing), and suggested blog posts 1624 | * Last week we had a call - Linux Foundation AI workstream attended 1625 | * Yotam: added some context in the landscape. 1626 | * Sal: Will add a specific deliverable for Security Telemetry, and get a whitepaper writing group for it. This is our highest priority deliverable, and can help to outline future community contributions 1627 | 1628 |

New actions

1629 | 1630 | 1631 | 1632 | 1633 | * In alignment with the above work, the below content backlog would be a great way to republish from LF blog - or to publish directly for the "[Cyberscape Zine 2.0](https://www.gadfly.ai/post/call-for-submissions-cyberscape-zine-2-0-voices-and-visualizations-on-ai-greetings-creators)" contest, here are 10 article ideas focusing on AI and security: 1634 | * 1. The Landscape of AI and Cybersecurity 1635 | * 2. Securing AI Systems: Challenges and Solutions 1636 | * 3. AI in Cybersecurity: Friend or Foe? 1637 | * 4. The Ethics of AI in Security: Balancing Safety and Privacy 1638 | * 5. Adversarial Attacks on AI: An Emerging Threat 1639 | * 6. AI and Data Privacy: A Complex Relationship 1640 | * 7. The Role of AI in Detecting and Preventing Cyber Attacks 1641 | * 8. AI in Secure Communication: The Future of Privacy 1642 | * 9. Securing the AI Supply Chain: Challenges and Solutions 1643 | * 10. The Future of AI and Security: Predictions and Possibilities 1644 | * Review Yotam’s additions to the Landscape document 1645 | * Yotam to share learnings from Defcon 1646 | * Discussion on deliverables 1647 | * We produce guidelines and proposals 1648 | * Possible to win “august of AI” on hackernoon.com <- $1000 for best article. 1649 | * Provenance & Security Telemetry for AI systems … maybe an “**industry whitepaper**” 1650 | * Sarah - licensing and how evolving the licensing is; security - how do you find out if your data contains vulnerabilities - having a workflow in place to mitigate: a **whitepaper** would be very helpful. I would participate. 1651 | * Nigel: I would like to see a document that says “these are the limitations - these are the problems you can’t solve” 1652 | * Sal: SBOMs - usually composable - different weights / epochs … we have an opportunity - show people who are building pipelines - work with the top 50 open source LLMs. “Specific data protections / telemetry / communication.” Feedback loops from people doing the same in [OpenSSF member orgs]. 1653 | * Nigel: I think LF would be interested in this. 1654 | * ?Action: Sal to work on a first draft? 1655 | 1656 |

2023-07-19

1657 | 1658 | 1659 |

Attendees:

1660 | 1661 | 1662 | 1663 | 1664 | * Nigel Brown (Chair, stacklok ) 1665 | * Pedro Ferracini (Mercado Libre OSPO) 1666 | * Zach Steindler (GitHub, TAC) 1667 | * Mihai Maruseac (Google GOSST) 1668 | * Mark Sturdevant (IBM) 1669 | * Luke Hinds (stacklok, OpenSSF GB) 1670 | * David Espejo (Union.ai) 1671 | 1672 |

Introduction

1673 | 1674 | 1675 | 1676 | 1677 | * New Attendees 1678 | * Hello Mark, Pedro 1679 | 1680 |

Old Actions

1681 | 1682 | 1683 | 1684 | 1685 | * Blog - unassigned for now 1686 | * Deliverable matrix in landscape doc - Sal 1687 | * Git issue for chair nominations - see [https://github.com/ossf/ai-ml-security/issues/1](https://github.com/ossf/ai-ml-security/issues/1) 1688 | 1689 |

Topics

1690 | 1691 | 1692 | 1693 | 1694 | * [Nominations](https://github.com/ossf/ai-ml-security/issues/1) for chair? 1695 | * Deferred after input from Brian 1696 | * LF working group - Nigel 1697 | * Meeting 1698 | * Alejandro 1699 | * Openssf overview volunteer 1700 | * Ideas (Christine’s) 1701 | * Hugging Face 2FA 1702 | * wg-securing-software repos did a survey of security capabilities software repositories should have: [https://github.com/ossf/wg-securing-software-repos/tree/main/survey/2022](https://github.com/ossf/wg-securing-software-repos/tree/main/survey/2022) 1703 | * Not open source (neither is github) 1704 | * Scorecard extension 1705 | * This would be good 1706 | * Project Oak - only an idea 1707 | * We’d like attestations on modelcards 1708 | * Review 1709 | * Discuss a section and its issues 1710 | 1711 |

New actions

1712 | 1713 | 1714 | 1715 | 1716 | * Common function for LF AI and Openssf AI ML - Nigel 1717 | * Discuss coordination with OWASP 1718 | * Get hugging face contact to Christine - Nigel 1719 | 1720 |

2023-07-12

1721 | 1722 | 1723 |

Attendees:

1724 | 1725 | 1726 | 1727 | 1728 | * Nigel Brown (Chair, stacklok ) 1729 | * Zach Steindler (GitHub, TAC) 1730 | * Dan Appelquist (Snyk) 1731 | * David A. Wheeler (Linux Foundation) 1732 | * Sal Kimmich (EscherCloud) 1733 | * David Edelsohn (IBM) 1734 | * Anna Jung (VMware) 1735 | * Luke Hinds (stacklok) 1736 | * Michael Scovetta (Microsoft) 1737 | * Jason Keirstead (Pobal Cyber) 1738 | * 1739 | 1740 |

Apologies

1741 | 1742 | 1743 | 1744 | 1745 | * Mihai Maruseac (Google GOSST) 1746 | 1747 |

Introduction

1748 | 1749 | 1750 | 1751 | 1752 | * Welcome Andrew 1753 | * 1754 | 1755 |

Old Actions

1756 | 1757 | 1758 | 1759 | 1760 | * ~~Luke to find sponsor on TAC~~ Zac has joined 1761 | * ~~Git MD repo - Nigel~~ (David created one) 1762 | * Blog - unassigned for now 1763 | 1764 |

Topics

1765 | 1766 | 1767 | 1768 | 1769 | * Nominations for chair? 1770 | * Do we want to vote before TAC approval? 1771 | * Will set it up (see [https://github.com/ossf/ai-ml-security/issues/1](https://github.com/ossf/ai-ml-security/issues/1) ) 1772 | * Mechanism 1773 | * David Wheeler will set up the repo, then 1774 | * Sal will fill out the repo with group’s info, and add a github issue for voting 1775 | * Title suggestions for repo: AI-ML-Security 1776 | * [TAC feedback](https://docs.google.com/document/d/1706vJpuyq4NpHpVYsOTeU90j5RpoJREX7MRlhAo-CW4/edit) 1777 | * Liked the scope doc 1778 | * Zac Steindler (TAC) will join us on the 19th 1779 | * ~~Generally negative~~ It was not as negative as I thought. I stand corrected (Nigel) 1780 | * Don’t feel we are different enough 1781 | * General hesitation 1782 | * ~~Crob didn’t have time - nobody else stepped up~~ Zac did step up and came along 1783 | * Chicken and egg situation 1784 | * Yep, they said 1785 | * Options? 1786 | * Carry on regardless of approval? 1787 | * Hard to allocate resources 1788 | * Merge/move to [ML Security Committee](https://wiki.lfaidata.foundation/display/DL/ML+Security+Committee)? 1789 | * Review 1790 | * Progress? 1791 | * We need to focus on the deliverables - will do a matrix 1792 | 1793 |

New actions

1794 | 1795 | 1796 | 1797 | 1798 | * Create a repo - David 1799 | * [https://github.com/ossf/ai-ml-security](https://github.com/ossf/ai-ml-security) 1800 | * LF working group - Nigel to report back next week 1801 | * Deliverable matrix in landscape doc - Sal 1802 | * Git issue for chair nominations - see [https://github.com/ossf/ai-ml-security/issues/1](https://github.com/ossf/ai-ml-security/issues/1) 1803 | 1804 |

Notes

1805 | 1806 | 1807 | 1808 | 1809 | * Sal: We need a relationship with the TAC, Timeline inside of the next month 1810 | * Michael: it should be easier to create a working group - 1811 | * Zach: that sounds like a special interest group - working groups are unbounded… need common terminology 1812 | * Nigel: SIG needs to be under a wg… 1813 | * David: usually 1814 | * David E: let’s talk about AI/ML security and stop talking about process…. 1815 | 1816 |

2023-07-05

1817 | 1818 | 1819 |

Attendees:

1820 | 1821 | 1822 | 1823 | 1824 | * Nigel Brown (Chair, stacklok ) 1825 | * Zachary Newman (Chainguard) 1826 | * Gary White (Verizon) 1827 | * Sal Kimmich (EscherCloud) 1828 | * Michael Scovetta (Microsoft) 1829 | * Christine Abernathy (F5) 1830 | * Michael Gildein (IBM) 1831 | * David Espejo (Union.ai) 1832 | * Jay White (Microsoft) 1833 | * Brian Knight (Microsoft) 1834 | * Sanket Naik (Palosade) 1835 | * David Edelsohn (IBM) 1836 | 1837 |

Apologies

1838 | 1839 | 1840 | 1841 | 1842 | * Mihai Maruseac (Google GOSST) 1843 | 1844 |

Introduction

1845 | 1846 | 1847 | 1848 | 1849 | * Hello Gary! 1850 | 1851 |

Old Actions

1852 | 1853 | 1854 | 1855 | 1856 | * New meeting link - done 1857 | * Luke to find sponsor on TAC 1858 | * Git MD repo - Nigel 1859 | * Blog - unassigned for now 1860 | * Deliverables - main topic 1861 | * Ownership - main topic 1862 | 1863 |

Topics

1864 | 1865 | 1866 | 1867 | 1868 | * Scope document agree? 1869 | * Correct Link: 1870 | * Owners: Sal, Jay, Christine 1871 | * Do we have clear blue sky between us and other groups? 1872 | * Yes 1873 | * We want to promote cooperation between groups 1874 | * Developer using AI for development trying to secure consumption 1875 | * AI Pipeline Developer trying to secure production 1876 | * **What is unique about the mission: ** 1877 | * LFAI supports AI and Data open source projects through incubation, education, and best practices. However, their community is focused on exactly what most developer foundations must be: project acceleration, not security. OpenSSF is the foundation of security expertise, and we need to develop a cohort of security-first engineering practices. This can be done in tandem with LFAI, but it’s simple: the end users are LFAI, but the ability to mobilize security education, intervention and open source supply chain hardening for this evolving sector is clearly within the remit, and expertise, of OpenSSF. 1878 | * Very frankly, if we can get a sponsor for a security SIG/WIG in LFAI, that’s just as effective as out of OpenSSF, but until Linux is doing something real for open source AI pipeline production security, we’re losing ground on vulnerabilities every single, sunny, bureaucratic day. 1879 | * TAC and tact - next week’s meeting 1880 | * Show them the scope doc 1881 | * Any other messages? 1882 | * We want to be a sig or wg - but not nothing. 1883 | * WG more autonomy 1884 | * Do we keep going? 1885 | * Regardless of approval? - Some will, yes but with fewer resources 1886 | * Form a splinter cell? 1887 | * Review 1888 | * Consolidate interested parties from 6/21 meeting into the contributors/owners 1889 | * Added in the doc 1890 | * Should we submit an abstract overviewing this work to OpenSSF Day? 1891 | * What and who to submit? 1892 | * Panel 1893 | * Security implementation with openssf scorecard 1894 | * Sal, Yotam 1895 | 1896 |

New actions

1897 | 1898 | 1899 | 1900 | 1901 | * Sal Openssf day talk submission 1902 | * Regular show and tell 1903 | * Nominations for next week 1904 | * Luke to produce a charter 1905 | 1906 |

2023-06-28

1907 | 1908 | 1909 |

Attendees:

1910 | 1911 | 1912 | 1913 | 1914 | * Nigel Brown (Chair, stacklok ) 1915 | * David Espejo (Union.ai) 1916 | * Michael Scovetta (Microsoft) 1917 | * Jay White (Microsoft) 1918 | * Victor Lu 1919 | * Sanket Naik (Palosade) 1920 | * Sal Kimmich 1921 | * Daniel Appelquist (Snyk) 1922 | * Alexander Beaver (RIT) 1923 | * David A. Wheeler (LF) 1924 | * Luke Hinds (SRIC/GB) 1925 | * Christine Abernathy (F5) 1926 | * Pieter van Noordennen (Slim.AI) 1927 | * Mihai Maruseac (Google GOSST) 1928 | * Prachi Jadhav 1929 | * Allen Stewart (Microsoft) 1930 | 1931 |

Introduction

1932 | 1933 | 1934 | 1935 | 1936 | * New Attendees 1937 | 1938 |

Old Actions

1939 | 1940 | 1941 | 1942 | 1943 | * Luke to find sponsor on TAC 1944 | 1945 |

Topics

1946 | 1947 | 1948 | 1949 | 1950 | * New meeting link 1951 | * TAC proposal [https://github.com/ossf/tac/issues/175](https://github.com/ossf/tac/issues/175) [feedback](https://docs.google.com/document/d/18BJlokTeG5e5ARD1VFDl5bIP75OFPCtzf77lfadQ4f0/edit#heading=h.2aalagtx9xzh) 1952 | * We need a TAC sponsor 1953 | * We need a more explicit charter 1954 | * Define contents 1955 | * We need to explicitly define our crossover/overlaps with 1956 | * https://github.com/ossf/wg-best-practices-os-developers 1957 | * [https://wiki.lfaidata.foundation/display/DL/ML+Security+Committee](https://wiki.lfaidata.foundation/display/DL/ML+Security+Committee) - Brian Behlendorf had worked out some meetings with them. 1958 | * [https://owasp.org/www-project-top-10-for-large-langua ge-model-applications/#](https://owasp.org/www-project-top-10-for-large-language-model-applications/#) 1959 | * Others? 1960 | * [https://futurenetworks.ieee.org/roadmap/aiml-working-group](https://futurenetworks.ieee.org/roadmap/aiml-working-group) 1961 | * [https://cloudsecurityalliance.org/research/working-groups/artificial-intelligence/](https://cloudsecurityalliance.org/research/working-groups/artificial-intelligence/) 1962 | * [https://www.airsgroup.ai/](https://www.airsgroup.ai/) ? 1963 | * [https://www.ncsc.gov.uk/blog-post/introducing-our-new-machine-learning-security-principles](https://www.ncsc.gov.uk/blog-post/introducing-our-new-machine-learning-security-principles) ? 1964 | * [https://owasp.org/www-project-machine-learning-security-top-10/](https://owasp.org/www-project-machine-learning-security-top-10/) ? 1965 | * [https://www.etsi.org/technologies/securing-artificial-intelligence](https://www.etsi.org/technologies/securing-artificial-intelligence) ? 1966 | * [https://cs.lbl.gov/what-we-do/machine-learning/secure-machine-learning/](https://cs.lbl.gov/what-we-do/machine-learning/secure-machine-learning/) ? 1967 | * 1968 | * Sal & Jay will work this, we need to be more specific. 1969 | * Framework will cover OSSF umbrella, subscope, external groups - and clearly define distinct gap coverage in security for this group 1970 | * [SIG Scope: Defining the Gap in AI/ML Security](https://docs.google.com/document/d/11tXIecCx-PHaLGJwqT_o31WbUVXHz7mR2auxArYwnjQ/edit?usp=sharing) 1971 | * We could separate “where are the lanes” 1972 | * David A. Wheeler: Maybe LF AI works on “securing AI components & legal issues in their use”, while we work on “how to securely bring AI into larger systems” 1973 | * Our first step is fact-finding. 1974 | * It’s better go on fact-finding first so we can figure out how what the WG should do. 1975 | * David A. Wheeler did fill in parts of security landscape [https://docs.google.com/document/d/1AyivzKsERoIZcyr4XrH6CrNeUoYHhpiswThHS0XrbSU/edit](https://docs.google.com/document/d/1AyivzKsERoIZcyr4XrH6CrNeUoYHhpiswThHS0XrbSU/edit) 1976 | * Big discussion about legal issues 1977 | * At least ask models to record “where did they get the data from” including licensing etc. - traceability / provenance 1978 | * Lots of people want clear information on what are the legal/licensing of the results. Problem is, there are conflicting interpretations. 1979 | * Trace this as a “traceability challenge” instead of a legal issue. 1980 | * Describe the threats, use cases, models. 1981 | * ,OSI is working on white papers on legal interpretations. 1982 | * Dan: I’m fine with reframing this as provenance/traceability 1983 | * David: Should this be part of the landscape paper - [https://docs.google.com/document/d/1AyivzKsERoIZcyr4XrH6CrNeUoYHhpiswThHS0XrbSU/edit](https://docs.google.com/document/d/1AyivzKsERoIZcyr4XrH6CrNeUoYHhpiswThHS0XrbSU/edit) 1984 | * Focus on “legal issues still being worked, so for now let’s focus on traceability” 1985 | * Work - Initially a doc (possibly a versioned whitepaper) 1986 | * Add a ‘licensing AI’ section? (Where should this be done?) Probably worth partnering with [OSI AI efforts around licensing](https://deepdive.opensource.org/) that helps developers to implement best compliance practices around AI 1987 | * Add an ethical section? 1988 | 1989 |

New actions

1990 | 1991 | 1992 | 1993 | 1994 | * Change link and notify 1995 | * Deliverable #1: Define the scope of various groups to prevent undesired overlap 1996 | * Owners: Sal, Jay, Christine 1997 | * Contributors: 1998 | * Talk to other groups, figure out what they're doing, what they aren't doing, where we (OpenSSF) should focus. 1999 | * (_half page_, according to David W) - a few sentences explaining the scope of THIS group, a few sentences about the scope of other groups, and making it clear they don’t seriously overlap. 2000 | * ??? description of current AI landscape within and beyond LF, speaking to the specific gap that this working group fills in representing security first development in AI/ML 2001 | * 2002 | * Deliverable #2: [AI Landscape Document](https://docs.google.com/document/d/1AyivzKsERoIZcyr4XrH6CrNeUoYHhpiswThHS0XrbSU/edit#heading=h.j7gx4ey3nk3k) 2003 | * Owner: 2004 | * Contributors: Sanket Naik, Dan, David A. Wheeler, Mihai 2005 | * Including threat models for data traceability 2006 | * _ACTION_: Consolidate interested parties from 6/21 meeting into the contributors/owners above. 2007 | 2008 |

2023-06-21

2009 | 2010 | 2011 |

Attendees:

2012 | 2013 | 2014 | 2015 | 2016 | * Nigel Brown (Chair, stacklok ) 2017 | * Luke Hinds (SCIR, stacklok) 2018 | * Zachary Newman (Chainguard) 2019 | * Jay White (Microsoft) 2020 | * Andres Orbe (Alpha-Omega) 2021 | * Brian Behlendorf (LF/OpenSSF) 2022 | * Parag Patil (Palosade) 2023 | * Mihai Maruseac (Google GOSST) 2024 | * David Espejo (Union.ai) 2025 | * David A. Wheeler (Linux Foundation) 2026 | * Michael Scovetta (Microsoft, Alpha-Omega) 2027 | * Christine Abernathy (F5) 2028 | * Munawar Hafiz (OpenRefactory) 2029 | * Jeffrey Borek (IBM) 2030 | * Victor Lu (Independent) 2031 | * Prachi Jadhav (stacklok) 2032 | * Allen Stewart (Microsoft) 2033 | * Sal Kimmich (EscherCloud) 2034 | * 2035 | 2036 |

Introduction

2037 | 2038 | 2039 | 2040 | 2041 | * Parag Patil (Palosade) 2042 | * Excited to be here!!! Lot to learn from the community :) 2043 | * Zack Newman 2044 | * Excited to be here :) 2045 | * David Espejo (Union.ai) 2046 | * Andres Orbe (Alpha-Omega) 2047 | * Sal 2048 | * Jeff Borek 2049 | * Allen 2050 | * Excited to work with the team 2051 | 2052 |

Old Actions

2053 | 2054 | 2055 | 2056 | 2057 | * Brian report on [LF AI Security committee](https://wiki.lfaidata.foundation/display/DL/ML+Security+Committee) (also [ML Security Committee](https://lfaidata.foundation/projects/ml-security-committee/)) , re: crossover 2058 | * LF “AI & Data Foundation” - Ibrahim Haddad is GM (this is different from Pytorch foundation, also in LF) 2059 | * They have an “ML Security Committee” - led by Alejandro Saucedo 2060 | * Brian Behlendorf will meet with Alejandro. 2061 | * We want to discuss “how can we be helpful to each other & avoid overlap?” 2062 | * It’s not clear they’re focused on applications. 2063 | * It’s such a young space that the boundaries are not solid 2064 | * One possibility: OpenSSF works on “how to securely bring in & use AI/ML in applications” while LF AI works on “how to select training data, train, develop the AI components”. That may not be the right division of work, but the idea would be to find ways to avoid duplication & instead work together. 2065 | * What are their work products? Brian: not sure, we’d need talk to them. They’re probably chartered to develop advice. 2066 | * Luke to find sponsor on TAC 2067 | * Should we wait on discussions from LF AI & Data? 2068 | 2069 |

Topics

2070 | 2071 | 2072 | 2073 | 2074 | * Work - Initially a doc (possibly a versioned whitepaper) 2075 | * What sections? 2076 | * Who would be interested in working on what sections? 2077 | * TAC sponsor 2078 | * TAC proposal [https://github.com/ossf/tac/issues/175](https://github.com/ossf/tac/issues/175) 27th June 2079 | * Proto doc 2080 | * Timebox first release 2081 | 2082 |

Blog Article

2083 | 2084 | 2085 | 2086 | 2087 | * Release this before the white paper to announce the working group, what we’re going to work on, solicit additional help, etc. 2088 | * Target date: _______ 2089 | * Content Owner: _______ 2090 | 2091 | Seed the stage, first create a blog about the collaboration of the paper. 2092 | 2093 |

Purpose: ID Why the OpenSSF has created an AI/ML WG, what we intend to do, what we’d like people to do to participate, how we intend to operate, what products/resources we intend to create, how we intend to work with the LF AI & Data Foundation.

2094 | 2095 | 2096 |

Potential Document Sections

2097 | 2098 | 2099 | Purpose of the document – let’s be clear on the goals/expected takeaways 2100 | 2101 | 2102 | 2103 | * This is in the doc now. 2104 | 2105 | Title: “Why is there an OpenSSF AI WG?” 2106 | 2107 | White Paper 2108 | 2109 | 2110 | 2111 | * Plan for quarterly releases? 2112 | 2113 | Sections 2114 | 2115 | 2116 | 2117 | * Executive Overview 2118 | * Luke 2119 | * Jay (not a starter but definitely a finisher) 2120 | * Introduction 2121 | * Landscape, eco-system overview (LF MI/AL group, OWASP etc). 2122 | * Yotam 2123 | * Christine 2124 | * Jay (not a starter but definitely a finisher) 2125 | * Victor 2126 | * Allen 2127 | * Jeff B 2128 | * AI-related threats (as different from “ordinary” threats) 2129 | * Luke 2130 | * David Wheeler 2131 | * Christine 2132 | * Allen 2133 | * Prachi 2134 | * Parag Patil/Sanket Naik (Palosade) 2135 | * Victor 2136 | * How existing OpenSSF projects (and non-OpenSSF where they make sense) can be leveraged. 2137 | * Luke 2138 | * Jay (not a starter but definitely a finisher) 2139 | * Yotam 2140 | * Parag Patil/Sanket Naik (Palosade) 2141 | * Personas 2142 | * As a maintainer, you should… 2143 | * As an AI security engineer, you should … 2144 | * As a consumer of AI-enabled things, you should … 2145 | * External resources (e.g. CNCF, OWASP AI top 10, MITRE, etc.) 2146 | * Christine 2147 | * Victor 2148 | * Jeff B 2149 | * Resources 2150 | * Contributors (list of all who have contributed) 2151 | 2152 |

New actions

2153 | 2154 | 2155 | 2156 | 2157 | * Git MD repo - Nigel 2158 | * Proto document - multiple 2159 | * Blog - unassigned for now 2160 | 2161 |

2023-06-14

2162 | 2163 | 2164 |

Attendees:

2165 | 2166 | 2167 | 2168 | 2169 | * Nigel Brown (Chair, stacklok ) 2170 | * Brian Knight (Microsoft) 2171 | * David A. Wheeler (Linux Foundation) 2172 | * David Espejo (Union.ai) 2173 | * Zack Newman (Chainguard, lurking :) ) 2174 | * Michael Scovetta (Microsoft) 2175 | * Daniel Appelquist (Snyk) 2176 | * Jay White (Microsoft) 2177 | * Saswata Basu (Mastercard) 2178 | * Sanket Naik (Palosade) 2179 | * Allen Stewart (Microsoft) 2180 | * Munawar Hafiz (OpenRefactory) 2181 | 2182 |

Apologies

2183 | 2184 | 2185 | 2186 | 2187 | * Mihai Maruseac (Google GOSST) 2188 | * Luke Hinds (Stacklok) 2189 | 2190 |

Introduction

2191 | 2192 | 2193 | 2194 | 2195 | * New Attendees 2196 | 2197 |

Old Actions

2198 | 2199 | 2200 | 2201 | 2202 | * Brian to check if regulatory belongs here 2203 | * Luke to find sponsor on TAC - [https://github.com/ossf/tac/issues/175](https://github.com/ossf/tac/issues/175) 2204 | 2205 |

Topics

2206 | 2207 | 2208 | 2209 | 2210 | * Welcome new friends! 2211 | * Saswata Basu (Mastercard) - we’re looking at how to apply this. We’re looking at ways to use AI/ML to manage massive amounts of data. 2212 | * Daniel Appelquist (Snyk) 2213 | * 2214 | * Brian B: I’m in China, all *anyone* wants to talk about is security & AI 2215 | * Need to interact with the LF AI/Big Data Security WG 2216 | * LF AI & Data which has several projects this proposal seems to overlap with, starting with the ML Security Committee: [https://lfaidata.foundation/projects/ml-security-committee](https://lfaidata.foundation/projects/ml-security-committee/) 2217 | * Brian Behlendorf will talk with Ibrahim (GM of it) about that 2218 | * We’ll explore options, one possibility is work out different scopes. the goal is to eliminate duplication. 2219 | * David to present his work on AI/ML security ([slides](https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt)) 2220 | * Another good talk: https://www.youtube.com/watch?v=P7XT4TWLzJw 2221 | * Review the briefly. 2222 | * Work - Initially a doc (possibly a versioned whitepaper) 2223 | * What sections? 2224 | * Who would be interested in working on what sections? 2225 | * 2226 | * TAC proposal - next time 2227 | 2228 |

New actions

2229 | 2230 | 2231 | 2232 | 2233 | * Brian to talk to LF AI Security committee and look for crossover 2234 | 2235 |

2023-06-07

2236 | 2237 | 2238 |

Attendees:

2239 | 2240 | 2241 | 2242 | 2243 | * Nigel Brown (Chair nigel.brown@whitepool.co.uk, stacklok) 2244 | * David A. Wheeler (Linux Foundation) 2245 | * Michael Scovetta (Microsoft) 2246 | * Michael Gildein (IBM) 2247 | * Mihai Maruseac (Google, GOSST) 2248 | * Amanda Martin (Linux Foundation) 2249 | * Jay White (Microsoft) 2250 | * Anna Jung (VMware) 2251 | * Sarah Meiklejohn (Google) 2252 | * Brian Behlendorf (Linux Foundation) 2253 | * Victor Lu 2254 | * Christine Abernathy (F5) 2255 | * Prachi 2256 | * Luke Hinds (Stacklok) 2257 | 2258 |

Introduction

2259 | 2260 | 2261 | 2262 | 2263 | * New Attendees (new friends) 2264 | * Sarah Meiklejohn (Google) 2265 | * 2266 | 2267 |

Old Actions

2268 | 2269 | 2270 | 2271 | 2272 | * Debate the mission statement 2273 | * Anyone have any immediate comments 2274 | * Some requests for clarification, we made a number of proposed changes. 2275 | 2276 |

Topics

2277 | 2278 | 2279 | 2280 | 2281 | * (Sudden Zoom meeting crashes. Restarting work.) 2282 | * Agree or defer the 2283 | * Discuss the streams of work - who would be interested in working on what 2284 | * Example [OWASP Top 10 for Large Language Model Applications](https://owasp.org/www-project-top-10-for-large-language-model-applications/#) 2285 | * Regulatory probably out of scope. 2286 | * Problem: the OWASP list is stretched, it really isn’t about LLMs or AI/ML at all, it’s just an attempt to list 10 things. A lot is just “general good software” 2287 | * We put the initial proposed streams of work in the “mission” statement (broadened now to help people understand it). 2288 | * Discuss the timeslot 2289 | * David: At some point I’d be happy to share my presentation on AI/ML security. [https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt](https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt) 2290 | * We could do that next week 2291 | * Should we send proposal to the TAC & request creating an official WG? 2292 | * One more meeting and we’re at the official 5 meetings 2293 | * Let’s propose to the TAC after meeting 5 2294 | 2295 |

New actions

2296 | 2297 | 2298 | 2299 | 2300 | * Brian to check if regulatory belongs here 2301 | * Luke to find sponsor on TAC 2302 | 2303 |

2023-05-31

2304 | 2305 | 2306 |

Attendees:

2307 | 2308 | 2309 | 2310 | 2311 | * Nigel Brown (Chair nigel.brown@whitepool.co.uk ) 2312 | * Christine Abernathy (F5) 2313 | * Anna Jung (VMware) 2314 | * Jay White (Microsoft) 2315 | * Sanket Naik (Palosade) 2316 | 2317 | Please try [https://zoom.us/j/97349085860?pwd=S0JKamdpZGFSVVJOK25QNkhHZUhhdz09](https://zoom.us/j/97349085860?pwd=S0JKamdpZGFSVVJOK25QNkhHZUhhdz09) 2318 | 2319 |

Apologies

2320 | 2321 | 2322 | 2323 | 2324 | * Mihai Maruseac (Google GOSST) 2325 | * Dan Appelquist 2326 | * Luke Hinds (stacklok) 2327 | 2328 |

Introduction

2329 | 2330 | 2331 | 2332 | 2333 | * New Attendees 2334 | 2335 |

Old Actions

2336 | 2337 | 2338 | 2339 | 2340 | * Next meeting poll - set this slot. Want async communications to be first class. 2341 | * Create a mission statement 2342 | 2343 |

Topics

2344 | 2345 | 2346 | 2347 | 2348 | * Agree and or complete the 2349 | * Discuss the streams of work - who would be interested in working on what 2350 | * Discussed a little in passing but no conclusions until we agree a mission statement. 2351 | * Discuss the timeslot 2352 | * Not done, but will likely move at some point. 2353 | 2354 |

New actions

2355 | 2356 | 2357 | 2358 | 2359 | * Let the ferment for a while - let people have their say. 2360 | 2361 |

2023-05-26

2362 | 2363 | 2364 | Note: There was a scheduled meeting at this time, but this time wasn’t the result of the Doodle poll. The attendees agreed that we need to let the poll run its course to find the best meeting time, and so this wasn’t a real meeting. We did briefly discuss meeting time logistics. 2365 | 2366 | It’s been a challenge to find a common time, so we _may_ need to re-run the poll with more options. In particular, there’s a strong preference by some to avoid Fridays. AI/ML is worldwide, making any one time hard. David A. Wheeler suggested that we may want to rotate between two times to make it easier for people in different geographical locations to participate. 2367 | 2368 | David A. Wheeler hinted that he’d be sharing some information in the new OpenSSF AI/ML Slack channel <[#wg_ai_ml_security](https://openssf.slack.com/archives/C0587E513KR)>. For the record, here they are: 2369 | 2370 | 2371 | 2372 | * “Miscellaneous: Artificial Intelligence / Machine Learning (AI/ML), Science of Security, Malicious Tools (diverse double-compiling (DDC)), [Are you in] Control, Vulnerability Disclosure.” This is a slide deck from David A. Wheeler’s graduate class on developing secure software, including a lot of information about securing systems that include AI/ML: [https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt](https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt) 2373 | * Unfortunately there are a lot of “approaches” in the academic literature for securing AI/ML systems, particularly to counter adversarial inputs, that sound good but don’t work in practice. David knows of no way to fully counter adversarial inputs in a way that resists strong attack (he would _love_ to learn of one). 2374 | * If you must do something to counter adversarial inputs & it’s okay if an attacker can easily thwart it, the “Adversarial Robustness Toolbox” can help today: [https://adversarial-robustness-toolbox.org/](https://adversarial-robustness-toolbox.org/) 2375 | * If you’re serious about countering adversarial inputs, human-in-the-loop and the dual language model pattern are the only techniques I know of. These are limiting. I’d love to learn of more options. For more about the dual language model pattern, see: “Prompt Injection Explained” by Simon Willison, [https://simponwillison.net/2023/May/2/prompt-injection-explained/](https://simponwillison.net/2023/May/2/prompt-injection-explained/) 2376 | 2377 |

2023-05-24

2378 | 2379 | 2380 |

Attendees:

2381 | 2382 | 2383 | 2384 | 2385 | * Nigel Brown (Chair nigel.brown@whitepool.co.uk ) 2386 | * Sanket Naik (Palosade) 2387 | * Mihai Maruseac (Google GOSST) 2388 | * Jay White (Microsoft) 2389 | * Prachi Jadhav (Stacklok) 2390 | * Laurent Simon (Google, GOSST) 2391 | * Luke Hinds (stacklok) 2392 | 2393 |

Introduction

2394 | 2395 | 2396 | 2397 | 2398 | * New Attendees 2399 | 2400 |

Old Actions

2401 | 2402 | 2403 | 2404 | 2405 | * Next meeting poll - set this slot. Want async communications to be first class. 2406 | * Create a slack channel [#wg_ai_ml_security](https://openssf.slack.com/archives/C0587E513KR) 2407 | * Create a mail group [https://lists.openssf.org/g/openssf-wg-ai-ml-security](https://lists.openssf.org/g/openssf-wg-ai-ml-security) 2408 | * Consolidated notes here 2409 | 2410 |

Topics

2411 | 2412 | 2413 | 2414 | 2415 | * Discuss the streams of work composition 2416 | * Identify some owners 2417 | * How to address 2418 | 2419 |

New actions

2420 | 2421 | 2422 | 2423 | 2424 | * Meeting links 2425 | * Luke Hinds to check with Brian B if a whitepaper can be delivered from this group 2426 | * Define and overall mission statement for the group 2427 | * Look at moving the meeting because there are still conflicts 2428 | 2429 |

19th May 2023

2430 | 2431 | 2432 |

Attendees:

2433 | 2434 | 2435 | 2436 | 2437 | * Nigel Brown (Chair nigel.brown@whitepool.co.uk ) 2438 | * Christine Abernathy (F5) 2439 | * Mihai Maruseac (Google) - Google Open Source Security Team, working on GUAC & AI 2440 | * Michael Scovetta (Microsoft) - OSS Security, AI Security, Alpha-Omega, Identifying Security Threats WG - michael.scovetta@microsoft.com 2441 | * David A. Wheeler (Linux Foundation) (part) - focus on security, but have also worked in AI/ML, including supporting the Joint AI Center (JAIC) 2442 | * Matt Rutkowski (IBM) (mrutkows@us.ibm.com) 2443 | * Luke Hinds (Stacklok) - been on OpenSSF TAC, now on OpenSSF GB luke@stacklok.com 2444 | * Yolanda Robla (Stacklok) 2445 | * Maia Hamin (Atlantic Council) mhamin@atlanticcouncil.org 2446 | * Anna Jung (VMware) 2447 | * Jay White (Microsoft) - also leads Dashboard SIG 2448 | * Brian Behlendorf (OpenSSF/LF) - did give talk on AI 2449 | * Cindy Sutherland (Lockheed Martin) 2450 | * Prachi Jadhav (Stacklok) 2451 | * Ken Arora 2452 | 2453 |

Introduction

2454 | 2455 | 2456 | 2457 | 2458 | * Attendees 2459 | * Who are you? 2460 | * What do you want to get out of this WG? (notes above) 2461 | * Request: In the future, can we deconflict this with other OpenSSF meetings? (Dashboard in this case) - Khahil White can run a Doodle poll to help do this, that ok? 2462 | * FYI: David A. Wheeler teaches a course on developing secure software; one of his slide decks has a summary of some information on security & AI/ML: 2463 | * See: [https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt](https://dwheeler.com/secure-class/presentations/Secure-Software-10-Misc.ppt) 2464 | * One discouraging challenge: There are a lot of techniques that don’t work if you’re trying to prevent subversion of AI/ML systems. A lot of academics like to publish techniques that might work, but whether or not they actually work isn’t necessary for publication. 2465 | * CNSC keynote that Brian gave on AI and OSS security: https://www.youtube.com/watch?v=VU6OzuHuWQo&t=2s 2466 | * FYI: SBOM formats are adding information about AI/ML 2467 | * Matt R - cyclone DX SBOM and AI group has some prior work “Model card++” 2468 | * David W - SPDX also has an AI group, which is trying to capture information such as training data sources 2469 | * Target [About OpenSSF](https://openssf.org/about/) 2470 | * AI Security and OSS code and communities: 2471 | * The OpenSSF mission states _“Developers can easily learn secure development practices and are proactively guided by their tools to apply those practices and automatically informed when action is needed to prevent, remediate, or mitigate security issues.” _ 2472 | * One claim: among developers who have been using Codex since it went into beta later this year, the programming AI is said to have written 40% of the code checked into GitHub is now AI-generated and unmodified 2473 | * Possible source: [https://the-decoder.com/github-ceo-thinks-ai-will-write-majority-of-code-in-just-five-years/](https://the-decoder.com/github-ceo-thinks-ai-will-write-majority-of-code-in-just-five-years/) 2474 | * [https://github.blog/2022-06-21-github-copilot-is-generally-available-to-all-developers/](https://github.blog/2022-06-21-github-copilot-is-generally-available-to-all-developers/) 2475 | * _"Now Github CEO Thomas Dohmke is giving a glimpse of usage data on Codex: among developers who have been using Codex since it went into beta later this year, the programming AI is said to have written 40 percent of the code. So for every 100 lines of code, 40 are AI-generated."_ 2476 | * Administrativa 2477 | * Dedicate slack channel for this? 2478 | * Can easily create one in OpenSSF 2479 | * wg_ai_ml_security? 2480 | * Where to store/share docs? 2481 | * Notes, etc. => Google Docs, linked from here. 2482 | * Work product / docs => GitHub Repo 2483 | * Should we be a WG / SIG? 2484 | 2485 |

Potential Topics

2486 | 2487 | 2488 | 2489 | 2490 | * Proviso - at some point AI will surpass humans. All bets are off 2491 | * Currently this is like having an army of (over-caffeinated) toddlers 2492 | * Will be an army of Einsteins 2493 | * Always 5 years away 2494 | * Security of AI models 2495 | * Poisoning attacks on LLMs 2496 | * How can it be attacked? (covered in [Trusted AI – LFAI & Data](https://lfaidata.foundation/projects/trusted-ai/) ) 2497 | * [Microsoft docs on AI security / threat modeling / failure modes / etc](https://learn.microsoft.com/en-us/security/engineering/threat-modeling-aiml): 2498 | * Should we look at writing some best practices or guidelines on how open source developers can safely use AI, without leaking confidential information or introducing exploits into their software. 2499 | * How can AI attack general infrastructure? 2500 | * How is this different from any other scripting? 2501 | * How to modify AI generated code? 2502 | * How is AI best practice different from human attacker best practice? 2503 | * Everyone needs best practice now - this lowers barrier 2504 | * Quantitatively different, but is it qualitatively different? 2505 | * What can we do? 2506 | * Doc search - use AI/ChatGPT 2507 | * Compile more best practices 2508 | * Located where? 2509 | * Cut through hype? 2510 | * Code? 2511 | * Tool automation 2512 | * Spot AI phishing 2513 | * Spot network features 2514 | * Spot supply chain issues 2515 | 2516 |

Potential Threads

2517 | 2518 | 2519 | Legal ramifications of chatgpt et al - snippet code. 2520 | 2521 | * There are court courses that may resolve parts of this in the US (but only in the US) 2522 | 2523 | * This is an area that many others are working on. We need to focus on security-focused areas, not generic issues like legal issues. 2524 | 2525 | * AI leveraged to attack OSS communities (spoofing as a contributor) 2526 | 2527 | * AI privacy concerns (leaking information to LLMs) 2528 | 2529 | * AI used to improve security research (vulnerabilities) / AI as a Red Team (AiaaRT) 2530 | 2531 | * AI produces insecure code. ([case in point](https://pbs.twimg.com/media/FuizbxaX0AElDa1?format=jpg&name=large)) 2532 | 2533 | Recommendation: Focus on security. 2534 | 2535 | * Links to other working groups 2536 | 2537 | Brian: avoid anthropomorphism, at least to make it clear that humans are in the end in control and accountable. 2538 | 2539 | David: I suggest separating: 2540 | 2541 | #1 How to wisely use AI/ML to generate secure code & secure supply chain (e.g., use AI/ML to detect vulnerabilities, how to take steps to reduce the likelihood of vulnerabilities in AI/ML generated code, how to counter traditional attacks that are being amplified by AI/ML such as spoofing, etc.). 2542 | 2543 | #2 (perhaps never get there) - how to develop AI/ML systems that are themselves secure (that’s a research topic that is generally unsolved - it’s reasonable to do that, someone needs to, but that’s a MUCH bigger task - requires serious research funding for example) 2544 | 2545 | Michael Scovetta: For all the reasons listed above, I think we should work on #2, we’re best placed for it (even though it’s hard!). 2546 | 2547 | Maia: I like the split of #1 vs. #2. I like focusing on #1 at first, the OSS community has some specific equities b/c code is open (re: vuln scanning) and tool development 2548 | 2549 | How do we secure the AI/ML supply chain? E.g., counter poisoning of training models. David: Although unsolved broadly that is definitely something that has real solutions, we could work on that, perhaps as “item #3: How to select & process training data to counter data poisoning” 2550 | 2551 | Mihai: Have experimented with adding AI training to SLSA. 2552 | 2553 | Nigel Brown: I can work to organize into categories, discuss different ones on different weeks.**​** 2554 | 2555 |

Actions

2556 | 2557 | 2558 | 2559 | 2560 | * Next meeting - weekly 2561 | * Deconflict dashboard meeting (and other OpenSSF meetings) - David will ask operations (Khalil) to set up a Doodle poll for all who attended today, to work out a good meeting time. Meet weekly. 2562 | * Please fill out this Doodle poll for a good meeting time for AI/ML: 2563 | * [https://doodle.com/meeting/participate/id/bqlBpBpa](https://doodle.com/meeting/participate/id/bqlBpBpa) 2564 | * Let's create a slack channel (ai_ml_security) and we can rename to wg_ once it's officially a working group? 2565 | * Slack Channel is wg_ai_ml_security. You can visit it here: https://app.slack.com/client/T019QHUBYQ3/C0587E513KR 2566 | * We also need a mail group 2567 | * Consolidate 2568 | 2569 |

Meeting Rules

2570 | 2571 | 2572 | 2573 | 2574 | * All participants in OpenSSF meetings are subject to the OpenSSF Code of Conduct. See: [https://openssf.org/community/code-of-conduct/](https://openssf.org/community/code-of-conduct/) 2575 | -------------------------------------------------------------------------------- /mvsr.md: -------------------------------------------------------------------------------- 1 | # MVSR Artificial Intelligence/Machine Learning (AI/ML) WG 2 | Our mission is to enable security for Open Source AI workflows (training 3 | pipelines, AI deployments) and enable using AI for security of Open Source 4 | software. We also maintain some focus on experimentation for tools and 5 | techniques at the intersection of AI and security. 6 | 7 | Our vision is to be the central place for collating any recommendations for 8 | using AI securely ("security for AI" vision) and for using AI to improve 9 | security of other OSS software products ("AI for security" vision). 10 | 11 | The developing scope involves analyzing open source AI/ML data sets, data models 12 | and OSS code mashups used in AI/ML to articulate the specific security concerns 13 | and controls for the subset of OSS AI/ML workloads. This is important because 14 | the accelerated adoption of AI/ML has an OSS and security component that is not 15 | well understood, and OpenSSF can play an industry leading position. 16 | 17 | Currently, we have a [model signing project](https://github.com/ossf/ai-ml-security/issues/10) 18 | with the aim of creating a solution that can be used to sign ML models during 19 | training and verify the integrity of these models before deploying them in 20 | applications. This is at the foundation for any supply chain statement we can 21 | make about AI/ML, and we are planning to extend this to datasets too. 22 | Furthermore, we're exploring expanding the model signing work to cover in-toto, 23 | witness, C2PA and other deeper integrations. 24 | 25 | We are planning to potentially explore a “security slam” for existing OpenSSF 26 | tooling to see how it protects/applies to OSS AIML workloads; develop OSS 27 | security patterns for use cases where OSS AIML components are integrated in the 28 | supply chain. This is similar to the discussion we've had about how OpenSSF 29 | projects can be used in the context of NIST SP 800-218A standard for GenAI and 30 | Dual-Use foundation models. 31 | 32 | Finally, we are interlocking with various other groups, since new AI/ML 33 | communities are continuously springing up. A summary of these groups and how we 34 | relate to them is given below: 35 | 36 | * **[Coalition for Secure AI (COSAI)](https://www.oasis-open.org/2024/07/18/introducing-cosai/)** 37 | * _AI/ML Security work being done_: One of the workstreams is on supply-chain security for AI. The chairs of this group are both participating in this workstream. 38 | * _Difference_: COSAI is more focused on enterprise adoption, not just OSS. 39 | * _Partnership/Collaboration Opportunity_: Supply-chain security for AI is a common interest. 40 | * **[OWASP Foundation](https://owasp.org/)** 41 | * _AI/ML Security work being done_: The OWASP foundation more broadly aims to improve the security of software through its community-led open source software projects. They have an [AI Security Guide](https://owasp.org/www-project-ai-security-and-privacy-guide/). [OWASP Project Machine Learning Security Top 10](https://owasp.org/www-project-machine-learning-security-top-10/) provides developer centered information about the top known cybersecurity risks for open source machine learning, with a description, example attack scenario, and a suggestion of how to prevent. 42 | * _Difference_: Content does not provide indepth technical recommendations for practical implementation within their security documentation. OWASP Large Language Model Applications Top 10 provides the same developer centered information for LLMs. These are all vulnerability descriptions, not developer best practices. 43 | * _Partnership/Collaboration Opportunity_: Education and outreach opportunities are critical here. Developers have to understand both how security, vulnerability and bugs impact their software security stance, which the OWASP Top 10s do well. Building technical best practices to prevent vulnerabilities is where OSSF can be an excellent partner in getting critical information about these unique vulnerabilities. 44 | * **[The LFAI Security Committee](https://wiki.lfaidata.foundation/display/DL/ML+Security+Committee)** 45 | * _AI/ML Security work being done_: Focus on AI and ML security 46 | * _Difference_: LFAI does not focus on systemic problems of AI/ML and the Open Source Supply Chain, which is where OSSF’s WG would have the most critical impact. LFAI supports AI and Data open source projects through incubation, education, and best practices. However, their community is focused on exactly what most developer foundations must be: project acceleration, not security. OpenSSF is the foundation of security expertise, and we need to develop a cohort of security-first engineering practices. This can be done in tandem with LFAI, but it’s simple: the end users are LFAI, but the ability to mobilize security education, intervention and open source supply chain hardening for this evolving sector is clearly within the remit, and expertise, of OpenSSF. 47 | * _Partnership/Collaboration Opportunity_: Clear candidates for coordination for best practices for both end users, open source maintainers, and contributor communities. 48 | * **[AI Alliance](https://thealliance.ai/)** 49 | * _AI/ML Security work being done_: This group has an AI Trust and Safety group that is focused on understanding potential trust and safety issues associatied with AI and developing mitigation strateigies for these. 50 | * _Difference_: This group is more focused on the Safety and Trustworthiness aspects of AI, with a smaller focus on Security. 51 | 52 | For a full list, please see [this spreadsheet](https://docs.google.com/spreadsheets/d/1XOzf0LwksHnVeAcgQ7qMAmQAhlHV2iEf4ICvUwOaOfo/edit?gid=0#gid=0). 53 | For updates from each interlock, occuring at every meeting, please see [the meeting notes](https://docs.google.com/document/d/1YNP-XJ9jpTjM6ekKOBgHH8-avAws2DVKeCpn858siiQ/edit?tab=t.0). 54 | -------------------------------------------------------------------------------- /telemetry/AI_Chips_and_Hardware.md: -------------------------------------------------------------------------------- 1 | # AI_Chips_and_Hardware 2 | -------------------------------------------------------------------------------- /telemetry/AI_Community_and_Collaboration.md: -------------------------------------------------------------------------------- 1 | # AI_Community_and_Collaboration 2 | -------------------------------------------------------------------------------- /telemetry/AI_Regulations.md: -------------------------------------------------------------------------------- 1 | # AI_Regulations 2 | -------------------------------------------------------------------------------- /telemetry/AI_Research_and_Development.md: -------------------------------------------------------------------------------- 1 | # AI_Research_and_Development 2 | -------------------------------------------------------------------------------- /telemetry/AI_Security_and_Telemetry.md: -------------------------------------------------------------------------------- 1 | # AI_Security_and_Telemetry 2 | 3 | ## AI Security 4 | - Threat Detection and Prevention: 5 | - Identifying potential threats to AI systems. 6 | - Implementing measures to prevent attacks such as adversarial attacks, data poisoning, and model inversion. 7 | 8 | - Secure AI Model Development: 9 | - Best practices for developing AI models with security in mind. 10 | - Techniques for ensuring the integrity and confidentiality of training data and models. 11 | - [More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence](https://www.computer.org/csdl/journal/tk/2022/06/09158374/1m1eAPbg4JW) 12 | 13 | - Privacy Concerns: 14 | - Addressing privacy issues related to the data used and generated by AI systems. 15 | - Techniques for differential privacy and federated learning. 16 | 17 | - Regulatory Compliance: 18 | - Ensuring AI systems comply with relevant security and privacy regulations. 19 | - Documentation and reporting requirements. 20 | 21 | - Incident Response: 22 | - Strategies for responding to security incidents involving AI systems. 23 | - Post-incident analysis and improvement. 24 | 25 | ## Telemetry 26 | - Data Collection: 27 | - Methods for collecting telemetry data from AI systems. 28 | - Types of data collected (e.g., performance metrics, usage statistics, error logs). 29 | 30 | - Data Transmission: 31 | - Secure transmission of telemetry data. 32 | - Protocols and encryption methods used. 33 | 34 | - Monitoring and Analytics: 35 | - Real-time monitoring of AI system performance and health. 36 | - Analyzing telemetry data to detect anomalies and trends. 37 | 38 | - System Optimization: 39 | - Using telemetry data to optimize AI system performance. 40 | - Identifying bottlenecks and areas for improvement. 41 | 42 | - Security Telemetry: 43 | - Specific telemetry data related to security events and threats. 44 | - Correlation of telemetry data with security incidents. 45 | 46 | ## Integration of Security and Telemetry 47 | - Proactive Security Measures: 48 | - Using telemetry data to predict and prevent potential security threats. 49 | 50 | - Automated Responses: 51 | - Implementing automated responses to detected security incidents based on telemetry data. 52 | 53 | - Continuous Improvement: 54 | - Using feedback from telemetry to continuously improve AI security measures. 55 | 56 | ## Case Studies and Best Practices 57 | - Industry Examples: 58 | - Real-world examples of AI security and telemetry in action. 59 | 60 | - Best Practices: 61 | - Guidelines and recommendations for implementing effective AI security and telemetry systems. 62 | - [Securing the AI Software Supply Chain 63 | ](https://research.google/pubs/securing-the-ai-software-supply-chain/) 64 | 65 | ## Tools and Technologies 66 | - Software and Platforms: 67 | - Tools for collecting, transmitting, and analyzing telemetry data. 68 | 69 | - Security Solutions: 70 | - Security tools specifically designed for AI systems. 71 | - [Garak](https://github.com/leondz/garak) 72 | 73 | ## Future Trends and Research 74 | - Emerging Threats: 75 | - Discussion on potential future threats to AI systems. 76 | - Innovative Solutions: 77 | - New technologies and methods for enhancing AI security and telemetry. 78 | -------------------------------------------------------------------------------- /telemetry/AI_Standards_and_Best_Practices.md: -------------------------------------------------------------------------------- 1 | # AI_Standards_and_Best_Practices 2 | -------------------------------------------------------------------------------- /telemetry/AI_and_Its_Impact.md: -------------------------------------------------------------------------------- 1 | # AI_and_Its_Impact 2 | -------------------------------------------------------------------------------- /telemetry/AI_and_Its_Impact_Google_Docs.md: -------------------------------------------------------------------------------- 1 | # AI_and_Its_Impact_Google_Docs 2 | -------------------------------------------------------------------------------- /telemetry/Appendix.md: -------------------------------------------------------------------------------- 1 | # Appendix 2 | -------------------------------------------------------------------------------- /telemetry/Conclusion.md: -------------------------------------------------------------------------------- 1 | # Conclusion 2 | -------------------------------------------------------------------------------- /telemetry/Introduction.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | -------------------------------------------------------------------------------- /telemetry/Security_in_AI_Applications.md: -------------------------------------------------------------------------------- 1 | # Security_in_AI_Applications 2 | -------------------------------------------------------------------------------- /telemetry/Understanding_Large_Language_Models_LLMs.md: -------------------------------------------------------------------------------- 1 | # Understanding_the_AI_Technical_Stack:_Large_Language_Models_LLMs 2 | -------------------------------------------------------------------------------- /telemetry/Understanding_Vision_Language_Action_Models.md: -------------------------------------------------------------------------------- 1 | # Understanding_the_AI_Technical_Stack:_Vision_Language_Action_Models 2 | -------------------------------------------------------------------------------- /telemetry/Understanding_Vision_Language_Models_VLMs.md: -------------------------------------------------------------------------------- 1 | # Understanding_the_AI_Technical_Stack:_Vision_Language_Models_VLMs 2 | -------------------------------------------------------------------------------- /telemetry/Understanding_the_AI_Supply_Chain.md: -------------------------------------------------------------------------------- 1 | # Understanding_the_AI_Supply_Chain 2 | -------------------------------------------------------------------------------- /telemetry/ai_ml_telemetry.md: -------------------------------------------------------------------------------- 1 | # White Paper Outline: AI Security Telemetry 2023 2 | 3 | This doc is currently in draft [here](https://docs.google.com/document/d/1J8M1F5ev9tXzMpA3dAFXqXYs2t-T10xt0_ObXnqNN04/edit). 4 | 5 | * [Introduction](Introduction.md) 6 | * [AI_and_Its_Impact](AI_and_Its_Impact.md) 7 | * [AI_and_Its_Impact_Google_Docs](AI_and_Its_Impact_Google_Docs.md) 8 | * [AI_Security_and_Telemetry](AI_Security_and_Telemetry.md) 9 | * [AI_Regulations](AI_Regulations.md) 10 | * [AI_Standards_and_Best_Practices](AI_Standards_and_Best_Practices.md) 11 | * [AI_Research_and_Development](AI_Research_and_Development.md) 12 | * [AI_Community_and_Collaboration](AI_Community_and_Collaboration.md) 13 | * [Understanding_the_AI_Technical_Stack:_Large_Language_Models_LLMs](Understanding_Large_Language_Models_LLMs.md) 14 | * [Understanding_the_AI_Technical_Stack:_Vision_Language_Models_VLMs](Understanding_Vision_Language_Models_VLMs.md) 15 | * [Understanding_the_AI_Technical_Stack:_Vision_Language_Action_Models](Understanding_Vision_Language_Action_Models.md) 16 | * [Understanding_the_AI_Supply_Chain](Understanding_the_AI_Supply_Chain.md) 17 | * [AI_Chips_and_Hardware](AI_Chips_and_Hardware.md) 18 | * [Security_in_AI_Applications](Security_in_AI_Applications.md) 19 | * [Conclusion](Conclusion.md) 20 | * [Appendix](Appendix.md) 21 | --------------------------------------------------------------------------------