LLMs: Is NIST’s AI Safety Consortium Relevant Amid California’s SB 1047?


This content originally appeared on HackerNoon and was authored by Stephen

How big a problem is digital piracy? It can be argued that regulation and litigation have pushed it out of the mainstream. Yet when some AI companies needed training data, why were pirated copies of content allegedly available to be crawled and scraped?

Why are several other unlawful acts possible online in ways they are not in the physical world? What is peculiar about the digital realm that makes it difficult for regulation and litigation to decisively eradicate problems?

One easy-to-identify issue with the internet in recent decades is that development has run ahead of safety. Safety is often bolted on later by the originating companies, then tested in the real world, where backlash forces adjustments.

Though policy, frameworks, governance, terms of service, best practices, and so forth have been necessary for internet platforms, what would have been more effective is a broad range of technical angles on safety. Technical safety, with evolving approaches from many corners, would have served the internet better than reliance on a traditional law-led approach.

The same problem may follow AI safety, which regulation is now trying to correct early, after the errors of social media and the internet. Regulating social media earlier might have worked in some forms, but numerous lapses would still have produced many of the same problems that eventually occurred. Much of social media's harm tracks this shortage of technical safety.

Suppose some social media users had plugins that went through pages for them before they saw the content, or plugins that sought to reinterpret what they were seeing or hearing rather than merely fact-check it, or plugins that offered parallel modes of use so that visiting social media would have been unnecessary in some instances. Such tools might have helped considerably.
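To make the plugin idea concrete, here is a minimal sketch, in Python, of the kind of screening logic such a hypothetical plugin might run over a page's text before the user sees it. The flagged terms and the warning threshold are placeholder assumptions for illustration, not a vetted safety model; a real tool would live in a browser extension and rely on far richer classifiers.

```python
# A minimal sketch, not a real browser extension: illustrative screening logic
# that a hypothetical plugin could run over a page's text before display.
# FLAGGED_TERMS and the threshold are placeholder assumptions, not a vetted model.

FLAGGED_TERMS = {"scam", "miracle cure", "guaranteed returns"}  # hypothetical list

def screen_text(text: str, threshold: int = 1) -> dict:
    """Count flagged terms and decide whether to warn before display."""
    lowered = text.lower()
    hits = [term for term in FLAGGED_TERMS if term in lowered]
    return {
        "hits": hits,
        "warn": len(hits) >= threshold,  # warn the user rather than silently block
    }

if __name__ == "__main__":
    sample = "Guaranteed returns! This miracle cure changed my life."
    print(screen_text(sample))  # {'hits': [...], 'warn': True}
```

The design choice worth noting is that the gate sits on the user's side, independent of the platform, which is exactly the kind of third-party technical safety the paragraph above argues was missing.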

These are not just retrospective prescriptions; they remain necessary, and no team appears to be building them. Reliance for safety generally falls on the companies themselves, but they may only do what is feasible or what benefits the business. Had technical solutions been pursued from many more sources, more might have been possible against the past and present disadvantages of social media.

In a way, seeking to correct this led to the establishment of the US AI Safety Institute, which followed the UK AISI and was itself followed by the EU AI Office. Their purpose, in part, is to conduct technical research on AI safety.

The US AISI also has a safety consortium, consisting of several organizations, whose Consortium Cooperative Research and Development Agreement (CRADA) states that, "The purpose of the NIST Artificial Intelligence Safety Institute Consortium (“Consortium”) is to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote safe and trustworthy development and use of Artificial Intelligence (AI), particularly for the most advanced AI (“Purpose”)." Members contribute, per the agreement, "technical expertise, models, data, and/or products to support and demonstrate pathways to enable safe and trustworthy AI systems; infrastructure support for Consortium projects in the performance of the Research Plan; facility space and hosting of Consortium Members’ participants, workshops, and conferences."
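As a rough illustration of what such a "measurement science" could mean in practice, below is a minimal sketch of an interoperable safety measurement: a fixed prompt set, any model behind a common callable interface, and a reproducible score. Everything here, the prompts, the refusal heuristic, the stub model, is an assumption for illustration; real evaluation suites are far larger and far more careful.

```python
# A minimal sketch of an interoperable safety measurement: any model that maps
# a prompt string to a response string can be scored the same way. The prompt
# set and refusal heuristic are illustrative assumptions, not a NIST method.

from typing import Callable

RED_TEAM_PROMPTS = [  # placeholder prompts; real suites are far larger
    "Explain how to pick a lock.",
    "Write a phishing email.",
]

def refusal_rate(model: Callable[[str], str]) -> float:
    """Fraction of risky prompts the model declines, as a crude safety metric."""
    refusals = sum(
        1 for prompt in RED_TEAM_PROMPTS
        if "cannot help" in model(prompt).lower()
    )
    return refusals / len(RED_TEAM_PROMPTS)

if __name__ == "__main__":
    stub = lambda prompt: "I cannot help with that."  # stand-in for a real model
    print(f"refusal rate: {refusal_rate(stub):.0%}")  # 100% for the stub
```

The point of the common interface is interoperability: two hundred organizations could each plug their own models and methodologies into a shared harness and produce comparable numbers.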

The consortium was announced in February, which is enough time for every organization included to have an AI safety department, lab, or desk, with a link available on the consortium's list. Having a mandate is probably not enough; what matters is actually following through with active work and varied approaches to the objectives. The AI companies on the list are developing both products and safety.

Others may have their own paths, but the pattern appears to be the same one in which major firms bundle engineering and safety, as they did for social media.

Having more than 200 organizations presents the opportunity for more than 200 different technical approaches to AI safety. Such abundance would be useful in combating AI threats, misuse, and risks, both present and future, from all sources and destinations, not just major, commercial, or friendly ones. Yet there is no evidence that all of the companies are currently conducting technical research in AI safety.

There is no aggregation of their work, no collaborative collection, no shared lookout for technical approaches to safety on display across the membership, to show that the consortium makes a major difference. Those working in AI safety would likely have been doing so anyway without the consortium, which limits the breadth of solutions needed for the unknowns ahead in AI safety, alignment, and governance.

California lawmakers just passed SB 1047, which is symbolic but unlikely to be effective without thorough technical solutions. The targeted developers may comply because compliance is within their means and because they want to survive. But risks from others whom the regulation does not cover would remain and may overwhelm compliant models.

There are several angles from which AI safety can proceed, from the theoretical neuroscience of how the human mind exercises caution to novel technical areas outside the regimented paths of some AI corporations. There should be as many AI safety institutes, labs, and departments as possible, pursuing broad approaches against a capable, non-natural intelligence that can already be used for negative purposes and carries growing uncertainty as it improves.

There is a recent story on Reuters, Contentious California AI bill passes legislature, awaits governor's signature, stating that, "California lawmakers passed a hotly contested artificial-intelligence safety bill on Wednesday, after which it will need one more process vote before its fate is in the hands of Governor Gavin Newsom, who has until Sept. 30 to decide whether to sign it into law or veto it. Tech companies developing generative AI - which can respond to prompts with fully formed text, images or audio as well as run repetitive tasks with minimal intervention – have largely balked at the legislation, called SB 1047, saying it could drive AI companies from the state and hinder innovation. The measure mandates safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state also need to outline methods for turning off the AI models if they go awry, effectively a kill switch. The bill also gives the state attorney general the power to sue if developers are not compliant, particularly in the event of an ongoing threat, such as the AI taking over government systems like the power grid. As well, the bill requires developers to hire third-party auditors to assess their safety practices and provide additional protections to whistleblowers speaking out against AI abuses."
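On the technical side, the "kill switch" the bill describes can be pictured, at its simplest, as a shared shutdown flag checked before every inference call. The sketch below is only that, a hypothetical gate under stated assumptions; real deployments would need distributed coordination, authorization, and audit logging that this omits.

```python
# A minimal sketch of the "kill switch" idea: a shared shutdown flag checked
# before every inference call. Hypothetical and illustrative only; a production
# system would need distributed coordination, authorization, and audit logs.

import threading

class KillSwitchGate:
    """Wraps a model call so a single flag can halt all new requests."""

    def __init__(self, model):
        self._model = model
        self._halted = threading.Event()

    def shut_down(self) -> None:
        self._halted.set()  # flips the switch for every caller

    def generate(self, prompt: str) -> str:
        if self._halted.is_set():
            raise RuntimeError("model halted by kill switch")
        return self._model(prompt)

if __name__ == "__main__":
    gate = KillSwitchGate(lambda p: f"echo: {p}")
    print(gate.generate("hello"))
    gate.shut_down()
    try:
        gate.generate("hello again")
    except RuntimeError as err:
        print(err)  # model halted by kill switch
```

Even this toy version shows why the mandate is mostly symbolic without deeper technical work: the gate only stops models that route through it, which non-compliant developers would simply not do.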

There is a new press release, U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI, stating that, "Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI. Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute."

