My new best friend ChatGPT has so far helped me to write a blender plug-in even when I have no python experience and I know that it works, but I can’t test it or understand if any of this code is secure in terms of python idiom.
So there in lies the “why” we need experienced people to operate factory machines, it’s one thing to spew out code but you still need experience to quality control and sanity check, something AI still has to work hard on.
However here’s the issue, I’m happy to release my blender plugin without that experience, for all I know I have to trust this AI is not injecting malicious code unintentionally and that’s interesting isn’t it.
There are no human errors in AI because there are no humans but it may still be possible for a bad actor to inject nasty bits of code that might not be checked to the same degree… this code suffers from the aging product problem, I didn’t write it but I must trust my peers and that’s the trust that could be exploited and it’s new and kind of scary 😦
Adam Crockett | Sciencx (2022-12-31T18:10:10+00:00) AI generated code creates a new security attack vector. Retrieved from https://www.scien.cx/2022/12/31/ai-generated-code-creates-a-new-security-attack-vector/
Please log in to upload a file.
There are no updates yet.
Click the Upload button above to add an update.