I have grave doubts about LLMs and their ilk: there is no "there" there. But that will have to wait for a future blog post. One thing they are known for is "hallucinating", or generating things that have plausible form, but no denotation, such as citing non-existent legal cases, scientific references, and ... code modules.
This last one is new to me; I discovered it in a comment on Ask a Manager. I write my own code, of course. But apparently some people are happy to have AI write code for them. I suppose yet another form of AI slop shouldn't surprise me. However, the associated security vulnerability is even more worrying than a few bogus citations. It's the problem of slopsquatting:
Like other forms of gen AI, coding AI makes up references to non-existent code libraries. “Slopsquatting” is when a malicious actor publishes a malicious package under the name of one of these non-existent libraries. Then, when you install the dependencies your AI-generated code calls for, instead of getting an error you silently pull down the attacker's package, and its code runs on your machine.
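To make the mechanism concrete, here is a minimal sketch. The package name fast_json_utils is hypothetical, a stand-in for any plausible-sounding library an LLM might invent out of whole cloth:

```python
# AI-generated snippet. "fast_json_utils" is a hypothetical hallucinated
# package name, used purely for illustration.
import fast_json_utils  # raises ModuleNotFoundError -- unless someone has
                        # already registered that name on PyPI

def load_config(path):
    # A plausible-looking call into a library that was never written.
    with open(path) as f:
        return fast_json_utils.parse(f.read())
```

The dangerous step is the reflexive "fix": running pip install fast_json_utils. If an attacker has already published a package under that name, the install succeeds without complaint, and the attacker's code gets to run on your machine.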
This is why we can't have nice things.