👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught on USENET in 1993. I was on USENET extensively then; I can confirm the disruption was indeed similar. I urge you to read his essay, think about it, & join Denver, me, & others at the following date/times…
$ date -d '2026-04-21 15:00 UTC'
$ date -d '2026-04-28 23:00 UTC'
…in https://bbb-new.sfconservancy.org/rooms/welcome-llm-gen-ai-users-to-foss/join
#AI #LLM #OpenSource
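[ The `date` invocations above print the meeting times in your local timezone. A minimal sketch of how that works, assuming GNU coreutils `date` (the `-d`/`--date` flag is GNU-specific; BSD/macOS `date` differs) and pinning `TZ` only to make the output reproducible: ]

```shell
# Convert the UTC meeting times to a chosen timezone.
# TZ is pinned here for a deterministic example; omit it to use your own.
TZ=America/New_York date -d '2026-04-21 15:00 UTC' +'%Y-%m-%d %H:%M %Z'  # → 2026-04-21 11:00 EDT
TZ=America/New_York date -d '2026-04-28 23:00 UTC' +'%Y-%m-%d %H:%M %Z'  # → 2026-04-28 19:00 EDT
```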
(1/5) [ Meta-info to start the thread. Here and in the posts that follow, I reply together to many people's comments (from various threads). Can we consolidate this conversation into this single thread to discuss https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ ? ]
Cc: @wwahammy @silverwizard @mjw @cwebber @josh @jamey @mason @spencer @rootwyrm @drwho @mmu_man @mathieui @beeoproblem
(2/5) … In https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ ,
Denver's key points are: we *have* to (a) be open to *listening* to people who want to contribute to #FOSS with #LLM-backed generative #AI systems, & (b) work collaboratively on a *plan* for how we can solve the current crisis.
Nothing good ever got done politically when both sides become more entrenched, refuse even to concede that the other side has some valid points, & each says the other is the Enemy. …
@bkuhn@fedi.copyleft.org @wwahammy@social.treehouse.systems @silverwizard@convenient.email @cwebber@social.coop Way to ignore the entire copyright point…
Unfortunately, this is what LLM proponents have always done: whenever the copyright question comes up, it just gets ignored.
I guess that is the same way the AI techbros operate: "Let's just ignore copyright for now, get AI-tainted code into everything, and then hopefully so much code is AI-tainted that judges don't want to open that can of worms!". Until they finally do, because some big companies with enough lawyer money start to fight it all the way.
With the current rate of AI tainting everything, maybe it's time to look for hobbies and jobs that don't involve computers…
@js The intent of the post was not to enumerate the issues with LLMs, which I think most of us here know well. Rather, we want to think about how to engage with people about their newfound ability to make software, and how to use that to benefit others. If that means we need to make models trained only on copylefted software, so be it. But let's have that as a separate discussion.
@bkuhn @ossguy I have to admit that I am pretty surprised by this post. Not in terms of being welcoming to newcomers, which is something I have advocated for and made the center of all of my FOSS work.
However, the post says the following:
> I encourage all of us in the FOSS community to welcome the new software developers who've adopted these tools, investigate their motivations, and seriously consider cautiously and carefully incorporating their workflows with ours.
While the sentence which follows acknowledges that "seasoned software developers understand the benefits and limitations of LLM-assisted coding tools", there are two big things I expected to see at least acknowledged:
- Many maintainers are facing *burnout* over the situation. However, I agree that addressing this in terms of norms is something we can consider
- The biggest thing I am surprised to not see addressed at all is the licensing and copyright implications
(cont'd)
@bkuhn @ossguy The surprising thing about saying "seriously consider cautiously and carefully incorporating their workflows with ours" is that it doesn't address at all my *biggest* fear: the copyright status of LLM generated contributions seems currently unsettled.
I know there have been assertions to the contrary floating around: the Supreme Court deferred to a lower court in the US. However, that is not the same thing as the Supreme Court making a specific decision. And internationally, the copyright status of output is even murkier... it will take a long time for this to settle.
Does Conservancy not think this is the case? I would be surprised if so, but perhaps you all have an interpretation that I am not currently aware of.
If there *is* concern, then we hit a serious risk: we may be seeing many contributions with legal status which has *yet to be determined* entering seasoned codebases. And this worries me a lot.
@richardfontana @bkuhn @ossguy In which of the 5 million ways I could parse that sentence do you mean it?
@cwebber I think maybe you missed https://sfconservancy.org/blog/2026/mar/04/scotus-deny-cert-dc-circuit-thaler-appeal-llm-ai/ where #SFC analyzed that situation?
Also, follow @ai_cases & see the *firehose* of litigation on this & remember the “Work Based on the Program” issue under GPLv2 has still never been litigated directly but lots of cases about 100% proprietary software have bolstered GPL's strength.
Big Content has legal battles with Big Tech on 100s of fronts rn. Yes, we're adrift on their sea, but the situation is not as dire as you imagine.
@bkuhn @ossguy @richardfontana Continuing here, because it's the relevant subthread.
I am sympathetic to choosing to narrow a topic. However, by implying that we should start accepting partially AI-generated contributions, the post inherently pulls in the question of whether or not that is legally safe.
Yes, I have read the previous Conservancy post about the existing cases. This partly contributes to my surprise and confusion about the post.
Acknowledging that the plan is to have continued conversations and meetings about this, I still feel it is important to lay down my current concerns, even before such a meeting. I am leaving the "quality of contributions" and many other details out of here, and instead focusing on whether or not it is *safe to accept* contributions on copyright grounds at the moment, and what the implications of thinking on that are.
(cont'd)
@bkuhn @ossguy @richardfontana So the question is: is it safe, from a legal perspective, given the current state of uncertainty of copyright of such contributions, to encourage accepting such contributions into repositories?
Now clearly, many projects are: the Linux kernel most famously is, and their recent policy document says effectively, "You can contribute AI generated code, but the onus is on you whether or not you legally could have".
Which is not a very helpful handwave, I would say, since few contributors are equipped to assess such a thing. I've left myself and three others addressed in this portion of the thread; all of us *have* done licensing work, and my suspicion, *especially* based on what's been written, is that none of us could confidently project where things are going to go.
@bkuhn @ossguy @richardfontana Part of the problem here is that the AI companies have set the stage themselves. Their presumption is that it's fine to absorb effectively all open and "indie" content, and that this is entirely fair to pull into a model without any legal implications, whereas potentially yes, you may need to "license" something that looks like a Disney character. In the land of code, I also sense that Microsoft is perfectly fine with the idea that you can "copyright launder" a codebase from the GPL to perhaps the public domain, but if someone did that to their own leaked source code, they would be very upset.
Meanwhile, a friend of mine who works in films has said that he keeps hearing rumors that OpenAI would like a cut of stuff made with their stuff. We should presume that's true.
Regardless, I'm sure everyone on this thread wants an *equitable* situation for proprietary and FOSS licensing. I'll expand on that more in a moment though.
@bkuhn @ossguy There are other things I am less worried about. genAI tools used to probe for software vulnerabilities do not lead to contributions of unknown status. Same for using LLMs to explore a codebase. However, there isn't any distinction made in the post, only a "seriously consider cautiously and carefully incorporating their workflows with ours".
Does this mean Conservancy currently believes that the matter of genAI output by contemporary LLM tools is a settled matter, in terms of either a) being fully in the public domain or b) being the copyright status of the "prompter"?