On Nov. 13, 2025, the Electronic Frontier Foundation (EFF) hosted a live stream in its EFFecting Change series titled This Title Was Written by a Human. Katharine Trendacosta, EFF’s director of policy and advocacy, moderated. The panelists were:
- Pam Samuelson, co-director of the Berkeley Center for Law & Technology
- Tori Noble, an EFF staff attorney
- Serife Wong, consultant and advocate in the art and technology space
Trendacosta said it’s difficult to know what’s real about generative AI (gen AI) and what’s hype. She introduced the three main questions that would guide the conversation:
- What is AI good for?
- What are the concerns about AI and copyright?
- What should we do about AI governance?
What Good Can AI Do?
Wong answered the first question first, saying there are many poor uses of AI. She noted, however, that some Indigenous communities have built their own systems because the big technology companies will take your data and sell it back to you, and those companies aren’t concerned with making AI inclusive.
Noble sees a lot of positive uses of AI, but they’re often overlooked because they may be less profitable or may be focused on marginalized communities. AI is making strides in accessibility for people with disabilities. Noble also pointed to speech-to-text and text-to-speech tools and live captioning that make it easier for everyone to communicate. AI translation reduces language barriers. And AI can help scientists design new drugs for hard-to-treat diseases.
Samuelson sees a lot of hand-wringing about deepfakes and misinformation, but noted that about 40% of Americans use AI, a sign that it’s a technology with many different uses. People can leverage it for good or ill. New technologies have displaced jobs in the past and new jobs were created in their place, so she’s optimistic.
Wong noted that AI often comes in only at the last stage of scientific research, so it isn’t doing the bulk of the work. Trendacosta agreed, saying that it’s a tool, and the people doing the actual work shouldn’t be pushed into the background. She asked how each panelist uses AI.
How the Panelists Use AI
Noble said she thinks about what AI is good at and how it can save her time. For research, she approaches it as a search engine. It’s useful for creating outlines, not a polished final draft of writing, although it can check for grammar mistakes and typos. It’s not the best use of AI to try to automate entire processes.
Samuelson uses AI for planning trips, as an ideation system, and for making task lists.
Wong noted that AI is divisive for artists: Is it just another tool or is it destroying everything art stands for? Context is important, she said. Look at specific use cases before judging it. Employees can feel pressured to use it, and that shouldn’t be the case. Wong calls AI a surveillance mechanism. It’s confusing, and we should work together to push for governance.
Trendacosta agreed that no one should be required to use it. She moved on to the second main question: When it comes to copyright, what are the arguments?
AI and Copyright: Ongoing Lawsuits
Samuelson discussed a few of the 59 current lawsuits brought against gen AI companies. In two cases so far, courts ruled that using copyrighted works as AI training data is fair use. She believes it’s hard to predict how the other suits will turn out.
Noble shared EFF’s hope that the cases are decided like the Google Books ruling and that they’ll be a continuation of existing case law that prioritizes fair use. However, rightsholders want to see copyright expanded to cover any harm caused to their bottom line.
Trendacosta asked the panelists to discuss the outputs, not only the training data.
Noble said that if an AI reproduces something exactly as it was fed in, that feels more like traditional copyright infringement.
Samuelson wondered whether a user is the infringer if they ask an AI for something that is copyrighted, such as an image of Batman fighting Superman. Or is that fair use as long as they don’t sell the image? There’s no simple answer.
How We Should Talk About AI
Trendacosta asked the third main question, about how to have productive conversations around AI.
Wong insisted that having diverse voices at the conversational table is necessary; it must be an environment where everyone can speak openly, but that’s difficult to achieve. She noted that we’re in an environment of fear, and that needs to be acknowledged.
Noble agreed that there’s a legitimate fear about AI taking away jobs. But overemphasizing how many jobs it’s taking is hindering productive conversations. We shouldn’t magnify that fear, she advised.
Trendacosta built on that point by saying that AI is sold to corporations as a way for them to save money by hiring fewer people.
Samuelson cautioned that we don’t know how many jobs will be affected because we’re still figuring out how AI will be beneficial. It’s too early to panic. Right now, we should only be collecting data.
Fact-Checking AI
The moderator and the panelists engaged in an audience Q&A, and then Trendacosta asked a final question about how to fact-check AI.
Wong said the term “hallucination” is a misnomer; the right word is “error.” Everything an AI offers is a reconstruction of real information based on guessing. That’s how AI functions. Companies haven’t been able to solve the problem of errors because, right now, it simply isn’t solvable.
Noble suggested being intentional about what you’re using AI for. You can use it for something innocuous, like asking where to have dinner when in a new city, and you don’t have to worry about errors.
Samuelson believes that although there are fewer fact-checkers in journalism these days, it’s an important job. Fact-checkers will never go out of style, she said.