Weekly News Digest

April 4, 2023 — In addition to this week's NewsBreaks article and the monthly NewsLink Spotlight, Information Today, Inc. (ITI) offers Weekly News Digests that feature recent product news and company announcements. Watch for additional coverage to appear in the next print issue of Information Today.


CCC Town Hall Explores the Current State of AI Tools

On March 30, CCC hosted a town hall via LinkedIn titled ChatGPT & Information Integrity. Chris Kenneally, CCC’s senior director of content marketing, moderated the live event, inviting the speakers to share their experiences with ChatGPT and other AI tools and to raise their concerns and questions about this rapidly changing technology. AI tools are bound to change scholarship and research, he said, but we don’t yet know in what ways.

The speakers were:

- Gordon Crovitz, co-CEO of NewsGuard
- Steven Brill, co-CEO of NewsGuard
- Tracy Brown, director of Sense about Science
- Gina Chua, executive editor of Semafor
- Mary Ellen Bates, founder of Bates Information Services

Each speaker offered introductory discussion points. Crovitz said that NewsGuard ran tests using ChatGPT and found that it produced false information, including the claim that the children killed at Sandy Hook Elementary School were actually paid actors. In NewsGuard’s testing of the latest release, ChatGPT repeated 100 out of 100 false narratives it was fed.

Brown asserted that we are underprepared to have a conversation about AI. Society is still at the level of wondering whether a customer service representative is a chatbot or a real person. She said that we need to be focused on who is going to take responsibility for each AI tool.

Chua called AI tools “extremely good autocomplete machines.” Semafor has been using them for basic proofreading and copy editing, which has been going well. They are language models and are not particularly smart on their own yet.

Brill said the key to moving forward with AI is accountability plus transparency. The newest version of ChatGPT is good at reading and mimicking language, which makes it more persuasive in perpetrating hoaxes. He cited the example of cancer.org, the official site of the American Cancer Society, and cancer.news, a site rife with misinformation: ChatGPT treats the information on the two sites with equal regard, making no distinction between their veracity.

Bates believes that the transition away from traditional information gathering isn’t a bad thing; for example, she finds Google Maps much more effective at keeping her from getting lost than paper maps. She likened AI tools to teenagers: She wouldn’t trust a 17-year-old to do her research for her, but one could give her a good start. AI tools will never be a substitute for a professional researcher, she said.

Brill noted that while ChatGPT can pass a bar exam, it isn’t good at discerning misinformation. Crovitz described NewsGuard for AI, a new solution that provides data for training AI tools to recognize false information, thus minimizing the risk of spreading misinformation. He said that chatbot responses need to include a way to assess whether the answer given is likely to be true.

Brown’s Sense about Science advocates for a culture of questioning: Ask where data comes from and whether it can bear the weight someone is putting on it. One of the key questions that gets missed with machine learning is, How is the machine doing the learning? Also, what is its accuracy rate? Does it push people toward extreme content? What kind of wrong information is tolerable to receive?

Kenneally reinforced these points, saying there is no question that AI models are impressive, but we need to examine how well they actually perform.

Brown cited Nature’s policy that AI language models will not be accepted as co-authors on any papers. More organizations, she said, need to declare that they won’t accept AI authors, because AI can’t be held accountable. There is a lack of maturity in AI discussions, she believes, and not enough thought is given to the real-world contexts these tools will be released into. There needs to be a clearer sense of who is signing off on what among AI developers.

Chua underscored her earlier point that AI tools are not question-and-answer machines; they are language machines. They have no sense of verification; they only mimic what they’ve been fed. She noted that they say what is plausible, not what is true or false. Because of their attention to written style, we can use them to help us formulate questions. In one experiment, she created a news story and asked an AI tool to rewrite it in the style of The New York Times, then The New York Post, then Fox News; each time, it mimicked that outlet’s style well. This type of usage is currently the best way to employ AI tools, she said.

Bates said researchers should keep in mind that these tools are doing simple text and data mining, looking for patterns. They can’t infer something you’re not asking; only real people can take context into account. A chatbot doesn’t know what you’re planning to do with your research, doesn’t ask follow-up questions, and isn’t curious the way a human researcher is. A chatbot is a helpful paraprofessional, but anything it provides needs to be reviewed by a professional, she said.

The presenters continued their discussion and addressed some comments from attendees. Access the recording of the town hall at youtube.com/watch?v=RF3Gs-BNOtM.



Send correspondence concerning the Weekly News Digest to NewsBreaks Editor Brandi Scardilli

Related Articles

11/7/2024 LinkedIn Creates an AI Hiring Assistant to Help With Job Recruitment
6/25/2024 Learn About AI and Publishing at a LinkedIn Live Program
1/9/2024 The Verge Studies Website Optimization on Google
8/31/2023 CCC and GO FAIR Foundation Plan FAIR Forum in September
8/15/2023 AI on the Library Shelf: How the Hollywood Strikes Exposed a Battle for the Creative Soul
7/20/2023 Professor Compares and Contrasts ChatGPT With Wikipedia
6/20/2023 False Information From ChatGPT Prompts a Defamation Lawsuit
6/8/2023 'Putting ChatGPT to The Test: Will It Help Your Library With Promotions? …' by Angela Hursh
6/6/2023 CCC Explores Standards Development With a Panel of Experts
4/27/2023 OpenAI Plans Subscription Version of ChatGPT
4/25/2023 Class Online Learning Company Plans ChatGPT-Based Teaching Assistant
4/18/2023 CCC Partners With Malaysia Reprographic Rights Centre
3/23/2023 CCC Plans Town Hall on Generative AI Tools
2/7/2023 CCC Releases a Study of How Knowledge Workers Handle Information
9/22/2022 CCC Rolls Out the OA Agreement Intelligence Solution
8/18/2022 CCC Updates Marketplace to Better Serve Medical Communications Professionals
5/5/2022 CCC Unveils New Knowledge Graph, Buys Ringgold

