Weekly News Digest
April 6, 2023 — In addition to this week's NewsBreaks article and the monthly NewsLink Spotlight, Information Today, Inc. (ITI) offers Weekly News Digests that feature recent product news and company announcements. Watch for additional coverage to appear in the next print issue of Information Today.
Annual Reviews Converted Its First 15 Journal Volumes of 2023 to OA
Annual Reviews states that it “has successfully converted the first fifteen journal volumes of the year to open access (OA) resulting in substantial increases in downloads of articles in the first month.” The company continues, “Through the innovative OA model called Subscribe to Open (S2O), developed by Annual Reviews, existing institutional customers continue to subscribe to the journals. With sufficient support, every new volume is immediately converted to OA under a Creative Commons license and is available for everyone to read and re-use. In addition, all articles from the previous nine volumes are also accessible to all. If support is insufficient, the paywall is retained.” The S2O model does not involve authors paying fees to publish in supported journals. “This means that researchers in relatively poorly funded areas of research, such as the social sciences, and researchers in [low- or middle-income countries] have the same opportunity to publish OA as their colleagues from better-funded disciplines or institutions,” Annual Reviews notes. For more information, read the press release.
'The Climate Governance Initiative and 67 Bricks Embark on the Creation of a New Global Knowledge Hub'
This press release is also available at 67bricks.com/the-climate-governance-initiative-and-67-bricks-embark-on-the-creation-of-a-new-global-knowledge-hub.
The Climate Governance Initiative and 67 Bricks are pleased to announce their new partnership as part of their ongoing mission to support board directors to lead the conversation on climate change. Following an initial discovery phase to establish critical requirements for the creation of a flexible and scalable ‘global knowledge hub’, we are now delighted to be working on the build phase of this important resource. The Initiative will utilise 67 Bricks’ comprehensive experience in building customer-centric solutions to design a truly user-focused platform. The hub will be designed to work for both current needs and future plans, to ensure the Initiative and its global network of Chapters have the right tools in place to continue to develop and evolve their strategy to address climate change by building board director capacity.
Jennifer Schivas, 67 Bricks, said: “Climate change is an issue that affects us all, and we are delighted to have the opportunity to work with CGI to tackle it at a global scale. We are excited by their plans for future development and look forward to designing a first-class solution to ensure they achieve.”
Emily Farnworth, Director, Centre for Climate Engagement, Hughes Hall, University of Cambridge and Head of Secretariat, Climate Governance Initiative, said: “Working with 67 Bricks presents an opportunity to build a powerful and scalable knowledge-sharing platform that will empower board directors around the world to find the content and resources they need to drive climate action—today and into the future.”
About the Climate Governance Initiative
The Climate Governance Initiative mobilises boards of directors around the world to address climate change in their businesses. It does this by developing and supporting national associations that equip their members with the skills and knowledge needed to make climate a boardroom priority, building on the World Economic Forum’s Principles for Effective Climate Governance.
Jack Cooper | Communications Manager | Climate Governance Initiative | jc2177@hughes.cam.ac.uk
About 67 Bricks
67 Bricks is a technology consultancy based in Oxford, UK. Founded in 2007, we’ve worked with some of the most innovative names in publishing—The Economist Intelligence Unit, De Gruyter, The Royal Society of Chemistry, and many others. Our team of talented developers and market experts partner with companies to build information products for the data-driven world. The result? Customer experiences fit for the digital age and the ability to innovate continuously and build new revenue streams. For more information, visit 67bricks.com.
CCC Town Hall Explores the Current State of AI Tools
On March 30, CCC hosted a town hall via LinkedIn titled ChatGPT & Information Integrity. Chris Kenneally, CCC’s senior director of content marketing, moderated the live event, inviting the speakers to share their experiences with ChatGPT and other AI tools and to express their concerns and questions about this rapidly changing technology. AI tools are bound to change scholarship and research, he said, but we don’t yet know in what ways. Each speaker provided introductory discussion points. Crovitz said that NewsGuard ran tests using ChatGPT, and it displayed false information, including that the children who were killed at Sandy Hook Elementary School were actually paid actors. With its latest release, ChatGPT repeated 100 out of 100 false narratives that NewsGuard fed it. Brown asserted that we are underprepared to have a conversation about AI. Society is still at the level of wondering whether a customer service representative is a chatbot or a real person. She said that we need to be focused on who is going to take responsibility for each AI tool. Chua called AI tools “extremely good autocomplete machines.” Semafor has been using them for basic proofreading and copy editing, which has been going well. They are language models and are not particularly smart on their own yet. Brill said the key to moving forward with AI is accountability plus transparency. The newest version of ChatGPT is good at reading and mimicking language, and that makes it more persuasive in perpetrating hoaxes. He cited the example of cancer.org, the official site of the American Cancer Society, and cancer.news, a site rife with misinformation. ChatGPT reads the information on the .org site with the same regard as the .news site, not differentiating the veracity of the information on each. 
Bates believes that the transition away from traditional information gathering isn’t a bad thing; for example, she finds Google Maps to be much more effective at keeping her from getting lost than paper maps. She likened AI tools to teenagers: She wouldn’t trust a 17-year-old to do her research for her, but they could give her a good start. AI tools will never be a substitute for a professional researcher, she said. Brill noted that while ChatGPT has been shown to be capable of passing a bar exam, it isn’t great at discerning misinformation. Crovitz talked about NewsGuard for AI, a new solution that provides data for AI tools to train them to be able to recognize false information, thus minimizing the risk of spreading misinformation. He said that in the responses chatbots generate, there needs to be a way to access information about whether the answer that was given is likely to be true. Brown’s Sense about Science advocates for a culture of questioning: Ask where data comes from and whether it can bear the weight someone is putting on it. One of the key questions that gets missed with machine learning is, How is the machine doing the learning? Also, what is its accuracy rate? Does it push people toward extreme content? What kind of wrong information is tolerable to receive? Kenneally reinforced these points, saying there is no question that AI models are amazing, but we need to examine how well they perform. Brown cited the Nature policy that AI language models will not be accepted as co-authors on any papers. She said more organizations need to say they won’t accept AI authors because AI can’t be held accountable. There is a lack of maturity in AI discussions, she believes, and not enough thought put into the real-world contexts these tools will be released into. There needs to be a clearer sense of who is signing off on what when it comes to AI developers. 
Chua underscored her earlier point that AI tools are not actually question-and-answer machines; they’re language machines. They don’t have any sense of verification; they only mimic what they’ve been fed. She noted that they say what is plausible, not what is true or false. We can use them to help us formulate questions because of their attention to written style. She did an experiment with one of the AI tools: She created a news story and asked it to write in the style of The New York Times, then The New York Post, then Fox News. Each time, it mimicked that outlet’s style well. This type of usage is currently the best way to employ AI tools, she said. Bates said researchers should keep in mind that the tools are doing simple text and data mining, looking for patterns. They can’t infer something you’re not asking; only real people can take context into account. A chatbot doesn’t know what you’re planning to do with your research, it doesn’t ask follow-up questions, and it’s not curious like a human researcher is. A chatbot is a helpful paraprofessional, but anything it provides needs to be reviewed by a professional, she said. The presenters continued their discussion and addressed some comments from attendees. Access the recording of the town hall at youtube.com/watch?v=RF3Gs-BNOtM.
Access Partnership Explores the Results of a New Climate Change Report
Access Partnership issued an Access Alert, IPCC Releases 2023 Report on Climate Change, which states the following: On 20 March 2023, the Intergovernmental Panel on Climate Change (IPCC) released its latest report, which is likely to be the final edition before a potential ‘game over’ scenario. Spanning 8,000 pages, the report presents undeniable evidence of the challenges we face with regard to climate change. Despite the grim outlook, the report is written in a manner that is accessible to all audiences and includes comparisons of the state of our climate over the past 2 million years, an abundance of risks across various warming scenarios, and opportunities for sceptics. The Access Alert shares the three key takeaways from the World Resources Institute’s summary of the report’s top 10 findings. For more information, read the news item.
vLex and Fastcase Merge to Create the Largest Global Law Library
vLex shared the following via its blog: vLex and Fastcase, two of the largest, fastest-growing legal technology companies, announced today that they are merging to serve the world’s largest law firm subscriber base with more than one billion legal documents from more than 100 countries. As part of the merger, Oakley Capital and Bain Capital Credit are investing in the combined business to expand its global reach and accelerate the company’s legal artificial intelligence (AI) lab, which develops AI tools that streamline research, tracking, writing, and filing documents for the legal industry. … The new combined entity will be called vLex Group, and its products will retain the name of vLex in global markets and Fastcase in the U.S. The company, which has offices in the United States, Europe, the U.K., Asia, and Latin America, will combine management teams and invest in unified global products based on the complementary strengths of both firms. It will maintain headquarters offices in Washington, D.C., Miami, and Barcelona. For more information, read the press release.
Nextgov: 'NIST Debuts Trustworthy and Responsible AI Resource Center'
Alexandra Kelley writes the following for Nextgov: The new Trustworthy & Responsible Artificial Intelligence Resource Center built by the National Institute of Standards and Technology will now serve as a repository for much of the current federal guidance on AI, featuring easy access to previously issued materials to help public and private entities alike create responsible AI systems. [T]he new AI Resource Center builds upon NIST’s AI Risk Management Framework and AI Playbook to support industry best practices in researching and developing socially responsible AI and machine learning systems absent overarching federal law. For more information, read the article.
Simba Information Releases Report on Scholarly Ebook Publishing Trends
Simba Information announced the following: Launching today from Simba is the new [for-a-fee] report, “Global Professional and Scholarly E-Book Publishing 2022-2026.” This study assesses business performance and emerging trends in the five major professional and scholarly e-book segments: Scientific & Technical; Medical; Legal; Business; and Social Science & Humanities. … The new Simba report chronicles strategic directions at major professional e-book publishers, including RELX, Springer Nature, Wiley, and Thomson Reuters. Analysis of the competitive environment, market structure, and growth drivers forms the foundation of this report. Its coverage encompasses business performance, corporate growth strategies, e-book product strategies, acquisitions and divestitures, and new product launches. For more information, read the press release.
A Take on ChatGPT From The Scholarly Kitchen Blog
Avi Staiman, founder and CEO of Academic Language Experts, writes the following in “Academic Publishers Are Missing the Point on ChatGPT” for The Scholarly Kitchen: From the moment ChatGPT was released in November, researchers began experimenting with how they could use it to their benefit to help write systematic reviews, complete literature searches, summarize articles, and discuss experimental findings. I was therefore surprised to see that when addressing the use of GPT, a number of major publishers ignored the far-reaching implications and plethora of use cases, instead zeroing in on one particularly obscure issue, namely, ‘ChatGPT as Author’. … However, publishers seemed to be answering a question that few were asking while avoiding other (more?) important use cases. Can or should authors use ChatGPT or other AI tools in the development and writing of their own research writing? Can it be used for conducting a literature review? What about analyzing results? Or maybe for drafting an abstract from an existing article? These are the important questions authors are asking where publishers seem to leave (too much?) room for interpretation. For more information, read the blog post.
Microsoft Has a New AI Tool to Help Cybersecurity Professionals
Sabrina Ortiz writes the following in “Microsoft Security Copilot Harnesses AI to Give Superpowers to Cybersecurity Fighters” for ZDNET: Microsoft is leveraging the power of GPT-4 to launch a new, generative AI security product called Microsoft Security Copilot. The main idea is to use conversational AI to enhance the capabilities of security professionals, who are often overwhelmed by the sheer numbers and the sophistication of today’s attacks. … Microsoft Security Copilot combines the power of OpenAI’s most advanced large language model (LLM), GPT-4, with a security-specific model from Microsoft. When Microsoft Security Copilot gets a prompt from a security professional, it uses its LLM and security-specific model to deploy skills and queries to help detect and respond to a security threat more quickly and accurately. … With Microsoft Security Copilot, defenders can respond to incidents within minutes, get critical step-by-step guidance through natural language-based investigations, catch what would otherwise go undetected, and get summaries of any process or event. For more information, read the article.
OCLC Helps Facilitate New National Cataloging Platform in Japan
OCLC announced the following: The National Institute of Informatics has successfully launched a new national cataloging platform for 1,300 libraries in Japan. OCLC’s Syndeo metadata software services were implemented to modernize the NACSIS-CAT/ILL service, which supports cataloging and interlibrary loan in Japan, and to facilitate national and international library collaboration. The launch marks the conclusion of a two-year implementation project supported by Kinokuniya Company, OCLC’s distributing partner in Japan, that has delivered on time and to plan. The new system accommodates multiple metadata types, including MARC21 and CAT-P, a unique format used in Japan. For more information, read the press release.
Send correspondence concerning the Weekly News Digest to NewsBreaks Editor Brandi Scardilli.