OpenAI and Microsoft Work to Balance Open Ideals With Complex Realities
by Nancy K. Herther
Posted On January 19, 2021
OpenAI, “a non-profit artificial intelligence research company,” was founded in 2015 by four brilliant entrepreneurs: Tesla and SpaceX’s Elon Musk; Sam Altman, then president of startup funder Y Combinator; Greg Brockman, then CTO of financial services company Stripe; and deep-learning scientist Ilya Sutskever. The San Francisco startup now employs about 100 people and is actively working toward its aim of collaborating with other organizations and making its patents and research open to the public. On April 27, 2016, OpenAI released a public beta of Gym, its “toolkit for developing and comparing reinforcement learning algorithms,” to support work on training agents.
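
To make the Gym toolkit concrete, here is a minimal sketch of its core interaction loop, assuming the classic Gym API and the standard CartPole-v1 environment; the random action simply stands in for a real learning agent.

import gym

env = gym.make("CartPole-v1")        # create a standard benchmark environment
observation = env.reset()            # begin a new episode

for _ in range(100):
    action = env.action_space.sample()                   # random action as a placeholder agent
    observation, reward, done, info = env.step(action)   # advance one timestep
    if done:                                             # episode finished; start another
        observation = env.reset()

env.close()

The value of the toolkit lies in this shared interface: any agent that consumes observations and emits actions can be compared against others on the same environments.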

COMMITTED TO OA

OpenAI’s initial mission statement said that its goal was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” Its updated statement says its mission is “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity” [emphasis added].

In a recent interview, OpenAI’s chairman and CTO Brockman discussed the work of the past 5 years and a pivotal change in the company’s approach—from true OA to a more controlled release, in particular for the language models GPT-2 and GPT-3. He notes, “We realized that as these things get powerful, they’re dual-use … and that we as technology developers have a responsibility to not just say, ‘Hey, we built this thing, it’s up to the world to decide how to use it.’ … There’s no undo button for open source.” By retaining control of the source code, OpenAI keeps the ability to limit misuse while still offering access to its models.

In July 2019, OpenAI established a collaboration with Microsoft that included a $1 billion investment in the startup with the goal of “building … AGI with widely distributed economic benefits.” In the announcement, OpenAI shares the history of AI-related advances: “Each year since 2012, the world has seen a new step function advance in AI capabilities. Though these advances are across very different fields like vision (2012), simple video games (2013), machine translation (2014), complex board games (2015), speech synthesis (2016), image generation (2017), robotic control (2018), and writing text (2019), they are all powered by the same approach: innovative applications of deep neural networks coupled with increasing computational power. But still, AI system building today involves a lot of manual engineering for each well-defined task.”

The announcement notes that AGI “will be a system capable of mastering a field of study to the world-expert level, and mastering more fields than any one human—like a tool which combines the skills of Curie, Turing, and Bach. An AGI working on a problem would be able to see connections across disciplines that no human could. We want AGI to work with people to solve currently intractable multi-disciplinary problems, including global challenges such as climate change, affordable and high-quality healthcare, and personalized education. We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today.”

It sounds almost like science fiction. Even just 50 years ago, who could have imagined the incredible advances we now take for granted?

MICROSOFT ALLIANCE SPARKS CRITICISM

“The investment will make Microsoft the ‘exclusive’ provider of cloud computing services to OpenAI,” reports James Vincent in The Verge. OpenAI “was intended to match the high-tech R&D of companies like Google and Amazon while focusing on developing AI in a safe and democratic fashion. But earlier this year, OpenAI said it needed more money to continue this work, and it set up a new for-profit firm to seek outside investment.”

Mohanbir Sawhney writes in Forbes that “Microsoft is playing a long game with this investment. The OpenAI partnership cements the position of Microsoft’s Azure cloud infrastructure as the platform of choice for the next generation of artificial intelligence (AI) supercomputing applications. The partnership also positions Microsoft as the thought leader in responsible and safe AI, insulating it from the regulatory backlash that Facebook and Google are facing in Europe and the United States.”

Frederic Lardinois explains in TechCrunch that the collaboration aims “to create one of the world’s fastest supercomputers on top of Azure’s infrastructure. … Since Microsoft’s massive investment, OpenAI has made Azure its cloud of choice and this supercomputer was developed ‘with and exclusively for OpenAI.’”

In MIT Technology Review, Karen Hao reflects, “The lab was supposed to benefit humanity. Now it’s simply benefiting one of the richest companies in the world.” She adds, “OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI’s other models and receive its output. Only Microsoft, however, will have access to GPT-3’s underlying code, allowing it to embed, repurpose, and modify the model as it pleases.” The concern over concentrated AI power is growing, Hao notes. “The most advanced AI techniques require an enormous amount of computational resources, which increasingly only the wealthiest companies can afford. This gives tech giants outsize influence not only in shaping the field of research but also in building and controlling the algorithms that shape our lives.”

Musk officially left OpenAI’s board of directors 3 years ago, but has continued to support the company. His reaction to the Microsoft deal was a strongly worded tweet: “This does seem like the opposite of open. OpenAI is essentially captured by Microsoft.”

In the collaboration announcement, OpenAI has a pre-emptive answer for the critics:

We believe that the creation of beneficial AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity. We have a hard technical path in front of us, requiring a unified software engineering and AI research effort of massive computational scale, but technical success alone is not enough. To accomplish our mission of ensuring that AGI (whether built by us or not) benefits all of humanity, we’ll need to ensure that AGI is deployed safely and securely; that society is well-prepared for its implications; and that its economic upside is widely shared. If we achieve this mission, we will have actualized Microsoft and OpenAI’s shared value of empowering everyone.

OpenAI assures The Verge’s Nick Statt that “the deal has no impact on continued access to the GPT-3 model through OpenAI’s API, and existing and future users of it will continue building applications with our API as usual.” However, the issue is certainly more complex and deeper than this rather simple explanation. Statt says, “So while other companies and researchers may be able to access GPT-3 through OpenAI’s API, only Microsoft will reap the benefits of getting to make use of all the AI advancements that went into making it such a sophisticated program. It’s not clear what that will look like right now, but what is clear is that Microsoft sees immense value in OpenAI’s work and likely wants to be the first (and, in this case, only) company to take its largely experimental research work and translate it into real-world product advancements.”
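
For readers wondering what that API access looks like in practice, here is a minimal sketch using the openai Python package as it existed around this time; the engine name "davinci", the prompt, and the API key are illustrative placeholders rather than details from the article.

import openai

openai.api_key = "YOUR_API_KEY"   # placeholder credential issued to approved users

response = openai.Completion.create(
    engine="davinci",             # one of the GPT-3 engines exposed through the API
    prompt="Summarize the OpenAI-Microsoft partnership in one sentence.",
    max_tokens=60,
)

print(response["choices"][0]["text"])

The distinction Statt draws is that this interface returns only the model's output; the underlying code and weights remain with OpenAI and, under the deal, with Microsoft.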

As OpenAI’s work begins to influence more research areas, the coming years will be critical to the hope for an open approach to harnessing Big Data for the masses. The tension between OpenAI’s open-development roots and its apparent need for the deep pockets of tech giants to pay the bills is something all information professionals should follow closely.

RISKY BUSINESS

This past year has seen some serious questions arise about how AI systems are being developed. Timnit Gebru, co-leader of Google’s Ethical AI team, was allegedly forced out of the company in December 2020 after being chastised for a paper she co-authored, which WIRED describes as follows: “The paper discussed ethical issues raised by recent advances in AI technology that works with language, which Google has said is important to the future of its business. Gebru says she objected because the process was unscholarly. … [S]he said she was fired. A Google spokesperson said she was not fired but resigned, and declined further comment.” Gebru was called “one of the most high-profile Black women in her field and a powerful voice in the new field of ethical AI” by The Washington Post (subscription required).

Google’s Jeff Dean publicly defended the company’s position on Twitter, sharing an email he wrote stating that the paper was reviewed “as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research—for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Many in the software industry have questioned (or condemned) Google’s alleged firing of Gebru. The Washington Post (subscription required) notes that “her abrupt firing shows that Google is pushing back on the kind of scrutiny that it claims to welcome, according to interviews with Gebru, current Google employees, and emails and documents viewed by The Post. It raises doubts about Silicon Valley’s ability to self-police, especially when it comes to advanced technology that is largely unregulated and being deployed in the real world despite demonstrable bias toward marginalized groups. Already, AI systems shape decision-making in law enforcement, employment opportunity and access to health care worldwide.”


Nancy K. Herther is a research consultant and writer who recently retired from a 30-year career in academic libraries.
