Given the ubiquity of AI image generators such as DALL-E, and now the entrance of ChatGPT (an AI chatbot), there is an increased need to develop keener information assessment skills. Librarians are poised to provide ongoing training on disinformation, information appraisal, and the larger implications for publishing policy and practice.
What are the issues pertaining to how we will discern between human- and non-human-generated content? In the publishing arena, it is critical that we prepare for the onslaught of AI-generated research articles: it is already happening, as ChatGPT has been credited as an "author" in some preprint publications (although it was subsequently corrected and removed).
Appearing in JAMA in January 2023, "Non-Human Authors and Implications for the Integrity of Scientific Publication and Medical Knowledge" delves deeper into the ethical implications for publishing. Scientific misconduct and research ethics provide the larger contexts in which the discussion is situated, the former being an official Medical Subject Heading (MeSH) in PubMed. Of course, Banner Health Library Services provides full-text access to this article and much more, and we're happy to teach you how to get it.
When you open Pandora's Box, you get everything: the dream, the nightmare, and pretty much anything and everything in between. Have you tried typing a query into ChatGPT? Go for it. I entered "Tell me about The Singularity," and the response was eerily on point and entirely apropos for the occasion.