Large Language Models of Artificial Intelligence
Large language models of artificial intelligence are trained on massive amounts of text data from books, articles, and websites. They generate responses by statistically predicting likely sequences of words; they do not verify facts, and they are not search engines. Small language models use smaller, more focused datasets for targeted tasks. Programs such as Google's NotebookLM combine a language model with uploaded documents or websites to produce more informed responses, an approach called Retrieval-Augmented Generation (RAG).
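The RAG idea above can be sketched in a few lines: retrieve the most relevant passage from a set of documents, then place it in the prompt so the model answers from that text rather than from memory. This is a simplified illustration only; the documents, the word-overlap scoring, and the prompt format are all assumptions for the example, and a real system would use a language model and a proper retrieval index.

```python
def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query (toy scoring)."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))


def build_prompt(query: str, documents: list[str]) -> str:
    """Build a prompt that grounds the model in retrieved text.

    In a real RAG system this prompt would be sent to a language model;
    here we simply return it to show the structure.
    """
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}"


# Hypothetical mini document set for demonstration.
docs = [
    "JSTOR provides access to scholarly journals and books.",
    "Course reserves are available at the circulation desk.",
]
print(build_prompt("Where can I find scholarly journals?", docs))
```

Because the retrieved passage is included in the prompt, the model's answer can be checked against a known source, which is why RAG tools like NotebookLM tend to give more informed responses than a model alone.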
The Generative AI Product Tracker is a free, living document describing the features, strengths, and limitations of generative AI (GAI) tools marketed to higher education or widely used for teaching, learning, and research. It is maintained by Ithaka S+R, part of ITHAKA, the non-profit that also publishes JSTOR.
EBSCO, ProQuest, and JSTOR offer beta versions of AI research assistants on the search results screen. These tools can summarize the focus of an article, chapter, or book so you can assess its relevance to your research. Features vary by database, so check what each one offers.
If you choose to use AI programs or features, always fact-check, use critical thinking to assess results, and read excerpts in context.