Artificial Intelligence
Large language models (LLMs), a form of artificial intelligence, are trained on massive amounts of text from books, articles, and websites. They produce responses based on probability rather than understanding. They are not search engines, and their output may be incorrect or biased. Claude and ChatGPT are examples of large language models that do not reveal their training data.
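To make "responses based on probability" concrete, here is a minimal sketch in Python. The tiny vocabulary and the probabilities are invented for illustration only; a real model learns probabilities for hundreds of thousands of tokens from its training data.

```python
import random

# Toy example: given the words so far, the model assigns a probability
# to each possible next word. These numbers are made up for illustration.
next_word_probs = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "moon": 0.1},
}

def generate_next(context):
    # Choose the next word at random, weighted by probability --
    # the model predicts a likely continuation; it does not "know" an answer.
    probs = next_word_probs[context]
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

print(generate_next("the cat sat on the"))  # usually "mat", occasionally "sofa" or "moon"
```

Because the choice is probabilistic, the same prompt can produce different answers, and a fluent-sounding answer is not evidence that it is correct.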
Perplexity uses large language models to search the internet and summarize current information, and it includes links to its sources.
NotebookLM, Elicit, and Undermind are AI research assistants. Google's NotebookLM organizes, summarizes, and generates notes from resources you upload. Elicit searches academic papers in Semantic Scholar using natural language, matching concepts even when your exact keywords do not appear. Undermind uses an iterative process to locate, extract, organize, and rank the most relevant papers, drawing primarily from Semantic Scholar.
EBSCO has added natural-language searching to its search queries. EBSCO, ProQuest, and JSTOR offer beta versions of AI research assistants on the results screen. These tools may highlight the focus of an article, chapter, or book so you can assess its relevance to your research. Results vary by database, so check the specific features available in each.
If you choose to use AI programs or features, always fact-check, assess results critically, and read excerpts in context.