Generative AI (GenAI) - like ChatGPT - is becoming a common tool throughout the world, including in academic settings. These tools can help you brainstorm, draft, summarize, or explain ideas. But while they’re powerful, they also come with important limitations that affect how (and when) you can use them in research.
Check with your individual professors for any guidelines surrounding GenAI use in your classes, but here are some things worth knowing about GenAI and research.
Many common generative AI tools (like ChatGPT) are built on a type of AI known as a large language model.
What is a Large Language Model (LLM)?
Large Language Models (LLMs) are a type of AI trained on vast amounts of text - books, websites, articles, and more - to recognize patterns in language. They don’t “understand” information like humans do. Instead, they predict the most likely words or phrases to come next in a sentence based on the input you give them.
Think of it like a super-advanced autocomplete, but with no awareness of facts or truth.
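To make the “advanced autocomplete” idea concrete, here is a minimal Python sketch of next-word prediction. The word table below is invented purely for illustration; a real LLM learns billions of such patterns with a neural network rather than a hand-written table.

```python
import random

# Toy "language model": how often each word followed another in some
# (hypothetical) pile of text. Real LLMs learn patterns like these from
# massive datasets using neural networks, not hand-written tables.
next_word_counts = {
    "the": {"cat": 3, "dog": 2, "library": 5},
    "library": {"is": 4, "has": 3, "opens": 1},
    "is": {"open": 6, "closed": 2, "quiet": 2},
}

def predict_next(word: str) -> str:
    """Pick a next word in proportion to how often it followed `word`."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return "."  # no pattern learned for this word; end the sentence
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a short phrase one predicted word at a time.
sentence = ["the"]
for _ in range(3):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # e.g. "the library is open"
```

Notice that nothing in this process checks whether the output is true: the model simply follows statistical patterns. That is exactly why the output can sound fluent and confident while still being wrong.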
Because LLMs don’t access live data or verify sources, they may generate content that sounds correct but is actually inaccurate, outdated, or entirely made up. These are sometimes called “AI hallucinations.”
Using GenAI Responsibly in Research
Before using GenAI in coursework or research, check your course syllabus, your professor’s guidelines, and your institution’s academic integrity policies.
Pro Tip:
Use GenAI as a starting point, not a source. Think of it like a tutor or brainstorming partner - not a replacement for academic research.