Using Generative Artificial Intelligence (Gen AI) at MIT

This guide provides guidance for students on using generative AI (Gen AI) as part of their studies, along with general information on Gen AI applications for students and staff, including an overview of options for different tasks.

Gen AI Issues

Because of the limitations and the particular nature of text-based Gen AI, using it as a student or researcher to obtain opinions or answers to questions can be problematic in a number of ways. These problems may be more or less prevalent depending on the specific application, the type of application, and what it is being used for.
Students should be aware of these issues whenever considering using Gen AI tools, and should check whether they apply to the particular application they are using:


Issues and Problems with text Gen AI

Hallucinations (aka Hallucinatory data)
A problem with some Gen AI applications is that they produce hallucinations (also called artificial hallucinations), that is, misinformation or fabricated information. Hallucinations can be fictitious, nonsensical, or unfaithful to the source material, and the software may present them as the truth even though they are factually incorrect. Examples include a photo that looks real but isn't, or text-based hallucinations such as lists of realistic-looking but non-existent journal citations.

Plagiarism
Using these applications may lead to plagiarism in essay writing, affecting academic integrity. When a user inputs other people's work into a Gen AI tool, that work is rewritten in another form while still containing the original author's ideas. If that output is blended with other material, or left unreferenced, it can become plagiarism. Some Gen AI tools may also take ideas from an undeclared source and reproduce them in their own output.

Privacy
Inputting information into a Gen AI tool may raise privacy issues. For example, private information shared with ChatGPT could be exposed to other users by the software, making that information generally available.

Bias
Some Gen AI tools may exhibit bias in the material they output, particularly when a text-generating AI application is used to search for information. Generative AI applications may be created by a company from a particular country, or may source answers from particular pools of information, and either may carry some form of bias. This can include output that stereotypes particular genders, sexual orientations, races, or occupations.

Over-reliance
Students who rely too heavily on Gen AI may become accustomed to it, so that they don't build up their own abilities, or gradually lose the abilities they have. Continual use of Gen AI may mean you aren't challenged, impeding your development of creative, critical-thinking, and problem-solving skills.

Originality of the work
It can be hard for assessors to judge the originality of work where AI-generated content is mixed in and it is not clear where the work originates. Does it come from the student, or from the AI?

Cheating
Use of generative AI in some situations will clearly count as cheating. Students who have access to devices in exams may use Gen AI to cheat: the tool can respond to an inputted question or essay topic in seconds, giving the student access to other people's work, which the student can then misrepresent as their own.



Limitations of text Gen AI applications


Can only reformulate existing information – doesn’t analyse
Generative AI has access to different datasets and can reproduce conclusions that humans have already reached. However, it cannot analyse data and come to its own conclusions. Answers to prompts are assembled from existing material that the Gen AI itself can access.


Can’t check currency – doesn’t provide information about when info was published
Generative AI doesn't provide information on the currency of its answers. It accesses various pools of data, but it doesn't tell the user when that data was published. So users cannot tell how old the data is, and it may be out of date.


Generally doesn’t provide recent information because of the way the information is gathered and made available.
Some chat-type generative AI applications only have information up to a certain training cut-off date and cannot respond to queries about recent events.
For instance, ChatGPT's knowledge has a fixed cut-off date, and Claude 3 cannot access the internet; the application may not be aware of events after that cut-off and may offer no opinion on them.
However, Microsoft's Copilot and Google's Gemini access the internet for answers, so they can respond to more recent events and incorporate more recent information into their responses.

Generally can’t access niche information – commercial niche information that will generally be paywalled won’t be provided in GenAI
Generative AI generally can't access paywalled information, so material that exists in commercial databases, journals, etc. won't be accessible to it.
In this case, the user needs to access commercial information through whatever commercial subscriptions they have.


Doesn’t search everything – people assume it is searching absolutely everything, but it doesn’t search everything.
Generative AI will search widely for answers, but different applications search different sources; a given tool may not search Wikipedia, for instance.


It generally won’t tell you where the information comes from or credit the source
Generative AI tools generally lack transparency about their sources. They don't reference where information comes from, so, unlike Wikipedia, which will often at least provide a reference, the source can't be judged for its authority, integrity, scholarship, currency, or potential bias. This means the output can't be used for serious research.
Some generative AI tools (for instance Gemini) can be prompted to give a reference for their conclusions; however, they may have limited ability to do this for niche subjects.

Will cover main subjects but have problems with niche areas
Niche topics are sometimes covered only in specific paywalled databases, rather than in more commonly available sources. This means generative AI may have issues in these areas.

