
Using Generative Artificial Intelligence (Gen AI) at MIT

This guide offers guidance for students on using gen AI as part of their studies, along with general information for students and staff about generative AI applications and the options available for various tasks.

Glossary of Gen AI terms


Deep Belief Networks
"Deep belief nets are probabilistic generative models that are composed of multiple layers of stochastic, latent variables. The latent variables typically have binary values and are often called hidden units or feature detectors. The top two layers have undirected, symmetric connections between them and form an associative memory. The lower layers receive top-down, directed connections from the layer above. The states of the units in the lowest layer represent a data vector.

The two most significant properties of deep belief nets are:

  • There is an efficient, layer-by-layer procedure for learning the top-down, generative weights that determine how the variables in one layer depend on the variables in the layer above.
  • After learning, the values of the latent variables in every layer can be inferred by a single, bottom-up pass that starts with an observed data vector in the bottom layer and uses the generative weights in the reverse direction."

Reference: Hinton, G. E. (2009). Deep belief networks. Scholarpedia, 4(5), 5947.
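
As a rough illustration of the bottom-up inference pass described above, the Python sketch below pushes an observed data vector up through a small stack of binary layers, reusing the top-down generative weights in the reverse direction. The layer sizes, random weights, and function names are invented for this example; it is not Hinton's training procedure, which learns the generative weights layer by layer (e.g. with contrastive divergence).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
sizes = [6, 4, 3]  # visible layer of 6 units and two hidden layers (hypothetical)

# Top-down generative weights: weights[l] maps layer l+1 down to layer l.
weights = [rng.normal(scale=0.1, size=(sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]
biases = [np.zeros(sizes[l + 1]) for l in range(len(sizes) - 1)]

def bottom_up_pass(data):
    """Infer the hidden-unit probabilities layer by layer, starting from an
    observed data vector and using the generative weights in reverse."""
    state, states = data, [data]
    for W, b in zip(weights, biases):
        state = sigmoid(state @ W.T + b)   # probability each binary hidden unit is on
        states.append(state)
    return states

observed = rng.integers(0, 2, size=sizes[0]).astype(float)
for i, layer in enumerate(bottom_up_pass(observed)):
    print(f"layer {i}: {np.round(layer, 2)}")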


Diffusion Probabilistic Models
"These are a class of latent variable models that are common for some tasks in Gen AI, such as image generation. Diffusion probabilistic models capture the image data by modelling the way data points diffuse through a latent space, an approach inspired by statistical physics. Diffusion probabilistic models are used in such Gen AI tools as DALL-E and Midjourney."

Reference: Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851.
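
The forward (noising) half of a denoising diffusion model can be sketched very compactly: following Ho et al. (2020), x_t given x_0 is a Gaussian whose signal content shrinks according to a variance schedule beta_1 ... beta_T. The schedule values, step count, and variable names below are illustrative only, and the learned reverse (denoising) network that actually generates images is omitted.

import numpy as np

rng = np.random.default_rng(0)

T = 100
betas = np.linspace(1e-4, 0.02, T)   # illustrative linear variance schedule

def forward_diffuse(x0, t):
    """Sample x_t from q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

x0 = rng.normal(size=(8, 8))   # stand-in for a tiny "image"
for t in (0, T // 2, T - 1):
    xt = forward_diffuse(x0, t)
    print(f"t={t:3d}  remaining signal fraction = {np.sqrt(np.prod(1.0 - betas[:t + 1])):.3f}")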



Generative Adversarial Networks (GANs)
"A GAN consists of two neural networks that contest with each other, so that samples from a specific distribution can be generated."


Reference: Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111-126.
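
To make the "two contesting networks" idea concrete, the sketch below trains a toy generator to mimic samples from a one-dimensional Gaussian while a discriminator tries to tell real samples from generated ones. The network sizes, learning rates, and target distribution are arbitrary choices for illustration, not taken from the cited paper.

import torch
from torch import nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0        # samples from the target distribution N(3, 1)
    fake = generator(torch.randn(64, 4))   # generator maps random noise to candidate samples

    # Discriminator learns to label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator label its samples as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", generator(torch.randn(1000, 4)).mean().item())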


Generative AI Modelling

"Generative modelling that is instantiated with a machine learning architecture (e.g. a deep neural network) and therefore can create new data samples based on learned patterns."

 

Reference: Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111-126.
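
As a minimal sketch of this idea (with a deliberately simple "architecture"), the Python below fits a two-parameter Gaussian to some observed data and then draws new samples from the fitted model. A real Gen AI system would use a deep neural network in place of the Gaussian, but the learn-the-patterns-then-generate loop is the same; all values here are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

observed = rng.normal(loc=5.0, scale=2.0, size=1000)   # stand-in training data

# "Learn" the patterns in the data: here just its mean and spread.
mu, sigma = observed.mean(), observed.std()

# Create new data samples from the learned model.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)

print(f"learned mu={mu:.2f}, sigma={sigma:.2f}")
print("generated samples:", np.round(new_samples, 2))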


Hallucinations or Hallucinatory Data
"AI hallucination is a phenomenon where AI generates a convincing but completely made-up answer. OpenAI’s notes acknowledge that the answers generated by ChatGPT may sound plausible but be nonsensical or incorrect."
 
Reference: Athaluri, S. A., Manthena, S. V., Kesapragada, V. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4).


Markov Chains
"Markov chains, which comprise Markov chains and Markov processes, have been successfully applied in areas as divers as biology, finance, manufacturing, telecommunications, physics and transport planning, and even for experts it is impossible to have an overview on the full richness of Markovian theory. Roughly speaking, Markov chains are used for modeling how a system moves from one state to another at each time point. Transitions are random and governed by a conditional probability distribution which assigns a probability to the move into a new state, given the current state of the system. This dependence represents the memory of the system."


Reference: Frigessi, A., Heidergott, B. (2011). Markov Chains. In: Lovric, M. (eds) International Encyclopedia of Statistical Science. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04898-2_347
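
The "moves from one state to another at each time point" behaviour is easy to simulate. In the sketch below, a hypothetical three-state weather chain is defined by a transition matrix whose rows are the conditional distributions over the next state given the current one; the states and probabilities are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

states = ["sunny", "cloudy", "rainy"]
P = np.array([
    [0.7, 0.2, 0.1],   # next-state distribution when currently sunny
    [0.3, 0.4, 0.3],   # ... when currently cloudy
    [0.2, 0.5, 0.3],   # ... when currently rainy
])

def simulate(start, steps):
    """Simulate the chain: each transition depends only on the current
    state (the "memory" of the system described above)."""
    current = states.index(start)
    path = [start]
    for _ in range(steps):
        current = rng.choice(len(states), p=P[current])
        path.append(states[current])
    return path

print(simulate("sunny", 10))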


Zero-Shot Learning
Zero-Shot Learning (ZSL) is the challenge of learning to recognise a new concept or object, such as a new medical disease, without receiving any examples of it beforehand.

Reference: Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111-126.
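
One common way to attempt zero-shot recognition is to describe unseen classes by attributes (or text embeddings) and assign an input to the closest description. The toy sketch below uses hand-written attribute vectors purely for illustration; real systems would use learned embeddings from a model trained on other classes.

import numpy as np

# Hypothetical attribute descriptions [has_stripes, four_legs, can_fly]
# for classes the model never saw during training.
class_attributes = {
    "zebra":   np.array([1.0, 1.0, 0.0]),
    "sparrow": np.array([0.0, 0.0, 1.0]),
    "horse":   np.array([0.0, 1.0, 0.0]),
}

def predict(input_attributes):
    """Assign the input to the unseen class whose description is most
    similar (cosine similarity) to the input's attributes."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(class_attributes, key=lambda c: cos(class_attributes[c], input_attributes))

# Attributes extracted from a new image: striped, four-legged, not flying.
print(predict(np.array([0.9, 1.0, 0.1])))   # expected: zebra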


