Exploring Free-text Conditioning of Language Models to Generate Justification Questions

August 05, 2020

We are able to supervise students from FB18 or FB20.

Motivation

The advertised thesis aims to explore how an open-domain natural language generation (NLG) model such as GPT-2 can be conditioned to generate justification questions for learning texts in a variety of domains. Given, for example, a paragraph of a textbook about computer networks, we aim to generate plausible, but not necessarily correct, statements about the given text:


Justify if the following statement is true or false:
VPNs have the disadvantage of being difficult to set up and maintain and they can be compromised by bad actors. (generated statement)


Research in psychology suggests that such statements, if crafted carefully, encourage readers to reflect on the text, which increases their comprehension. Moreover, recent research in NLP has shown that open-domain models can be conditioned by providing an initial text that they continue in a logically consistent way. Hence, the thesis aims to explore to what extent we can automatically create such conditioning texts so that they result in meaningful justification questions, as sketched below.
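
To make the notion of conditioning concrete, here is a minimal sketch of prompt conditioning with GPT-2 via the Hugging Face transformers library. The passage, the cue wording, and the decoding parameters are assumptions chosen purely for illustration; they are not part of the thesis description, and the exact generate() arguments may vary slightly between library versions.

# Minimal sketch of prompt conditioning with GPT-2 (illustrative only).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Conditioning text: a passage from the learning material plus a cue that
# steers the model towards producing a statement to be justified.
source_passage = (
    "A VPN extends a private network across a public network and enables "
    "users to send and receive data as if their devices were directly "
    "connected to the private network."
)
prompt = source_passage + "\nJustify if the following statement is true or false:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,               # length budget for the generated statement
    do_sample=True,                  # sampling yields more varied statements
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# The continuation after the prompt is the candidate statement.
statement = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(statement)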

Task

  1. In-depth review of related work on open-domain text generation and information extraction methods
  2. Selecting a language model and designing a conditioning-text generation algorithm for the selected model (a toy illustration follows this list)
  3. Implementing a justification question generator based on the model and the algorithm
  4. Empirical evaluation of your implemented approach
  • Sound and well-justified study design
  • You will have to work with human participants! (N ≥ 30; possibly via MTurk)
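
Since task 2 concerns the conditioning-text generation algorithm, the following toy sketch shows one naive way such a conditioning text could be assembled from a source paragraph: the most lexically central sentence is selected as a crude stand-in for extractive summarization or keyphrase extraction and combined with the justification cue. The function names, the scoring heuristic, and the cue wording are assumptions for illustration only, not the method to be developed in the thesis.

# Toy sketch of building a conditioning text from a source paragraph
# (illustrative stand-in for a proper information extraction step).
import re
from collections import Counter


def split_sentences(paragraph):
    # Very rough sentence splitter; a real system would use a proper tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]


def build_conditioning_text(paragraph, cue):
    # Pick the sentence sharing the most vocabulary with the whole paragraph
    # and prepend it to the cue that the language model will continue.
    sentences = split_sentences(paragraph)
    word_counts = Counter(re.findall(r"\w+", paragraph.lower()))

    def score(sentence):
        return sum(word_counts[w] for w in set(re.findall(r"\w+", sentence.lower())))

    seed = max(sentences, key=score)
    return f"{seed}\n{cue}\n"


paragraph = (
    "A VPN extends a private network across a public network. "
    "It enables users to exchange data as if their devices were directly "
    "connected to the private network. Setting up a VPN requires additional "
    "configuration on both endpoints."
)
cue = "Justify if the following statement is true or false:"
print(build_conditioning_text(paragraph, cue))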

Requirements

  • Interest in Educational Technologies
  • Experience in Information Extraction (e.g. Extractive Summarization, Keyphrase Extraction)
  • Experience in Statistical Learning (e.g. Transformer models)

Initial Literature

  • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
  • Zhou, X., Zhang, Y., Cui, L., & Huang, D. (2019). Evaluating commonsense in pre-trained language models. arXiv preprint arXiv:1911.11931.
  • Witteveen, S., & Andrews, M. (2019). Paraphrasing with large language models. arXiv preprint arXiv:1911.09661.


Keywords: automatic question generation; NLG; information extraction


Tutor: Steuer
